| anchor | positive | negative | anchor_status |
| --- | --- | --- | --- |
| stringlengths 1 to 23.8k | stringlengths 1 to 23.8k | stringlengths 1 to 31k | stringclasses (3 values) |
## Inspiration
Our team has been fighting night and day this recruitment season to land internships. Like many others, we have varied interests within the fields we specialize in, and we spend a lot of time tailoring our résumés to each specific position. We believe an automatic résumé builder would immensely simplify this process.
## What it does
The program takes two inputs: (1) our CV, a full list of all the projects we've done, and (2) the job description from the job posting. The program then outputs a newly compiled résumé containing rewritten information that is most relevant to the position.
## How we built it
We designed this project to utilize the most recent AI technologies. We made use of word2vec to create word embeddings, which we store in a Convex database. Then, using Convex's built-in vector search, we compare the job posting with your list of projects and experiences and output the 5 most relevant ones. Finally, we run those projects through an LLM to reshape them into a good fit for the job description.
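A minimal sketch of the retrieval step, assuming averaged word2vec vectors and a plain cosine-similarity ranking; the hypothetical `embed` and `top_k_projects` helpers stand in for the Convex storage and its built-in vector search that we actually used.

```python
# Sketch of the top-5 retrieval step: embed each project and the job posting
# as averaged word2vec vectors, then rank projects by cosine similarity.
import numpy as np
from gensim.models import Word2Vec

def embed(text: str, model: Word2Vec) -> np.ndarray:
    """Average the vectors of all in-vocabulary tokens."""
    tokens = [t for t in text.lower().split() if t in model.wv]
    if not tokens:
        return np.zeros(model.vector_size)
    return np.mean([model.wv[t] for t in tokens], axis=0)

def top_k_projects(job_posting: str, projects: list[str], model: Word2Vec, k: int = 5) -> list[str]:
    job_vec = embed(job_posting, model)

    def cosine(vec: np.ndarray) -> float:
        denom = (np.linalg.norm(job_vec) * np.linalg.norm(vec)) or 1.0
        return float(job_vec @ vec) / denom

    return sorted(projects, key=lambda p: cosine(embed(p, model)), reverse=True)[:k]
```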
## Challenges we ran into
We had a lot of challenges handling the many different resume formats and parsing the differences between them.
## Accomplishments that we're proud of
## What we learned
We learned all sorts of things from this project. Firstly, the power of vector embeddings and their various use cases across all sorts of media. We also learned a lot about the space of ML models out there that we can make use of. Lastly, we learned how to quickly work through the documentation of relevant technologies and shape them to our needs.
## What's next for resumebuilder.ai
We managed to retrieve the work summaries most closely associated with the job. Next, we plan to rebuild the résumé so that we get nicely formatted PDFs using LaTeX. |
## The Problem (Why did we build this?)
The current tech industry has been difficult for newcomers to navigate. With competition becoming fiercer, students struggle to stand out in their job applications.
## Our Problem Statement:
How might we help students kickstart their early careers in tech in the current industry?
## The Solution
To mitigate this problem, we created a platform that combines all of this data into an all-in-one job board for CS internships.
Each job card includes:
• Company
• Position title
• Salary Range
• Visa sponsorship
• LeetCode problems (if available)
In addition, many struggle to get past the resume screening phase, so we also implemented a resume optimizer: upload your resume, paste in the job URL, and we provide an optimized resume for download.
## How we built it
We built the backend in Go and the frontend with React, Bootstrap, HTML, and CSS. The prototype was designed in Figma.
We created two separate servers: a React front-end server that handles requests from the user, and a Go server that stores all the necessary data and responds to requests from the front-end server. A significant part of this project was web scraping, which involved parsing various websites such as Levels.fyi and GitHub job pages to get the necessary data for the final job board.
## Challenges we ran into
1. The diversity of websites poses a significant challenge when it comes to data scraping, particularly for the resume helper feature, as the structure of the sites is not guaranteed and can make scraping difficult.
2. The limited time frame of 24 hours presents a challenge in terms of creating a polished front-end user interface. CSS styling and formatting are time-consuming tasks that require significant attention to detail.
3. Collaborative work can lead to potential issues with Git, including conflicts and other problems, which can be challenging to resolve.
## What we learned
1. Through the course of this project, we gained a thorough understanding of web scraping techniques and strategies for parsing data from websites.
2. We also acquired knowledge and skills in file handling and management in Go.
3. We expanded our proficiency in React, particularly in the creation of high-quality, visually appealing components.
4. Additionally, we developed our ability to work efficiently and effectively in a fast-paced team environment, with an emphasis on perseverance and resilience when faced with challenges.
5. We also gained insights into best practices for UI/UX design and the importance of user-centric design.
## What's next for sprout.fyi
In the future, we want to launch a mobile app for sprout.fyi. In addition to internships, we also want to add new grad jobs to our job boards. |
## Inspiration
40 million people in the world are blind, including 20% of all people aged 85 or older. Half a million people suffer paralyzing spinal cord injuries every year. 8.5 million people are affected by Parkinson’s disease, with the vast majority of these being senior citizens. The pervasive difficulty for these individuals to interact with objects in their environment, including identifying or physically taking the medications vital to their health, is unacceptable given the capabilities of today’s technology.
First, we asked ourselves the question: what if there was a vision-powered robotic appliance that could serve as a helping hand to the physically impaired? Then we began brainstorming: could a language AI model make the interface between these individuals' desired actions and their robot helper's operations even more seamless? We ended up creating Baymax—a robot arm that understands everyday speech and generates its own instructions for doing exactly what its loved one wants. Much more than its brilliant design, Baymax is intelligent, accurate, and eternally diligent.
We know that if Baymax was implemented first in high-priority nursing homes, then later in household bedsides and on wheelchairs, it would create a lasting improvement in the quality of life for millions. Baymax currently helps its patients take their medicine, but it is easily extensible to do much more—assisting these same groups of people with tasks like eating, dressing, or doing their household chores.
## What it does
Baymax listens to a user’s requests on which medicine to pick up, then picks up the appropriate pill and feeds it to the user. Note that this could be generalized to any object, ranging from food, to clothes, to common household trinkets, to more. Baymax responds accurately to conversational, even meandering, natural language requests for which medicine to take—making it perfect for older members of society who may not want to memorize specific commands. It interprets these requests to generate its own pseudocode, later translated to robot arm instructions, for following the tasks outlined by its loved one. Subsequently, Baymax delivers the medicine to the user by employing a powerful computer vision model to identify and locate a user’s mouth and make real-time adjustments.
## How we built it
The robot arm by Reazon Labs, a 3D-printed arm with 8 servos as pivot points, is the heart of our project. We wrote custom inverse kinematics software from scratch to control these 8 degrees of freedom and navigate the end-effector to a point in three dimensional space, along with building our own animation methods for the arm to follow a given path. Our animation methods interpolate the arm’s movements through keyframes, or defined positions, similar to how film editors dictate animations. This allowed us to facilitate smooth, yet precise, motion which is safe for the end user.
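A minimal sketch of the keyframe idea, assuming each keyframe is just a pair of (time, joint angles); the inverse kinematics and servo-level commands are left out, and the angle values are illustrative only.

```python
# Keyframe interpolation for the arm: each keyframe pairs a time with the
# 8 joint angles, and intermediate poses are linearly interpolated so the
# motion stays smooth between defined positions.
import numpy as np

# (time in seconds, eight joint angles in degrees) -- illustrative values only
keyframes = [
    (0.0, np.array([0, 30, 45, 0, 90, 0, 10, 0], dtype=float)),
    (1.5, np.array([10, 40, 60, 5, 80, 15, 10, 0], dtype=float)),
    (3.0, np.array([20, 35, 70, 10, 70, 30, 5, 0], dtype=float)),
]

def pose_at(t: float) -> np.ndarray:
    """Return the interpolated joint angles at time t."""
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    for (t0, a0), (t1, a1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            alpha = (t - t0) / (t1 - t0)
            return (1 - alpha) * a0 + alpha * a1
    return keyframes[-1][1]
```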
We built a pipeline to take in speech input from the user and process their request. We wanted users to speak with the robot in natural language, so we used OpenAI’s Whisper system to convert the user commands to text, then used OpenAI’s GPT-4 API to figure out which medicine(s) they were requesting assistance with.
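A sketch of that speech pipeline, assuming the current `openai` Python SDK; the model names and the prompt are placeholders rather than the exact ones we used.

```python
# Transcribe the user's spoken request with Whisper, then ask a chat model
# to extract the requested medicine names.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def requested_medicines(audio_path: str) -> str:
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "List only the medicine names the user asks for, comma separated."},
            {"role": "user", "content": transcript.text},
        ],
    )
    return response.choices[0].message.content
```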
We focused on computer vision to recognize the user’s face and mouth. We used OpenCV to get the webcam live stream and used 3 different Convolutional Neural Networks for facial detection, masking, and feature recognition. We extracted coordinates from the model output to extrapolate facial landmarks and identify the location of the center of the mouth, simultaneously detecting if the user’s mouth is open or closed.
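A simplified sketch of the vision loop; the real project used three CNNs for detection, masking, and landmarks, so the bundled OpenCV Haar cascade and the lower-third mouth estimate here are lightweight stand-ins.

```python
# Grab webcam frames, detect the face, and estimate the mouth position from
# the lower portion of the face box.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        mouth_center = (x + w // 2, y + int(h * 0.8))  # rough mouth estimate
        cv2.circle(frame, mouth_center, 4, (0, 255, 0), -1)
    cv2.imshow("baymax-vision", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```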
When we put everything together, our result was a functional system where a user can request medicines or pills, and the arm will pick up the appropriate medicines one by one, feeding them to the user while making real time adjustments as it approaches the user’s mouth.
## Challenges we ran into
We quickly learned that working with hardware introduced a lot of room for complications. The robot arm we used was a prototype, entirely 3D-printed yet equipped with high-torque motors, and parts were subject to wear and tear very quickly, which sacrificed the accuracy of its movements. To solve this, we implemented torque- and current-limiting software and wrote Python code to smooth movements and preserve the integrity of each instruction.
Controlling the arm was another challenge because it has 8 motors that need to be manipulated finely enough in tandem to reach a specific point in 3D space. We had to not only learn how to work with the robot arm's SDK and libraries but also comprehend the math and intuition behind its movement. We did this by utilizing forward kinematics and restricting the servo motors' degrees of freedom to simplify the math. Realizing it would be tricky to write all the movement code from scratch, we created an animation library for the arm in which we captured certain arm positions as keyframes and then interpolated between them to create fluid motion.
Another critical issue was the high latency between the video stream and robot arm’s movement, and we spent much time optimizing our computer vision pipeline to create a near instantaneous experience for our users.
## Accomplishments that we're proud of
As first-time hackathon participants, we are incredibly proud of the progress we were able to make in a very short amount of time, proving to ourselves that with hard work, passion, and a clear vision, anything is possible. Our team did a fantastic job embracing the challenge of using technology unfamiliar to us, and stepped out of our comfort zones to bring our idea to life. Whether it was building the computer vision model or learning how to interface the robot arm's movements with voice controls, we ended up building a robust prototype which far surpassed our initial expectations. One of our greatest successes was coordinating our work so that each function could be pieced together and emerge as a functional robot. Let's not overlook the success of not eating the Hi-Chews we were using for testing!
## What we learned
We developed our skills in frameworks we were initially unfamiliar with, such as applying machine learning algorithms in a real-time context. We also learned how to successfully interface software with hardware, crafting complex functions whose results we could see in three-dimensional space. Through developing this project, we also realized just how much social impact a robot arm can have for disabled or elderly populations.
## What's next for Baymax
Envision a world where Baymax, a vigilant companion, eases medication management for those with mobility challenges. First, Baymax can be implemented in nursing homes, then can become a part of households and mobility aids. Baymax is a helping hand, restoring independence to a large disadvantaged group.
This innovation marks a real improvement in quality of life for millions of older people, and is truly a human-centric solution in robotic form. | losing |
## Inspiration
Our team wanted to create a method to reduce the hassle of having to accurately count the number of people entering and leaving a building. This problem has become increasingly important in the midst of COVID-19 when there are strict capacity limits indoors.
## What it does
The device acts as a bidirectional customer counter that senses the direction of movement in front of an ultrasonic sensor, records this information, and displays it on a screen as well as through Python and email notifications. When a person walks in (right to left), the device increases the count of customers who have entered through the door. In the opposite direction (customers exiting), the count decreases, giving the true value of how many people are inside. We also have capacity thresholds; for example, if the capacity is 20, then at 15 customers an email notification warns that you are approaching capacity, and another notification is sent once you reach capacity.
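A minimal sketch of that counting and notification logic; the SMTP server, addresses, credentials, and capacity numbers are placeholders.

```python
# The count goes up on an "enter" event and down on an "exit" event, and
# email warnings fire at a soft threshold and again at full capacity.
import smtplib
from email.message import EmailMessage

CAPACITY = 20
WARN_AT = 15
count = 0

def send_email(subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["Subject"], msg["From"], msg["To"] = subject, "counter@example.com", "owner@example.com"
    msg.set_content(body)
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.login("counter@example.com", "app-password")
        server.send_message(msg)

def handle_event(direction: str) -> None:
    """direction is 'enter' (right to left) or 'exit' (left to right)."""
    global count
    count = max(count + (1 if direction == "enter" else -1), 0)
    if count == WARN_AT:
        send_email("Approaching capacity", f"{count}/{CAPACITY} customers inside.")
    elif count >= CAPACITY:
        send_email("At capacity", f"{count}/{CAPACITY} customers inside.")
```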
## How we built it
We developed the device using an Arduino, breadboard, potentiometer, ultrasonic sensor, and LCD screen, together with Python over serial communication. The ultrasonic sensor had two sides: the first recorded people passing by it in one direction and incremented the count, while the second recorded people moving in the other direction and decremented it. This sensor was connected to the LCD screen and the Arduino, which was connected to a potentiometer on a breadboard.
## Challenges we ran into
One of the main challenges we ran into was fine-tuning the sensor to ensure accurate readings were produced. This is especially difficult when using an ultrasonic sensor, as they are very sensitive and can easily read false data. We were also unable to run the send\_sms Python script on our partner's computer (the one with the device we developed) due to conflicts with environment variables. Therefore, we switched from SMS notifications to email notifications for the project showcase.
## Accomplishments that we're proud of
We are very proud of our final device which was able to successfully sense and count movement in different directions and display these findings on the screen. It also uses raw input data and sends the corresponding email notifications to the user.
## What we learned
One thing our team learned throughout the course of this project was how to send notifications from python to a phone through SMS. Through the use of Twilio's web API and Twilio Python helper library, we installed our dependency and were able to send SMS using Python.
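A minimal sketch of that SMS path using the Twilio Python helper library; the credentials and phone numbers are placeholders.

```python
# Send a capacity warning over SMS via Twilio's REST API.
from twilio.rest import Client

account_sid = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
auth_token = "your_auth_token"
client = Client(account_sid, auth_token)

message = client.messages.create(
    body="Store is approaching capacity (15/20 inside).",
    from_="+15551234567",   # Twilio number
    to="+15557654321",      # store owner's number
)
print(message.sid)
```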
## What's next for In Through the Out Door
We would like to make our model more compact with a custom PCB and casing. We would also like to work on including an ESP32 wireless microchip, with a full TCP/IP stack and microcontroller capabilities, to integrate Wi-Fi and Bluetooth on the device. We will also use Python to send notifications to the user through a mobile application.
## Smart City Automation && Smartest Unsmart Hack
Our project qualifies for smart city automation, as it can be used by many store owners who want a feasible solution that counts their store's occupancy and notifies them, per COVID protocols, when they are going over capacity. Our project also qualifies for smartest unsmart hack because it does not require any advanced learning frameworks or machine learning to achieve its purpose. It uses an Arduino and simple Python calls to count the capacity. |
## Inspiration
Essential workers are needed to fulfill tasks such as running restaurants, grocery shops, and travel services such as airports and train stations. Some of the tasks that these workers do include manual screening of customers entering trains and airports and checking if they are properly wearing masks. However, there have been frequent protest on the safety of these workers, with them being exposed to COVID-19 for prolonged periods and even potentially being harassed by those unsupportive of wearing masks. Hence, we wanted to find a solution that would prevent as many workers as possible from being exposed to danger. Additionally, we wanted to accomplish this goal while being environmentally-friendly in both our final design and process.
## What it does
This project is meant to provide an autonomous alternative to the manual inspection of masks by using computer technology to detect whether a user is wearing a mask properly, improperly, or not at all. To accomplish this, a camera records the user's face, and a trained machine learning algorithm determines whether the user is wearing a mask or not. To conserve energy and help the environment, an infrared sensor is used to detect nearby users, and shuts off the program and other hardware if no one is nearby. Depending on the result, a green LED light shines if the user is wearing a mask correctly while a red LED light shines and a buzzer sounds if it is not worn correctly. Additionally, if the user is not wearing a mask, the mask dispenser automatically activates to dispense a mask to the user's hands.
## How we built it
This project can be divided into two phases: the machine learning part and the physical hardware part. For the machine learning, we trained a YOLOv5 model with PyTorch to detect whether users are wearing a mask or not. To train the algorithm, a database of over 3000 pictures was used as the training data. Then, we used the computer camera to run the algorithm and categorize the resulting video feed into three categories with 0 to 100% confidence.
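A sketch of running the trained detector on a single frame via `torch.hub`; the weights path and the three class names are placeholders, and the 0-1 confidence is simply read as 0-100%.

```python
# Load custom YOLOv5 weights and print the per-detection class and confidence.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="runs/train/exp/weights/best.pt")

def classify_frame(frame):
    results = model(frame)                    # accepts a numpy image
    detections = results.pandas().xyxy[0]     # one row per detection
    for _, det in detections.iterrows():
        label = det["name"]                   # e.g. mask / no_mask / incorrect_mask
        confidence = det["confidence"] * 100
        print(f"{label}: {confidence:.0f}%")
    return detections
```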
The physical hardware part consists of the infrared sensor prefacing the ML algorithm and the sensors and motors that act after obtaining the ML result. Both the sensors and motors were connected to a Raspberry Pi Pico microcontroller and controlled remotely through the computer. To control the sensors, MicroPython (RP2040) and Python were used to read the signal inputs, relay the signals between the Raspberry Pi and the computer, and finally perform sensor and motor outputs upon receiving results from the ML code. 3D modelled hardware was used alongside re-purposed recyclables to build the outer casings of our design.
## Challenges we ran into
The main challenge that the team ran into was to find a reliable method to relay signals between the Raspberry Pi Pico and the computer running the ML program. Originally, we thought that it would be possible to transfer information between the two systems through intermediary text files, but it turned out that the Pico was unable to manipulate files outside of its directory. Additionally, our subsequent idea of importing the Pico .py file into the computer failed as well. Thus, we had to implement USB serial connections to remotely modify the Pico script from within the computer.
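A host-side sketch of the USB serial link we ended up with: the computer sends the ML verdict to the Pico as one line and reads back a status line. The device path and message format are placeholders.

```python
# Exchange one message with the Pico over USB serial using pyserial.
import serial

with serial.Serial("/dev/ttyACM0", 115200, timeout=2) as pico:
    pico.write(b"NO_MASK\n")          # verdict from the vision model
    reply = pico.readline().decode().strip()
    print("Pico says:", reply)        # e.g. "DISPENSED" or "OK"
```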
Additionally, the wiring of the hardware components also proved to be a challenge, since caution must be exercised to prevent the project from overheating. In many cases, this meant using resistors when wiring the sensors and motor together on the breadboard. In essence, we had to be careful when testing our module and pay attention to any functional abnormalities and temperature spikes (which did happen once or twice!)
## Accomplishments that we're proud of
Many of us have only had experience coding either hardware or software separately, whether in classes or in other activities. Thus, the integration of the Raspberry Pi Pico with the machine learning software proved to be a veritable challenge, since none of us were comfortable with it. With the help of mentors, we were proud of how we managed to combine our hardware and software skills to form a coherent product with a tangible purpose. We were even more impressed that this was all learned and done in a short span of 24 hours.
## What we learned
From this project, we primarily learned how to integrate complex software such as machine learning and hardware together as a connected device. Since our team was new to these types of hackathons incorporating software and hardware together, building the project also proved to be a learning experience for us as a glimpse of how disciplines combining the two, such as robotics, function in real-life. Additionally, we also learned how to apply what we learned in class to real-life applications, since a good amount of information used in this project was from taught material, and it was satisfying to be able to visualize the importance of these concepts.
## What's next for AutoMask
Ideally, we would be able to introduce our physical prototype into the real world to realize our initial ambitions for this device. To successfully do so, we must first refine our algorithm's decision bounds so that false positives and especially false negatives are minimized. A local deployment of our device would be our first move to obtain preliminary field results and to expand the training set for future calibrations. For this purpose, we could use our device at a small train station or a bus stop to test it in a controlled manner.
Currently, AutoMask's low-fidelity prototype is only suited for a very specific type of mask dispenser. Our future goal is to make our model flexible to fit a variety of dispensers in a variety of situations. Thus, we must also refine our physical hardware to be industrially acceptable and mass producible to cover the large amount of applications this device potentially has. We want to accomplish this while maintaining our ecologically-friendly approach by continuing to use recycled and recyclable components. |
## Inspiration
**With the world producing more waste than ever recorded, sustainability has become a very important topic of discussion.** Whether social, environmental, or economic, sustainability has become a key factor in how we design products and how we plan for the future. Especially during the pandemic, we turned to becoming more efficient and resourceful with what we had at home. That's where home gardens come in. Many started home gardens as a hobby or a cool way to grow your own food from the comfort of your own home. However, with the pandemic slowly coming to a close, many may no longer have the time to micromanage their plants, and those who are interested in starting this hobby may not have the patience. Enter *homegrown*, an easy way for anyone interested in starting their own mini garden to manage their plants and enjoy the pleasures of gardening.
## What it does
*homegrown* monitors each individual plant, adjusting to the type of plant. Equipped with different sensors, *homegrown* monitors the plant's health, whether that's its exposure to light, moisture, or temperature. When it detects fluctuations in these levels, *homegrown* sends a text to the owner, alerting them about the plant's condition and suggesting changes to alleviate these problems.
## How we built it
*homegrown* was built using Python, an Arduino, and other hardware components. The different sensors connected to the Arduino take measurements and record them. The readings are then sent as one JSON payload to a Python script, where the data is parsed and sent by text to the user through the Twilio API.
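A minimal sketch of the host-side loop: read one JSON line of sensor readings over serial and flag anything outside the plant's preferred ranges before handing the message off to Twilio. The serial port and threshold values are placeholders.

```python
# Parse one JSON reading from the Arduino and check it against thresholds.
import json
import serial

THRESHOLDS = {"moisture": (300, 700), "light": (200, 800), "temperature": (15, 30)}

with serial.Serial("/dev/ttyUSB0", 9600, timeout=5) as arduino:
    reading = json.loads(arduino.readline().decode())
    alerts = [
        f"{key} out of range ({value})"
        for key, value in reading.items()
        if key in THRESHOLDS and not (THRESHOLDS[key][0] <= value <= THRESHOLDS[key][1])
    ]
    if alerts:
        print("Send text:", "; ".join(alerts))  # handed off to the Twilio API
```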
## Challenges we ran into
We originally planned on using CockroachDB as a database but scrapped the idea, since initializing the database and extracting data out of it proved to be too difficult. We ended up using the Arduino to send the data directly to a Python script that handles it. Furthermore, ideation took quite a while because it was all of our first times meeting each other.
## Accomplishments that we're proud of
Forming a team when none of us had ever met, having limited experience, and still building something in the end that brought together each of our respective skills is something that we're proud of. Combining hardware and software was a first for some of us, so we're proud of adapting quickly to cater to each other's strengths.
## What we learned
We learned more about python and its various libraries to build on each other and create more and more complex programs. We also learned about how different hardware components can interact with software components to increase functionality and allow for more possibilities.
## What's next for homegrown
*homegrown* has the possibility to grow bigger, not only in terms of the number of plants it can monitor, but also the amount of data it can take in surrounding each plant. With more data comes more functionality, which allows for more thorough analysis of the plant's conditions and provides a better, more efficient growing experience for the plant and the user. | losing |
## Inspiration
We have all heard about nutrition and health issues from those who surround us. Yes, adult obesity has plateaued since 2003, but it remains at an extremely high rate: two out of three U.S. adults are overweight or obese. If we look at diabetes, it is prevalent in 25.9% of Americans over 65. That's 11.8 million people! Those are the most common instances, and let's not forget about people affected by high blood pressure, allergies, digestive or eating disorders—the list goes on. We've created a user-friendly platform that utilizes Alexa to help users create healthy recipes tailored to their dietary restrictions. The voice interaction allows a broad range of ages to learn how to use our platform. On top of that, we provide a hands-free environment to ease multitasking, and users are more inclined to follow the diet since it's simple and quick to use.
## How we built it
The backend is built with Flask on Python, with the server containerized and deployed on AWS served over nginx and wsgi. We also built this with scale in mind as this should be able to scale to many millions of users, and by containerizing the server with docker and hosting it on AWS, scaling it horizontally is as easy as scaling it vertically, with a few clicks on the AWS dev console.
The front end is powered by Bootstrap and Jinja (a Python templating engine) that interfaces with a MySQL database on AWS through Flask's object-relational mapping.
All in all, Ramsay is a product built on sweat, pressure, lack of sleep and <3
## Challenges we ran into
The deployment pipeline for Alexa is extremely cumbersome due to the fact that Alexa has a separate dev console and debugging has to be done on that page. The way Lambda handles code changes is also extremely inefficient. This took a big toll on the development cycle and caused a lot of frustrating debugging sessions.
It was also very time-consuming for us to manually scrape all the recipe and ingredient data from the web, because there is no open-source recipe API that satisfies our needs. Many of them are either costly or have rate limits on the endpoints for the free tier, which we were not content with because we wanted to provide a wide range of recipe selection for the user.
Scraping different sites gave us a lot of dirty data that required a lot of work to make it usable. We ended up using NLTK to employ noun and entity extraction to get meaningful data from a sea of garbage.
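A small sketch of that cleanup step, assuming plain NLTK part-of-speech tagging: tokenize the scraped text and keep only the nouns as candidate ingredients.

```python
# Extract noun candidates from messy scraped recipe text with NLTK.
import nltk

nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")

def candidate_ingredients(raw_text: str) -> list[str]:
    tokens = nltk.word_tokenize(raw_text)
    tagged = nltk.pos_tag(tokens)
    return [word.lower() for word, tag in tagged if tag.startswith("NN")]

print(candidate_ingredients("Whisk two eggs with flour, butter and a pinch of salt."))
```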
## Accomplishments that we're proud of
We managed to build out an Alexa/Lambda deployment pipeline that utilizes AWS S3 buckets and sshfs. The local source files are mounted on a remote S3 bucket that syncs with the Lambda server, enabling the developer to skip the hassle of manually uploading the files to the Lambda console every time there is a change in the codebase.
We also built up a very comprehensive recipe database with over 10000 recipes and 3000 ingredients that allows the user to have tons of selection.
This is also the first Alexa app we have made that has a well-thought-out user experience, and it works surprisingly well. For once, Alexa is not super confused every time a user asks a question.
## What we learned:
We learned how to web scrape using the NLTK and BeautifulSoup Python libraries. This was essential to create a database containing information about ingredients and recipe steps. We also became more proficient in using Git and SQL. We are now Git sergeants and SQL soldiers.
## What's next for Ramsay:
Make up for the sleep that we missed out on over the weekend :') |
## Inspiration
The idea for Refrigerator Ramsay came from a common problem we all face: food waste. So many times, we forget about ingredients in the fridge, or we don't know how to use them, and they go bad. We wanted to create something that could help us not only use those ingredients but also inspire us to try new recipes and reduce food waste in the process.
## What it does
Refrigerator Ramsay is an AI-powered kitchen assistant that helps users make the most of their groceries. You can take a picture of your fridge or pantry, and the AI will identify the ingredients. Then, it suggests recipes based on what you have. It also has an interactive voice feature, so you can ask for cooking tips or get help with recipes in real-time.
## How we built it
We used **Google Gemini AI** for image recognition to identify ingredients from photos, and then to come up with creative meal ideas. We also added **Hume EVI**, an emotionally intelligent voice AI, to make the experience more interactive. This lets users chat with the AI to get tips, learn new cooking techniques, or personalize recipes.
## Challenges we ran into
One challenge we faced was working with **Hume EVI** for the first time. It was really fun to explore, but there was definitely a learning curve. We had to figure out how to make the voice assistant not just smart but also engaging, making sure it could deliver cooking tips naturally and understand what users were asking. It took some trial and error to get the right balance between information and interaction.
Another challenge was training the **Google Gemini AI** to be as accurate as possible when recognizing ingredients. It wasn't easy to ensure the AI could reliably detect different foods, especially when items were stacked or grouped together in photos. We had to spend extra time fine-tuning the model so that it could handle a variety of lighting conditions and product packaging.
## Accomplishments that we're proud of
We're proud of creating a solution that not only helps reduce food waste but also makes cooking more fun. The AI's ability to suggest creative recipes from random ingredients is really exciting. We're also proud of how user-friendly (and funny) the voice assistant turned out to be, making it feel like you're cooking with a friend.
## What we learned
We learned a lot about how AI can be used to solve everyday problems, like reducing food waste. Working with **Hume EVI** taught us about building conversational AI, which was new for us. It was really fun, but there was definitely a challenge in figuring out how to make the voice interaction feel natural and helpful at the same time.
We also learned about the importance of **training AI models**, especially with the **Gemini AI**. Getting the image recognition to accurately identify different ingredients required us to experiment a lot with the data and train the model to work in a variety of environments. This taught us that accuracy is key when it comes to creating a seamless user experience.
## What's next for Refrigerator Ramsay
Next, we want to improve the AI’s ability to recognize even more ingredients and maybe even offer nutritional advice. We’re also thinking about adding features where the AI can help you plan meals for the whole week based on what you have. Eventually, we’d love to partner with grocery stores to suggest recipes based on store deals, helping users save money too! |
## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors.
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Working with Python data types was difficult to manage, and we were proud to navigate around that. We are also extremely proud to have met a bunch of new people and tackled new challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains. | partial |
# Relive and Relearn
*Step foot into a **living photo album** – a window into your memories of your time spent in Paris.*
## Inspiration
Did you know that 70% of people worldwide are interested in learning a foreign language? However, the most effective learning method, immersion and practice, is often challenging for those hesitant to speak with locals or unable to find the right environment. We set out to solve this problem by letting you step into and interact with memories – even experiences you yourself may not have lived. While practicing your language skills and getting personalized feedback, enjoy the ability to interact and immerse yourself in a new world!
## What it does
Vitre allows you to interact with a photo album containing someone else's memories of their life! We allow you to communicate and interact with characters around you in those memories as if they were your own. At the end, we provide tailored feedback and an AI-backed DELF (Diplôme d'Études en Langue Française) assessment to quantify your French capabilities. Finally, it makes learning languages fun and effective, encouraging users to learn through nostalgia.
## How we built it
We built all of it on Unity, using C#. We leveraged external API’s to make the project happen.
When the user starts speaking, we use OpenAI's Whisper API to transform speech into text.
Then, we fed that text into co:here, with custom prompts so that it could role play and respond in character.
Meanwhile, we check the responses using co:here rerank to track the progress of the conversation, so we know when to move on from the memory.
We store all of the conversation so that we can later use co:here classify to give the player feedback on their grammar and give them a level for their French.
Then, using Eleven Labs, we converted co:here’s text to speech and played it for the player to simulate a real conversation.
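A Python sketch of how the co:here calls fit together (the game itself makes the equivalent HTTP requests from C# in Unity); the model name, score threshold, and exact SDK response shapes are assumptions and may differ by SDK version.

```python
# Role-play reply plus a rerank-based check of whether the memory's goal was reached.
import cohere

co = cohere.Client("COHERE_API_KEY")

def npc_reply(history: str, player_line: str) -> str:
    """In-character reply from the person inside the memory."""
    prompt = (
        "You are a Parisian friend inside a memory. Stay in character and reply in French.\n"
        f"{history}\nPlayer: {player_line}"
    )
    return co.chat(message=prompt).text

def memory_finished(player_line: str, goals: list[str]) -> bool:
    """Check whether the conversation has reached one of the memory's goals."""
    response = co.rerank(
        query=player_line,
        documents=goals,
        top_n=1,
        model="rerank-multilingual-v2.0",  # assumed model name
    )
    return response.results[0].relevance_score > 0.7  # hand-tuned threshold
```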
## Challenges we ran into
VR IS TOUGH – but incredibly rewarding! None of our team knew how to use Unity VR, and the learning curve sure was steep. C# was also a tricky language to get our heads around, but we pulled through! Given that our game is multilingual, we ran into challenges when it came to using LLMs, but we were able to use prompt engineering to generate suitable responses in our target language.
## Accomplishments that we're proud of
* Figuring out how to build and deploy on Oculus Quest 2 from Unity
* Getting over that steep VR learning curve – our first time ever developing in three dimensions
* Designing a pipeline between several APIs to achieve desired functionality
* Developing functional environments and UI for VR
## What we learned
* 👾 An unfathomable amount of **Unity & C#** game development fundamentals – from nothing!
* 🧠 Implementing and working with **Cohere** models – rerank, chat & classify
* ☎️ C# HTTP requests in a **Unity VR** environment
* 🗣️ **OpenAI Whisper** for multilingual speech-to-text, and **ElevenLabs** for text-to-speech
* 🇫🇷🇨🇦 A lot of **French**. Our accents got noticeably better over the hours of testing.
## What's next for Vitre
* More language support
* More scenes for the existing language
* Real time grammar correction
* Pronunciation ranking and rating
* Change memories to different voices
## Credits
We took inspiration from the indie game “Before Your Eyes” – we are big fans! |
## Our Inspiration
We were inspired by apps like Duolingo and Quizlet for language learning, and wanted to extend those experiences to a VR environment. The goal was to gameify the entire learning experience and make it immersive all while providing users with the resources to dig deeper into concepts.
## What it does
EduSphere is an interactive AR/VR language learning VisionOS application designed for the new Apple Vision Pro. It contains three fully developed features: a 3D popup game, a multi-lingual chatbot, and an immersive learning environment. It leverages the visually compelling and intuitive nature of the VisionOS system to target three of the most crucial language learning styles: visual, kinesthetic, and literacy - allowing users to truly learn at their own comfort. We believe the immersive environment will make language learning even more memorable and enjoyable.
## How we built it
We built the VisionOS app using the Beta development kit for the Apple Vision Pro. The front-end and AR/VR components were made using Swift, SwiftUI, Alamofire, RealityKit, and concurrent MVVM design architecture. 3D Models were converted through Reality Converter as .usdz files for AR modelling. We stored these files on the Google Cloud Bucket Storage, with their corresponding metadata on CockroachDB. We used a microservice architecture for the backend, creating various scripts involving Python, Flask, SQL, and Cohere. To control the Apple Vision Pro simulator, we linked a Nintendo Switch controller for interaction in 3D space.
## Challenges we ran into
Learning to build for the VisionOS was challenging mainly due to the lack of documentation and libraries available. We faced various problems with 3D Modelling, colour rendering, and databases, as it was difficult to navigate this new space without references or sources to fall back on. We had to build many things from scratch while discovering the limitations within the Beta development environment. Debugging certain issues also proved to be a challenge. We also really wanted to try using eye tracking or hand gesturing technologies, but unfortunately, Apple hasn't released these yet without a physical Vision Pro. We would be happy to try out these cool features in the future, and we're definitely excited about what's to come in AR/VR!
## Accomplishments that we're proud of
We're really proud that we were able to get a functional app working on the VisionOS, especially since this was our first time working with the platform. The use of multiple APIs and 3D modelling tools was also the amalgamation of all our interests and skillsets combined, which was really rewarding to see come to life. |
## Inspiration
Students are often put into a position where they do not have the time nor experience to effectively budget their finances. This unfortunately leads to many students falling into debt, and having a difficult time keeping up with their finances. That's where wiSpend comes to the rescue! Our objective is to allow students to make healthy financial choices and be aware of their spending behaviours.
## What it does
wiSpend is an Android application that analyses financial transactions of students and creates a predictive model of spending patterns. Our application requires no effort from the user to input their own information, as all bank transaction data is synced in real-time to the application. Our advanced financial analytics allow us to create effective budget plans tailored to each user, and to provide financial advice to help students stay on budget.
## How I built it
wiSpend is built as an Android application that makes REST requests to our hosted Flask server. This server periodically makes requests to the Plaid API to obtain financial information and processes the data. The Plaid API allows us to access major financial institutions' users' banking data, including transactions, balances, assets & liabilities, and much more. We focused on analysing the credit and debit transaction data and applied statistical analytics techniques to identify trends in it. Based on the analysed results, the server determines what financial advice, in the form of a notification, to send to the user at any given point in time.
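A simplified sketch of the kind of trend analysis we run on the transaction feed: group debits by category, compare against the user's budgets, and pick the categories that deserve a nudge. The transaction shape loosely mirrors Plaid's, but the data and budget numbers here are illustrative.

```python
# Flag categories where this month's spending exceeds the planned budget.
from collections import defaultdict

def overspending_categories(transactions, monthly_budgets):
    totals = defaultdict(float)
    for tx in transactions:                      # e.g. {"category": "Food", "amount": 12.5}
        totals[tx["category"]] += tx["amount"]
    return {
        category: (spent, monthly_budgets[category])
        for category, spent in totals.items()
        if category in monthly_budgets and spent > monthly_budgets[category]
    }

alerts = overspending_categories(
    [{"category": "Food", "amount": 62.0}, {"category": "Food", "amount": 45.0}],
    {"Food": 100.0, "Transit": 60.0},
)
print(alerts)  # {'Food': (107.0, 100.0)}
```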
## Challenges I ran into
Integration and creating our data processing algorithm.
## Accomplishments that I'm proud of
This was the first time we as a group brought all our individual work on the project together and successfully integrated it! This is a huge accomplishment for us, as integration is usually the blocking factor for a successful hackathon project.
## What I learned
Interfacing the Android app and the web server was a huge challenge, but it allowed us as developers to find clever solutions to the roadblocks we encountered and thereby develop our own skills.
## What's next for wiSpend
Our next feature would be to build a sophisticated budgeting app to assist users in their budgeting needs. We also plan on creating a mobile UI that can provide even more insights to users in the form of charts, graphs, and infographics, as well as further developing our web platform to create a seamless experience across devices. | winning |
## Inspiration
As university students and soon-to-be graduates, we understand the financial strain that comes along with being a student, especially in terms of commuting. Carpooling has been a long-standing method of getting to a desired destination, but there are very few driving platforms that make the experience better for the carpool driver. Additionally, by encouraging more people to choose carpooling as a way of commuting, we hope to work towards more sustainable cities.
## What it does
FaceLyft is a web app that includes features that allow drivers to lock or unlock their car from their device, as well as request payments from riders through facial recognition. Facial recognition is also implemented as an account verification to make signing into your account secure yet effortless.
## How we built it
We used IBM Watson Visual Recognition as a way to recognize users from a live image, after which they can request money from riders in the carpool by taking a picture of them and calling our API that leverages the Interac e-Transfer API. We utilized Firebase from the Google Cloud Platform and the SmartCar API to control the car. We built our own API using stdlib, which collects information from the Interac, IBM Watson, Firebase, and SmartCar APIs.
## Challenges we ran into
IBM Facial Recognition software isn't quite perfect and doesn't always accurately recognize the person in the images we send it. There were also many challenges that came up as we began to integrate several APIs together to build our own API in Standard Library. This was particularly tough when considering the flow for authenticating the SmartCar as it required a redirection of the url.
## Accomplishments that we're proud of
We successfully got all of our APIs to work together! (SmartCar API, Firebase, Watson, StdLib, Google Maps, and our own Standard Library layer). Another tough feat we accomplished was the entire webcam-to-image-to-API flow, which wasn't trivial to design or implement.
## What's next for FaceLyft
While creating FaceLyft, we created a security API for requesting payment via visual recognition. We believe that this API can be used in many more scenarios than carpooling and hope we can expand it to different use cases. |
## How we built it
The sensors consist of the Maxim Pegasus board and any Android phone with our app installed. The two are synchronized at the beginning, and then by moving the "tape" away from the "measure," we can get an accurate measure of distance, even for non-linear surfaces.
## Challenges we ran into
Sometimes high-variance outputs can come out of the sensors we made use of, such as Android gyroscopes. Maintaining an inertial reference frame from our board to the ground as it was rotated proved very difficult and required the use of quaternion rotational transforms. Using the Maxim Pegasus board was difficult as it is a relatively new piece of hardware, and thus no APIs or libraries have been written for basic functions yet. We had to query accelerometer and gyro data manually from internal IMU registers over I2C.
## Accomplishments that we're proud of
Full integration with the Maxim board and the flexibility to adapt the software to many different handyman-style use cases, e.g. as a table level, compass, etc. We experimented with and implemented various noise filtering techniques such as Kalman filters and low pass filters to increase the accuracy of our data. In general, working with the Pegasus board involved a lot of low-level read-write operations within internal device registers, so basic tasks like getting accelerometer data became much more complex than we were used to.
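A minimal sketch of the simplest of those filters, an exponential low-pass over the raw IMU stream; `alpha` is a hand-picked smoothing factor, and the Kalman filter we also used is a heavier-weight version of the same idea.

```python
# Exponential low-pass filter: each estimate blends the new sample with the
# previous estimate, damping high-frequency sensor noise.
def low_pass(samples, alpha=0.2):
    filtered = []
    estimate = samples[0]
    for x in samples:
        estimate = alpha * x + (1 - alpha) * estimate
        filtered.append(estimate)
    return filtered

noisy_accel = [0.02, 0.35, -0.12, 0.28, 0.05, 0.31]
print(low_pass(noisy_accel))
```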
## What's next
Other possibilities were listed above, along with the potential to make even better estimates of absolute positioning in space through different statistical algorithms. |
## Inspiration
We're college students. After paying tuition and rent and studying for all our courses, we're left with both little time and little money. That's why we're always on the lookout for a deal. We noticed that Google Maps currently integrates ride-sharing services (Lyft, Uber...) when you look up a route. So we thought: why not integrate deals for restaurants, stores, and other retail services? We were honestly surprised that Google hadn't already done this! So we did.
## What it does
We created an app that augments Google Maps with deals. Whenever you search for a restaurant, e.g. "Popeyes", the coupons and deals are automatically aggregated and displayed! In addition, by leveraging the Twilio API we can send users reminders and links to deals. This offers a convenient tool for shoppers to quickly discover deals.
## How we built it
We used React for the frontend and NodeJS for our backend. Our application is deployed on GCP App Engine (thanks for the free credits, Google!) and we use Twilio's API to send reminders (thanks again Twilio, your organizers were extremely helpful and fast). We also planned on using Google's Cloud Vision API to automatically scrape deals from websites (this proved to be too much of a challenge, more on this later).
## Challenges we ran into
It's an understatement, but computer vision is hard. We dove head first into computer vision. None of us had experience with computer vision and we were naive in our initial planning, but there's a lesson in every failure. We gained hands on experience with Google's Cloud Vision API, and we are happy that we attempted it.
We learned that there really isn't a central resource for coupon deals; we had to go to each site or app to manually scrape the coupon data.
## Accomplishments that we're proud of
We're really proud that we finished our application and got (almost) everything to work. We ran into some funny errors with our React deployment on App Engine and with GET request payloads for our backend, which proved to be very interesting to debug and fix.
## What we learned
Although we didn't complete our scraper we learned a great deal about computer vision and Google's Cloud Vision API. We also learned about Twilio's extremely easy to use sms messaging API. We also gained some general experience with provisioning and deploying on GCP (we had more experience with AWS but decided to go with Google because of Cloud Vision).
## What's next for FoodMaps
Coupon aggregation and parsing was a huge hassle for us. We would likely reach out to stores and partner with them to create a standardized API to scrape deals. Maybe even an affiliate link like what Amazon currently offers, so we could get some kickbacks :). Further, we'd like to add analytics on which coupons users used, and even a poll for the most-wanted discounts; we believe these features would be extremely valuable for retailers.
Currently up on <https://constant-wonder-252304.appspot.com> as of 9/8/2019 (likely to be down after) | winning |
## Inspiration
iPonzi started off as a joke between us, but we decided that PennApps was the perfect place to make our dream a reality.
## What it does
The app requires the user to sign up using an email and/or social logins. After purchasing the application and creating an account, you can refer your friends to the app. For every person you refer, you are given $3, and the app costs $5. All proceeds will go to Doctors Without Borders. A leaderboard of the most successful recruiters and the total amount of money donated will be updated.
## How I built it
Google Polymer, service workers, javascript, shadow-dom
## Challenges I ran into
* Learning a new framework
* Live deployment to firebase hosting
## Accomplishments that I'm proud of
* Mobile like experience offline
* App shell architecture and subsequent load times.
* Contributing to pushing the boundaries of web
## What I learned
* Don't put payment API's into production in 2 days.
* DOM module containment
## What's next for iPonzi
* Our first donation
* Expanding the number of causes we support by giving the user a choice of where their money goes.
* Adding addition features to the app
* Production |
## Inspiration
We have all been there: stuck on a task, with no one to turn to for help. We all love wikiHow, but there isn't always a convenient article there for you to follow. So we decided to do something about it! What if we could leverage the knowledge of the entire internet to get the nicely formatted and entertaining tutorials we need? That's why we created wikiNow. With the power of Cohere's natural language processing and stable diffusion, we can combine the intelligence of millions of people to get the tutorials we need.
## What it does
wikiNow is a tool that can generate entire wikiHow articles to answer any question! A user simply has to enter a query and our tool will generate a step-by-step article with images that provides a detailed answer tailored to their exact needs. wikiNow enables users to find information more efficiently and to have a better understanding of the steps involved.
## How we built it
wikiNow was built using a combination of Cohere's natural language processing and stable diffusion algorithms. We trained our models on a large dataset of existing wikiHow articles and used this data to generate new articles and images that are specific to the user's query. The back-end was built using Flask and the front-end was created using Next.js.
## Challenges we ran into
One of the biggest challenges we faced was engineering the prompts that would generate the articles. We had to experiment with a lot of different methods before we found something that worked well with multi-layer prompts. Another challenge was creating a user interface that was both easy to use and looked good. We wanted to make sure that the user would be able to find the information they need without being overwhelmed by the amount of text on the screen.
Properly dealing with Flask concurrency and long-running network requests was another large challenge. For an average wiki page creation, we require ~20 cohere generate calls. In order to make sure the wiki page returns in a reasonable time, we spent a considerable amount of time developing asynchronous functions and multi-threading routines to speed up the process.
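A minimal sketch of that parallelization: run the per-section generation calls concurrently instead of one after another. `generate_section` is a hypothetical stand-in for our real call into the Cohere API.

```python
# Generate all article sections concurrently with a thread pool.
from concurrent.futures import ThreadPoolExecutor

def generate_section(step_title: str) -> str:
    # placeholder for the actual Cohere generation call for one wikiHow step
    return f"Generated text for: {step_title}"

def generate_article(step_titles: list[str]) -> list[str]:
    with ThreadPoolExecutor(max_workers=8) as pool:
        # map preserves input order, so the steps come back in sequence
        return list(pool.map(generate_section, step_titles))

print(generate_article(["Gather your materials", "Prepare the workspace", "Finish up"]))
```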
## Accomplishments that we're proud of
We're proud that we were able to create a tool that can generate high-quality articles. We're also proud of the user interface that we created, which we feel is both easy to use and visually appealing. The generated articles are both hilarious and informative, which was our main goal. We are also super proud of our optimization work. When running in a single thread synchronously, the articles can take up to *5 minutes* to generate. We have managed to bring that down to around **30 seconds**, which is a near 10x improvement!
## What we learned
We learned a lot about using natural language processing and how powerful it can be in real-world applications. We also learned a lot about full-stack web development. For two of us, this was our first time working on a full-stack web application, and we learned a lot about running back-end servers and writing custom APIs. We also solved a lot of unique optimization and threading problems, which taught us a great deal.
## What's next for wikiNow
In the future, we would like to add more features to wikiNow, such as the ability to generate articles in other languages and the ability to generate articles for other types of content, such as recipes or instructions. We would also like to make the articles more interactive so that users can ask questions and get clarification on the steps involved. It would also be handy to add the ability to cache previous user generated articles to make it easier for the project to scale without re-generating existing articles. |
## Inspiration
We wanted to build something that could help people who deal with anxiety and panic attacks. The generous sponsors at PennApps also offered us a lot of APIs, and we wanted to make the best use of them as well.
## What it does
The user can open the app and create a personal account, as well as log emergency contacts for them to call or text in an emergency (e.g. a panic attack). The user can also track the number of panic attacks they have. Finally, should they wish, the user can send their information to a medical professional by sending them a confidentiality document to sign.
## How we built it
The iOS app was built on Swift, using the DocuSign API. The website was made using HTML, CSS, and Javascript.
## Challenges we ran into
Most of the teammates were attending their first hackathon, and it was their first exposure to a lot of these new technologies (e.g. it was Lisa's first time programming in Swift!).
## Accomplishments that we're proud of
Creating both an app to use and a website to market the app. Everyone used their front-end and back-end skills to make both platforms a reality!
## What we learned
Each of us learned something new about a language we didn’t have experience in, like Swift. Since we are all fairly new to hackathons, we learned a lot about working together in a team to create a new project.
## What's next for Still
Incorporating more security with the DocuSign API. Track one's medication more accurately using the CVS API and possibly send notifications to take medication. | partial |
## Inspiration
In the era of the internet and social media, we need to process a large amount of information every day. It would be useful to find a quick way to extract people's sentiments on certain topics.
## What it does
The web app uses the latest Google Natural Language machine learning model to analyze data feeds from both social media and news websites from around the world. A detailed sentiment analysis, with a bar chart and a pie chart, is displayed in the result section.
Major use cases include gauging public opinion on a variety of topics such as events (eg. nwHacks), political candidates, new products, etc. without having to conduct formal surveys.
## How we built it
We used the google natural language API to analyze sentiments and the social media and news API to gather source data. We used the framework of html/python/flask.
## Challenges we ran into
The challenge was to integrate different APIs and make them talk with each other. Creating a web app from scratch was also an issue as none of us have web development experience.
## Accomplishments that we are proud of
The is our first hackathon and we are excited to get something done with a machine learning model.
## What's next for SNARL
We want to extend our analysis to include more parameters and add the option of selecting different sources. | ## Inspiration
We thought it would be cool to use machine learning tools on current events to get a sense of what people are thinking about a particular topic. Specifically, our interest in finance inspired us to create this tool to analyze stock tickers in real time.
## What it does
This app takes in any topic or keyword, sanitizes it, and uses an api to fetch an rss feed of news related to that topic. We then use an api to convert this data to json and grab the content from the news articles. We use a Bayes model ML algorithm to determine the positive or negative sentiment of each news article, and return the average score as a percentage from -100% to 100% as completely negative or positive respectively.
## How we built it
HTML5/CSS3 front-end. Back-end uses Javascript and JQuery on a serverless platform, utilizing multiple REST APIs to perform functions.
## Challenges we ran into
Fetching news articles was hard because there was no direct API. We had to grab the rss and convert it to JSON ourselves. There was also no easily available REST API for machine learning, so we just implemented our own Bayes algorithm based on existing positive and negative training data available online.
## Accomplishments that we're proud of
Creating an app that has a complex algorithm running in the back-end while simultaneously creating a very clean, user-friendly front-end
## What we learned
Machine Learning skills
## What's next for PRogativ
Improving the training model to make it more accurate. Using social media feeds as part of the public sentiment algorithm. | ## Inspiration
Patients usually have to go through multiple diagnosis before getting the right doctor. With the astonishing computational power we have today, we could use predictive analysis to suggest the patients' potential illness.
## What it does
Clow takes a picture of the patient's face during registration and run it through an emotion analysis algorithm. With the "scores" that suggest the magnitude of a certain emotional trait, Clow matches these data with the final diagnosis result given by the doctor to predict illnesses.
## How we built it
We integrated machine learning and emotion analysis algorithms from Microsoft Azure cloud services on our Ionic-based app to predict the trends. We "trained" our machine by pairing the "scores" of images of sick patients with its illness, allowing it to predict illnesses based on the "scores".
## Challenges we ran into
All of us are new to machine learning and this has proved to be a challenge to all of us. Fortunately, Microsoft's representative was really helpful and guided us through the process. We also had a hard time writing the code to upload the image taken from the camera to a cloud server in order to run it through Microsoft's emotion analysis API, since we have to encode the image before uploading it.
## Accomplishments that we're proud of
Learning a new skill over a weekend and deploy it on a working prototype ain't easy. We did that, not one but two skills, over a weekend. And it's machine learning and emotion analysis. And they are actually the main components that powers our product.
## What we learned
We all came in with zero knowledge of machine learning and now we are able to walk away with a good idea of what it is. Well, at least we can visualize it now, and we are excited to work with machine learning and unleash its potential in the future.
## What's next for Clow
Clow needs the support of medical clinics and hospitals in order to be deployed. As the correlation between emotion and illness is still relatively unproven, research studies have to be done in order to prove its effectiveness. It may not be effectively produce results in the beginning, but if Clow analyzes thousands of patients' emotion and illness, it can actually very accurately yield these results. | losing |
## Inspiration
**Handwriting is such a beautiful form of art that is unique to every person, yet unfortunately, it is not accessible to everyone.**
[Parkinson’s](www.parkinson.org/Understanding-Parkinsons/Statistics) affects nearly 1 million people in the United States and more than 6 million people worldwide. For people who struggle with fine motor skills, picking up a pencil and writing is easier said than done. *We want to change that.*
We were inspired to help people who find difficulty in writing, whether it be those with Parkinson's or anyone else who has lost the ability to write with ease. We believe anyone, whether it be those suffering terminal illnesses, amputated limbs, or simply anyone who cannot write easily, should all be able to experience the joy of writing!
## What it does
Hand Spoken is an innovative solution that combines the ease of writing with the beauty of an individual's unique handwriting.
All you need to use our desktop application is an old handwritten letter saved by you! Simply pick up your paper of handwriting (or handwriting of choice) and take a picture. After submitting the picture into our website database, you are all set. Then, simply speak into the computer either using a microphone or a voice technology device. The user of the desktop application will automatically see their text appear on the screen in their own personal handwriting font! They can then save their message for later use.
## How we built it
We created a desktop application using C# with Visual Studio's WinForm framework. Handwriting images uploaded to the application is sent via HTTP request to the backend, where a python server identifies each letter using pytesseract. The recognized letters are used to generate a custom font, which is saved to the server. Future audio files recorded by the frontend are also sent into the backend, at which point AWS Transcribe services are contacted, giving us the transcribed text. This text is then processed using the custom handwriting font, being eventually returned to the frontend, ready to be downloaded by the user.
## Challenges we ran into
One main challenge our team ran into was working with pytesseract. To overcome this obstacle, we made sure we worked collaboratively as a team to divide roles and learn how to use these exciting softwares.
## Accomplishments that we're proud of
We are proud of creating a usable and functional database that incorporates UX/UI design!
## What we learned
Not only did we learn lots about OCR (Optical Character Recognition) and AWS Transcribe services, but we learned how to collaborate effectively as a team and maximize each other's strengths.
## What's next for Hand Spoken
Building upon on our idea and creating accessibility **for all** through the use of technology! | ## Inspiration
Over **15% of American adults**, over **37 million** people, are either **deaf** or have trouble hearing according to the National Institutes of Health. One in eight people have hearing loss in both ears, and not being able to hear or freely express your thoughts to the rest of the world can put deaf people in isolation. However, only 250 - 500 million adults in America are said to know ASL. We strongly believe that no one's disability should hold them back from expressing themself to the world, and so we decided to build Sign Sync, **an end-to-end, real-time communication app**, to **bridge the language barrier** between a **deaf** and a **non-deaf** person. Using Natural Language Processing to analyze spoken text and Computer Vision models to translate sign language to English, and vice versa, our app brings us closer to a more inclusive and understanding world.
## What it does
Our app connects a deaf person who speaks American Sign Language into their device's camera to a non-deaf person who then listens through a text-to-speech output. The non-deaf person can respond by recording their voice and having their sentences translated directly into sign language visuals for the deaf person to see and understand. After seeing the sign language visuals, the deaf person can respond to the camera to continue the conversation.
We believe real-time communication is the key to having a fluid conversation, and thus we use automatic speech-to-text and text-to-speech translations. Our app is a web app designed for desktop and mobile devices for instant communication, and we use a clean and easy-to-read interface that ensures a deaf person can follow along without missing out on any parts of the conversation in the chat box.
## How we built it
For our project, precision and user-friendliness were at the forefront of our considerations. We were determined to achieve two critical objectives:
1. Precision in Real-Time Object Detection: Our foremost goal was to develop an exceptionally accurate model capable of real-time object detection. We understood the urgency of efficient item recognition and the pivotal role it played in our image detection model.
2. Seamless Website Navigation: Equally essential was ensuring that our website offered a seamless and intuitive user experience. We prioritized designing an interface that anyone could effortlessly navigate, eliminating any potential obstacles for our users.
* Frontend Development with Vue.js: To rapidly prototype a user interface that seamlessly adapts to both desktop and mobile devices, we turned to Vue.js. Its flexibility and speed in UI development were instrumental in shaping our user experience.
* Backend Powered by Flask: For the robust foundation of our API and backend framework, Flask was our framework of choice. It provided the means to create endpoints that our frontend leverages to retrieve essential data.
* Speech-to-Text Transformation: To enable the transformation of spoken language into text, we integrated the webkitSpeechRecognition library. This technology forms the backbone of our speech recognition system, facilitating communication with our app.
* NLTK for Language Preprocessing: Recognizing that sign language possesses distinct grammar, punctuation, and syntax compared to spoken English, we turned to the NLTK library. This aided us in preprocessing spoken sentences, ensuring they were converted into a format comprehensible by sign language users.
* Translating Hand Motions to Sign Language: A pivotal aspect of our project involved translating the intricate hand and arm movements of sign language into a visual form. To accomplish this, we employed a MobileNetV2 convolutional neural network. Trained meticulously to identify individual characters using the device's camera, our model achieves an impressive accuracy rate of 97%. It proficiently classifies video stream frames into one of the 26 letters of the sign language alphabet or one of the three punctuation marks used in sign language. The result is the coherent output of multiple characters, skillfully pieced together to form complete sentences
## Challenges we ran into
Since we used multiple AI models, it was tough for us to integrate them seamlessly with our Vue frontend. Since we are also using the webcam through the website, it was a massive challenge to seamlessly use video footage, run realtime object detection and classification on it and show the results on the webpage simultaneously. We also had to find as many opensource datasets for ASL as possible, which was definitely a challenge, since with a short budget and time we could not get all the words in ASL, and thus, had to resort to spelling words out letter by letter. We also had trouble figuring out how to do real time computer vision on a stream of hand gestures of ASL.
## Accomplishments that we're proud of
We are really proud to be working on a project that can have a profound impact on the lives of deaf individuals and contribute to greater accessibility and inclusivity. Some accomplishments that we are proud of are:
* Accessibility and Inclusivity: Our app is a significant step towards improving accessibility for the deaf community.
* Innovative Technology: Developing a system that seamlessly translates sign language involves cutting-edge technologies such as computer vision, natural language processing, and speech recognition. Mastering these technologies and making them work harmoniously in our app is a major achievement.
* User-Centered Design: Crafting an app that's user-friendly and intuitive for both deaf and hearing users has been a priority.
* Speech Recognition: Our success in implementing speech recognition technology is a source of pride.
* Multiple AI Models: We also loved merging natural language processing and computer vision in the same application.
## What we learned
We learned a lot about how accessibility works for individuals that are from the deaf community. Our research led us to a lot of new information and we found ways to include that into our project. We also learned a lot about Natural Language Processing, Computer Vision, and CNN's. We learned new technologies this weekend. As a team of individuals with different skillsets, we were also able to collaborate and learn to focus on our individual strengths while working on a project.
## What's next?
We have a ton of ideas planned for Sign Sync next!
* Translate between languages other than English
* Translate between other sign languages, not just ASL
* Native mobile app with no internet access required for more seamless usage
* Usage of more sophisticated datasets that can recognize words and not just letters
* Use a video image to demonstrate the sign language component, instead of static images | ## Inspiration
More money, more problems.
Lacking an easy, accessible, and secure method of transferring money? Even more problems.
An interesting solution to this has been the rise of WeChat Pay, allowing for merchants to use QR codes and social media to make digital payments.
But where does this leave people without sufficient bandwidth? Without reliable, adequate Wi-Fi, technologies like WeChat Pay and Google Pay simply aren't options. People looking to make money transfers are forced to choose between bloated fees or dangerously long wait times.
As designers, programmers, and students, we tend to think about how we can design tech. But how do you design tech for that negative space? During our research, we found of the people that lack adequate bandwidth, 1.28 billion of them have access to mobile service. This ultimately led to our solution: **Money might not grow on trees, but Paypayas do.** 🍈
## What it does
Paypaya is an SMS chatbot application that allows users to perform simple and safe transfers using just text messages.
Users start by texting a toll free number. Doing so opens a digital wallet that is authenticated by their voice. From that point, users can easily transfer, deposit, withdraw, or view their balance.
Despite being built for low bandwidth regions, Paypaya also has huge market potential in high bandwidth areas as well. Whether you are a small business owner that can't afford a swipe machine or a charity trying to raise funds in a contactless way, the possibilities are endless.
Try it for yourself by texting +1-833-729-0967
## How we built it
We first set up our Flask application in a Docker container on Google Cloud Run to streamline cross OS development. We then set up our database using MongoDB Atlas. Within the app, we also integrated the Twilio and PayPal APIs to create a digital wallet and perform the application commands. After creating the primary functionality of the app, we implemented voice authentication by collecting voice clips from Twilio to be used in Microsoft Azure's Speaker Recognition API.
For our branding and slides, everything was made vector by vector on Figma.
## Challenges we ran into
Man. Where do we start. Although it was fun, working in a two person team meant that we were both wearing (too) many hats. In terms of technical problems, the PayPal API documentation was archaic, making it extremely difficult for us figure out how to call the necessary functions. It was also really difficult to convert the audio from Twilio to a byte-stream for the Azure API. Lastly, we had trouble keeping track of conversation state in the chatbot as we were limited by how the webhook was called by Twilio.
## Accomplishments that we're proud of
We're really proud of creating a fully functioning MVP! All of 6 of our moving parts came together to form a working proof of concept. All of our graphics (slides, logo, collages) are all made from scratch. :))
## What we learned
Anson - As a first time back end developer, I learned SO much about using APIs, webhooks, databases, and servers. I also learned that Jacky falls asleep super easily.
Jacky - I learned that Microsoft Azure and Twilio can be a pain to work with and that Google Cloud Run is a blessing and a half. I learned I don't have the energy to stay up 36 hours straight for a hackathon anymore 🙃
## What's next for Paypaya
More language options! English is far from the native tongue of the world. By expanding the languages available, Paypaya will be accessible to even more people. We would also love to do more with financial planning, providing a log of previous transactions for individuals to track their spending and income. There are also a lot of rough edges and edge cases in the program flow, so patching up those will be important in bringing this to market. | partial |
## Inspiration
We often find downtimes where we want to do some light exercises. Throwing a ball is perfect, but it's not easy to find another person to throw the ball to. We realized that we wanted to make a machine that can substitute the role of a partner... hence the creation of *PitchPartner*!
## What it does
PitchPartner is a motorized machine designed to catch and return a tennis ball automatically. When receiving the ball, the machine keeps a counter of how many throws were made. It then feeds the ball to the bike wheel propeller to be launched back! PitchPartner is portable, making it great to use out in an open field!
## How we built it
The launcher is made with a bike tire connected to a drill that provides high rpm and torque. It's housed in a wooden frame that was nailed together. The catcher is a net (webbing) augmented with Raspberry Pi and sensors, namely the ultrasonic sensor for the speed detection and the button for the counter.
## Challenges we ran into
We initially wanted to make a frisbee thrower and catcher, but due to the complexity arising from the asymmetrical nature of the frisbee (top down), we changed the *trajectory* of our project to use the tennis ball! Because our machine has fast, moving parts, we didn't feel comfortable to potentially have the frisbee be fed into the launcher upside down and thus causing safety problems.
Some of us are relative beginners to coding for Raspberry Pi (python), so we spend a lot of time to get the counter displayed on the 7-segment display. We were able to made it work when the programmer described in detail what type of problem he is encountering. We were able to draw knowledge from another programming language, C, to fix the problem.
## Accomplishments that we're proud of
This was our first online hackathon! Our team was able to distribute work even though the hardware is built at one place. We played to our strengths and made quick decisions, enabling us to finish the PitchPartner on time!
## What we learned
Our team members come from two different cities! One thing we learned is to bounce our ideas and problems off of each other because we all have different strengths and weaknesses. Clear explanation of the problems we faced helped us move closer to finding a solution.
## What's next for PitchPartner
Installing a DC brushless motor can make the design even more compact and stable, not to mention true autonomy of the machine. We are also interested in having the machine change directions, thus capable of throwing left and right. Lastly, because we bought a used bike to extract its wheel, we think it would be very cool to incorporate the rest of the bike body to a steering system for the PitchPartner! | ## Inspiration
We wanted to create something that helped other people. We had so many ideas, yet couldn't stick to one. Luckily, we ended up talking to Phoebe(?) from Hardware and she talked about how using textiles would be great in a project. Something clicked, and we started brainstorming ideas. It ended up with us coming up with this project which could help a lot of people in need, including friends and family close to us.
## What it does
Senses the orientation of your hand, and outputs either a key press, mouse move, or a mouse press. What it outputs is completely up to the user.
## How we built it
Sewed a glove, attached a gyroscopic sensor, wired it to an Arduino Uno, and programmed it in C# and C++.
## Challenges we ran into
Limited resources because certain hardware components were out of stock, time management (because of all the fun events!), Arduino communication through the serial port
## Accomplishments that we're proud of
We all learned new skills, like sewing, coding in C++, and programming with the Arduino to communicate with other languages, like C#. We're also proud of the fact that we actually fully completed our project, even though it's our first hackathon.
## What we learned
~~how 2 not sleep lolz~~
Sewing, coding, how to wire gyroscopes, sponsors, DisguisedToast winning Hack the North.
## What's next for this project
We didn't get to add all the features we wanted, both to hardware limitations and time limitations. Some features we would like to add are the ability to save and load configs, automatic input setup, making it wireless, and adding a touch sensor to the glove. | ## Inspiration
We took inspiration from our experience of how education can be hard. Studies conducted by EdX show that classes that teach quantitative subjects like Mathematics and Physics tend to receive lower ratings from students in terms of engagement and educational capacity than their qualitative counterparts. Of all advanced placement tests, AP Physics 1 receives on average the lowest scores year after year, according to College Board statistics. The fact is, across the board, many qualitative subjects are just more difficult to teach, a fact that is compounded by the isolation that came with remote working, as a result of the COVID-19 pandemic. So, we would like to find a way to promote learning in a fun way.
In keeping with the theme of Ctrl + Alt + Create, we took inspiration from another educational game from the history of computing. In 1991, Microsoft released a programming language and environment called QBASIC to teach first time programmers how to code. One of the demo programs they released with this development environment was a game called Gorillas, an artillery game where two players can guess the velocity and angle in order to try to hit their opponents. We decided to re-imagine this iconic little program from the 90s into a modern networked webgame, designed to teach students kinematics and projectile motion.
## What it does
The goal of our project was to create an educational entertainment game that allows students to better engage in qualitative subjects. We wanted to provide a tool for instructors for both in-classroom and remote education and provide a way to make education more accessible for students attending remotely. Specifically, we focused on introductory high school physics, one of the most challenging subjects to tackle. Similar to Kahoot, teachers can setup a classroom or lobby for students to join in from their devices. Students can join in either as individuals, or as a team. Once a competition begins, students use virtual tape measures to find distances in their surroundings, determining how far their opponent is and the size of obstacles that they need to overcome. Based on these parameters, they can then try out an appropriate angle and calculate an initial velocity to fire their projectiles. Although there is no timer, students are incentivized to work quickly in order to fire off their projectiles before their opponents. Students have a limited number of shots as well, incentivizing them to double-check their work wisely.
## How we built it
We built this web app using HTML, CSS, and Javascript. Our team split up into a Graphics Team and Logics Team. The Logics Team implemented the Kinematics and the game components of this modern recreation of QBASIC Gorillas. The Graphics Team created designs and programmed animations to represent the game logic as well as rendering the final imagery. The two teams came together to make sure everything worked well together.
## Challenges we ran into
We ran into many challenges which include time constraints and our lack of knowledge about certain concepts. We later realized we should have spent more time on planning and designing the game before splitting into teams because it caused problems in miscommunication between the teams about certain elements of the game. Due to time constraints, we did not have time to implement a multiplayer version of the game.
## Accomplishments that we're proud of
The game logically works in single player game. We are proud that we were able to logically implement the entire game, as well as having all the necessary graphics to show its functionality.
## What we learned
We learned the intricacies of game design and game development. Most of us have usually worked with more information-based websites and software technologies. We learned how to make a webapp game from scratch. We also improved our HTML/CSS/Javascript knowledge and our concepts of MVC.
## What's next for Gorillamatics
First we would like to add networking to this game to better meet the goals of increasing connectivity in the classroom as well as sparking a love for Physics in a fun way. We would also like to have better graphics. For the long term, we are planning on adding different obstacles to make different kinematics problems. | losing |
## Inspiration
Many people join their first hackathons worried and don't know what to expect. We created Hack Help to help people come up with ideas for their hacks and to provide them with resources and tips.
## What it does
Hack Help uses openAI to generate specific Hackathon project ideas based on the experience levels of the team, the team's strengths, the theme of the hackathon, and the amount of time they have to build it. It also provides hackers with resources and tips for hackathons and learning how to code in the same place.
## How we built it
We built it using HTML, CSS, JavaScript, and Node.js.
## Challenges we ran into
Figuring out how to connect the chatbot to the API was the main struggle. It took several hours yo resolve the issue, but we got there in the end!
## Accomplishments that we're proud of
We successfully got the program to both export and import information from an external source. We also are pretty proud of our website design.
## What we learned
We learned a lot about Node.js, which is something we haven't had too much experience with in the past. We also learned a lot more about backend and frontend development.
## What's next for Hack Help
Next, we plan on expanding the resource section to provide more specific and targeted. We also plan on expanding this part of the website to make it more interactive.
## Domain Name
hackhelpin.tech | ## Inspiration
We find it hard to attend hackathons when we are without a team. Some teams may lack talent in specific areas, such as front-end, or mobile development.
## What it does
Hackers are able to create profiles containing their skills, interests, and other relevant information. They are given a smooth user interface where they can view other teams, submissions, and possibly request to join them. Likewise, teams looking to recruit extra talent can draw from a pool of unmatched participants.
## How we built it
Back-end is implemented using node.js and mongoDB, and the front-end is designed with a combination of AngularJS, jQuery, JavaScript, and styled with Semantic-UI.
## Challenges we ran into
Integrating AngularJS with Semantic-UI, as it does not contain many compatible components and is still in a development phase. We also had trouble with initially deploying the back-end to a production.
## Accomplishments that we're proud of
Clean, smooth, responsive user inferface.
## What we learned
Semantic-UI, nodeJS backend.
## What's next for hackTogether
The ability to vote on projects, more connectivity between users, add funding functionality to projects. | ## Inspiration
We wanted to create a device that ease the life of people who have disabilities and with AR becoming mainstream it would only be proper to create it.
## What it does
Our AR Headset converts speech to text and then displays it realtime on the monitor to allow the user to read what the other person is telling them making it easier for the first user as he longer has to read lips to communicate with other people
## How we built it
We used IBM Watson API in order to convert speech to text
## Challenges we ran into
We have attempted to setup our system using the Microsoft's Cortana and the available API but after struggling to get the libraries ti work we had to resort to using an alternative method
## Accomplishments that we're proud of
Being able to use the IBM Watson and unity to create a working prototype using the Kinect as the Web Camera and the Oculus rift as the headset thus creating an AR headset
## What we learned
## What's next for Hear Again
We want to make the UI better, improve the speed to text recognition and transfer our project over to the Microsoft Holo Lens for the most nonintrusive experience. | losing |
## Inspiration
Much of our inspiration came from [this SMBC comic](http://www.smbc-comics.com/comic/calculating) and Zach Weinersmith's offer of funding for the idea, "even in stupid form [sic]." We recognize that this comment was likely made in jest, however it was a wonderful idea for a hack and we trusted our ability to implement it in a hackathon setting.
## What it does
Our machine is a shoulder-mounted projector, to be used by a individual. It uses a distance sensor and servo motors to adjust its angle, bringing the projection orthogonal to and focused on the wall behind. From this, a series of equations fly across the wall behind the speaker, lending gravitas to their words or thoughtful silence. This hasn't previously been achieved with our level of portability, meaning that now anyone may have the freedom to appear smarter than they really are.
## How we built it
The main structure of the device is laser-cut medium density fiberboard, with two servo motors to control the projector orientation and another servo to adjust the projector's focus. The servos are controlled by an Arduino Uno, with the projected image computed by a computer on the speaker's person.
## Challenges we ran into
Originally, we had planned to use a camera to detect the corners of the projection and fancy math to determine the exact amount we needed to turn the servos. Around 11:45 last night, we had a revelation. That revelation was… that when a camera and projector are aligned, the image will always appear the same to the camera!
The math that Sahil had spent hours working on, trying to get into a useful state for analysis? Shriyash's wrestling with OpenCV to measure the angles of the corners? Absolutely useless.
And yet, we were determined to meet Zach's challenge. The one thing that remained useful in the wake of this tragedy was Misha's device/structure. The only hardware change we made was adding an ultrasonic distance sensor to replace the functionality we had planned for the camera.
## Accomplishments that we're proud of
Stonks!
We had a fully functional demo, ready *before* hacking ended.
## What we learned
After seeing that nobody else had tried something similar, we should have thought more carefully into exactly why that might be. Instead, we assumed the idea was novel and didn't consider there could be a problem until after we had wasted more than 24 hours on something that could never work.
## What's next for Projected Reality
Stonks! (but up this time) | ## Inspiration
We enjoyed playing the computer party game [*Keep Talking and Nobody Explodes*](http://www.keeptalkinggame.com/) with our friends and decided that a real-life implementation would be more accessible and interesting. It's software brought to life.
## What it does
Each randomly generated "bomb" has several modules that must be defused in order to win the game. Here's the catch: only one person can see and interact with the bomb. The other players have the bomb defusal manual to defuse the bomb and must act as "experts," communicating quickly with the bomb defuser. And you only have room for three errors.
Puzzle-solving, communication, and interpretation skills will be put to the test as players race the five-minute clock while communicating effectively. Here are the modules we built:
* **Information Display** *Sometimes, information is useful.* In this display module, we display the time remaining and the serial number of the bomb. How can you use this information?
* **Simple Wires** *Wires are the basis of all hardware hacks. But sometimes, you have to pull them out.* A schematic is generated, instructing players to set up a variety of colored wires into six pins. There's only one wire to pull out, but which one? Only the "experts" will know, following a series of conditional statements.
* **The Button** *One word. One LED. One button.*
Decode this strange combination and figure out if the button saying "PRESS" should be pressed, or if you should hold it down and light up another LED.
* **Password** *The one time you wouldn't want a correct horse battery.* Scroll through letters with buttons on an LCD display, in hopes of stumbling upon an actual word, then submit it.
* **Simon Says** *The classic childhood toy and perfect Arduino hack, but much, much crueler.* Follow along the flashing LEDs and repeat the pattern - but you must map it to the correct pattern first.
## How we built it
We used six Arduino Unos, with one for each module and one for a central processor to link all of the modules together. Each module is independent, except for two digital outputs indicating the number of strikes to the central processor. On breadboards, we used LEDs, LCD displays, and switches to provide a simple user interface.
## Challenges we ran into
Reading the switches on the Simon Says module, interfacing all of the Arduinos together
## Accomplishments that we're proud of
Building a polished product in a short period of time that made use of our limited resources
## What we learned
How to use Arduinos, the C programming language, connecting digital and analog components
## What's next for Keep Talking Arduino
More modules, packaging and casing for modules, more options for players | ## Inspiration
3D Printing offers quick and easy access to a physical design from a digitized mesh file. Transferring a physical model back into a digitized mesh is much less successful or accessible in a desktop platform. We sought to create our own desktop 3D scanner that could generate high fidelity, colored and textured meshes for 3D printing or including models in computer graphics. The build is named after our good friend Greg who let us borrow his stereocamera for the weekend, enabling this project.
## How we built it
The rig uses a ZED stereocamera driven by a ROS wrapper to take stereo images at various known poses in a spiral which is executed with precision by two stepper motors driving a leadscrew elevator and a turn table for the model to be scanned. We designed the entire build in a high detail CAD using Autodesk Fusion 360, 3D printed L-brackets and mounting hardware to secure the stepper motors to the T-slot aluminum frame we cut at the metal shop at Jacobs Hall. There are also 1/8th wood pieces that were laser cut at Jacobs, including the turn table itself. We designed the power system around an Arduino microcontroller and and an Adafruit motor shield to drive the steppers. The Arduino and the ZED camera are controlled by python over a serial port and a ROS wrapper respectively to automate the process of capturing the images used as an input to OpenMVG/MVS to compute dense point clouds and eventually refined meshes.
## Challenges we ran into
We ran into a few minor mechanical design issues that were unforeseen in the CAD, luckily we had access to a 3D printer throughout the entire weekend and were able to iterate quickly on the tolerancing of some problematic parts. Issues with the AccelStepper library for Arduino used to simultaneously control the velocity and acceleration of 2 stepper motors slowed us down early Sunday evening and we had to extensively read the online documentation to accomplish the control tasks we needed to. Lastly, the complex 3D geometry of our rig (specifically rotation and transformation matrices of the cameras in our defined world coordinate frame) slowed us down and we believe is still problematic as the hackathon comes to a close.
## Accomplishments that we're proud of
We're proud of the mechanical design and fabrication, actuator precision, and data collection automation we achieved in just 36 hours. The outputted point clouds and meshes are still be improved. | losing |
## Inspiration
Members of our team know multiple people who suffer from permanent or partial paralysis. We wanted to build something that could be fun to develop and use, but at the same time make a real impact in people's everyday lives. We also wanted to make an affordable solution, as most solutions to paralysis cost thousands and are inaccessible. We wanted something that was modular, that we could 3rd print and also make open source for others to use.
## What it does and how we built it
The main component is a bionic hand assistant called The PulseGrip. We used an ECG sensor in order to detect electrical signals. When it detects your muscles are trying to close your hand it uses a servo motor in order to close your hand around an object (a foam baseball for example). If it stops detecting a signal (you're no longer trying to close) it will loosen your hand back to a natural resting position. Along with this at all times it sends a signal through websockets to our Amazon EC2 server and game. This is stored on a MongoDB database, using API requests we can communicate between our games, server and PostGrip. We can track live motor speed, angles, and if it's open or closed. Our website is a full-stack application (react styled with tailwind on the front end, node js on the backend). Our website also has games that communicate with the device to test the project and provide entertainment. We have one to test for continuous holding and another for rapid inputs, this could be used in recovery as well.
## Challenges we ran into
This project forced us to consider different avenues and work through difficulties. Our main problem was when we fried our EMG sensor, twice! This was a major setback since an EMG sensor was going to be the main detector for the project. We tried calling around the whole city but could not find a new one. We decided to switch paths and use an ECG sensor instead, this is designed for heartbeats but we managed to make it work. This involved wiring our project completely differently and using a very different algorithm. When we thought we were free, our websocket didn't work. We troubleshooted for an hour looking at the Wifi, the device itself and more. Without this, we couldn't send data from the PulseGrip to our server and games. We decided to ask for some mentor's help and reset the device completely, after using different libraries we managed to make it work. These experiences taught us to keep pushing even when we thought we were done, and taught us different ways to think about the same problem.
## Accomplishments that we're proud of
Firstly, just getting the device working was a huge achievement, as we had so many setbacks and times we thought the event was over for us. But we managed to keep going and got to the end, even if it wasn't exactly what we planned or expected. We are also proud of the breadth and depth of our project, we have a physical side with 3rd printed materials, sensors and complicated algorithms. But we also have a game side, with 2 (questionable original) games that can be used. But they are not just random games, but ones that test the user in 2 different ways that are critical to using the device. Short burst and hold term holding of objects. Lastly, we have a full-stack application that users can use to access the games and see live stats on the device.
## What's next for PulseGrip
* working to improve sensors, adding more games, seeing how we can help people
We think this project had a ton of potential and we can't wait to see what we can do with the ideas learned here.
## Check it out
<https://hacks.pulsegrip.design>
<https://github.com/PulseGrip> | ## Inspiration
More than **2 million** people in the United States are affected by diseases such as ALS, brain or spinal cord injuries, cerebral palsy, muscular dystrophy, multiple sclerosis, and numerous other diseases that impair muscle control. Many of these people are confined to their wheelchairs, some may be lucky enough to be able to control their movement using a joystick. However, there are still many who cannot use a joystick, eye tracking systems, or head movement-based systems.
Therefore, a brain-controlled wheelchair can solve this issue and provide freedom of movement for individuals with physical disabilities.
## What it does
BrainChair is a neurally controlled headpiece that can control the movement of a motorized wheelchair. There is no using the attached joystick, just simply think of the wheelchair movement and the wheelchair does the rest!
## How we built it
The brain-controlled wheelchair allows the user to control a wheelchair solely using an OpenBCI headset. The headset is an Electroencephalography (EEG) device that allows us to read brain signal data that comes from neurons firing in our brain. When we think of specific movements we would like to do, those specific neurons in our brain will fire. We can collect this EEG data through the Brainflow API in Python, which easily allows us to stream, filter, preprocess the data, and then finally pass it into a classifier.
The control signal from the classifier is sent through WiFi to a Raspberry Pi which controls the movement of the wheelchair. In our case, since we didn’t have a motorized wheelchair on hand, we used an RC car as a replacement. We simply hacked together some transistors onto the remote which connects to the Raspberry Pi.
## Challenges we ran into
* Obtaining clean data for training the neural net took some time. We needed to apply signal processing methods to obtain the data
* Finding the RC car was difficult since most stores didn’t have it and were closed. Since the RC car was cheap, its components had to be adapted in order to place hardware pieces.
* Working remotely made designing and working together challenging. Each group member worked on independent sections.
## Accomplishments that we're proud of
The most rewarding aspect of the software is that all the components front the OpenBCI headset to the raspberry-pi were effectively communicating with each other
## What we learned
One of the most important lessons we learned is effectively communicating technical information to each other regarding our respective disciplines (computer science, mechatronics engineering, mechanical engineering, and electrical engineering).
## What's next for Brainchair
To improve BrainChair in future iterations we would like to:
Optimize the circuitry to use low power so that the battery lasts months instead of hours. We aim to make the OpenBCI headset not visible by camouflaging it under hair or clothing. | ## Inspiration
MISSION: Our mission is to create an intuitive and precisely controlled arm for situations that are tough or dangerous for humans to be in.
VISION: This robotic arm application can be used in the medical industry, disaster relief, and toxic environments.
## What it does
The arm imitates the user in a remote destination. The 6DOF range of motion allows the hardware to behave close to a human arm. This would be ideal in environments where human life would be in danger if physically present.
The HelpingHand can be used with a variety of applications, with our simple design the arm can be easily mounted on a wall or a rover. With the simple controls any user will find using the HelpingHand easy and intuitive. Our high speed video camera will allow the user to see the arm and its environment so users can remotely control our hand.
## How I built it
The arm is controlled using a PWM Servo arduino library. The arduino code receives control instructions via serial from the python script. The python script is using opencv to track the user's actions. An additional feature use Intel Realsense and Tensorflow to detect and track user's hand. It uses the depth camera to locate the hand, and use CNN to identity the gesture of the hand out of the 10 types trained. It gave the robotic arm an additional dimension and gave it a more realistic feeling to it.
## Challenges I ran into
The main challenge was working with all 6 degrees of freedom on the arm without tangling it. This being a POC, we simplified the problem to 3DOF, allowing for yaw, pitch and gripper control only. Also, learning the realsense SDK and also processing depth image was an unique experiences, thanks to the hardware provided by Dr. Putz at the nwhacks.
## Accomplishments that I'm proud of
This POC project has scope in a majority of applications. Finishing a working project within the given time frame, that involves software and hardware debugging is a major accomplishment.
## What I learned
We learned about doing hardware hacks at a hackathon. We learned how to control servo motors and serial communication. We learned how to use camera vision efficiently. We learned how to write modular functions for easy integration.
## What's next for The Helping Hand
Improve control on the arm to imitate smooth human arm movements, incorporate the remaining 3 dof and custom build for specific applications, for example, high torque motors would be necessary for heavy lifting applications. | winning |
## Inspiration
The inspiration for PhishNet stemmed from a growing frequency of receiving email phishing scams. As we rely on our emails for important information, such as with our career, academics, and so on, my team and I often encountered suspicious emails that raised doubts about their legitimacy. We realized the importance of having a tool that could analyze the trustworthiness of emails and help users make informed decisions about whether to engage with them or not.
## What it does
PhishNet allows users to upload their exported emails onto the website, which will then scan the file information and give the user a rating or percentage of how trustworthy or legitimate the email senders are, how suspicious they or the links they send are, and as well as checking if any similar scam has been reported.
## How we built it
### Technologies Used
* React for frontend development
* Python for backend
* GoDaddy for domain name
* Auth0 for account sign in/sign up authentication (Unfinished)
* Procreate for mockups/planning
* Canva for logo and branding
### Development Process
1. **Frontend Design**: We started by sketching out the user interface and designing the user experience flow through Procreate and Canva. React was instrumental in creating a sleek and responsive frontend design.
2. **Backend Development**: Using Python, we built the backend infrastructure for handling file uploads, parsing email data, and communicating with the machine learning models.
3. **Unfinished: Sign In Authentication**: Although we were unable to finish its full functionality, we used Auth0 for our sign in and sign up options, in order to provide users with the security they needed as even if it is just uploading emails, there is no denying the website needs to keep the privacy of each user.
## Challenges we ran into
1. **Data Preprocessing**: Cleaning and preprocessing email data to extract relevant features for analysis posed a significant challenge. We had to handle various data formats and ensure consistency in feature extraction.
2. **File Uploading/Input**: We had to try several different libraries/open source code/ alternatives in general that would help us not only provide a clean, efficient file upload functionality, but also one we could use to check for user input validation and respond accordingly.
3. **Finishing Everything**: We took a lot of time to finalise our thoughts and pick the theme we wanted to explore for this year's hackathon. However, that also let to us underestimating how much time we were taking up unknowingly. I think that taught us to be more aware, and conscious of our time.
## Accomplishments that we're proud of
* Creating a user-friendly interface for easy interaction.
* Handling complex data preprocessing tasks efficiently.
* Working as only a two-person team, which the each of us taking on new roles.
## What we learned
Throughout the development process of PhishNet, we gained valuable insights into email security, phishing tactics, and data analysis. We honed our skills in frontend and backend development, as well as machine learning integration. Additionally, we learned about the importance of user feedback and iterative development in creating a robust application.
## What's next for PhishNet | ## Inspiration
We just like technology. I’ve killed many plants over the year and I felt that technology could help save them. People tend to think technology is antithesis of nature. We wanted show people that technology and nature can sometimes have a symbiotic relationship.
## What it does
Pot Buddy helps users take care of their plants by moving the plant towards adequate sunlight, notifying the user when the temperature is not appropriate, and reminding the user to either water his or her plant or hold off on watering said plant. It also allows the user to log information about multiple plants and visualize certain trends about the data of the plant. There is also Google Home integration where the user can find out information about the plant.
## How we built it
In the main base for Pot Buddy, we set up a smart body that is able to hold a variety technology. The metal frame on treads holds four photoresistors, a moisture sensor and a temperature sensor, along with an Arduino board which was used to perform lower level processes like reading the sensors and finding the light and driving motors. It also had a speaker which was used to say Google Text to Speech (GTTS) attached to a Raspberry Pi 3 which was used to calculate higher order logic like whether a value was in a particular threshold and merge all of the sensor data to Firebase.
We chose React Native development for our companion application because of the ease and its cross-platform capabilities. Also, with easy to use frameworks like Expo, app development becomes fun!
Using Google Actions and Dialogflow, we were able to access the information logged and become more user friendly using the Google Home.
All of the designs such as the logo and the pictures of the plants were designed using Adobe Illustrator.
All of the information that was obtained by Pot Buddy and retrieved by either the mobile application or the Google Home was stored into FireBase as it was a real-time database that was capable to integrate well with our many different platforms.
## Challenges we ran into
One of the main challenges that we faced was overcoming our lack of rest throughout the entire event, especially since our entire group had to arrive from out of state for this hackathon. We also struggled with getting so many moving parts working together with the limited resources that we had. Lastly, the most difficult problem was troubleshooting all of the loose wires and sensors so that they were able to work properly.
## Accomplishments that we're proud of
We were particularly proud of creating an entire system of completely different platforms that was able to be combined seamlessly and effectively. In fact, the amount of different parts working together allows Pot Buddy to function efficiently. Additionally, we were proud of our combined ability to focus on this project, proved by the fact that we only slept a combined total of roughly 20 hours throughout the entire event.
## What we learned
We were able to expose ourselves to very new platforms and features such as Adobe Illustrator, Google Actions and Database Modeling, as well as gain valuable experience and new knowledge of other familiar platforms such as React Navigation. We also learned how to better organize our code as it was necessary to do so when working in such a large scale with multiple people. Additionally, we learned how to build a complete system.
## What's next for Pot Buddy
Along with global domination via army of cyborg plants, Pot Buddy will continue towards this goal by having particular options per plant and more Google Home commands to make retrieving data more efficient. | ## Inspiration
We think improving cybersecurity does not always entail passively anticipating possible attacks. It is an equally valid strategy to go on the offensive against the transgressors. Hence, we employed the strategy of the aggressors against themselves --- by making what's basically a phishing bank app that allows us to gather information about potentially stolen phones.
## What it does
Our main app, Bait Master, is a cloud application linked to Firebase. Once the user finishes the initial setup, the app will disguise itself as a banking application with fairly convincing UI/UX with fake bank account information. Should the phone be ever stolen or password-cracked, the aggressor will likely be tempted to take a look at the obvious bank information. When they open the app, they fall for the phishing bait. The app will discreetly take several pictures of the aggressor's face from the front camera, as well as uploading location/time information periodically in the background to Firebase. The user can then check these information by logging in to our companion app --- Trap Master Tracker --- using any other mobile device with the credentials they used to set up the main phishing app, where we use Google Cloud services such as Map API to display the said information.
## How we built it
Both the main app and the companion app are developed in Java Android using Android Studio. We used Google's Firebase as a cloud platform to store user information such as credentials, pictures taken, and location data. Our companion app is also developed in Android and uses Firebase, and it uses Google Cloud APIs such as Map API to display information.
## Challenges we ran into
1) The camera2 library of Android is very difficult to use. Taking a picture is one thing --- but taking a photo secretly without using the native camera intent and to save it took us a long time to figure out. Even now, the front camera configuration sometimes fails in older phones --- we are still trying to figure that out.
2) The original idea was to use Twilio to send SMS messages to the back-up phone number of the owner of the stolen phone. However, we could not find an easy way to implement Twilio in Android Studio without hosting another server, which we think will hinder maintainability. We eventually decided to opt out of this idea as we ran out of time.
## Accomplishments that we're proud of
I think we really pushed the boundary of our Android dev abilities by using features of Android that we did not even know existed. For instance, the main Bait Master app is capable of morphing its own launcher to acquire a new icon as well as a new app name to disguise itself as a banking app. Furthermore, discreetly taking pictures without any form of notification and uploading them is technically challenging, but we pulled it off nonetheless. We are really proud of the product that we built at the end of this weekend.
## What we learned
Appearances can be misleading. Don't trust everything that you see. Be careful when apps ask for access permission that it shouldn't use (such as camera and location).
## What's next for Bait Master
We want to add more system-level mobile device management feature such as remote password reset, wiping sensitive data, etc. We also want to make the app more accessible by adding more disguise appearance options, as well as improving our client support by making the app more easy to understand. | losing |
## Inspiration
With the cost of living increasing yearly and inflation at an all-time high, people need financial control more than ever. The problem is the investment field is not beginner friendly, especially with it's confusing vocabulary and an abundance of concepts creating an environment detrimental to learning. We felt the need to make a clear, well-explained learning environment for learning about investing and money management, thus we created StockPile.
## What it does
StockPile provides a simulation environment of the stock market to allow users to create virtual portfolio's in real time. With relevant information and explanations built into the UX, the complex world of investments is explained in simple words, one step at a time. Users can set up multiple portfolios to try different strategies, learn vocabulary by seeing exactly where the terms apply, and access articles tailored to their actions from the simulator using AI based recommendation engines.
## How we built it
Before starting any code, we planned and prototyped the application using Figma and also fully planned a backend architecture. We started our project using React Native for a mobile app, but due to connection and network issues while collaborating, we moved to a web app that runs on the phone using React.
## Challenges we ran into
Some challenges we faced was creating a minimalist interface without the loss of necessary information, and incorporating both learning and interaction simultaneously. We also realized that we would not be able to finish much of our project in time, so we had to single out what to focus on to make our idea presentable.
## Accomplishments that we're proud of
We are proud of our interface, the depth of which we fleshed out our starter concept, and the ease of access of our program.
## What we learned
We learned about
* Refining complex ideas into presentable products
* Creating simple and intuitive UI/UX
* How to use react native
* Finding stock data from APIs
* Planning backend architecture for an application
## What's next for StockPile
Next up for StockPile would be to actually finish coding the app, preferably in a mobile version over a web version. We would also like to add the more complicated views, such as explanations for candle charts, market volume charts, etc. in our app.
## How StockPile approaches it's challenges:
#### Best Education Hack
Our entire project is based around encouraging, simplifying and personalizing the learning process. We believe that everyone should have access to a learning resource that adapts to them while providing them with a gentle yet complete introduction to investing.
#### MLH Best Use of Google Cloud
Our project uses some google services at it's core.
- GCP App Engine - We can use app engine to host our react frontend and some of our backend.
- GCP Cloud Functions - We can use Cloud Functions to quickly create microservices for different servies, such as backend for fetching stock chart data from FinnHub.
- GCP Compute Engine - To host a CMS for the learn page content, and to host instance of CockroachDB
- GCP Firebase Authentication to authenticate users securely.
- GCP Recommendations AI - Used with other statistical operations to analyze a user's portfolio and present them with articles/tutorials best suited for them in the learn section.
#### MLH Best Use of CockroachDB
CockroachDB is a distributed SQL database - one that can scale. We understand that buying/selling stocks is transactional in nature, and there is no better solution that using a SQL database. Addditionally, we can use CockroachDB as a timeseries database - this allows us to effectively cache stock price data so we can optimze costs of new requests to our stock quote API. | ## Inspiration
With the coming of the IoT age, we wanted to explore the addition of new experiences in our interactions with physical objects and facilitate crossovers from the digital to the physical world. Since paper is a ubiquitous tool in our day to day life, we decided to try to push the boundaries of how we interact with paper.
## What it does
A user places any piece of paper with text/images on it on our clipboard and they can now work with the text on the paper as if it were hyperlinks. Our (augmented) paper allows users to physically touch keywords and instantly receive Google search results. The user first needs to take a picture of the paper being interacted with and place it on our enhanced clipboard and can then go about touching pieces of text to get more information.
## How I built it
We used ultrasonic sensors with an Arduino to determine the location of the user's finger. We used the Google Cloud API to preprocess the paper contents. In order to map the physical (ultrasonic data) with the digital (vision data), we use a standardized 1x1 inch token as a 'measure of scale' of the contents of the paper.
## Challenges I ran into
So many challenges! We initially tried to use a RFID tag but later figured that SONAR works better. We struggled with Mac-Windows compatibility issues and also struggled a fair bit with the 2D location and detection of the finger on the paper. Because of the time constraint of 24 hours, we could not develop more use cases and had to resort to just one.
## What I learned
We learned to work with the Google Cloud Vision API and interface with hardware in Python. We learned that there is a LOT of work that can be done to augment paper and similar physical objects that all of us interact with in the daily world.
## What's next for Augmented Paper
Add new applications to enhance the experience with paper further. Design more use cases for this kind of technology. | ## Inspiration
Success in financial technology used to be completely proportional to the amount of data a company owned. Though the amount of data you have still remains a key factor to financial success, companies are needing to find new ways to apply their data, in order to see increasingly beneficial results.
Our inspiration is rooted in the idea that data is not only about how much you have, but about how well you use what you have. StockPile is an application that focuses on taking data to another level, where companies and individuals will be able to use deeper analysis to gain a competitive advantage in their fields.
## What it does
StockPile is a web-based application that provides a unique spin on managing stock portfolios. By using sentiment analysis, StockPile can plot data about how a company is being reported about in social media (for the purposes of this hack-a-thon, we will pull information from the New York Times). This data will provide a further insight into the potential of success or failure a company may have. The data will be plotted along with each company's stock trends for an easier analysis.
The ultimate goal is to provide the users with a new tool for analyzing stock trends. Since social media can have a serious impact on business, it is logical to think that the positive or negative articles being written about certain companies can have an impact on their shares.
StockPile also provides the ability to simulate the buying and selling of stocks, and to manage a personal portfolio of one's stock data. The application encourages the use of storing a user's stock portfolio (including real dollar amounts), and allowing the user to observe sentiment analysis trends to provide assistance in deciding when to buy or sell stocks.
## How We built it
The implementation of StockPile was divided across the team. One member focused on processing the data through the Node.js server, while the other member focused on implementing front-end logic in JavaScript and EJS templating.
On the back-end, Express routes were set up to provide access to many different views and handle user sessions. Then, the focus shifted to gather data by interacting with the external APIs. New York Times API was used to gather news articles on each company, then these were fed into the Dandelion API for Sentiment Analysis. Once the data was retrieved, it was parsed and put into clean formats for use on the front-end. The server required use of many different "node modules" or npms, in order to drive the development and make data processing easier to manage.
On the front-end, the implementation of Data Tables created an organized way to store and view stock information. It also provided simple access to the sentiment analysis data specific for each stock. Then, the buying and selling of stocks were implemented along with user profiles to keep track of portfolios for each user. The Chart.js library was used to visualize our results, providing a plot of sentiment data, along with the Yahoo! Finance stock trend information.
In terms of technology, Node.js was used as the server, with primarily JavaScript/JQuery as the client. EJS (Embedded JavaScript) was used as a templating engine for different views, and mixed nicely with the CSS of Semantic UI.
## Challenges We ran into
Publicly accessible free APIs caused several issues during the development of this project. Originally, we had chosen to obtain our sentiment analysis information from IBM Watson's AlchemyNews API. This API was perfect for what we wanted to accomplish because it queried it's own storage of news articles, without us having to retrieve the articles ourselves. However, the problem was that the API limit was exceeded after only a few requests, making it a frustrating process to work around. In the end, we ended up using two separate APIs to complete the task, which required a little extra effort.
## Accomplishments that We're proud of
Going into this project, we were not certain that the results we obtained were going to in-fact show accurate results. It was indeed a gamble to assume that we would see stock prices contain a correlation to news articles. We are proud to say that after implementing our application, we were correct to assume that the stock prices would indeed fluctuate with the sentiment analysis retrieved from news articles. This was a huge bonus for us to know that our implementation is indeed useful and correct.
## What We learned
Financial technology has not been a focus of ours, therefore we feel we learned a lot about the financial technology sector. We also learned a lot about different types of sentiment analysis, as we had to go through many external APIs before we found one that suited our needs, after our original API fell through. This was a unique experience, as these APIs certainly provide an advanced way to write code.
## What's next for StockPile
* Implement different types of cognitive analysis to be plotted and observe their trends against stock data
* Implement real-time stock trading to centralize the task of buying and selling with the tools that make the process more efficient
* Implement machine learning to recognize further trends against stock information
* Implement cognitive assistance to help suggest stocks to buy/sell based on stock information and sentimental analysis | partial |
### Inspiration
The way research is funded is harmful to science — researchers seeking science funding can be big losers in the equality and diversity game. We need a fresh ethos to change this.
### What it does
Connexsci is a grant funding platform that generates exposure to undervalued and independent research through graph-based analytics. We've built a proprietary graph representation across 250k research papers that allows for indexing central nodes with highest value-driving research. Our grant marketplace allows users to leverage these graph analytics and make informed decisions on scientific public funding, a power which is currently concentrated in a select few government organizations. Additionally, we employ quadratic funding, a fundraising model that democratizes impact of contributions that has seen mainstream success through <https://gitcoin.co/>.
### How we built it
To gain unique insights on graph representations of research papers, we leveraged Cohere's NLP suite. More specifically, we used Cohere's generate functionality for entity extraction and fine-tuned their small language model with our custom research paper dataset for text embeddings. We created self-supervised training examples where we fine-tuned Cohere's model using extracted key topics given abstracts using entity extraction. These training examples were then used to fine-tune a small language model for our text embeddings.
Node prediction was achieved via a mix of document-wise cosine similarity, and other adjacency matrices that held rich information regarding authors, journals, and domains.
For our funding model, we created a modified version of the Quadratic Funding model. Unlike the typical quadratic funding systems, if the subsidy pool is not big enough to make the full required payment to every project, we can divide the subsidies proportionately by whatever constant makes the total add up to the subsidy pool's budget. For a given scenario, for example, a project dominated the leaderboard with an absolute advantage. The team then gives away up to 50% of their matching pool distribution so that every other project can have a share from the round, and after that we can see an increase of submissions.
The model is then implemented to our Bounty platform where organizers/investors can set a "goal" or bounty for a certain group/topic to be encouraged to research in a specific area of academia. In turn, this allows more researchers of unpopular topics to be noticed by society, as well as allow for advancements in the unpopular fields.
### Challenges we ran into
The entire dataset broke down in the middle of the night! Cohere also gave trouble with semantic search, making it hard to train our exploration model.
### Accomplishments that we're proud of
Parsing 250K+ publications and breaking it down to the top 150 most influential models. Parsing all ML outputs on to a dynamic knowledge graph. Building an explorable knowledge graph that interacts with the bounty backend.
### What's next for Connex
Integrating models directly on the page, instead of through smaller microservices. | ## Inspiration
Our team was determined to challenge a major problem in society, and create a practical solution. It occurred to us early on that **false facts** and **fake news** has become a growing problem, due to the availability of information over common forms of social media. Many initiatives and campaigns recently have used approaches, such as ML fact checkers, to identify and remove fake news across the Internet. Although we have seen this approach become evidently better over time, our group felt that there must be a way to innovate upon the foundations created from the ML.
In short, our aspirations to challenge an ever-growing issue within society, coupled with the thought of innovating upon current technological approaches to the solution, truly inspired what has become ETHentic.
## What it does
ETHentic is a **betting platform** with a twist. Rather than preying on luck, you play against the odds of truth and justice. Users are given random snippets of journalism and articles to review, and must determine whether the information presented within the article is false/fake news, or whether it is legitimate and truthful, **based on logical reasoning and honesty**.
Users must initially trade in Ether for a set number of tokens (0.30ETH = 100 tokens). One token can be used to review one article. Every article that is chosen from the Internet is first evaluated using an ML model, which determines whether the article is truthful or false. For a user to *win* the bet, they must evaluate the same choice as the ML model. By winning the bet, a user will receive a $0.40 gain on bet. This means a player is very capable of making a return on investment in the long run.
Any given article will only be reviewed 100 times by any unique user. Once the 100 cap has been met, the article will retire, and the results will be published to the Ethereum blockchain. The results will include anonymous statistics of ratio of truth:false evaluation, the article source, and the ML's original evaluation. This data is public, immutable, and has a number of advantages. All results going forward will be capable of improving the ML model's ability to recognize false information, by comparing the relationship of assessment to public review, and training the model in a cost-effective, open source method.
To summarize, ETHentic is an incentivized, fun way to educate the public about recognizing fake news across social media, while improving the ability of current ML technology to recognize such information. We are improving the two current best approaches to beating fake news manipulation, by educating the public, and improving technology capabilities.
## How we built it
ETHentic uses a multitude of tools and software to make the application possible. First, we drew out our task flow. After sketching wireframes, we designed a prototype in Framer X. We conducted informal user research to inform our UI decisions, and built the frontend with React.
We used **Blockstack** Gaia to store user metadata, such as user authentication, betting history, token balance, and Ethereum wallet ID in a decentralized manner. We then used MongoDB and Mongoose to create a DB of articles and a counter for the amount of people who have viewed any given article. Once an article is added, we currently outsourced to Google's fact checker ML API to generate a true/false value. This was added to the associated article in Mongo **temporarily**.
Users who wanted to purchase tokens would receive a Metamask request, which would process an Ether transfer to an admin wallet that handles all the money in/money out. Once the payment is received, our node server would update the Blockstack user file with the correct amount of tokens.
Users who perform betting receive instant results on whether they were correct or wrong, and are prompted to accept their winnings from Metamask.
Everytime the Mongo DB updates the counter, it checks if the count = 100. Upon an article reaching a count of 100, the article is removed from the DB and will no longer appear on the betting game. The ML's initial evaluation, the user results, and the source for the article are all published permanently onto an Ethereum blockchain. We used IPFS to create a hash that linked to this information, which meant that the cost for storing this data onto the blockchain was massively decreased. We used Infuria as a way to get access to IPFS without needing a more heavy package and library. Storing on the blockchain allows for easy access to useful data that can be used in the future to train ML models at a rate that matches the user base growth.
As for our brand concept, we used a green colour that reminded us of Ethereum Classic. Our logo is Lady Justice - she's blindfolded, holding a sword in one hand and a scale in the other. Her sword was created as a tribute to the Ethereum logo. We felt that Lady Justice was a good representation of what our project meant, because it gives users the power to be the judge of the content they view, equipping them with a sword and a scale. Our marketing website, ethergiveawayclaimnow.online, is a play on "false advertising" and not believing everything you see online, since we're not actually giving away Ether (sorry!). We thought this would be an interesting way to attract users.
## Challenges we ran into
Figuring out how to use and integrate new technologies such as Blockstack, Ethereum, etc., was the biggest challenge. Some of the documentation was also hard to follow, and because of the libraries being a little unstable/buggy, we were facing a lot of new errors and problems.
## Accomplishments that we're proud of
We are really proud of managing to create such an interesting, fun, yet practical potential solution to such a pressing issue. Overcoming the errors and bugs with little well documented resources, although frustrating at times, was another good experience.
## What we learned
We think this hack made us learn two main things:
1) Blockchain is more than just a cryptocurrency tool.
2) Sometimes even the most dubious subject areas can be made interesting.
The whole fake news problem is something that has traditionally been taken very seriously. We took the issue as an opportunity to create a solution through a different approach, which really stressed the lesson of thinking and viewing things in a multitude of perspectives.
## What's next for ETHentic
ETHentic is looking forward to the potential of continuing to develop the ML portion of the project, and making it available on test networks for others to use and play around with. | ## Inspiration
The volume of academic literature is exploding. In machine learning alone, the number of papers on arXiv doubles every 24 months. As former research interns, we've experienced firsthand the overwhelming challenge of sifting through hundreds of dense, complex papers to find relevant information. This process is not just time-consuming; it's a significant bottleneck in the research workflow, particularly in fast-moving fields like machine learning. We realized that the tools researchers use to navigate this sea of information haven't kept pace with the exponential growth of published research.
## What it does
Scholar-Link is a groundbreaking tool that revolutionizes how researchers interact with scientific literature. At its core, it creates a visual force-directed graph of research papers, leveraging the principles of co-citation and bibliographic coupling to reveal hidden connections and trends.
### Key features include:
* **Interactive Visual Graph**: Papers are represented as nodes, with connections based on co-citation (papers cited together) and bibliographic coupling (papers sharing references). This allows researchers to visually explore the landscape of their field, finding inspiration or papers they might otherwise miss
* **Trend Analysis**: Keyword and time/activity analysis in a graphical representation allows researchers to easily spot emerging trends and hot topics in their field.
* **AI-Powered Insights**: Our specialized AI Chatbot can quickly skim through papers, providing concise summaries of key points, methodologies, and findings
* **Customizable Filters**: Researchers can filter the graph based on various parameters like publication date, citation count, or specific keywords.
## How we built it
We built Scholar-Link using a combination of cutting-edge technologies and novel applications of established bibliometric principles:
* **Natural Language Processing**: We implemented advanced NLP algorithms to analyze paper content, extract key information, and generate summaries.
* **Graph Theory**: We applied complex graph algorithms to create and optimize the force-directed graph based on co-citation and bibliographic coupling data.
* **Front-end Development:** We created an intuitive, responsive user interface using modern web technologies to ensure a smooth user experience.
* **Back-end Infrastructure**: We built a scalable backend to handle large volumes of data and real-time graph computations, leveraging concurrency and multithreading to boost performance
* **Data Collection**: We developed robust web scraping tools to gather paper metadata from various academic databases and repositories.
## Challenges we ran into
* **Data Volume**: Handling and processing the sheer volume of academic papers was a significant challenge. A lot of time and effort was spent figuring out multithreading to optimize processing
* **Graph Optimization**: Ensuring the graph produced correct results, even with massive amounts of nodes, required complex optimization algorithms
* **AI Accuracy**: Implementing an AI summarizer that could accurately capture the essence of complex academic papers across various fields was particularly challenging.
## What we learned
This project deepened our understanding of bibliometrics, graph theory, and natural language processing. We also gained invaluable insights into the needs and pain points of researchers across various disciplines.
## What's next for Scholar-Link
We're excited about the potential of Scholar-Link to transform academic research.
Our next steps include:
* Expanding our database to cover more academic fields and publications.
* Further optimizing our algorithms to analyze larger datasets and deliver more comprehensive results
* Incorporating machine learning to provide personalized paper recommendations.
* Developing collaboration features to allow researchers to share and annotate graphs.
* Creating APIs to integrate our tool with existing research management software.
* Exploring applications in other knowledge-intensive fields beyond academia, such as patent analysis or market research.
Scholar-Link is more than just a tool; it's a new way of seeing and understanding the vast landscape of human knowledge. We're committed to continually improving and expanding our platform to meet the evolving needs of researchers worldwide. | winning |
Our Team's First Hackathon - Treehacks 2024
## Inspiration
Our inspiration for this project was from the memories we recalled when sharing food and recipes with our families. As students, we wanted to create the prototype for an app that could help bring this same joyful experience to others, whether it be sharing a simple recipe to potentially meeting new friends.
## What it does
Our app, although currently limited in functionality, is intended to quickly share recipes while also connecting people through messaging capabilities. Users would be able to create a profile and share their recipes with friends and even strangers. An explore tab and search tab were drafted to help users find new cuisine they otherwise may have never tasted.
## How we built it
We designed the layout of our app in Figma and then implemented the front end and back end in xCode using Swift. The backend was primarily done with Firebase to allow easier account creation and for cloud storage. Through this, we created various structs and fields to allow user input to be registered and seen on other accounts as well. Through the use of Figma, we were able to prototype various cells that would be embedded through a table view for easy scrolling. These pages were created from scratch and with the help of stock-free icons taken from TheNounProject.
## Challenges we ran into
While we were able to create a front end and a back end, we were unable to successfully merge the two leaving us with an incomplete project. For our first hackathon, we were easily ambitious but eager to attempt this. It was our first time doing both the front end and back end with no experience from either teammate. We ended up spending a lot of time putting together the UI, which led to complications with learning SwiftUI. A lot of time was spent learning how to navigate Swift and also looking into integrating our backend. Unfortunately, we were unable to make a functioning app that lined up with our goals.
Alongside our development issues, we ran into problems with GitHub, where pulling new commits would erase our progress on xCode to an earlier version. We were unable to fix this issue and found it easier to start a new repository, which should have been unnecessary but our lack of experience and time led us down this route. We ended up linking the old repository so all commits can still be seen, yet, it was still a challenge learning how to commit and work through various branches.
## Accomplishments that we're proud of
We are proud of completing our first hackathons. We are a team composed entirely of newcomers who didn’t know what we were going to do for our project. We met Friday evening with absolutely no idea in mind, yet we were able to come up with an ambitious idea that was amazing to work on. In addition to this, none of us had any prior experience with Swift, but we managed to get some elements working regardless. Firebase was learned and navigated through as well, even if both ends never ended up meeting. The fact that we were able to accomplish this much in such a short amount of time while learning this new material was truly astonishing. We are proud of the progress we made and our ability to follow through on a project regardless of how ambitious the project was. In the end, we're glad we committed to the journey and exposed ourselves to many elements of app development.
## What we learned
We learned that preparation is key. Before coming here, we didn’t have an idea for our project making the first few hours we had dedicated to brainstorming what we were going to do. Additionally, if we knew we were going to make an iOS app, we would have probably looked at some sort of Swift tutorials in preparation for our project. We underestimated the complexity of turning our UI into a functional app and it brought us many complications. We did not think much of it as it was our first hackathon, but now that we got exposure to what it entails, we are ready for the next one and eager to compete.
## What's Next for Culinary Connections
We hope to fully flesh out this app eventually! We believe this has a lot of potential and have identified a possible user base. We think food is a truly special part of culture and the universal human experience and as such, we feel it is important to spread this magic. A lot of people find comfort in cooking and being able to meet like-minded people through food is truly something special. In just 36 hours, we managed to create a UI, a potential backend, and a skeleton of a frontend. Given more time, with our current passion, we can continue to learn Swift and turn this into a fully-fledged app that could be deployed! We can already think of potential updates such as adding the cooking lessons with a chatbot that remembers your cooking recipes and techniques. It would specialize in identifying mistakes that could occur when cooking and would work to hone a beginner's skills! We hope to continue building upon this as we hope to bring others through food while bringing ourselves new skills as well. | ## Inspiration
We were inspired to make an application which checks news reliability since we noticed the ever increasing amount of misinformation in our digital world.
## What it does
Simply input the news headline into the input box and press submit. The algorithm will then classify it as a reliable, or unreliable source.
## How we built it
We used Tensorflow with the Keras API to build a model with word embeddings in 32 dimensional space. Then we used a dense layer to make the final model predictions Finally, we used flask to connect the model with the front end HTML and CSS code.
## Challenges we ran into
The most challenging part in building the model was dealing with over fitting because of limited initial training data. This caused the model to perform in training, but not work as well in testing.
## Accomplishments that we're proud of
We are most proud of using data augmentation to increase the size of our training set which resulted in the model having 97% accuracy on validation. Our team was also very organized, respectful of each other, and most importantly, we all had a really fun time!
## What we learned
We learned a lot about flask as our team had nearly no experience with technology. We also learned how to integrate our machine learning model into a website!
## What's next for NewsDetectives
We will continue to make our model more accurate, and to make more intricate and interactive website designs! | ## Inspiration
We saw that lots of people were looking for a team to work with for this hackathon, so we wanted to find a solution
## What it does
I helps developers find projects to work, and helps project leaders find group members.
By using the data from Github commits, it can determine what kind of projects a person is suitable for.
## How we built it
We decided on building an app for the web, then chose a graphql, react, redux tech stack.
## Challenges we ran into
The limitations on the Github api gave us alot of troubles. The limit on API calls made it so we couldnt get all the data we needed. The authentication was hard to implement since we had to try a number of ways to get it to work. The last challenge was determining how to make a relationship between the users and the projects they could be paired up with.
## Accomplishments that we're proud of
We have all the parts for the foundation of a functional web app. The UI, the algorithms, the database and the authentication are all ready to show.
## What we learned
We learned that using APIs can be challenging in that they give unique challenges.
## What's next for Hackr\_matchr
Scaling up is next. Having it used for more kind of projects, with more robust matching algorithms and higher user capacity. | losing |
## Inspiration
Keep yourself hydrated!
## What it does
Tracks the amount of water you drank.
## How we built it
We had a Bluetooth connected to the Arduino computer and sent the collected data through the app.
## Challenges we ran into
The initial setup was quite difficult, we weren't sure what tools we were going to use as finding the limitations of these tools could be quite challenging and unexpected.
## Accomplishments that we're proud of
We have successfully managed to combine mechanical, electrical, and software aspect of the project. We are also proud that different university students from different cultural/technical background has gathered up as a team and successfully initiated the project.
## What we learned
We have learned how to maximize the usage of various tools, such as how to calculate the amount of water has been traveled through the tube using the volumetric flow rate and time taken, how to send sensor data to the app, and how to build an app that receives such data while providing intuitive user experience.
## What's next for Hydr8
Smaller component to fit in a bottle, more sensors to increase the accuracy of the tracked data. More integration with the app would also be a huge improvement. | ## Inspiration
memes have become a cultural phenomenon and a huge recreation for many young adults including ourselves. for this hackathon, we decided to connect the sociability aspect of the popular site "twitter", and combine it with a methodology of visualizing the activity of memes in various neighborhoods. we hope that through this application, we can create a multicultural collection of memes, and expose these memes, trending from popular cities to a widespread community of memers.
## What it does
NWMeme is a data visualization of memes that are popular in different parts of the world. Entering the application, you are presented with a rich visual of a map with Pepe the frog markers that mark different cities on the map that has dank memes. Pepe markers are sized by their popularity score which is composed of retweets, likes, and replies. Clicking on Pepe markers will bring up an accordion that will display the top 5 memes in that city, pictures of each meme, and information about that meme. We also have a chatbot that is able to reply to simple queries about memes like "memes in Vancouver."
## How we built it
We wanted to base our tech stack with the tools that the sponsors provided. This started from the bottom with CockroachDB as the database that stored all the data about memes that our twitter web crawler scrapes. Our web crawler was in written in python which was Google gave an advanced level talk about. Our backend server was in Node.js which CockroachDB provided a wrapper for hosted on Azure. Calling the backend APIs was a vanilla javascript application which uses mapbox for the Maps API. Alongside the data visualization on the maps, we also have a chatbot application using Microsoft's Bot Framework.
## Challenges we ran into
We had many ideas we wanted to implement, but for the most part we had no idea where to begin. A lot of the challenge came from figuring out how to implement these ideas; for example, finding how to link a chatbot to our map. At the same time, we had to think of ways to scrape the dankest memes from the internet. We ended up choosing twitter as our resource and tried to come up with the hypest hashtags for the project.
A big problem we ran into was that our database completely crashed an hour before the project was due. We had to redeploy our Azure VM and database from scratch.
## Accomplishments that we're proud of
We were proud that we were able to use as many of the sponsor tools as possible instead of the tools that we were comfortable with. We really enjoy the learning experience and that is the biggest accomplishment. Bringing all the pieces together and having a cohesive working application was another accomplishment. It required lots of technical skills, communication, and teamwork and we are proud of what came up.
## What we learned
We learned a lot about different tools and APIs that are available from the sponsors as well as gotten first hand mentoring with working with them. It's been a great technical learning experience. Asides from technical learning, we also learned a lot of communication skills and time boxing. The largest part of our success relied on that we all were working on parallel tasks that did not block one another, and ended up coming together for integration.
## What's next for NWMemes2017Web
We really want to work on improving interactivity for our users. For example, we could have chat for users to discuss meme trends. We also want more data visualization to show trends over time and other statistics. It would also be great to grab memes from different websites to make sure we cover as much of the online meme ecosystem. | ## Inspiration
We were brainstorming ideas where we came across an application that tracked a users financial balance and displayed their status through a friendly visual representation. As a group, our aim was to create an invention that would help benefit the health of our users. With that being said, as our minds were running hard thinking of ideas our hydration was running thin. Thus, an idea came: why not keep track of water intake and promote the user to stay hydrated through a friendly visual interface.
## What it does
Chug2Puff is a visualization tool to help promote users to keep track of their daily water intake. This application allows the user to input a variety of factors - such as age, gender, weight and exercise activity -and will output a reasonable recommended target of how much water the user should consume. By giving the user a friendly interface that challenges them to meet their target we hope that Chug2Puff will promote a healthier lifestyle in individuals and make them more aware of how much water they are consuming.
## How we built it
This application was coded in MATLAB using their user interface tool. Finding this tool reduced developing time as it contained drag and drop tools to create the GUI. We attempted to use QT Python initially for the GUI but encountered several road blocks.
## Challenges we ran into
The biggest challenge we ran into was figuring out which language to code this tool on. As beginners we were unsure of which language would be friendly to create a GUI and so we decided to try with QT python. That turned out to be very difficult to install and use and so we switched platforms to MATLAB.
## Accomplishments that we're proud of
As beginner hackers we are extremely proud of Chug2Puff. We love how friendly our User Interface is, and how simple it is to keep track of water intake.We also really like our graphic design for the tool!
## What we learned
We learned how to crease a User Interface in MATLAB through their GUIDE tool, which proved to be quite simple to use!
## What's next for Chug2Puff
Chug2Puff has plans to move into the hardware world. That's right. Hardware. We plan to integrate Chug2Puff with a piece of hardware that would be attached to your reusable water bottle. This piece of hardware would track how many bottles of water you drink, and communicate with Chug2Puff without having the user to input data each time they consume water. | partial |
## About the Project
NazAR is an educational tool that automatically creates interactive visualizations of math word problems in AR, requiring nothing more than an iPhone.
## Behind the Name
*Nazar* means “vision” in Arabic, which symbolizes the driving goal behind our app – not only do we visualize math problems for students, but we also strive to represent a vision for a more inclusive, accessible and tech-friendly future for education. And, it ends with AR, hence *NazAR* :)
## Inspiration
The inspiration for this project came from each of our own unique experiences with interactive learning. As an example, we want to showcase two of the team members’ experiences, Mohamed and Rayan’s. Mohamed Musa moved to the US when he was 12, coming from a village in Sudan where he grew up and received his primary education. He did not speak English and struggled until he had an experience with a teacher that transformed his entire learning experience through experiential and interactive learning. From then on, applying those principles, Mohamed was able to pick up English fluently within a few months and reached the top of his class in both science and mathematics. Rayan Ansari had worked with many Syrian refugee students on a catch-up curriculum. One of his students, a 15 year-old named Jamal, had not received schooling since Kindergarten and did not understand arithmetic and the abstractions used to represent it. Intuitively, the only means Rayan felt he could effectively teach Jamal and bridge the connection would be through physical examples that Jamal could envision or interact with. From the diverse experiences of the team members, it was glaringly clear that creating an accessible and flexible interactive learning software would be invaluable in bringing this sort of transformative experience to any student’s work. We were determined to develop a platform that could achieve this goal without having its questions pre-curated or requiring the aid of a teacher, tutor, or parent to help provide this sort of time-intensive education experience to them.
## What it does
Upon opening the app, the student is presented with a camera view, and can press the snapshot button on the screen to scan a homework problem. Our computer vision model then uses neural network-based text detection to process the scanned question, and passes the extracted text to our NLP model.
Our NLP text processing model runs fully integrated into Swift as a Python script, and extracts from the question a set of characters to create in AR, along with objects and their quantities, that represent the initial problem setup. For example, for the question “Sally has twelve apples and John has three. If Sally gives five of her apples to John, how many apples does John have now?”, our model identifies that two characters should be drawn: Sally and John, and the setup should show them with twelve and three apples, respectively.
The app then draws this setup using the Apple RealityKit development space, with the characters and objects described in the problem overlayed. The setup is interactive, and the user is able to move the objects around the screen, reassigning them between characters. When the position of the environment reflects the correct answer, the app verifies it, congratulates the student, and moves onto the next question. Additionally, the characters are dynamic and expressive, displaying idle movement and reactions rather than appearing frozen in the AR environment.
## How we built it
Our app relies on three main components, each of which we built from the ground up to best tackle the task at hand: a computer vision (CV) component that processes the camera feed into text: an NLP model that extracts and organizes information about the initial problem setup; and an augmented-reality (AR) component that creates an interactive, immersive environment for the student to solve the problem.
We implemented the computer vision component to perform image-to-text conversion using the Apple’s Vision framework model, trained on a convolutional neural network with hundreds of thousands of data points. We customize user experience with a snapshot button that allows the student to position their in front of a question and press it to capture an image, which is then converted to a string, and passed off to the NLP model.
Our NLP model, which we developed completely from scratch for this app, runs as a Python script, and is integrated into Swift using a version of PythonKit we custom-modified to configure for iOS. It works by first tokenizing and lemmatizing the text using spaCy, and then using numeric terms as pivot points for a prioritized search relying on English grammatical rules to match each numeric term to a character, an object and a verb (action). The model is able to successfully match objects to characters even when they aren’t explicitly specified (e.g. for Sally in “Ralph has four melons, and Sally has six”) and, by using the proximate preceding verb of each numeric term as the basis for an inclusion-exclusion criteria, is also able to successfully account for extraneous information such as statements about characters receiving or giving objects, which shouldn’t be included in the initial setup. Our model also accounts for characters that do not possess any objects to begin with, but who should be drawn in the display environment as they may receive objects as part of the solution to the question. It directly returns filenames that should be executed by the AR code.
Our AR model functions from the moment a homework problem is read. Using Apple’s RealityKit environment, the software determines the plane of the paper in which we will anchor our interactive learning space. The NLP model passes objects of interest which correspond to particular USDZ assets in our library, as well as a vibrant background terrain. In our testing, we used multiple models for hand tracking and gesture classification, including a CoreML model, a custom SDK for gesture classification, a Tensorflow model, and our own gesture processing class paired with Apple’s hand pose detection library. For the purposes of Treehacks, we figured it would be most reasonable to stick with touchscreen manipulation, especially for our demo that utilizes the iPhone device itself without being worn with a separate accessory. We found this to also provide better ease of use when interacting with the environment and to be most accessible, given hardware constraints (we did not have a HoloKit Apple accessory nor the upcoming Apple AR glasses).
## Challenges we ran into
We ran into several challenges while implementing our project, which was somewhat expected given the considerable number of components we had, as well as the novelty of our implementation.
One of the first challenges we had was a lack of access to wearable hardware, such as HoloKits or HoloLenses. We decided based on this, as well as a desire to make our app as accessible and scalable as possible without requiring the purchase of expensive equipment by the user, to be able to reach as many people who need it as possible.
Another issue we ran into was with hand gesture classification. Very little work has been done on this in Swift environments, and there was little to no documentation on hand tracking available to us. As a result, we wrote and experimented with several different models, including training our own deep learning model that can identify gestures, but it took a toll on our laptop’s resources. At the end we got it working, but are not using it for our demo as it currently experiences some lag. In the future, we aim to run our own gesture tracking model on the cloud, which we will train on over 24,000 images, in order to provide lag-free hand tracking.
The final major issue we encountered was the lack of interoperability between Apple’s iOS development environment and other systems, for example with running our NLP code, which requires input from the computer vision model, and has to pass the extracted data on to the AR algorithm. We have been continually working to overcome this challenge, including by modifying the PythonKit package to bundle a Python interpreter alongside the other application assets, so that Python scripts can be successfully run on the end machine. We also used input and output to text files to allow our Python NLP script to more easily interact with the Swift code.
## Accomplishments we're proud of
We built our computer vision and NLP models completely from the ground up during the Hackathon, and also developed multiple hand-tracking models on our own, overcoming the lack of documentation for hand detection in Swift.
Additionally, we’re proud of the novelty of our design. Existing models that provide interactive problem visualization all rely on custom QR codes embedded with the questions that load pre-written environments, or rely on a set of pre-curated models; and Photomath, the only major app that takes a real-time image-to-text approach, lacks support for word problems. In contrast, our app integrates directly with existing math problems, and doesn’t require any additional work on the part of students, teachers or textbook writers in order to function.
Additionally, by relying only on an iPhone and an optional HoloKit accessory for hand-tracking which is not vital to the application (which at a retail price of $129 is far more scalable than VR sets that typically cost thousands of dollars), we maximize accessibility to our platform not only in the US, but around the world, where it has the potential to complement instructional efforts in developing countries where educational systems lack sufficient resources to provide enough one-on-one support to students. We’re eager to have NazAR make a global impact on improving students’ comfortability and experience with math in coming years.
## What we learned
* We learnt a lot from building the tracking models, which haven’t really been done for iOS and there’s practically no Swift documentation available for.
* We are truly operating on a new frontier as there is little to no work done in the field we are looking at
* We will have to manually build a lot of different architectures as a lot of technologies related to our project are not open source yet. We’ve already been making progress on this front, and plan to do far more in the coming weeks as we work towards a stable release of our app.
## What's next for NazAR
* Having the app animate the correct answer (e.g. Bob handing apples one at a time to Sally)
* Animating algorithmic approaches and code solutions for data structures and algorithms classes
* Being able to automatically produce additional practice problems similar to those provided by the user
* Using cosine similarity to automatically make terrains mirror the problem description (e.g. show an orchard if the question is about apple picking, or a savannah if giraffes are involved)
* And more! | ## Inspiration
Although each of us came from different backgrounds, we each share similar experiences/challenges during our high school years: it was extremely hard to visualize difficult concepts, much less understand the the various complex interactions. This was most prominent in chemistry, where 3D molecular models were simply nonexistent, and 2D visualizations only served to increase confusion. Sometimes, teachers would use a combination of Styrofoam balls, toothpicks and pens to attempt to demonstrate, yet despite their efforts, there was very little effect. Thus, we decided to make an application which facilitates student comprehension by allowing them to take a picture of troubling text/images and get an interactive 3D augmented reality model.
## What it does
The app is split between two interfaces: one for text visualization, and another for diagram visualization. The app is currently functional solely with Chemistry, but can easily be expanded to other subjects as well.
If the text visualization is chosen, an in-built camera pops up and allows the user to take a picture of the body of text. We used Google's ML-Kit to parse the text on the image into a string, and ran a NLP algorithm (Rapid Automatic Keyword Extraction) to generate a comprehensive flashcard list. Users can click on each flashcard to see an interactive 3D model of the element, zooming and rotating it so it can be seen from every angle. If more information is desired, a Wikipedia tab can be pulled up by swiping upwards.
If diagram visualization is chosen, the camera remains perpetually on for the user to focus on a specific diagram. An augmented reality model will float above the corresponding diagrams, which can be clicked on for further enlargement and interaction.
## How we built it
Android Studio, Unity, Blender, Google ML-Kit
## Challenges we ran into
Developing and integrating 3D Models into the corresponding environments.
Merging the Unity and Android Studio mobile applications into a single cohesive interface.
## What's next for Stud\_Vision
The next step of our mobile application is increasing the database of 3D Models to include a wider variety of keywords. We also aim to be able to integrate with other core scholastic subjects, such as History and Math. | ## 💫 Inspiration
It all started when I found VCR tapes of when I was born! I was simply watching the videos fascinated with how much younger everyone looked when I noticed someone unknown, present in the home videos, helping my mom!
After asking my mom, I found out there used to be a program where Nurses/Caretakers would actually make trips to their homes, teaching them how to take care of the baby, and helping them maneuver the first few months of motherhood!
And so, I became intrigued. Why haven't I heard of this before? Why does it not exist anymore? I researched at the federal, provincial and municipal levels to uncover a myriad of online resources available to first-time mothers/parents which aren't well known, and we decided, let's bring it back, better than ever!
## 👶🏻 What BabyBloom does
BabyBloom, is an all-in-one app that targets the needs of first-time mothers in Canada! It provides a simple interface to browse a variety of governmental resources, filtered based off your residential location, and a partnering service with potential caregivers and nurses to help you navigate your very first childbirth.
## 🔨 How we built it
We’re always learning and trying new things! For this app, we aimed to implement an MVC (Model, View, Controller) application structure, and focus on the user's experience and the potential for this project. We've opted for a mobile application to facilitate ease for mothers to easily access it through their phones and tablets. Design-wise, we chose a calming purple monochromatic scheme, as it is one of the main colours associated with pregnancy!
## 😰 Challenges we ran into
* Narrowing the features we intend to provide!
* Specifying the details and specs that we would feed the algorithm to choose the best caregiver for the patient.
* As the app scaled in the prototype, developing the front-end view was becoming increasingly heavier.
## 😤 Accomplishments that we're proud of
This is the first HackTheNorth for many of us, as well as the first time working with people we are unfamiliar with, so we're rather proud of how well we coordinated tasks, communicated ideas and solidified our final product! We're also pretty happy about all the various workshops and events we attended, and the amazing memories we've created.
## 🧠 What we learned
We learned…
* How to scale our idea for the prototype
* How to use AI to create connections between 2 entities
* Figma tips and Know-how to fast-track development
* An approach to modularize solutions
## 💜 What's next for BabyBloom
We can upgrade our designs to full implementation potentially using Flutter due to its cross-platform advantages, and researching the successful implementations in other countries, with their own physical hubs dedicated to mothers during and after their pregnancy! | winning |
## Inspiration
Since the beginning of the hackathon, all of us were interested in building something related to helping the community. Initially we began with the idea of a trash bot, but quickly realized the scope of the project would make it unrealistic. We eventually decided to work on a project that would help ease the burden on both teachers and students through technologies that not only make learning new things easier and more approachable, but also give teachers more opportunities to interact with and learn about their students.
## What it does
We built a Google Action that gives Google Assistant the ability to help the user learn a new language by quizzing the user on words of several languages, including Spanish and Mandarin. In addition to the Google Action, we also built a very PRETTY user interface that allows a user to add new words to the teacher's dictionary.
## How we built it
The Google Action was built using the Google DialogFlow Console. We designed a number of intents for the Action and implemented robust server code in Node.js and a Firebase database to control the behavior of Google Assistant. The PRETTY user interface to insert new words into the dictionary was built using React.js along with the same Firebase database.
## Challenges we ran into
We initially wanted to implement this project using both Android Things and a Google Home. The Google Home would handle verbal interaction and the Android Things screen would display visual information, improving the user's experience. However, we had difficulty with both components, and we eventually decided to focus more on improving the user's experience through the Google Assistant itself rather than through external hardware. We also wanted to interface with an Android Things display to show words on screen, to strengthen the ability to read and write. An interface is easy to code, but a PRETTY interface is not.
## Accomplishments that we're proud of
None of the members of our group were at all familiar with natural language parsing or interactive voice projects. Yet, despite all the early and late bumps in the road, we were still able to create a robust, interactive, and useful piece of software. We all second-guessed our ability to complete this project several times throughout the process, but we persevered and built something we're all proud of. And did we mention again that our interface is PRETTY and approachable?
Yes, we are THAT proud of our interface.
## What we learned
None of the members of our group were familiar with any aspects of this project. As a result, we all learned a substantial amount about natural language processing, serverless code, non-relational databases, JavaScript, Android Studio, and much more. This experience gave us exposure to a number of technologies we would've never seen otherwise, and we are all more capable because of it.
## What's next for Language Teacher
We have a number of ideas for improving and extending Language Teacher. We would like to make the conversational aspect of Language Teacher more natural. We would also like the capability to adjust the Action's behavior based on the student's level. Additionally, we would like to implement the visual interface that we were unable to build with Android Things. Most importantly, we want an analysis of students' performance and responses to better help teachers learn about the level of their students and how best to help them. | ## Inspiration
We were inspired by hard working teachers and students. Although everyone was working hard, there was still a disconnect with many students not being able to retain what they learned. So, we decided to create both a web application and a companion phone application to help target this problem.
## What it does
The app connects students with teachers in a whole new fashion. Students can provide live feedback to their professors on various aspects of the lecture, such as the volume and pace. Professors, on the other hand, get an opportunity to receive live feedback on their teaching style and also give students a few warm-up exercises with a built-in clicker functionality.
The web portion of the project ties the classroom experience to the home. Students receive live transcripts of what the professor is currently saying, along with a summary at the end of the lecture which includes key points. The backend will also generate further reading material based on keywords from the lecture, which will further solidify the students’ understanding of the material.
## How we built it
We built the mobile portion using React Native for the front end and Firebase for the backend. The web app is built with React for the front end and Firebase for the backend. We also implemented a few custom Python modules to facilitate the client-server interaction and ensure a smooth experience for both the instructor and the student.
## Challenges we ran into
One major challenge we ran into was getting and processing live audio and giving a real-time transcription of it to all students enrolled in the class. We were able to solve this issue through a python script that would help bridge the gap between opening an audio stream and doing operations on it while still serving the student a live version of the rest of the site.
## Accomplishments that we’re proud of
Being able to process text data to the point that we were able to get a summary and information on tone/emotions from it. We are also extremely proud of the
## What we learned
We learned more about React and its usefulness when coding in JavaScript, especially when there were many repeating elements in our Material Design. We also learned that first creating a mockup of what we want facilitates coding, as everyone stays on the same page about what is going on and everything that needs to be done is made very evident. We used some APIs, such as the Google Speech-to-Text API and a summary API, and were able to work around their constraints to create a working product. We also learned more about other technologies that we used, such as Firebase, Adobe XD, React Native, and Python.
## What's next for Gradian
The next goal for Gradian is to implement a grading system for teachers that will automatically integrate with their native grading platform so that clicker data and other quiz material can instantly be graded and imported without any issues. Beyond that, we can see the potential for Gradian to be used in office scenarios as well so that people will never miss a beat thanks to the live transcription that happens. | ## Inspiration
We took inspiration from the multitude of apps that help to connect those who are missing to those who are searching for their loved ones and others affected by natural disaster, especially flooding. We wanted to design a product that not only helped to locate those individuals, but also to rescue those in danger. Through the combination of these services, the process of recovering after natural disasters is streamlined and much more efficient than other solutions.
## What it does
Spotted uses a drone to capture and send real-time images of flooded areas. Spotted then extracts human shapes from these images, plots the location of each individual on a map, and assigns each victim a volunteer so that everyone in need of help is covered. Volunteers can see the location of victims in real time through the mobile or web app and are provided with the best routes for the recovery effort.
## How we built it
The backbone of both our mobile and web applications is HERE.com's intelligent mapping API. The two APIs that we used were the Interactive Maps API, to provide a forward-facing client for volunteers to understand how an area is affected by flooding, and the Routing API, to connect volunteers to those in need via the most efficient route possible. We also used machine learning and image recognition to identify victims and where they are in relation to the drone. The app was written in Java, and the mobile site was written with HTML, JS, and CSS.
## Challenges we ran into
All of us had only a little experience with web development, so we had to learn a lot, because we wanted to implement a web app that mirrored the mobile app.
## Accomplishments that we're proud of
We are most proud that our app collects and stores data that is available for flood research, and provides real-time assignments to volunteers to ensure everyone is covered in the shortest time.
## What we learned
We learned a great deal about integrating different technologies, including Xcode. We also learned a lot about web development and the intertwining of different languages and technologies like HTML, CSS, and JavaScript.
## What's next for Spotted
We think the future of Spotted is going to be bright! It is tremendously helpful for its users, and at the same time the program improves its own functionality as the available data increases. We might implement a machine learning feature to better utilize the data and predict the situation in target areas; we believe the accuracy of this prediction will grow exponentially as the data size increases. We will also develop optimization algorithms to provide volunteers with the most efficient solution in real time. Other future development might include involvement with specific charity and research groups, and work on specific locations outside the US. | partial
## Inspiration
Bookshelves are worse than fjords to navigate. There is too much choice, and indecision hits when trying to pick out a cool book at a library or bookstore. Why isn't there an easy way to compare the ratings of different books from just the spine? That's where BookBud comes in. Paper books are a staple part of our lives - everyone has a bookshelf, yet it's hard to find anything on it, and organising it is a very manual process.
## What it does
BookBud is Shazam, but for books. BookBud allows users to click on relevant text relating to their book in a live video stream while they scan the shelves. Without needing to go through the awkward process of googling long book titles or finding the right resource, readers can quickly find useful information on their books.
## How we built it
We built it from the ground up using Swift. The first component takes in camera input. We then use Apple's Vision ML framework to recognise the text within the scene. This text is passed into the second component, which calls the Google Books API to retrieve the data to be displayed.
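The Google Books lookup itself is a single HTTP request. The sketch below is in Python rather than Swift, purely to illustrate the request/response shape we rely on; field names follow the public Books API, and error handling is kept minimal.

```python
import requests

def lookup_book(title_text: str):
    """Query the Google Books volumes endpoint for a recognised title string."""
    resp = requests.get(
        "https://www.googleapis.com/books/v1/volumes",
        params={"q": f"intitle:{title_text}", "maxResults": 1},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json().get("items", [])
    if not items:
        return None
    info = items[0]["volumeInfo"]
    return {
        "title": info.get("title"),
        "authors": info.get("authors", []),
        # averageRating is only present for books that have ratings
        "rating": info.get("averageRating"),
    }

print(lookup_book("The Name of the Wind"))
```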
## Challenges we ran into
We ran into an unusual bug in the process of combining the two halves of our project. The first half was the OCR piece that takes in a photo of a bookshelf and recognises text such as title, author and publisher, and the second half was the piece that speaks directly to the Google client to retrieve details such as average rating, maturity\_level and reviews from text. More generally, we ran into compatibility issues as Apple recently shifted from the pseudo-deprecated UIKit to SwiftUI and this required many hours of tweaking to finally ensure the different components played well together.
We also initially tried to separate each book's spine from the bookshelf image, which can be tackled easily with OpenCV, but we had not written our code in Objective-C++, so it was not compatible with the rest of our project.
## Accomplishments that we're proud of
We successfully learned how to use Apple's Vision ML framework to run OCR on camera input and extract a book title. We also successfully interacted with the Google Books API to retrieve the average rating and title for a book, integrating the two into an interface.
## What we learned
For 3 of the 4 members on the team, it was the first time working with Swift or mobile app development. This proved to be a steep learning curve, but one that was extremely rewarding. Not only was simulation a tool we drew on extensively in our process, but we also learned about the different objects and syntax that Swift uses compared to C.
## What's next for BookBud
There are many technical details BookBud could improve on:
* **Improved UI**: Basic improvements and features include immediately prompting a camera. Booklovers need an endearing UI - simple and intuitive, but also stylish and chic.
* **Recommendations**: Create a recommendation system of books for the reader depending on the books that readers have looked at or wanted more information on in the past, or their past reading history.
* **AR overlay**: Do this in AR, instead of having it be a photo, overlaying each book with a color density that corresponds to the rating or even the "recommendation score" of each book.
* **Image segmentation through bounding boxes**: Automatically detect all books in the live stream and suggest which book has the highest recommendation score. Create a 'find your book' feature that allows you to find a specific book amidst the sea of books in a bookshelf.
* **More ambitious applications**: Transfer the AR overlay of the bookshelf into a metaversal library of people and their books. Avid readers can join international rooms to give book recommendations and talk about their interpretations of material in a friendly, communal fashion. I can imagine individuals wanting NFTs of the bookshelves of celebrities, their families, and friends - there is a distinct intellectual flavor of showing what is on your bookshelf. An NFT book?
Goodreads is far superior to Google Books, so hopefully they start issuing developer keys again! | ## Inspiration
I was inspired to create this app by my aging Chinese grandparents, who are trying to learn English by reading books. I noticed them struggle with certain words and lean in squinting to read the next paragraph. I decided that I could create a better way.
## What it does
Spectacle is an upgrade for your reading glasses. Through OCR, you can take a picture (or choose from your camera roll) of any text in any language, be it book, newspaper, or otherwise, and convert it into accessible text (everything is in Verdana, the most readable screen font). In addition, through English word frequency analysis, we link definitions to the harder words in the text so you can follow along. Everything is customizable: the font size, the difficulty of the text, and even the language the text translates to. Being able to switch back to native Traditional Chinese to solidify their understanding of the text is a blessing for English language learners like my grandparents.
With this list of features:
* Translate images to any language
* From any language
* English word difficulty detector and easy-to-access definitions
* Replace hard words with easier ones inline.
* Accessibility: font and font size
Spectacle can help younger people and non-native English speakers learn English vocabulary as well as help older/near-sighted people read without straining their eyes.
## How I built it
Spectacle is built on top of Expo.io, a convenient framework for coding in React-native yet supporting iOS and Android alike. I decided this would be best as a mobile app because people love to read everywhere, so Expo was definitely a good choice. I used various Google Cloud ML services, including vision and NLP, to render and process the text. Additionally, I used Google Cloud Translate to translate text into other languages. For word frequencies and definitions, I combined the Words.ai API with my own algorithm to determine which words were considered "difficult".
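To picture the difficulty detector, a frequency cut-off is the core idea. The sketch below is an illustration of that idea only - not the exact Words API + custom algorithm the app uses - built on the open-source `wordfreq` package, with the 3.5 Zipf threshold chosen arbitrarily.

```python
from wordfreq import zipf_frequency  # pip install wordfreq

def hard_words(text: str, threshold: float = 3.5):
    """Return words rare enough (low Zipf frequency) to deserve a linked definition."""
    seen, hard = set(), []
    for raw in text.split():
        word = raw.strip(".,;:!?\"'()").lower()
        if not word.isalpha() or word in seen:
            continue
        seen.add(word)
        if zipf_frequency(word, "en") < threshold:
            hard.append(word)
    return hard

print(hard_words("The sagacious tortoise traversed the verdant meadow"))
```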
## Challenges I ran into
Although I had used Expo.io a little in the past, this was my first big project with the framework, so it was challenging to go through the documentation and see what React-native features were supported and what weren't. The same can be said for the Google Cloud Platform. Before this, I had only deployed a Node.js app to the Google App Engine, so getting into it and using all these APIs was definitely tough.
## Accomplishments that I'm proud of
I'm proud that I got through the challenges I listed above and made a beautiful app (along with my team of course) that I will be proud to show to my grandparents. I'm also proud that I set lofty, yet realistic goals for the app and managed to meet them. Most of the time, when my team goes to a hackathon, we end up trying to add too many features and have an unfinished product by the time it's over, so I'm very glad we didn't let it happen this time.
## What I learned
I learned a lot about Google Cloud Platform, Expo.io, and React-native, as well as how to put them together in (maybe not the best) but a working way.
## What's next for Spectacle
I want to add the ability to save images/text for later, so that you can essentially store some reading material for later, and pull it up whenever you want. I also want to further upgrade the hard word detection algorithm that I made. | ## City Bins Roamer : An AI multiplayer game for sustainable cities!
# What it does
We're using Martello's geospatial data to make a Pac-Man-like game taking place across the playing board of Montreal's streets.
The goal: if you play as 'garbage': try to escape from the Intelligent System of Bins!!
If you play as 'bins' (audience) : try to collect the garbage!
The idea: Player (garbage) can navigate around the city of Montreal (Microsoft Bing Maps). There is one place at the map that it is the player's goal. Try to reach it before the bins 'eat' you! :open\_mouth: But the garbage also has to avoid the place on the map that is the Audience's goal!
The goal of the Audience is to prevent Player from reaching his goal. By placing bins, they try to push player towards the Audience's goal.
# How we built it + Implementation
With Python, Javascript, json, CSV, Bing Maps and a lot of frustration.
Because the bins' signals are sometimes weak and noisy, we use Martello's database to support decision making: which provider should we trust more locally, and how much should we trust the signal versus our previous knowledge (somewhat similar to AI concepts like particle filters).
Through the REST API we retrieve information about the city's map structure, which is passed to the pygame framework. All algorithms (navigation, the AI's play style) are implemented from scratch.
Therefore: Microsoft Bing Maps (+ REST API) + Python pygame + Flask + AI
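To make "implemented from scratch" a little more concrete, here is a minimal sketch of the kind of grid navigation the bins rely on: a breadth-first search that returns a bin's next step toward the garbage player. The grid, start, and target are made up for illustration and are not the real Montreal street layout.

```python
from collections import deque

def next_step(grid, start, target):
    """Return the first move on a shortest path from start to target (0 = walkable street)."""
    if start == target:
        return start
    rows, cols = len(grid), len(grid[0])
    queue, parent = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == target:
            while parent[cell] != start:      # walk back to the move right after start
                cell = parent[cell]
            return cell
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and nxt not in parent:
                parent[nxt] = cell
                queue.append(nxt)
    return start                               # no path: stay put

# Hypothetical 3x4 street grid: 0 = street, 1 = blocked.
grid = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(next_step(grid, start=(0, 0), target=(2, 3)))   # -> (0, 1)
```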
# Accomplishments that we're proud of
Parsing the JSON file, being able to understand and analyze its data, and mapping it to Bing Maps; our first touch with JS and Flask.
Best multiplayer with AI game ever!
For the future, we want to combine everything:
* combine the ability to make decisions about the signal with the navigation algorithms.
* combine Bing Maps with pygame (styling, and retrieving data from the map to get the street layout, etc.)
* combine, via Flask, the data from Martello with Bing Maps so that the maps can contain information about signal strength. | losing
## Inspiration
As avid readers, we wanted a tool to track our reading metrics. As a child, one of us struggled with concentrating and focusing while reading. Specifically, there was a strong tendency to zone out. Our app provides the ability for a user to track their reading metrics and also quantify their progress in improving their reading skills.
## What it does
By incorporating AdHawk's eye-tracking hardware into our build, we've developed a reading performance tracker that analyzes reading patterns and behaviours, presenting dynamic second-by-second updates delivered to your phone through our app.
These metrics are calculated through our linear algebraic models, then presented to users in an elegant UI on their phones. We provide an opportunity to identify any areas of potential improvement in a user's reading capabilities.
## How we built it
We used the AdHawk hardware and backend to record eye movements, collecting the data through their Python SDK and feeding it into our mathematical models. From there, we output the results to our Flutter frontend, which displays the metrics and data for the user to see.
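A simplified sketch of one of those models is below, assuming only a stream of (timestamp, x) gaze samples; the real AdHawk SDK calls and our full linear-algebra models are omitted, and the sample data is fake.

```python
def lines_per_minute(gaze_samples, sweep_threshold=0.4):
    """Estimate reading pace from horizontal gaze only.

    gaze_samples: list of (timestamp_seconds, x) with x normalised to [0, 1].
    A large leftward jump in x is treated as a return sweep, i.e. the reader
    jumped back to the start of the next line.
    """
    if len(gaze_samples) < 2:
        return 0.0
    line_breaks = 0
    for (_, prev_x), (_, x) in zip(gaze_samples, gaze_samples[1:]):
        if prev_x - x > sweep_threshold:
            line_breaks += 1
    elapsed_min = (gaze_samples[-1][0] - gaze_samples[0][0]) / 60.0
    return line_breaks / elapsed_min if elapsed_min > 0 else 0.0

# Fake data standing in for the SDK stream: three return sweeps in ~12 seconds.
samples = [(t * 0.5, (t % 8) / 8.0) for t in range(25)]
print(round(lines_per_minute(samples), 1))   # -> 15.0
```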
## Challenges we ran into
Piping in data from Python to Flutter during runtime was slightly frustrating because of the latency issues we faced. Eventually, we decided to use the computer's own local server to accurately display and transfer the data.
## Accomplishments that we're proud of
We're proud of our models for calculating reading speed and detecting page turns and other events purely from changes in eye movement.
## What we learned
We learned that Software Development in teams is best done by communicating effectively and working together with the same final vision in mind. Along with this, we learned that it's extremely critical to plan out small details as well as broader ones to ensure plan execution occurs seamlessly.
## What's next for SeeHawk
We hope to add more metrics to our app, specifically adding a zone-out tracker which would record the number of times a user "zones out". | ## Inspiration
Every musician knows that moment of confusion, that painful silence as onlookers shuffle awkward as you frantically turn the page of the sheet music in front of you. While large solo performances may have people in charge of turning pages, for larger scale ensemble works this obviously proves impractical. At this hackathon, inspired by the discussion around technology and music at the keynote speech, we wanted to develop a tool that could aid musicians.
Seeing AdHawk's MindLink demoed at the sponsor booths ultimately gave us a clear vision for our hack. MindLink, a deceptively ordinary-looking pair of glasses, can track the user's gaze in three dimensions, recognize events such as blinks, and even has an external camera to display the user's view. Blown away by the possibilities and opportunities this device offered, we set out to build a hands-free sheet music tool that simplifies working with digital sheet music.
## What it does
Noteation is a powerful sheet music reader and annotator. All the musician needs to do is to upload a pdf of the piece they plan to play. Noteation then displays the first page of the music and waits for eye commands to turn to the next page, providing a simple, efficient and most importantly stress-free experience for the musician as they practice and perform. Noteation also enables users to annotate on the sheet music, just as they would on printed sheet music and there are touch controls that allow the user to select, draw, scroll and flip as they please.
## How we built it
Noteation is a web app built using React and Typescript. Interfacing with the MindLink hardware was done on Python using AdHawk's SDK with Flask and CockroachDB to link the frontend with the backend.
## Challenges we ran into
One challenge we came across was deciding how to optimally allow the user to turn page using eye gestures. We tried building regression-based models using the eye-gaze data stream to predict when to turn the page and built applications using Qt to study the effectiveness of these methods. Ultimately, we decided to turn the page using right and left wink commands as this was the most reliable technique that also preserved the musicians' autonomy, allowing them to flip back and forth as needed.
Strategizing how to structure the communication between the front and backend was also a challenging problem to work on as it is important that there is low latency between receiving a command and turning the page. Our solution using Flask and CockroachDB provided us with a streamlined and efficient way to communicate the data stream as well as providing detailed logs of all events.
## Accomplishments that we're proud of
We're so proud we managed to build a functioning tool that we genuinely believe is super useful. As musicians, this is something we've legitimately thought would be useful in the past, and being granted access to pioneering technology to make it happen was super exciting - all while working with a piece of cutting-edge hardware we had zero experience using before this weekend.
## What we learned
One of the most important things we learnt this weekend was the set of best practices to use when collaborating on a project under a time crunch. We also learnt to trust each other to deliver on our sub-tasks and helped where we could. The most exciting thing we learnt while picking up these cool technologies is that the opportunities in tech are endless and the impact is limitless.
## What's next for Noteation: Music made Intuitive
Some immediate features we would like to add to Noteation are enabling users to save the PDF with their annotations and adding a landscape mode where pages can be displayed two at a time. We would also really like to explore more features of MindLink and allow users to customize their control gestures. There's even the possibility of expanding the feature set beyond just changing pages, especially for non-classical musicians who might have other electronic devices to control. The possibilities really are endless and are super exciting to think about! | ## Inspiration
Our inspiration came from all the new AI tools to assist in studying. We had previously seen a note-taking Chrome extension that can read and summarize slides, and we thought "How could we make this even more convenient?". Nothing is more convenient than using your eyes, so we decided to make a note-taking application that uses eye tracking and eye gestures to record and summarize notes.
## What it does
Studeye was designed to allow users to take notes and look up definitions in a moment's time. The user's eyes are tracked by MindLink glasses, and the gaze is mapped to the user's screen in order to determine what is being read. The user can then wink or use other facial gestures to take notes using AI tools.
## How we built it
We built this project using Next.js and TailwindCSS for the website; for the actual gaze tracking, we used Python and the AdHawk SDK.
## Challenges we ran into
Our largest challenge was undoubtedly the lack of documentation for the Adhawk SDK. This made working with the MindLink glasses an incredibly taxing task, and more or less required us to learn through trial and error. In addition, the glasses began behaving abnormally on the last night. We were unable to receive a replacement, nor any help, as the Adhawk team had unfortunately already left at this point.
## Accomplishments that we're proud of
We're proud of the fact that we have a working front end, as well as all of the AI components. Also given the complexity of the eye tracking, we're proud of the progress that we did make.
## What we learned
We learned the utmost importance of documentation when choosing which libraries to use for a project. We completely lacked the resources to figure out how to fully build our idea in the time given; our mistake was not fully realizing this and adjusting early on.
## What's next for Studeye
The next step for Studeye is to completely polish the eye tracking so that it is ready for deployment. | winning
## Inspiration
In a world full of information but short on time - we are all so busy and occupied - we wanted to create a tool that helps students make the most of their learning, quickly and effectively. We imagined a platform that could turn complex material into digestible, engaging content tailored for a fast-paced generation.
## What it does
Lemme Learn More (LLM) transforms study materials into bite-sized, TikTok-style videos or reels, flashcards, and podcasts that are trendy and attractive. Whether it's preparing for exams or trying to stay informed on the go, LLM breaks down information into formats that match how today's students consume content. If you're an avid listener of podcasts during your commute to work - this is the best platform for you.
## How we built it
We built LLM using a combination of AI-powered tools: OpenAI for summaries, Google TTS for podcasts, and PyPDF2 to pull data from PDFs. The backend runs on Flask, while the frontend is React.js, making the platform both interactive and scalable. We also used Fetch.ai AI agents, deployed to their testnet (blockchain).
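A condensed sketch of the notes pipeline is below: pull raw text from an uploaded PDF with PyPDF2, then ask the model for a bite-sized summary. The model name and prompt are placeholders rather than exactly what we ship.

```python
from PyPDF2 import PdfReader
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarize_pdf(path: str) -> str:
    # Extract raw text from every page of the uploaded PDF.
    reader = PdfReader(path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)
    # Ask the model for a short summary; prompt and model are illustrative.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Summarize study material into short, punchy bullet points."},
            {"role": "user", "content": text[:12000]},  # stay within the context window
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize_pdf("lecture_notes.pdf"))
```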
## Challenges we ran into
With highly limited time, we ran into a deployment challenge: we had difficulty setting up on the Heroku cloud platform. It was an internal configuration issue where we were supposed to change config files - one of us personally spent 5 hours on it, and the rest of the team spent time on it as well. We could not figure it out by the time the hackathon ended, so we decided not to deploy today.
In the brainrot generator module, the audio timing could not be matched with the captions. This is something for future scope.
One of the other biggest challenges was integrating the sponsor Fetch.ai Agentverse AI agents, which we did locally and are proud of!
## Accomplishments that we're proud of
Our biggest accomplishment is that we were able to run and integrate all three of our modules, along with a working front end!
## What we learned
We learned that we cannot know everything - and cannot fix every bug in a limited time frame. It is okay to fail and it is more than okay to accept it and move on - work on the next thing in the project.
## What's next for Lemme Learn More (LLM)
Coming next:
1. realistic podcast with next gen TTS technology
2. shorts/reels videos adjusted to the trends of today
3. Mobile app if MVP flies well! | ## Inspiration
Today, anything can be learned on the internet with just a few clicks. Information is accessible anywhere and everywhere- one great resource being Youtube videos. However accessibility doesn't mean that our busy lives don't get in the way of our quest for learning.
TLDR: Some videos are too long, and so we didn't watch them.
## What it does
TLDW - Too Long; Didn't Watch is a simple and convenient web application that turns Youtube and user-uploaded videos into condensed notes categorized by definition, core concept, example and points. It saves you time by turning long-form educational content into organized and digestible text so you can learn smarter, not harder.
## How we built it
First, our program either takes in a YouTube link and converts it into an MP3 file or prompts the user to upload their own MP3 file. Next, the audio file is transcribed with AssemblyAI's transcription API. The transcription is then fed into Co:here's Generate, then Classify, then Generate again to summarize the text, organize it by type of point (main concept, point, example, definition), and extract key terms. The processed notes are then displayed on the website and rendered into a PDF file the user can download. The Python backend, built with Django, is connected to a ReactJS frontend for an optimal user experience.
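In outline, the backend pipeline looks roughly like the sketch below: upload the MP3 to AssemblyAI, poll for the transcript, then hand the text to Co:here Generate (Classify works the same way with labelled examples). Keys are placeholders and the API details are simplified.

```python
import time
import requests
import cohere

AAI_HEADERS = {"authorization": "ASSEMBLYAI_KEY"}

def transcribe(mp3_path: str) -> str:
    # 1) Upload the audio, 2) request a transcript, 3) poll until it is ready.
    with open(mp3_path, "rb") as f:
        upload = requests.post("https://api.assemblyai.com/v2/upload",
                               headers=AAI_HEADERS, data=f).json()["upload_url"]
    job = requests.post("https://api.assemblyai.com/v2/transcript",
                        headers=AAI_HEADERS, json={"audio_url": upload}).json()
    while True:
        poll = requests.get(f"https://api.assemblyai.com/v2/transcript/{job['id']}",
                            headers=AAI_HEADERS).json()
        if poll["status"] in ("completed", "error"):
            return poll.get("text", "")
        time.sleep(3)

def summarize(transcript: str) -> str:
    co = cohere.Client("COHERE_KEY")
    resp = co.generate(prompt=f"Summarize this lecture into concise notes:\n{transcript[:8000]}",
                       max_tokens=300)
    return resp.generations[0].text

print(summarize(transcribe("lecture.mp3")))
```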
## Challenges we ran into
Manipulating Co:here's NLP APIs to generate good responses was certainly our biggest challenge. With a lot of experimentation *(and exploration)* and finding patterns in our countless test runs, we were able to develop an effective note generator. We also had trouble integrating the many parts as it was our first time working with so many different APIs, languages, and frameworks.
## Accomplishments that we're proud of
Our greatest accomplishment was also our greatest challenge. The TLDW team is proud of the smooth integration of the different APIs, languages, and frameworks that ultimately permitted us to run our MP3 file through many different processes - across JavaScript and Python - to our final PDF product.
## What we learned
As this was the 1st or 2nd hackathon for our team of first-year university students, we learned a wealth of technical knowledge, as well as what it means to work in a team. While every member tackled an unfamiliar API, language, or framework, we also learned the importance of communication. Helping your team members understand your own work is how the bigger picture of TLDW comes to fruition.
## What's next for TLDW - Too Long; Didn't Watch
Currently, TLDW generates a useful PDF of condensed notes in the same order as the video. Looking ahead, TLDW hopes to become a platform that provides students with more tools to work smarter, not harder: providing a flashcard option to test the user on generated definitions, and ultimately using the Co:here API to read out questions based on the generated examples and points. | ## Inspiration
The inspiration for DigiSpotter came from our own team members, who started going to the gym in the past year. We agreed that starting out in the gym is hard without a personal coach or a gym partner who is willing to train you. DigiSpotter aims to solve this issue by being your electronic gym partner that can keep track of your workout and check your form in real time to ensure you are training safely and optimally.
## What it does
DigiSpotter uses your phone's camera to create a skeletal model of yourself as you are performing an exercise and will compare it across various parameters to the optimal form. If you are doing something wrong DigiSpotter will let you know after each set. It can detect errors such as suboptimal range of motion, incorrect extension, and incorrect positioning of body parts. It also counts for you as you are working out, and will automatically start a rest timer after each set. All you have to do is leave your phone in front of you as if you are taking a video of yourself working out. The results of your workout are saved to your account in the app's database to track improvements in mobility and general gym progress.
## How we built it
We created this app using Swift for the backend and SwiftUI for the frontend. We use ARKit on iPhone to build our position-tracking model for various body positions, since running natively drastically increases the performance of the app and the accuracy of the tracking. Using this position-tracking model, we can calculate the relative angles of body parts and determine their deviation from an optimal angle.
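The angle check itself is plain vector math. The sketch below shows it in Python for brevity (the app computes it in Swift on ARKit joint positions); the joint coordinates and the 90-degree target are illustrative.

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by points a-b-c, each an (x, y, z) tuple."""
    v1 = tuple(ai - bi for ai, bi in zip(a, b))
    v2 = tuple(ci - bi for ci, bi in zip(c, b))
    dot = sum(x * y for x, y in zip(v1, v2))
    norm = math.sqrt(sum(x * x for x in v1)) * math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# Made-up hip/knee/ankle positions for a single frame.
hip, knee, ankle = (0.0, 1.0, 0.0), (0.0, 0.5, 0.1), (0.0, 0.0, 0.0)
angle = joint_angle(hip, knee, ankle)
print(f"knee angle: {angle:.1f} deg, deviation from 90 deg target: {abs(angle - 90):.1f}")
```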
## Challenges we ran into
ARKit is designed more for creating virtual avatars from motion-capture data than for interpreting the relative movement of body parts, so in order to create a skeleton that fit our needs we had to make many changes compared to any other tracking library.
## Accomplishments that we're proud of
We are able to calculate deviation from an optimal squat and relay the information to the user.
## What we learned
All of us were completely new to developing for iOS, so we faced many challenges figuring out the conventions of Swift and how to interact with our app.
## What's next for DigiSpotter
We would like to add as many exercises as we can to the App and we want to further expand how much historical data we can collect so our users can have a more detailed view of how they have improved over time. | partial |
## Inspiration
To spread the joy of swag collecting.
## What it does
A Hack the North Simulator, specifically of the Sponsor Bay on the second floor. The player will explore sponsor booths, collecting endless amounts of swag along the way. A lucky hacker may stumble upon the elusive, exclusive COCKROACH TROPHY, or a very special RBC GOLD PRIZE!
## How we built it
Unity, Aseprite, Cakewalk
## What we learned
We learned the basics of Unity and git, rigidbody physics, integrating audio into Unity, and the creation of game assets. | ## Inspiration
Technology in schools today is given to those classrooms that can afford it. Our goal was to create a tablet that leveraged modern touch screen technology while keeping the cost below $20 so that it could be much cheaper to integrate with classrooms than other forms of tech like full laptops.
## What it does
EDT is a credit-card-sized tablet device with a couple of tailor-made apps to empower teachers and students in classrooms. Users can currently run four apps: a graphing calculator, a note sharing app, a flash cards app, and a pop-quiz clicker app.
-The graphing calculator allows the user to do basic arithmetic operations, and graph linear equations.
-The note sharing app allows students to take down colorful notes and then share them with their teacher (or vice-versa).
-The flash cards app allows students to make virtual flash cards and then practice with them as a studying technique.
-The clicker app allows teachers to run in-class pop quizzes where students use their tablets to submit answers.
EDT has two different device types: a "teacher" device that lets teachers do things such as set answers for pop-quizzes, and a "student" device that lets students share things only with their teachers and take quizzes in real-time.
## How we built it
We built EDT using a NodeMCU 1.0 ESP12E WiFi Chip and an ILI9341 Touch Screen. Most programming was done in the Arduino IDE using C++, while a small portion of the code (our backend) was written using Node.js.
## Challenges we ran into
We initially planned on using a Mesh-Networking scheme to let the devices communicate with each other freely without a WiFi network, but found it nearly impossible to get a reliable connection going between two chips. To get around this we ended up switching to using a centralized server that hosts the apps data.
We also ran into a lot of problems with Arduino strings, since their default string class isn't very good, and we had no OS-layer to prevent things like forgetting null-terminators or segfaults.
## Accomplishments that we're proud of
EDT devices can share entire notes and screens with each other, as well as hold a fake pop-quizzes with each other. They can also graph linear equations just like classic graphing calculators can.
## What we learned
1. Get a better String class than the default Arduino one.
2. Don't be afraid of simpler solutions. We wanted to do Mesh Networking but were running into major problems about two-thirds of the way through the hack. By switching to a simple client-server architecture we achieved a massive ease of use that let us implement more features, and a lot more stability.
## What's next for EDT - A Lightweight Tablet for Education
More supported educational apps, such as: a visual-programming tool that supports simple block programming, a text editor, a messaging system, and a more in-depth UI for everything. | ## Inspiration
We were inspired by Flash Sonar (a method that helps people see via sound waves), but it took months to learn, so we developed Techno Sonar, a guide for blind people.
## What it does
It uses ultrasonic sensors to detect objects and inform the user based on the size and distance of the object.
If the object is high, it informs the user via sound; if it is a low object, it informs them with vibration sent behind the calves. The user can personalize the range of the sensors and the sound level via the mobile application.
## How we built it
We used Arduino circuits because they are cheap, common, and easy to use, which keeps the price low. We used the Dart language to create a cross-platform mobile application.
## Challenges we ran into
We thought, *How can a blind person use a mobile application?*, so we added an AI voice assistant that enables the user to control the application by talking.
## Accomplishments that we're proud of
We developed a mobile application with its own voice assistant, and we made the product better and cheaper compared to older versions. We designed it to be compatible with any clothing, so it can be integrated into any garment. It is easy to use and made from common parts, so it is cheap.
## What we learned
We learned a lot about sound processing systems and gained experience with coding.
## What's next for Techno Sonar
Techno Sonar will be entered in lots of competitions to get the recognition it deserves; that way, one day it will be helpful to plenty of blind people. | winning
## Inspiration
The inspiration for our project came from three of our members being involved with Smash in their communities. With one of us an avid competitor, one an avid watcher, and one working in an office where Smash is played quite frequently, we agreed that the way Smash Bros. games are matched and organized needed to be leveled up.
## How it Works
We broke the project up into three components, the front end made using React, the back end made using Golang and a middle part connecting the back end to Slack by using StdLib.
## Challenges We Ran Into
A big challenge we ran into was understanding how exactly to create a bot using StdLib. There were many nuances that had to be accounted for. However, we were helped by amazing mentors from StdLib's booth. Our first specific challenge was getting messages to be ephemeral for the user that called the function. Another difficulty was getting DMs to work using our custom bot. Finally, we struggled to get the input from the buttons and commands in Slack to the back-end server. However, it was fairly simple to connect the front end to the back end.
## The Future for 'For Glory'
Due to the time constraints and difficulty, we did not get to implement a tournament function. This is a future goal, because it would let workspaces and other organizations that use a Slack channel run casual tournaments that keep the environment light-hearted, competitive, and fun. Our tournament function could also extend to helping hold local competitive tournaments within universities. We also want to extend the range of rankings to support different types of rankings in the future. For the front end, we want a more interactive display for matches and tournaments, with live updates and useful statistics. | ## Inspiration
We wanted to get better at sports, but we don't have that much time to perfect our moves.
## What it does
Compares your athletic abilities to other users by building skeletons of both people and showing you where you can improve.
Uses ML to compare your form to a professional's form.
# Tells you improvements.
## How I built it
We used OpenPose with a dataset we found online and added recordings of our own members to train for certain skills. The backend is written in Python; it takes the skeletons and compares them to our database of trained models to see how you perform. The skeletons from both videos are combined side by side into a single video and sent to our React frontend.
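A bare-bones version of the comparison step might look like the sketch below: flatten the OpenPose keypoints, normalise out position and scale, and score the user against the professional with cosine similarity. The real backend works per frame and per joint; the keypoints here are made up.

```python
import numpy as np

def normalize(keypoints):
    """keypoints: (N, 2) array of (x, y) joint positions for one frame."""
    pts = np.asarray(keypoints, dtype=float)
    pts -= pts.mean(axis=0)                    # remove translation
    scale = np.linalg.norm(pts) or 1.0
    return (pts / scale).ravel()               # remove scale, flatten

def form_score(user_kps, pro_kps) -> float:
    u, p = normalize(user_kps), normalize(pro_kps)
    return float(np.dot(u, p) / (np.linalg.norm(u) * np.linalg.norm(p)))

user = [(0, 0), (1, 2), (2, 4), (3, 5)]
pro  = [(0, 0), (1, 2), (2, 4), (3, 6)]
print(f"form similarity: {form_score(user, pro):.3f}")   # 1.0 = identical pose
```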
## Challenges I ran into
Having multiple libraries out of date and having to compare skeletons.
## Accomplishments that I'm proud of
## What I learned
## What's next for trainYou | ## Inspiration
This is our first hackathon, so none of us knew what to make at the beginning. We all liked the suggestion of a discord bot, and Sebastian had used the gspread API before, so we tried to think of ways to combine the two ideas. With these two ideas in mind, Eric remembered some online tournaments he played in recently, and knew a lot of them used google sheets to track tournament brackets, as well as how much of a pain it can be sometimes. So we decided to make a discord bot that would easily allow people to make and update tournaments.
## What it does
Bracket Bot’s signature feature is the ability to create and display a tournament bracket for the members of a discord server. This bracket is updated in real-time on a google sheets document which the bot links to. The sheet is automatically formatted such that anyone can save it as a PDF as any time to have a visualization of the tournament in the form of a document.
## How we built it
We used Python to create our bot with the help of the Discord and Google Sheets APIs. We also hosted our bot on Heroku.
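A trimmed-down sketch of the core flow is below: a `!join` command records the caller and appends a row to the bracket sheet via gspread. The sheet name, credentials file, and command are placeholders, not the bot's full command set.

```python
import discord
import gspread
from discord.ext import commands

# Service-account credentials and sheet name are placeholders.
gc = gspread.service_account(filename="creds.json")
sheet = gc.open("Bracket Bot Tournament").sheet1

intents = discord.Intents.default()
intents.message_content = True
bot = commands.Bot(command_prefix="!", intents=intents)

@bot.command()
async def join(ctx):
    """Add the calling Discord user to the bracket sheet."""
    sheet.append_row([str(ctx.author), "registered"])
    await ctx.send(f"{ctx.author.mention} is in the bracket!")

bot.run("DISCORD_BOT_TOKEN")
```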
## Challenges we ran into
Debugging and testing our code was a major difficulty, as we had to figure out how our several complex and interconnected components worked together to produce errors. In addition, we spent a lot of time learning new libraries and implementing them in our work.
## Accomplishments that we're proud of
We are proud of the fact that we were able to bring our idea to completion despite this being our very first hackathon. We are also proud because we learned APIs that were partially or completely unfamiliar to us and managed to use them effectively.
## What we learned
The key lessons we learned were how to collaborate on a project with others, manage our time, navigate documentation for different APIs, and host programs that use a database.
## What's next for Bracket Bot
The major feature we plan on including in the future is generating a new sheet each time a user wants to create a tournament, ensuring that multiple tournaments can occur simultaneously. | winning |
## Inspiration
We were inspired to create this because we personally have trouble keeping up with our supplements.
## What it does
Sends SMS reminders to users to take their medicine
## How we built it
We used Node.js for the backend, with the database powered by Google Cloud SQL and the SMS messaging powered by Twilio. We also used React.js on the front end.
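The reminder itself boils down to one Twilio API call. Our backend is Node, but the call has the same shape in any client library; the sketch below shows it with Twilio's Python client, using placeholder credentials and numbers.

```python
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")

def send_reminder(to_number: str, med_name: str):
    # One outbound SMS per scheduled dose.
    return client.messages.create(
        body=f"Time to take your {med_name}!",
        from_="+15550001111",   # Twilio number (placeholder)
        to=to_number,
    )

send_reminder("+15552223333", "vitamin D")
```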
## Challenges we ran into
Connecting the front-end and back-end.
## Accomplishments that we're proud of
Creating a fully functional backend, with SMS text messages able to be sent every second while the server is running. Also, creating a fleshed-out front end that streamlines the user experience.
## What we learned
Follow a proper software development method, to manage time more effectively and create a better product.
## What's next for DoseTracker
Finish the app! | ## Inspiration
When you are prescribed medication by a doctor, it is crucial that you complete the dosage cycle to ensure that you recover fully and quickly. Unfortunately, forgetting to take your medication is something that we have all done. Failing to run the full course of medicine often results in a delayed recovery and leads to more suffering through the painful and annoying symptoms of illness. This is what inspired us to create Re-Pill. With Re-Pill, you can automatically generate schedules and reminders to take your medicine by simply uploading a photo of your prescription.
## What it does
A user uploads an image of their prescription, which is then processed by image-to-text algorithms that extract the details of the medication. Data such as the name of the medication, its dosage, and the total number of tablets is stored and presented to the user. The application synchronizes with Google Calendar and automatically adds reminders to take pills to the user's schedule based on the dosage instructions on the prescription. The user can view their medication details at any time by logging into Re-Pill.
## How we built it
We built the application using the Python web framework Flask. Simple endpoints were created for login, registration, and viewing the user's medication. User data is stored in Google Cloud's Firestore. Images are uploaded and sent to a processing endpoint through an HTTP request, which returns the medication information. Reminders are set using the Google Calendar API.
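In outline, the calendar step turns each parsed dose into a recurring Google Calendar event, roughly as sketched below. `creds` stands for an already-authorised OAuth credential object (setup omitted), and the dates, time zone, and recurrence rule are examples.

```python
from googleapiclient.discovery import build

def schedule_doses(creds, med_name: str, start_iso: str, end_iso: str, days: int):
    service = build("calendar", "v3", credentials=creds)
    event = {
        "summary": f"Take {med_name}",
        "start": {"dateTime": start_iso, "timeZone": "America/Toronto"},
        "end": {"dateTime": end_iso, "timeZone": "America/Toronto"},
        # One event per day for the length of the prescription.
        "recurrence": [f"RRULE:FREQ=DAILY;COUNT={days}"],
        "reminders": {"useDefault": False,
                      "overrides": [{"method": "popup", "minutes": 10}]},
    }
    return service.events().insert(calendarId="primary", body=event).execute()

# e.g. schedule_doses(creds, "Amoxicillin 500mg",
#                     "2024-01-10T09:00:00", "2024-01-10T09:05:00", days=7)
```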
## Challenges we ran into
We initially struggled to figure out the right tech stack for building the app, and wrestled with Android development before settling on a web app. One big challenge was merging all the different parts of our application into one smoothly running product. Another was finding a way to notify the user when it is time to take their medication through a web-based application.
## Accomplishments that we're proud of
There are a couple of things that we are proud of. One of them is how well our team was able to communicate with one another. All team members knew what the others were working on, and the work was divided so that each teammate could play to their strengths. Another important accomplishment is that we overcame a huge time constraint and came up with a prototype of an idea that has the potential to change people's lives.
## What we learned
We learned how to set up and leverage Google API's, manage non-relational databases and process image to text using various python libraries.
## What's next for Re-Pill
The next steps for Re-Pill would be to move to a mobile environment and explore useful features that we can implement. Building a mobile application would make it easier for the user to stay connected with their schedules and upload prescription images at the click of a button using the built-in camera. Some features we hope to explore are automated activities, such as routine appointment bookings with the family doctor, and monitoring dietary considerations for stronger medications that might conflict with a patient's diet. | ## **Inspiration**
Ever had to wipe your hands constantly to search for recipes and ingredients while cooking?
Ever wondered about the difference between your daily nutrition needs and the nutrition of your diets?
Vocal Recipe is an integrated platform where users can easily find everything they need to know about home-cooked meals! Information includes recipes with nutrition information, measurement conversions, daily nutrition needs, cooking tools, and more! The coolest feature of Vocal Recipe is that users can access the platform through voice control, which means they do not need to constantly wipe their hands to search for information while cooking. Our platform aims to support healthy lifestyles and make cooking easier for everyone.
## **How we built Vocal Recipe**
Recipes and nutrition information are implemented by retrieving data from Spoonacular - an integrated food and recipe API.
The voice control system is implemented using Dasha AI - an AI voice recognition system that supports conversation between our platform and the end user.
The measurement conversion tool is implemented using a simple calculator.
## **Challenges and Learning Outcomes**
One of the main challenges we faced was the limited trials that Spoonacular offers for new users. To combat this difficulty, we had to switch between team members' accounts to retrieve data from the API.
The time constraint was another challenge that we faced. We did not have enough time to formulate and develop the whole platform in just 36 hours, so we broke the project down into stages and completed the first three.
It was also our first time using Dasha AI - a relatively new platform for which little open-source code could be found. We got the opportunity to explore and experiment with this tool. It was a memorable experience. | losing
## Inspiration
We wanted to come up with a means of evaluating Goldman Sachs' GIR forecasting data against the market, and also against other possible predictors. This placed a heavy emphasis on meaningful visualizations of the given Marquee API data.
## What it does
This project is composed of a series of Python scripts. It is divided into three components.
1. **Visualization.** This script takes in a ticker symbol and a start and end date (within the 5-year range for which we have GIR data for), and plots the four provided GS metrics—financial returns, growth, multiple, and integrated—against each update within the specified time range. This allows visually developing a sense of change in forward-looking guidance for any particular security over any supported period of time.
2. **Sentiment analysis.** In order to make this portion most interesting, we found the two securities with the highest variance in integrated score, and attempted to determine how well the sentiment of the Q&A portions of the quarterly earnings calls with shareholders predicted the GIR metrics. We obtained the Q&A portions of the earnings calls between the 1st quarter of 2012 and the 4th quarter of 2016, cleaned this data, and performed sentiment analysis to determine the likelihood of the Q&A section of the call being positive. We then graphed this probability against the number of quarters since the start of 2012. For the two securities in question, we found remarkable variance in this probability.
3. **K-means clustering and PCA.** We used the four metrics provided by the Marquee API (financial returns score, growth score, multiple score, and integrated score) to perform unsupervised learning on the dataset. First we used PCA to project the dataset from 4 dimensions into 3 to allow for easier visualization. Then, we performed k-means clustering to cluster the lower-dimensional data into 4 clusters so as to give potential clients a better understanding of the diversity in the dataset. The lower-dimensional clustering allows the user the ability to understand the similarities and differences among different stocks along axes which correspond to the greatest variance in the dataset.
## How we built it
We used Python as the primary programming language for all components, with the help of libraries such as numpy, pandas, and matplotlib. We made use of a number of APIs, including Marquee for the GS data, and APIs for sentiment analysis.
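The clustering and PCA step described above reduces, in miniature, to a few scikit-learn calls, sketched below. The column names are illustrative stand-ins for the four GIR metrics, and the CSV is assumed to be our local dump of the Marquee data.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Illustrative column names for the four GIR scores.
FEATURES = ["financial_returns", "growth", "multiple", "integrated"]

df = pd.read_csv("marquee_gir.csv")                 # one row per security
X = StandardScaler().fit_transform(df[FEATURES])

coords = PCA(n_components=3).fit_transform(X)       # project 4 -> 3 dimensions
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(coords)

df["cluster"] = labels
print(df.groupby("cluster")[FEATURES].mean())       # average profile of each cluster
```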
## Challenges we ran into
We ran into a great deal of difficulty finding an API that would give us the historical price of a given security at a particular time. The IEX API has a particularly limiting free option that makes rigorous data analysis comparing GS forecasting to actual market movement difficult, so we instead focused this portion of the project on comparing GS forecasting to other possible methods.
Cleaning data was a recurring challenge, as well. Initially, we spent a good amount of time working with the Marquee API to get the GSID to map to a ticker symbol and a company name. Ultimately, we were able to create a large local CSV file with all of the API data, so we can adopt any of our scripts to be fully local if API calls fail. | # ObamaChart
It all began three weeks ago when Charlie, Gerry, and Gene convinced Hector to join them in the hackathon.
The idea was not born until late in the wee hours of the night when everyone was fed up with Charlie's "one way photo encryption" idea and instead wanted something more useful. Thus we decided visualize data over time and settled on Barack Obama as a viable candidate for our experimentation.
We used Microsoft's Bing Search to find the images, along with the Emotion API from Microsoft Cognitive Services. We used Python as our backend for the number crunching. Then we fed all our pickled info into a CSV file in order to add it to a Google Spreadsheet. From there, we found it easy to use the Google Charts API to make different plots of our data.
However, our team did not underestimate the importance of user experience. Charlie and Gene worked together to display the data in the most easy-to-understand and accessible way. To do this, they used Bootstrap to format the charts and the website, and deployed the website using Microsoft Azure.
We tried to compare Obama's emotions to different values such as S&P, and approval rate. Unfortunately, no matter how much we tried, we didn't see any correlations between the data (this could be due to Emotion API's inclination to classify a face as happy or neutral more than anything else). Of course, if there is one trend we managed to see, it's that Obama's happiness has dropped sharply this past month and there has been a slight rise in his neutrality, contempt and anger.
Overall this was a great project that we all enjoyed working on and we hope you all can enjoy looking at the data we've gotten for you. | ## Inspiration:
Many people may find it difficult to understand the stock market, a complex system where ordinary people can take part in a company's success. This applies to those newly entering the adult world, as well as many others that haven't had the opportunity to learn. We want not only to introduce people to the importance of the stock market, but also to teach them the importance of saving money. When we heard that 44% of Americans have fewer than $400 in emergency savings, we felt compelled to take this mission to heart, with the increasing volatility of the world and of our environment today.
## What it does
With Prophet Profit, ordinary people can invest easily in the stock market. There's only one step - to input the amount of money you wish to invest. Using data and rankings provided by Goldman Sachs, we automatically invest the user's money for them. Users can track their investments in relation to market indicators such as the S&P 500, as well as see their progress toward different goals with physical value, such as being able to purchase an electric generator for times of emergency need.
## How we built it
Our front end is entirely built on HTML and CSS. This is a neat one-page scroller that allows the user to navigate by simply scrolling or using the navigation bar at the top. Our back end is written in JavaScript, integrating many APIs and services.
APIs that we used:
-Goldman Sachs Marquee
-IEX Cloud
Additional Resources:
-Yahoo Finance
## Challenges we ran into
The biggest challenge was the limited scope of the Goldman Sachs Marquee GIR Factor Profile Percentiles Mini API that we wanted to use. Although the data provided was high quality and useful, we had difficulties trying to put together a portfolio with the small amount of data provided. For many of us, it was also our first times using many of the tools and technologies that we employed in our project.
## Accomplishments that we're proud of
We're really, really proud that we were able to finish on time to the best of our abilities!
## What we learned
Through exploring financial APIs deeply, we not only learned about using the APIs, but also more about the financial world as a whole. We're glad to have had this opportunity to learn skills and gain knowledge outside the fields we typically work in.
## What's next for Prophet Profit
We'd love to use data for the entire stock market with present-day numbers instead of the historical data that we were limited to. This would improve our analyses and allow us to make suggestions to users in real-time. If this product were to realize, we'd need the ability to handle and trade with large amounts of money as well. | losing |
## Inspiration
Waste Management: Despite having bins with specific labels, people often put waste into the wrong bins, which leads to unnecessary plastic/recyclables in landfills.
## What it does
Uses a Raspberry Pi, the Google Vision API and our custom classifier to categorize waste and automatically sort and drop it into the right section (Garbage, Organic, Recycle). The data collected is stored in Firebase and shown with the respective category and item label (type of waste) on a web app/console. The web app is capable of providing advanced statistics such as % recycling/compost/garbage and your carbon emissions, as well as statistics on which specific items you throw out the most (water bottles, bags of chips, etc.). The classifier can be modified to suit the garbage laws of different places (e.g. separate recycling bins for paper and plastic).
## How We built it
The Raspberry Pi is triggered by a distance sensor to take a photo of the inserted waste item, which is identified using the Google Vision API. Once the item is identified, our classifier determines whether it belongs in the recycling, compost, or garbage section. The inbuilt hardware then drops the waste item into the correct section.
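A stripped-down sketch of the sorting decision might look like the following Python; the keyword buckets below are illustrative stand-ins for our actual classifier, which can be tuned to local garbage rules.

```python
from google.cloud import vision

# Illustrative keyword buckets; the real classifier can be adjusted per city's rules.
RECYCLE = {"bottle", "plastic", "tin", "can", "glass", "paper", "cardboard"}
ORGANIC = {"banana", "apple", "food", "fruit", "vegetable", "peel"}

def sort_item(image_bytes):
    """Return 'Recycle', 'Organic', or 'Garbage' for a photo of the inserted item."""
    client = vision.ImageAnnotatorClient()
    response = client.label_detection(image=vision.Image(content=image_bytes))
    words = {label.description.lower() for label in response.label_annotations}
    if words & RECYCLE:
        return "Recycle"
    if words & ORGANIC:
        return "Organic"
    return "Garbage"
```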
## Challenges We ran into
Combining IoT and AI was tough. We had never used Firebase before. Separation of concerns was a difficult task, and so was deciding on the mechanics and design of the bin (we are not mechanical engineers :D).
## Accomplishments that We're proud of
Combining the entire project. Staying up for 24+ hours.
## What We learned
Different technologies: Firebase, IoT, Google Cloud Platform, Hardware design, Decision making, React, Prototyping, Hardware
## What's next for smartBin
Improving the efficiency. Build out of better materials (3D printing, stronger servos). Improve mechanical movement. Add touch screen support to modify various parameters of the device. | ## Inspiration
As university students, we and our peers have found that our garbage and recycling have not been taken by the garbage truck for some unknown reason. They give us papers or stickers with warnings, but these get lost in the wind, chewed up by animals, or destroyed because of the weather. For homeowners or residents, the lack of communication is frustrating because we want our garbage to be taken away and we don't know why it wasn't. For garbage disposal workers, the lack of communication is detrimental because residents do not know what to fix for the next time.
## What it does
This app allows garbage disposal employees to communicate to residents about what was incorrect with how the garbage and recycling are set out on the street. Through a checklist format, employees can select the various wrongs, which are then compiled into an email and sent to the house's residents.
## How we built it
The team built this by using a Python package called **Kivy** that allowed us to create a GUI interface that can then be packaged into an iOS or Android app.
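A bare-bones sketch of the checklist screen in Kivy could look like this; the violation list and the email step are simplified placeholders for the real app.

```python
from kivy.app import App
from kivy.uix.boxlayout import BoxLayout
from kivy.uix.button import Button
from kivy.uix.checkbox import CheckBox
from kivy.uix.label import Label

# Illustrative violation list; the real app's wording would differ.
VIOLATIONS = [
    "Soft plastics mixed into recycling",
    "Bin overfilled",
    "Bin placed out too late",
]

class ChecklistScreen(BoxLayout):
    def __init__(self, **kwargs):
        super().__init__(orientation="vertical", **kwargs)
        self.boxes = {}
        for text in VIOLATIONS:
            row = BoxLayout()
            box = CheckBox()
            row.add_widget(box)
            row.add_widget(Label(text=text))
            self.add_widget(row)
            self.boxes[text] = box
        send = Button(text="Send notice")
        send.bind(on_press=self.send_notice)
        self.add_widget(send)

    def send_notice(self, _button):
        # Collect the checked violations and hand them to the (placeholder) email step.
        selected = [text for text, box in self.boxes.items() if box.active]
        print("Would email the resident about:", selected)

class WasteNotifyApp(App):
    def build(self):
        return ChecklistScreen()

if __name__ == "__main__":
    WasteNotifyApp().run()
```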
## Challenges we ran into
The greatest challenge we faced was the learning curve that came with beginning to code the app. None of the team members had ever worked on creating an app, or with back-end and front-end coding. However, it was an excellent day of learning.
## Accomplishments that we're proud of
The team is proud of having a working user interface to present. We are also proud of our easy-to-use, interactive, and aesthetic UI/UX design.
## What we learned
We learned skills in front-end and back-end coding. We also furthered our skills in Python by using a new library, Kivy. We gained skills in teamwork and collaboration.
## What's next for Waste Notify
Further steps for Waste Notify would likely involve collecting data from Utilities Kingston and the city. It would also require more back-end coding to set up these databases and ensure that data is secure. Our target area was University District in Kingston, however, a further application of this could be expanding the geographical location. However, the biggest next step is adding a few APIs for weather, maps and schedule. | ## Inspiration
We were really excited to hear about the self-driving bus Olli using IBM's Watson. However, one of our grandfathers is rather forgetful due to his dementia, and because of this would often forget things on a bus if he went alone. Memory issues like this would prevent him, and many people like him, from taking advantage of the latest advancements in public transportation, and prevent him from freely traveling even within his own community.
To solve this, we thought that Olli and Watson could work to take pictures of luggage storage areas on the bus, and if it detected unattended items, alert passengers, so that no one would forget their stuff! This way, individuals with memory issues like our grandparents can gain mobility and be able to freely travel.
## What it does
When the bus stops, we use a light sensitive resistor on the seat to see if someone is no longer sitting there, and then use a camera to take a picture of the luggage storage area underneath the seat.
We send the picture to IBM's Watson, which checks to see if the space is empty, or if an object is there.
If Watson finds something, it identifies the type of object, and the color of the object, and vocally alerts passengers of the type of item that was left behind.
## How we built it
**Hardware**
Arduino - Senses whether there is someone sitting based on a light sensitive resistor.
Raspberry Pi - Processes whether it should take a picture, takes the picture, and sends it to our online database.
**Software**
IBM's IoT Platform - Connects our local BlueMix on Raspberry Pi to our BlueMix on the Server
IBM's Watson - to analyze the images
Node-RED - The editor we used to build our analytics and code
## Challenges we ran into
Learning IBM's Bluemix and Node-RED was a challenge all members of our team faced. The software that ran in the cloud and that ran on the Raspberry Pi were both coded using these systems. It was exciting to learn these languages, even though it was often challenging.
Getting information to properly reformat between a number of different systems was challenging. From the 8-bit Arduino, to the 32-bit Raspberry Pi, to our 64-bit computers, to the ultra powerful Watson cloud, each needed a way to communicate with the rest and lots of creative reformatting was required.
## Accomplishments that we're proud of
We were able to build a useful internet of things application using IBM's APIs and Node-RED. It solves a real world problem and is applicable to many modes of public transportation.
## What we learned
Across our whole team, we learned:
* Utilizing APIs
* Node-RED
* BlueMix
* Watson Analytics
* Web Development (html/ css/ js)
* Command Line in Linux | winning |
## Inspiration
One of our team members, Ray, recently visited Cambridge, MA (from Canada!) and realized that their trash cans were essentially nonexistent. The few that did exist weren't exactly performing well enough to organize trash efficiently and conscientiously. That's why we decided to focus our project on GarbaDoor -- a smart trash can that is able to organize trash by leveraging machine learning and computer vision to classify and categorize objects in order to automate the sorting process for different types of rubbish.
GarbaDoor was inspired by our interest in building smarter cities with the power of AI, Google Cloud, and big data. In tackling one of the biggest problems in our world today -- food waste and energy conservation -- we are posed with the question of how we can not only incentivize people and systems to promote good environmental practices, but also create systems to automate and expedite these processes.
The smart garbage can automatically detects and sorts the garbage thrown into it. In doing so, it eliminates one of the biggest issues in garbage collection: recyclable garbage contamination. According to The Environmental Protection Agency, the average recycling contamination rate is 25% -- meaning that every 1 out of 4 items thrown into a recycling bin is not recyclable. The general apathy and lack of protocol towards garbage contamination drastically increases the cost of recycling during an age where we are in a global climate epidemic. According to recycleacrossamerica.org, U.S. RECYCLING IS IN A CRISIS. Nearly 1,000 recycling plants in California alone have shut down within the last two years because of recycling contamination. Our AI-influenced machine aims to solve the issue of garbage contamination at a cost of less than 100 dollars.
The name GarbaDoor was inspired by the Pokemon Garbodor, which is closely related to trash, and who better for the position than Garbodor!

## What it does
The GarbaDoor is a smart trash can that intakes a piece of trash and processes it into the respective trash compartment within the bin. The apparatus uses a webcam to snap a picture of the object, and uses Google Cloud's machine learning and Vision APIs to classify the object into predefined general categories, ranking at most the top 10 classifications by percentage of certainty. The object is labeled by its highest-confidence classification and, from that, it is categorized as one of the three disposal types of trash: compost, recyclable, or general garbage.
After GarbaDoor determines where the trash belongs, it drops the trash into the respective category using Arduino-powered servo motors to open flaps. These doors guide the object to its destination with quickly-shifting platforms. While the trash is being organized, the data from the trash is collected and stored in Google Firestore Database for data analysis. Our website hosts this data as a Dashboard interface, where users can view all past 'thrown away' items and how many have been counted. This data is visualized on our website via Google Maps API with an interactive map and graphs.
## How we built it
Backend: machine learning and model training with object classification, using Google Cloud Platform's Machine Learning and Vision APIs to determine an object's type, and machine learning to train the program to accurately detect other trash similar to previous results. Moreover, we connected Cloud Firestore with CRUD functionality as new trash is added, and decided to keep track of the trash's name, category, and number of times it has been disposed of. We used a Flask server as our framework to quickly set up our application. After stable detection, we created Python scripts to provide Arduino connectivity.
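To give a feel for that flow, here is a trimmed-down sketch of a Flask route in the spirit of the one described above: it classifies an uploaded image with the Vision API, bumps a counter in Firestore for the dashboard, and returns the category. The route name and category rules are illustrative assumptions, not our production code.

```python
from flask import Flask, request, jsonify
from google.cloud import firestore, vision

app = Flask(__name__)
db = firestore.Client()
vision_client = vision.ImageAnnotatorClient()

# Illustrative category rules; the real mapping is more involved.
COMPOST = {"food", "fruit", "vegetable"}
RECYCLABLE = {"bottle", "can", "paper", "cardboard", "glass"}

def categorize(labels):
    """Map the top Vision labels to one of the three bins."""
    words = {l.description.lower() for l in labels}
    if words & COMPOST:
        return "compost"
    if words & RECYCLABLE:
        return "recyclable"
    return "garbage"

@app.route("/classify", methods=["POST"])
def classify():
    image = vision.Image(content=request.files["image"].read())
    labels = vision_client.label_detection(image=image).label_annotations[:10]
    category = categorize(labels)
    name = labels[0].description if labels else "unknown"
    # Keep a running tally of this item in Firestore for the dashboard.
    db.collection("trash").document(name).set(
        {"category": category, "count": firestore.Increment(1)}, merge=True
    )
    return jsonify({"name": name, "category": category})
```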
Arduino: constructed a prototype of a smart trash can by reusing cardboard and duct tape from snack boxes from YHack. We cut out compartments to house two flaps for the two servos, each covering half of the opening and able to rotate from one side to the other depending on the trash type. In fact, before the trash is guided to its correct compartment, it is judged by a camera with Google Vision in a relatively closed box.
Frontend: used ReactJS and KendoReact UI Library with Bootstrap framework to connect the backend and Firebase database to retrieve the data and display it on the webpage in real-time. We developed an interactive website for users to look through and discover interesting findings from data collected on trash from GarbaDoor with a dashboard and a map.
## Challenges we ran into
An early obstacle we ran into was working with Arduinos and having them accurately rotate their servos from one side to the other. In fact, it was the first time directly working with hardware for most of us, so it was both a struggle and a learning experience. We were able to move the servo to one side, but not as well to the other side until later on, through much sweat and tears. ;)
From Saturday to the end of the project at 8am Sunday morning, we had technical troubles with connecting our Python scripts to the front-end React client. Specifically, getting data to React to display onto the view was somewhat dumbfounding, as there were multiple errors with getting our data from our backend to appear on our website.
## Accomplishments that we're proud of
Mainly, we are proud of the fact that we were able to use a webcam to determine an object's disposable type. However, we are also excited that we were able to construct a trash can to demo using cardboard and (a lot of) duct tape. Although the trash can isn't exactly the most appealing, we enjoyed the process of putting together the parts and connecting our Arduino/servos to it. As a team, we successfully divvied up the work and covered a solid plan for the creation and development of the software/hardware, and also coming up with a feasible business plan for future applications of GarbaDoor within IoT-connected smart cities.
## What we learned
We used a lot of duct tape.
Also important: we learned how to utilize React to integrate our web application and how to debug the many compilation errors. We successfully completed integration with complex hardware made from scratch (with cardboard!) and were able to transfer data to the database and back to the front-end. This overall flow was an amazing achievement for all of us.
Overall, the learning experience derived from the technical difficulties we encountered along the way, which is both a blessing and a curse. We were able to discover new technologies to implement, while arduously learning the nuances needed to use them correctly.
## What's next for GarbaDoor
In the future, we would like to rebuild GarbaDoor using different and stronger materials, since cardboard can only do so much for the vast amount of trash we dispose of every day. We aim to facilitate the way we dispose of our trash, since forcing people to keep in mind what is garbage and what is not isn't the complete way to organize our trash.
Of course, our smart GarbaDoor cannot build a smart city with simply AI object classification. The next step for Garbador is to provide data and analysis for the city. For example, it can display a map of all the garbage cans in the city and warn the agency if a garbage can needs repair or cleaning. The city can also know where garbage cans are overfilled, thus needing more garbage cans, and where garbage cans are underused - thus can re-track the garbage cans. | ## Inspiration
With traditional travel and expense programs, processing a single report can take an average of 20 minutes! In a recent study, we found that users can save an average of 3 hours per week on expense management. That's an additional 156 hours per year that individuals could put towards things that truly matter.
## What it does
Receipify serves as an all-in-one platform for easily compiling your expenses. Users can simply take a picture of their receipts. Our machine-learning-based platform will employ OCR and NLP to interpret, categorize, and convert various components of the receipt to compile a PDF or CSV report neatly.
## How we built it
Receipify is built on a react.js front end with a node.js backend. We employ the Veryfi OCR API and NLP model to sort and interpret various components of a receipt, including total cost, currency type, contact details, and expense type.
## Challenges we ran into
We encountered problems with Flask, as we initially planned to code the backend in Python but later ran into many issues. Ultimately, we were forced to switch to Node and code in JavaScript. Another issue was integrating our front end.
## Accomplishments that we're proud of
In combination with an intuitive and modern UI, Receipify was consistently able to quickly and accurately process receipts from various countries with different formats.
## What we learned
Despite not using it, we learned how to work with Flask. We also learned how to work better with React, Node, TypeScript and APIs.
## What's next for Receipify
The principle of Receipify can be applied to simple lives within countless industries. From medical records to test grading, OCR and NLP can be employed to automate processes that were tedious and extremely time-consuming at one point. | ## Inspiration
We wanted to take advantage of AR and object detection technologies to give people safer walking experiences, and to communicate distance information that helps people with vision loss navigate.
## What it does
Augments the world with beeping sounds that change depending on your proximity to obstacles, and identifies surrounding objects and converts them to speech to alert the user.
## How we built it
ARKit; RealityKit uses Lidar sensor to detect the distance; AVFoundation, text to speech technology; CoreML with YoloV3 real time object detection machine learning model; SwiftUI
## Challenges we ran into
Computational efficiency. Going through all pixels in the LiDAR sensor data in real time wasn't feasible. We had to optimize by cropping the sensor data to the center of the screen.
## Accomplishments that we're proud of
It works as intended.
## What we learned
We learned how to combine AR, AI, LiDAR, ARKit and SwiftUI to make an iOS app in 15 hours.
## What's next for SeerAR
Expand to Apple watch and Android devices;
Improve the accuracy of object detection and recognition;
Connect with Firebase and Google cloud APIs; | losing |
## Inspiration
The EPA estimates that although 75% of American waste is recyclable, only 30% gets recycled. Our team was inspired to create RecyclAIble by the simple fact that although most people are not trying to hurt the environment, many unknowingly throw away recyclable items in the trash. Additionally, the sheer amount of restrictions related to what items can or cannot be recycled might dissuade potential recyclers from making this decision. Ultimately, this is detrimental since it can lead to more trash simply being discarded and ending up in natural lands and landfills rather than being recycled and sustainably treated or converted into new materials. As such, RecyclAIble fulfills the task of identifying recycling objects with a machine learning-based computer vision software, saving recyclers the uncertainty of not knowing whether they can safely dispose of an object or not. Its easy-to-use web interface lets users track their recycling habits and overall statistics like the number of items disposed of, allowing users to see and share a tangible representation of their contributions to sustainability, offering an additional source of motivation to recycle.
## What it does
RecyclAIble is an AI-powered mechanical waste bin that separates trash and recycling. It employs a camera to capture items placed on an oscillating lid and, with the assistance of a motor, tilts the lid in the direction of one compartment or another depending on whether the AI model determines the object to be recyclable or not. Once the object slides into the compartment, the lid re-aligns itself and prepares for the next piece of waste. Ultimately, RecyclAIble autonomously helps people recycle as much as they can and waste less without doing anything different.
## How we built it
The RecyclAIble hardware was constructed using cardboard, a Raspberry Pi 3 B+, an ultrasonic sensor, a Servo motor, and a Logitech plug-in USB web camera. Whenever the ultrasonic sensor detects an object placed on the surface of the lid, the camera takes an image of the object, converts it into base64 and sends it to a backend Flask server. The server receives this data, decodes the base64 back into an image file, and inputs it into a Tensorflow convolutional neural network to identify whether the object seen is recyclable or not. This data is then stored in an SQLite database and returned back to the hardware. Based on the AI model's analysis, the Servo motor in the Raspberry Pi flips the lid one way or the other, allowing the waste item to slide into its respective compartment. Additionally, a reactive, mobile-friendly web GUI was designed using Next.js, Tailwind.css, and React. This interface provides the user with insight into their current recycling statistics and how they compare to the nationwide averages of recycling.
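A simplified sketch of that Flask endpoint is below; the model path, input size, and label order are assumptions for illustration rather than the exact values we used.

```python
import base64
import io

import numpy as np
import tensorflow as tf
from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)
model = tf.keras.models.load_model("recyclable_cnn.h5")  # assumed path to the trained CNN
CLASSES = ["recyclable", "trash"]                         # assumed output order

@app.route("/predict", methods=["POST"])
def predict():
    # The Pi sends the captured photo as a base64 string in a JSON body.
    image_b64 = request.get_json()["image"]
    image = Image.open(io.BytesIO(base64.b64decode(image_b64))).convert("RGB")
    x = np.expand_dims(np.array(image.resize((224, 224))) / 255.0, axis=0)
    probs = model.predict(x)[0]
    label = CLASSES[int(np.argmax(probs))]
    return jsonify({"label": label, "confidence": float(np.max(probs))})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```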
## Challenges we ran into
The prototype model had to be assembled, measured, and adjusted very precisely to avoid colliding components, unnecessary friction, and instability. It was difficult to get the lid to be spun by a single Servo motor and getting the Logitech camera to be propped up to get a top view. Additionally, it was very difficult to get the hardware to successfully send the encoded base64 image to the server and for the server to decode it back into an image. We also faced challenges figuring out how to publicly host the server before deciding to use ngrok. Additionally, the dataset for training the AI demanded a significant amount of storage, resources and research. Finally, establishing a connection from the frontend website to the backend server required immense troubleshooting and inspect-element hunting for missing headers. While these challenges were both time-consuming and frustrating, we were able to work together and learn about numerous tools and techniques to overcome these barriers on our way to creating RecyclAIble.
## Accomplishments that we're proud of
We all enjoyed the bittersweet experience of discovering bugs, editing troublesome code, and staying up overnight working to overcome the various challenges we faced. We are proud to have successfully made a working prototype using various tools and technologies new to us. Ultimately, our efforts and determination culminated in a functional, complete product we are all very proud of and excited to present. Lastly, we are proud to have created something that could have a major impact on the world and help clean our environment clean.
## What we learned
First and foremost, we learned just how big of a problem under-recycling was in America and throughout the world, and how important recycling is to the environment. Throughout the process of creating RecyclAIble, we had to do a lot of research on the technologies we wanted to use, the hardware we needed to employ and manipulate, and the actual processes, institutions, and statistics related to the process of recycling. The hackathon motivated us to learn a lot more about our respective technologies - whether it was new errors or desired functions, new concepts and ideas had to be introduced to make the tech work. Additionally, we educated ourselves on the importance of sustainability and recycling to better understand the purpose of the project and our goals.
## What's next for RecyclAIble
RecycleAIble has a lot of potential as far as development goes. RecyclAIble's AI can be improved with further generations of images of more varied items of trash, enabling it to be more accurate and versatile in determining which items to recycle and which to trash. Additionally, new features can be incorporated into the hardware and website to allow for more functionality, like dates, tracking features, trends, and the weights of trash, that expand on the existing information and capabilities offered. And we’re already thinking of ways to make this device better, from a more robust AI to the inclusion of much more sophisticated hardware/sensors. Overall, RecyclAIble has the potential to revolutionize the process of recycling and help sustain our environment for generations to come. | ## Inspiration
According to an article, about 86 per cent of Canada's plastic waste ends up in landfill, in large part due to bad sorting. We thought it shouldn't be impossible to build a prototype for a smart bin.
## What it does
The smart bin is able, using object detection, to sort plastic, glass, metal, and paper.
All around Canada we see trash bins split into different types of trash. It sometimes becomes frustrating, and this inspired us to build a solution that doesn't require us to think about the kind of trash being thrown away.
The Waste Wizard takes any kind of trash you want to throw out, uses machine learning to detect what kind of bin it should be disposed in, and drops it in the proper disposal bin.
## How we built it
Using recyclable cardboard, used DC motors, and 3D-printed parts.
## Challenges we ran into
We had to train our model from the ground up, even gathering all the data.
## Accomplishments that we're proud of
We managed to get the whole infrastructure built and all the motors and sensors working.
## What we learned
How to create and train a model, 3D print gears, and use sensors.
## What's next for Waste Wizard
A Smart bin able to sort the 7 types of plastic | ## Inspiration
One day, one of our teammates was throwing out garbage in his apartment complex and the building manager made him aware that certain plastics he was recycling were soft plastics that can't be recycled.
According to a survey commissioned by Covanta, “2,000 Americans revealed that 62 percent of respondents worry that a lack of knowledge is causing them to recycle incorrectly (Waste360, 2019).” We then found that knowledge of long-term consequences matters: “Because the reward [and] the repercussions for recycling... aren’t necessarily immediate, it can be hard for people to make the association between their daily habits and those habits’ consequences (HuffingtonPost, 2016)”.
From this research, we found that a lack of knowledge or awareness can be detrimental not only to personal life, but also to meeting government, societal, environmental, and sustainability goals.
## What it does
When an individual is unsure of how to dispose of an item, "Bin it" allows them to quickly scan the item and find out not only how to sort it (recycling, compost, etc.) but additional information regarding potential re-use and long-term impact.
## How I built it
After brainstorming before the event, we built it by splitting roles into backend, frontend, and UX design/research. We conceptualized and prioritized features as we went, based on secondary research, experimenting with code, and interviewing a few hackers at the event about recycling habits.
We used Google Vision API for the object recognition / scanning process. We then used Vue and Flask for our development framework.
## Challenges I ran into
We ran into challenges with deployment of the application due to . Getting set up was a challenge that was slowly overcome by our backend developers getting the team set up and troubleshooting.
## Accomplishments that I'm proud of
We were able to work as a team towards a goal, learn, and have fun! We were also able to work with multiple Google APIs. We completed the core feature of our project.
## What I learned
Learning to work with people in different roles was interesting. We also learned about designing and developing from a technical standpoint, such as designing for a mobile web UI, deploying an app with Flask, and working with Google APIs.
## What's next for Bin it
We hope to review feedback and save this as a great hackathon project to potentially build on, and apply our learnings to future projects, | winning |
# Shakespeak
Project for McHacks 2019
This is a website we built in 24 hours for McHacks 2019. We drew inspiration from [xkcd #1133](https://xkcd.com/1133/) and its eponymous [The Up-Goer Five Text Editor](http://splasho.com/upgoer5/). In our 'Shakespeak Translator' text editor, we challenge users to explain technical/modern/weird topics to, well, Shakespeare. That is, we only permit the use of words that were present in any of Shakespeare's works, or are the name of a city/country (this was done with the help of the [Google Cloud Natural Language API](https://cloud.google.com/natural-language/) and its Entity Analysis capabilities).
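The word check could be sketched roughly like this in Python; the vocabulary file and the exact entity handling are simplified assumptions rather than our production code.

```python
from google.cloud import language_v1

# Assumed: one lowercase token per line, built from Shakespeare's complete works.
with open("shakespeare_vocab.txt") as f:
    VOCAB = {line.strip().lower() for line in f}

def place_names(text):
    """Return lowercase entity names that the Natural Language API tags as locations."""
    client = language_v1.LanguageServiceClient()
    doc = language_v1.Document(content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
    entities = client.analyze_entities(document=doc).entities
    return {e.name.lower() for e in entities if e.type_ == language_v1.Entity.Type.LOCATION}

def disallowed_words(text):
    """Words Shakespeare never used that are also not city/country names."""
    allowed_places = place_names(text)
    words = {w.strip(".,!?;:'\"").lower() for w in text.split()}
    return sorted(w for w in words if w and w not in VOCAB and w not in allowed_places)
```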
Our website is live and can be found [here](http://willushakespeak.com/)! | ## What it does
Eloquent has two primary functions, both influenced by a connection between speaking and learning
The first is a public speaking coach, to help people practice their speeches. Users can import a speech or opt to ad-lib — the app will then listen to the user speak. When they finish, the app will present a variety of feedback: whether or not the user talked to fast, how many filler words they used, the informality of their language, etc. The user can take this feedback and continue to practice their speech, eventually perfecting it.
The second is a study tool, inspired by a philosophy that teaching promotes learning. Users can import Quizlet flashcard sets — the app then uses those flashcards to prompt the user, asking them to explain a topic or idea from the set. The app listens to the user's response, and determines whether or not the answer was satisfactory. If it was, the user can move on to the next question; but if it wasn't, the app will ask clarifying questions, leading the user towards a more complete answer.
## How we built it
The main technologies we used were Swift and Houndify. Swift, of course, was used to build our iOS app and code its logic. We used Houndify to transcribe the user's speech into text. We also took advantage of Houndify's "client matches" feature to improve accuracy when listening for keywords.
Much of our NLP analysis was custom-built in Swift, without a library. One feature that we used a library for, though, was keyword extraction. For this, we used a library called Reductio, which implements the TextRank algorithm in Swift. Actually, we used a fork of Reductio, since we had to make some small changes to the build-tools version of the library to make it compatible with our app.
Finally, we used a lightweight HTML Parsing and Searching library called Kanna to web-scrape Quizlet data.
## Challenges we ran into
I (Charlie) found it quite difficult to work on an iOS app, since I do not have a Mac. Coding in Swift without a Mac proved to be a challenge, since many powerful Swift libraries and tools are exclusive to Apple systems. This issue was partially alleviated by the decision to do most of the NLP analysis from the ground up, without an NLP library — in some cases though, coding without the ability to debug on my own machine was unavoidable.
We also had some difficulties with the Houndify API, but the #houndify Slack channel proved very useful. We ended up having to use some custom animations instead of Houndify's built-in one, but in the end, we solved all functionality issues. | ## Inspiration
Our inspiration for this project came from the issue we had in classrooms where many students would ask the same questions in slightly different ways, causing the teacher to use up valuable time addressing these questions instead of more pertinent and different ones.
Also, we felt that the bag-of-words embedding used to vectorize sentences does not make use of sentence characteristics optimally, so we decided to create our own structure in order to represent a sentence more efficiently.
## Overview
Our application allows students to submit questions onto a website which then determines whether this question is either:
1. The same as another question that was previously asked
2. The same topic as another question that was previously asked
3. A different topic entirely
The application does this by using the model proposed by the paper "Bilateral Multi-Perspective Matching for Natural Language Sentences" by Zhiguo et. al, with a new word structure input which we call "sentence tree" instead of a bag-of-words that outputs a prediction of whether the new question asked falls into one of the above 3 categories.
## Methodology
We built this project by splitting the task into multiple subtasks which could be done in parallel. Two team members worked on the web app while the other two worked on the machine learning model in order to use our expertise efficiently and optimally. In terms of the model aspect, we split the task into getting the paper's code to work and implementing our own word representation, which we then combined into a single model.
## Challenges
Mainly, modifying the approach presented in the paper to suit our needs was challenging. On the web development side, we could not integrate the model into the web app as easily as envisioned, since we had customized our model.
## Accomplishments
We are proud that we were able to get accuracy close to the results provided by the paper, and of developing our own representation of a sentence apart from the classical bag-of-words approach.
Furthermore, we are excited to have created a novel system that eases the pain of classroom instructors a great deal.
## Takeaways
We learned how to implement research papers and improve on the results from these papers. Not only that, we learned more about how to use Tensorflow to create NLP applications and the differences between Tensorflow 1 and 2.
Going further, we also learned how to use the Stanford CoreNLP toolkit. We also learned more about web app design and how to connect a machine learning backend in order to run scripts from user input.
## What's next for AskMe.AI
We plan on finetuning the model to improve its accuracy and to also allow for questions that are multi sentence. Not only that, we plan to streamline our approach so that the tree sentence structure could be seamlessly integrated with other NLP models to replace bag of words and to also fully integrate the website with the backend. | partial |
## Inspiration
During the fall 2021 semester, the friend group made a fun contest to participate in: Finding the chonkiest squirrel on campus.
Now that we are back in quarantine, stuck inside all day with no motivation to do exercise, we wondered if we could make a timer like in the app [Forest](https://www.forestapp.cc/) to motivate us to work out.
Combine the two ideas, and...
## Welcome to Stronk Chonk!
In this game, the user has a mission of the utmost importance: taking care of Mr Chonky, the neighbourhood squirrel!
Time spent working out in real life is converted, in the game, into time spent gathering acorns. Therefore, the more time spent working out, the more acorns are gathered, and the chonkier the squirrel will become, providing it excellent protection for the harsh Canadian winter ahead.
So work out like your life depends on it!

## How we built it
* We made the app using Android Studio
* Images were drawn in Krita
* Communications on Discord
## Challenges we ran into
36 hours is not a lot of time. Originally, the app was supposed to be a game involving a carnival high striker bell. Suffice to say: *we did not have time for this*.
And so, we implemented a basic stopwatch app on Android Studio... Which 3 of us had never used before. There were many headaches, many laughs.
The most challenging bits:
* Pausing the stopwatch: Android's Chronometer does not have a pre-existing pause function
* Layout: We wanted to make it look pretty; *we did not have time to make every page pretty* (but the home page looks very neat)
* Syncing: The buttons were a mess and a half, data across different pages of the app are not synced yet
## Accomplishments that we're proud of
* Making the stopwatch work (thanks Niels!)
* Animating the squirrel
* The splash art
* The art in general (huge props to Angela and Aigiarn)
## What we learned
* Most team members used Android Studio for the first time
* This was half of the team's first hackathon
* Niels and Ojetta are now *annoyingly* familiar with Android's Chronometer function
* Niels and Angela can now navigate the Android Studio Layout functions like pros!
* All team members are now aware they might be somewhat squirrel-obsessed
## What's next for Stronk Chonk
* Syncing data across all pages
* Adding the game element: High Striker Squirrel | ## Inspiration
Cute factor of dogs/cats, also to improve the health of many pets such as larger dogs that can easily become overweight.
## What it does
Reads accelerometer data on collar, converted into steps.
## How I built it
* Arduino Nano
* ADXL345 module
* SPP-C Bluetooth module
* Android Studio for app
## Challenges I ran into
Android studio uses a large amount of RAM space.
Interfacing with the accelerometer was challenging, especially finding an appropriate algorithm with the least delay and lag.
## Accomplishments that I'm proud of
As a prototype, it is a great first development.
## What I learned
Some Android Studio and Java shortcuts/basics.
## What's next for DoogyWalky
Data analysis with steps to convert to calories, and adding a second UI for graphing data weekly and hourly with a SQLite Database. | ## Inspiration and what it does
We were inspired to create an app that builds a customized workout plan for individuals with limited space and time, with the addition of a curated YouTube video, especially with gyms being inaccessible due to COVID-19 restrictions and lockdown. Users have the freedom to pick and choose the duration, the space availability, and the area of focus for improving their strength in certain parts of their body. Part of what makes this app exciting is that it incorporates functions that can psychologically motivate individuals to work out, such as a personalized workout plan, voice memos, transcribed notes, and a tally of successful workouts completed throughout the day. We designed this app to be inviting and simple to use, so that users only have to log in to the app and track their workout daily to start their habit and stick with it in the year of 2022.
## How we built it
We built this project using Visual Studio Code, as it is flexible for modifying the design elements of the app using Flutter. Using Android Studio, we were able to simulate the app upon hot restart and see how it was currently functioning. We used AssemblyAI, Streamlit, and Python to create the YouTube video transcriber.
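The transcription call is roughly the following shape (endpoints per AssemblyAI's v2 REST docs; polling and error handling are simplified):

```python
import time
import requests

API_KEY = "YOUR_ASSEMBLYAI_KEY"          # placeholder key
HEADERS = {"authorization": API_KEY}

def transcribe(audio_url):
    """Submit an audio URL to AssemblyAI and poll until the transcript is ready."""
    job = requests.post(
        "https://api.assemblyai.com/v2/transcript",
        headers=HEADERS,
        json={"audio_url": audio_url},
    ).json()
    while True:
        result = requests.get(
            f"https://api.assemblyai.com/v2/transcript/{job['id']}", headers=HEADERS
        ).json()
        if result["status"] in ("completed", "error"):
            return result.get("text", "")
        time.sleep(3)  # wait before polling again
```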
## Challenges we ran into and accomplishments that we're proud of
The biggest challenge was the lack of app development experience between the 4 of us. We all made our earnest attempt to learn and encounter new things as we set out to ambitiously write an app in one day. By step 1, there were already roadblocks with installing source code editing software and getting the environment ready for writing the app and all its design elements. Then came writing code in Dart, which was made easier by the Flutter demo presented Saturday morning, but did not move past basic element design. We also had difficulties deciding whether to use Next, we were able to get AssemblyAI configured with transcribing YouTube videos, but unfortunately could not get it to work with uploaded MP3 files (voice memos). Mostly the time constraint for our level of experience with something outside of all our depths was very challenging, but we did our best working at all hours and through the night to pull things together and make a presentable prototype.
## What we learned
Initially we built the project using Flutter and Dart, with the option of exporting it to an HTML format or Android Studio. For Android Studio, we attempted to install the necessary plugins and found that it can be quite large and daunting, since we have limited hardware space for exporting the app product. We also learned how to take a Figma interface to HTML, which is simpler, although there is a learning curve to reach the level of proficiency we wanted for a working prototype, especially with sorting the YouTube algorithm to display videos based on the user's input. Due to restricted time, we wanted to show the prototype of our ideas for this app by using Figma.
## What's next for Ounce
Next we hope to further develop our prototype and release the app for thousands of potential users to benefit from | partial |
## Inspiration
Given the increase in mental health awareness, we wanted to focus on therapy treatment tools in order to enhance the effectiveness of therapy. Therapists rely on hand-written notes and personal memory to track emotional progress with their clients, and there is no assistive digital tool for therapists to keep track of clients’ sentiment throughout a session. Therefore, we want to equip therapists with the ability to better analyze raw data and track patient progress over time.
## Our Team
* Vanessa Seto, Systems Design Engineering at the University of Waterloo
* Daniel Wang, CS at the University of Toronto
* Quinnan Gill, Computer Engineering at the University of Pittsburgh
* Sanchit Batra, CS at the University of Buffalo
## What it does
Inkblot is a digital tool to give therapists a second opinion, by performing sentimental analysis on a patient throughout a therapy session. It keeps track of client progress as they attend more therapy sessions, and gives therapists useful data points that aren't usually captured in typical hand-written notes.
Some key features include the ability to scrub across the entire therapy session, allowing the therapist to read the transcript, and look at specific key words associated with certain emotions. Another key feature is the progress tab, that displays past therapy sessions with easy to interpret sentiment data visualizations, to allow therapists to see the overall ups and downs in a patient's visits.
## How we built it
We built the front end using Angular and hosted the web page locally. Given a complex data set, we wanted to present our application in a simple and user-friendly manner. We created a styling and branding template for the application and designed the UI from scratch.
For the back-end we hosted a REST API built using Flask on GCP in order to easily access API's offered by GCP.
Most notably, we took advantage of Google Vision API to perform sentiment analysis and used their speech to text API to transcribe a patient's therapy session.
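As a rough illustration of that pipeline, here is a minimal sketch of transcribing a short clip and scoring its sentiment; the Natural Language client used for the sentiment step is an illustrative stand-in for the exact GCP calls in our backend.

```python
from google.cloud import language_v1, speech

def transcribe_clip(gcs_uri):
    """Transcribe a short session clip stored in Cloud Storage."""
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(language_code="en-US")
    audio = speech.RecognitionAudio(uri=gcs_uri)
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results)

def score_sentiment(text):
    """Return a (score, magnitude) pair for one transcript segment."""
    client = language_v1.LanguageServiceClient()
    doc = language_v1.Document(content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
    sentiment = client.analyze_sentiment(document=doc).document_sentiment
    return sentiment.score, sentiment.magnitude
```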
## Challenges we ran into
* Integrating a chart library in Angular that met our project’s complex data needs
* Working with raw data
* Audio processing and conversions for session video clips
## Accomplishments that we're proud of
* Using GCP in its full effectiveness for our use case, including technologies like Google Cloud Storage, Google Compute VM, Google Cloud Firewall / LoadBalancer, as well as both Vision API and Speech-To-Text
* Implementing the entire front-end from scratch in Angular, with the integration of real-time data
* Great UI Design :)
## What's next for Inkblot
* Database integration: Keeping user data, keeping historical data, user profiles (login)
* Twilio Integration
* HIPAA Compliancy
* Investigate blockchain technology with the help of BlockStack
* Testing the product with professional therapists | ## Inspiration
For this hackathon, we wanted to build something that could have a positive impact on its users. We've all been to university ourselves, and we understood the toll stress took on our minds. Demand for mental health services among youth across universities has increased dramatically in recent years. A Ryerson study of 15 universities across Canada shows that all but one university increased their budget for mental health services. The average increase has been 35 per cent. A major survey of over 25,000 Ontario university students done by the American College Health Association found that there was a 50% increase in anxiety, a 47% increase in depression, and an 86 percent increase in substance abuse since 2009.
This can be attributed to the increasingly competitive job market that doesn’t guarantee you a job if you have a degree, increasing student debt and housing costs, and a weakening Canadian middle class and economy. It can also be attributed to social media, where youth are becoming increasingly digitally connected to environments like Instagram. People on Instagram only share the best, the funniest, and most charming aspects of their lives, while leaving the boring beige stuff like the daily grind out of it. This indirectly perpetuates the false narrative that everything you experience in life should be easy, when in fact, life has its ups and downs.
## What it does
One good way of dealing with overwhelming emotion is to express yourself. Journaling is an often overlooked but very helpful tool because it can help you manage your anxiety by helping you prioritize your problems, fears, and concerns. It can also help you recognize those triggers and learn better ways to control them. This brings us to our application, which firstly lets users privately journal online. We implemented the IBM watson API to automatically analyze the journal entries. Users can receive automated tonal and personality data which can depict if they’re feeling depressed or anxious. It is also key to note that medical practitioners only have access to the results, and not the journal entries themselves. This is powerful because it takes away a common anxiety felt by patients, who are reluctant to take the first step in healing themselves because they may not feel comfortable sharing personal and intimate details up front.
MyndJournal allows users to log on to our site and express themselves freely, exactly as if they were writing a journal. The difference being, every entry in a person's journal is sent to IBM Watson's natural language processing tone-analyzing APIs, which generate a data-driven image of the person's mindset. The results of the API are then rendered into a chart to be displayed to medical practitioners. This way, all the user's personal details/secrets remain completely confidential while providing enough data to counsellors to allow them to take action if needed.
## How we built it
On the back end, all user information is stored in a PostgreSQL users table. Additionally, all journal entry information is stored in a results table. This aggregate data can later be used to detect trends in university lifecycles.
An EJS viewing template engine is used to render the front end.
After user authentication, a submitted journal entry is sent to the back end to be fed asynchronously into all of the IBM Watson language processing APIs. The results are then stored in the results table, associated with a user\_id (one-to-many relationship).
Data is pulled from the database to be serialized and displayed intuitively on the front end.
All data is persisted.
## Challenges we ran into
Rendering the data into a chart that was both visually appealing and provided clear insights.
Storing all API results in the database and creating join tables to pull data out.
## Accomplishments that we're proud of
Building an entire web application within 24 hours. Data is persisted in the database!
## What we learned
IBM Watson API's
ChartJS
Difference between the full tech stack and how everything works together
## What's next for MyndJournal
A key feature we wanted to add to the web app was for it to automatically book appointments with appropriate medical practitioners (like nutritionists or therapists) if the tonal and personality results came back negative. This would streamline the appointment-making process and make it easier for people to have access and gain referrals. Another feature we would have liked to add was for universities to be able to access information on which courses or programs are causing the most problems for the most students, so that policymakers, counsellors, and people in authoritative positions could make proper decisions and allocate resources accordingly.
Funding please | ## 💡 Inspiration 💡
Mental health is a growing concern in today's population, especially in 2023 as we're all adjusting back to civilization again as COVID-19 measures are largely lifted. With Cohere as one of our UofT Hacks X sponsors this weekend, we want to explore the growing application of natural language processing and artificial intelligence to help make mental health services more accessible. One of the main barriers for potential patients seeking mental health services is the negative stigma around therapy -- in particular, admitting our weaknesses, overcoming learned helplessness, and fearing judgement from others. Patients may also find it inconvenient to seek out therapy -- either because appointment waitlists can last several months long, therapy clinics can be quite far, or appointment times may not fit the patient's schedule. By providing an online AI consultant, we can allow users to briefly experience the process of therapy to overcome their aversion in the comfort of their own homes and under complete privacy. We are hoping that after becoming comfortable with the experience, users in need will be encouraged to actively seek mental health services!
## ❓ What it does ❓
This app is a therapy AI that generates reactive responses to the user and remembers previous information not just from the current conversation, but also past conversations with the user. Our AI allows for real-time conversation by using speech-to-text processing technology and then uses text-to-speech technology for a fluent human-like response. At the end of each conversation, the AI therapist generates an appropriate image summarizing the sentiment of the conversation to give users a method to better remember their discussion.
## 🏗️ How we built it 🏗️
We used Flask to make the API endpoints in the back-end to connect with the front-end and also save information for the current user's session, such as username and past conversations, which were stored in a SQL database. We first convert the user's speech to text and then send it to the back-end, where it is processed using Cohere's API, which has been trained on our custom data and the user's past conversations, and the response is sent back. We then use our text-to-speech algorithm for the AI to 'speak' to the user. Once the conversation is done, we use Cohere's API to summarize it into a suitable prompt for the DallE text-to-image API to generate an image summarizing the user's conversation for them to look back at when they want to.
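A rough sketch of the reply endpoint is below; the prompt format and per-user memory handling are simplified assumptions, and the SDK call follows Cohere's generate endpoint of that era rather than our exact code.

```python
import cohere
from flask import Flask, request, jsonify, session

app = Flask(__name__)
app.secret_key = "change-me"                       # needed for per-user session storage
co = cohere.Client("YOUR_COHERE_API_KEY")          # placeholder key

SYSTEM_PROMPT = (
    "You are a warm, supportive therapist. Respond briefly and ask gentle follow-up questions.\n"
)

@app.route("/reply", methods=["POST"])
def reply():
    user_text = request.get_json()["message"]
    history = session.get("history", [])
    history.append(f"Patient: {user_text}")
    prompt = SYSTEM_PROMPT + "\n".join(history) + "\nTherapist:"
    response = co.generate(prompt=prompt, max_tokens=120, temperature=0.7)
    answer = response.generations[0].text.strip()
    history.append(f"Therapist: {answer}")
    session["history"] = history                   # remembered across the conversation
    return jsonify({"reply": answer})
```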
## 🚧 Challenges we ran into 🚧
We faced an issue with implementing a connection from the front-end to the back-end, since we were facing a CORS error while transmitting the data, so we had to properly validate it. Additionally, incorporating the speech-to-text technology was challenging since we had little prior experience, so we had to spend development time learning how to implement it and also how to format the responses properly. Lastly, it was a challenge to train the Cohere response AI properly, since we wanted to verify our training data was free of bias or negativity, and that we were using the results of the Cohere AI model responsibly so that our users will feel safe using our AI therapist application.
## ✅ Accomplishments that we're proud of ✅
We were able to create an AI therapist by creating a self-teaching AI using the Cohere API to train an AI model that integrates seamlessly into our application. It delivers more personalized responses to the user by allowing it to adapt its current responses to users based on the user's conversation history and
making conversations accessible only to that user. We were able to effectively delegate team roles and seamlessly integrate the Cohere model into our application. It was lots of fun combining our existing web development experience with venturing out to a new domain like machine learning to approach a mental health issue using the latest advances in AI technology.
## 🙋♂️ What we learned 🙋♂️
We learned how to be more resourceful when we encountered debugging issues, while balancing the need to make progress on our hackathon project. By exploring every possible solution and documenting our findings clearly and exhaustively, we either increased the chances of solving the issue ourselves, or obtained more targeted help from one of the UofT Hacks X mentors via Discord. Our goal is to learn how to become more independent problem solvers. Initially, our team had trouble deciding on an appropriately scoped, sufficiently original project idea. We learned that our project should be both challenging enough but also buildable within 36 hours, but we did not force ourselves to make our project fit into a particular prize category -- and instead letting our project idea guide which prize category to aim for. Delegating our tasks based on teammates' strengths and choosing teammates with complementary skills was essential for working efficiently.
## 💭 What's next? 💭
To improve our project, we could allow users to customize their AI therapist, such as its accent and pitch or the chat website's color theme to make the AI therapist feel more like a personalized consultant to users. Adding a login page, registration page, password reset page, and enabling user authentication would also enhance the chatbot's security. Next, we could improve our website's user interface and user experience by switching to Material UI to make our website look more modern and professional. | winning |
## Inspiration
Jessica here - I came up with the idea for BusPal out of the expectation that the skill already existed. With my Amazon Echo Dot, I was already doing everything from checking the weather to turning my lights on and off with Amazon skills and routines. The fact that she could not check when my bus to school was going to arrive was surprising at first - until I realized that Amazon and Google have one of the biggest rivalries there is between two tech giants. However, I realized that the combination of Alexa's genuine personality and the powerful location ability of Google Maps would fill a need that I'm sure many people have. That was when the idea for BusPal was born: to be a convenient Alexa skill that will improve my morning routine - and everyone else's.
## What it does
This skill enables Amazon Alexa users to ask Alexa when their bus to a specified location is going to arrive and to text the directions to a phone number - all hands-free.
## How we built it
Through the Amazon Alexa builder, Google API, and AWS.
## Challenges we ran into
We originally wanted to use stdlib; however, with a lack of documentation for the new Alexa technology, the team made an executive decision to migrate to AWS roughly halfway into the hackathon.
## Accomplishments that we're proud of
Completing Phase 1 of the project - giving Alexa the ability to take in a destination, and deliver a bus time, route, and stop to leave for.
## What we learned
We learned how to use AWS, work with Node.js, and how to use Google APIs.
## What's next for Bus Pal
Improve the text ability of the skill, and enable calendar integration. | ## Inspiration
As university students and soon-to-be graduates, we understand the financial strain that comes along with being a student, especially in terms of commuting. Carpooling has been a long-existing method of getting to a desired destination, but there are very few driving platforms that make the experience better for the carpool driver. Additionally, by encouraging more people to choose carpooling as a way of commuting, we hope to work towards more sustainable cities.
## What it does
FaceLyft is a web app that includes features that allow drivers to lock or unlock their car from their device, as well as request payments from riders through facial recognition. Facial recognition is also implemented as an account verification to make signing into your account secure yet effortless.
## How we built it
We used IBM Watson Visual Recognition as a way to recognize users from a live image, after which they can request money from riders in the carpool by taking a picture of them and calling our API that leverages the Interac E-transfer API. We utilized Firebase from the Google Cloud Platform and the SmartCar API to control the car. We built our own API using stdlib which collects information from the Interac, IBM Watson, Firebase and SmartCar APIs.
## Challenges we ran into
IBM Facial Recognition software isn't quite perfect and doesn't always accurately recognize the person in the images we send it. There were also many challenges that came up as we began to integrate several APIs together to build our own API in Standard Library. This was particularly tough when considering the flow for authenticating the SmartCar as it required a redirection of the url.
## Accomplishments that we're proud of
We successfully got all of our APIs to work together! (SmartCar API, Firebase, Watson, StdLib,Google Maps, and our own Standard Library layer). Other tough feats we accomplished was the entire webcam to image to API flow that wasn't trivial to design or implement.
## What's next for FaceLyft
While creating FaceLyft, we created a security API for requesting payment via visual recognition. We believe that this API can be used in so many more scenarios than carpooling and hope we can expand it into different use cases.
>
> *A journey of a thousand miles begins with a single step*
>
>
>
BusBuddy is pulling the curtain back on school buses. Students and parents should have equal access to information to know when and where their buses are arriving, how long it will take to get to school, and be up-to-date on any changes in routes. When we came onboard the project, our highest priorities were efficiency, access, and sustainability.
With our modern version of a solution to the traveling salesman problem, we hope to give students and parents some peace of mind when it comes to school transportation. Not only will BusBuddy make the experience more comfortable, but having reliable information means more parents will opt to save on gas and send their kids by bus.
### Saturday 3PM: Roadblocks, Missteps, Obstacles
>
> *I would walk a thousand miles just to fall down at your door*
>
>
>
No road is without its potholes; our road was no exception to this. Alongside learning curves and getting to know each other, we faced issues with finicky APIs that disagreed with our input data, temperamental CSS margins that refused to anchor where we wanted them, and missing lines of code that we swear we put in. With enough time and bubble tea, we found our critical errors and began to build our vision.
### Saturday 9PM: Finding Our Way
>
> *Just keep swimming, just keep swimming, just keep swimming, swimming, swimming…*
>
>
>
We conceptualized in Figma with asset libraries; we built our front-end in VS Code with HTML, CSS, and Jinja2; we used Flask, Python, SQL databases, and a Google Maps API, alongside the Affinity Propagation Clustering algorithm, to cluster home addresses; and finally, we ran a recursive DFS on a directed weighted graph to optimize a route for bus pickup of all students.
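For anyone curious what the clustering step looks like in practice, here is a minimal sketch (illustrative only - the coordinates, function names, and parameters below are ours, not the production code):

```python
# Group geocoded home addresses into candidate pickup stops with
# Affinity Propagation; each exemplar becomes a bus stop.
import numpy as np
from sklearn.cluster import AffinityPropagation

def cluster_addresses(coords):
    """coords: list of (lat, lon) pairs for student home addresses."""
    X = np.array(coords)
    model = AffinityPropagation(random_state=0).fit(X)
    return model.cluster_centers_, model.labels_   # stop locations, assigned stop per student

stops, labels = cluster_addresses([
    (43.260, -79.920), (43.270, -79.910), (43.255, -79.915),
    (43.200, -79.800), (43.210, -79.810), (43.205, -79.805),
])
print(stops, labels)
```

The exemplars returned here are what the route optimizer would then visit.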
### Sunday 7AM: Summiting the Peak
>
> *Planting a flag at the top*
>
>
>
We achieved our minimum viable product! Given that our expectations were not low, it was no easy feat to climb this mountain.
### Sunday 11AM: Journey’s End
>
> *The journey matters more than the destination*
>
>
>
With a team composed of an 11th grader, a 12th grader, a UWaterloo first year, and a Mac second year, we certainly did not lack in range of experiences to bring to the table. Our biggest asset was having each other as sounding boards to bounce ideas off of. Getting to collaborate with each other certainly broadened our worldviews, especially with each other's anecdotes about school pre-, during, and post-COVID.
### Sunday Onward
>
> *New Horizons*
>
>
>
So what’s next for us? And what’s next for BusBuddy?
Well, we’ll be doing some sleeping. As for BusBuddy, we hope to scale up and turn our application into something that BusBuddy’s students can use for years to come. | winning |
## Inspiration
We wanted to protect our laptops with the power of rubber bands.
## What it does
It shoots rubber bands at aggressive screen lookers.
## How we built it
Willpower and bad code.
## Challenges we ran into
Ourselves.
## Accomplishments that we're proud of
Having something. Honestly.
## What we learned
Never use continuous servos.
## What's next for Rubber Security
IPO | ## Inspiration
Members of our team know multiple people who suffer from permanent or partial paralysis. We wanted to build something that could be fun to develop and use, but at the same time make a real impact in people's everyday lives. We also wanted to make an affordable solution, as most solutions to paralysis cost thousands and are inaccessible. We wanted something that was modular, that we could 3D print, and that we could make open source for others to use.
## What it does and how we built it
The main component is a bionic hand assistant called The PulseGrip. We used an ECG sensor in order to detect electrical signals. When it detects your muscles are trying to close your hand, it uses a servo motor to close your hand around an object (a foam baseball, for example). If it stops detecting a signal (you're no longer trying to close), it loosens your hand back to a natural resting position. Along with this, at all times it sends a signal through WebSockets to our Amazon EC2 server and game. This is stored in a MongoDB database, and using API requests we can communicate between our games, server, and PulseGrip. We can track live motor speed, angles, and whether it's open or closed. Our website is a full-stack application (React styled with Tailwind on the front end, Node.js on the backend). Our website also has games that communicate with the device to test the project and provide entertainment: one tests continuous holding and another tests rapid inputs, and both could be used in recovery as well.
## Challenges we ran into
This project forced us to consider different avenues and work through difficulties. Our main problem was when we fried our EMG sensor, twice! This was a major setback since an EMG sensor was going to be the main detector for the project. We tried calling around the whole city but could not find a new one. We decided to switch paths and use an ECG sensor instead, this is designed for heartbeats but we managed to make it work. This involved wiring our project completely differently and using a very different algorithm. When we thought we were free, our websocket didn't work. We troubleshooted for an hour looking at the Wifi, the device itself and more. Without this, we couldn't send data from the PulseGrip to our server and games. We decided to ask for some mentor's help and reset the device completely, after using different libraries we managed to make it work. These experiences taught us to keep pushing even when we thought we were done, and taught us different ways to think about the same problem.
## Accomplishments that we're proud of
Firstly, just getting the device working was a huge achievement, as we had so many setbacks and times we thought the event was over for us. But we managed to keep going and got to the end, even if it wasn't exactly what we planned or expected. We are also proud of the breadth and depth of our project: we have a physical side with 3D-printed materials, sensors, and complicated algorithms, but we also have a game side, with 2 (questionably original) games that can be used. And they are not just random games, but ones that test the user in 2 different ways that are critical to using the device: short-burst and long-term holding of objects. Lastly, we have a full-stack application that users can use to access the games and see live stats on the device.
## What's next for PulseGrip
* working to improve sensors, adding more games, seeing how we can help people
We think this project had a ton of potential and we can't wait to see what we can do with the ideas learned here.
## Check it out
<https://hacks.pulsegrip.design>
<https://github.com/PulseGrip> | ## Inspiration
Blink.ino
## What's next for Morse Codeverter
Realistically? Destruction. Theoretically? I could use a different output (i.e. audio) to make text on a screen accessible to people with visual impairments, but I'm sure there are better ways to do that.
## Inspiration
Fitness bands track your heart rate, but they do not take any action if anything is wrong. This makes them useless for people like heart disease patients who need them the most.
## What it does
Dr Heart connects data to the appropriate people. Using Smooch and Slack, it notifies doctors, families, and emergency crews when the patient's heart rate falls outside the pre-determined upper and lower bounds, and it enables simple text communication.
## How I built it
Using Microsoft Band's Android API, with Smooch chatting client integrated with Slack.
## Challenges I ran into
Initializing Smooch API properly, connecting to the band, Android build versions, SharedPreferences in Android.
## Accomplishments that I'm proud of
Simple solution for a potentially big problem.
## What I learned
Building Android Application.
## What's next for Dr Heart
* Remote control for doctor
* Accelerometer, barometer, step count integration to eliminate false detection
* Online record database
* Automatic emergency calls | ## Inspiration
Survival from out-of-hospital cardiac arrest remains unacceptably low worldwide, and it is the leading cause of death in developed countries. Sudden cardiac arrest takes more lives than HIV and lung and breast cancer combined in the U.S., where survival from cardiac arrest averages about 6% overall, taking the lives of nearly 350,000 annually. To put it in perspective, that is equivalent to three jumbo jet crashes every single day of the year.
For every minute that passes between collapse and defibrillation, survival rates decrease 7-10%. 95% of cardiac arrest victims die before getting to the hospital, and brain death starts 4 to 6 minutes after the arrest.
Yet survival rates can exceed 50% for victims when immediate and effective cardiopulmonary resuscitation (CPR) is combined with prompt use of a defibrillator. The earlier defibrillation is delivered, the greater the chance of survival. Starting CPR immediately doubles your chance of survival. The difference between the current survival rates and what is possible has given rise to the need for this app - IMpulse.
Cardiac arrest can occur anytime and anywhere, so we need a way to monitor heart rate in realtime without imposing undue burden on the average person. Thus, by integrating with Apple Watch, IMpulse makes heart monitoring instantly available to anyone, without requiring a separate device or purchase.
## What it does
IMpulse is an app that runs continuously on your Apple Watch. It monitors your heart rate, detecting for warning signs of cardiac distress, such as extremely low or extremely high heart rate. If your pulse crosses a certain threshold, IMpulse captures your current geographical location and makes a call to an emergency number (such as 911) to alert them of the situation and share your location so that you can receive rapid medical attention. It also sends SMS alerts to emergency contacts which users can customize through the app.
## How we built it
With newly-available access to HealthKit data, we queried heart sensor data from the Apple Watch in real time. When these data points are above or below certain thresholds, we capture the user's latitude and longitude and make an HTTP request to a Node.js server endpoint (currently deployed to Heroku at <http://cardiacsensor.herokuapp.com>) with this information. The server uses the Google Maps API to convert the latitude and longitude values into a precise street address. The server then makes calls to the Nexmo SMS and Call APIs, which dispatch the information to emergency services such as 911 and other ICE contacts.
## Challenges we ran into
1. There were many challenges testing the app through the XCode iOS simulators. We couldn't find a way to simulate heart sensor data through our laptops. It was also challenging to generate Location data through the simulator.
2. No one on the team had developed in iOS before, so learning Swift was a fun challenge.
3. It was challenging to simulate the circumstances of a cardiac arrest in order to test the app.
4. Producing accurate and precise geolocation data was a challenge and we experimented with several APIs before using the Google Maps API to turn latitude and longitude into a user-friendly, easy-to-understand street address.
## Accomplishments that we're proud of
This was our first PennApps (and for some of us, our first hackathon). We are proud that we finished our project in a ready-to-use, demo-able form. We are also proud that we were able to learn and work with Swift for the first time. Finally, we are proud that we produced a hack that incorporates so many different components (hardware, data queries, Node.js, Call/SMS APIs) and has the potential to save lives and improve overall survival rates for cardiac arrest.
## What's next for IMpulse
Beyond just calling 911, IMpulse hopes to build out an educational component of the app that can instruct bystanders to deliver CPR. Additionally, with the Healthkit data from Apple Watch, IMpulse could expand to interact with a user's pacemaker or implantable cardioverter defibrillator as soon as it detects cardiac distress. Finally, IMpulse could communicate directly with a patient's doctor to deliver realtime heart monitor data. | ## Inspiration
Cardiovascular diseases are the leading cause of death globally. One in five deaths is a heart attack, and performing CPR immediately can greatly improve these odds, yet ambulances may only arrive so fast. This project aims to quickly help those in need by alerting nearby individuals with first-aid training of the incident.
## What it does
In order to provide support at the time of need, our app monitors real-time heart rate and input data on a smartwatch and allows the user to tap a button which sends notifications to nearby first-aid certified members, indicating an emergency. These registered members are trained CPR providers, and the notifications will only be sent to members within a close distance to the person in need of help.
## How we built it + Challenges we ran into
One of the first challenges we ran into was coding on Fitbit OS using Fitbit's documentation, as we all carefully studied the documentation to program the app that would be installed on the watch. Additionally, the platform to test the app came with a simulator that was not equipped to handle API calls like an actual smartwatch, which restricted the things we could test. Sending data from the sensor on the watch to a server took us a long time to figure out, with the console's unhelpful error messages.
Lastly, we chose to implement a MERN stack as it was best suited to work with Fitbit OS, yet we were all new to it. Our team had to learn the entire framework and the libraries we could use within the timespan of this hackathon.
(Oh, and due to our collective lack of knowledge in git, we ended up making 7 copies of the repo!)
## Accomplishments that we're proud of + What we learned
We are all very proud and relieved that we were able to sort out our server issues, and learn the MERN stack in such a tight period of time. We still don't know where Tony went though..
## What's next for PulseSafe
Our idea was not driven by monetary incentive, though we understand that without funding, we cannot scale PulseSafe to a national level. There are still various components of PulseSafe that need to be polished and securely implemented; however, in the far future, one of our primary steps would be to create a watch/band specific to PulseSafe - both for financial gain and so that users do not have to purchase an expensive watch just to gain access to our PulseSafe watch app.
## Inspiration
The failure of a certain project using Leap Motion API motivated us to learn it and use it successfully this time.
## What it does
Our hack records a motion password desired by the user. Then, when the user wishes to open the safe, they repeat the hand motion that is then analyzed and compared to the set password. If it passes the analysis check, the safe unlocks.
## How we built it
We built a cardboard model of our safe and motion input devices using Leap Motion and Arduino technology. To create the lock, we utilized Arduino stepper motors.
## Challenges we ran into
Learning the Leap Motion API and debugging was the toughest challenge to our group. Hot glue dangers and complications also impeded our progress.
## Accomplishments that we're proud of
All of our hardware and software worked to some degree of success. However, we recognize that there is room for improvement and if given the chance to develop this further, we would take it.
## What we learned
The Leap Motion API is more difficult than expected, and communicating between Python programs and Arduino programs is simpler than expected.
## What's next for Toaster Secure
- Wireless Connections
- Sturdier Building Materials
- User-friendly interface
We as a team shared the same interest in knowing more about Machine Learning and its applications. Upon looking at the challenges available, we were immediately drawn to the Innovation Factory and their challenges, and thought of potential projects revolving around that category. We started brainstorming, and went through over a dozen design ideas as to how to implement a solution related to smart cities. By looking at the different information received from the camera data, we landed on the idea of requiring the raw footage itself and using it to look for what we would call a distress signal, in case anyone felt unsafe in their current area.
## What it does
We have set up a signal that, if performed in front of the camera, a machine learning algorithm is able to detect; the system then notifies authorities that they should check out this location, whether to catch a potentially suspicious individual or simply to be present and keep civilians safe.
## How we built it
First, we collected data off the Innovation Factory API and inspected the code carefully to get to know what each part does. After putting the pieces together, we were able to extract video footage from the nearest camera to us. A member of our team ventured off in search of the camera itself to collect different kinds of poses to later be used in training our machine learning model. Eventually, due to compiling issues, we had to scrap the training algorithm we made and went for a similar pre-trained algorithm to accomplish the basics of our project.
## Challenges we ran into
Using the Innovation Factory API; the fact that the cameras are located very far away; the machine learning algorithms unfortunately being an older version that would not compile with our code; and finally the frame rate on the playback of the footage when running the algorithm through it.
## Accomplishments that we are proud of
Ari: Being able to go above and beyond what I learned in school to create a cool project
Donya: Getting to know the basics of how machine learning works
Alok: How to deal with unexpected challenges and look at it as a positive change
Sudhanshu: The interesting scenario of posing in front of a camera while being directed by people recording me from a mile away.
## What I learned
Machine learning basics, Postman, working on different ways to maximize playback time on the footage, and many more major and/or minor things we were able to accomplish this hackathon, all with either no information or incomplete information.
## What's next for Smart City SOS
Hopefully working with the Innovation Factory to grow our project, as well as inspiring individuals with a similar passion or desire to create change.
I was inspired by the sheer size and amount of people/things happening at HackMIT, since I had never been to a college hackathon before. I didn't want to miss out on any of the action, but I also obviously could not be more than one place at once, so I thought: why don't I just make a robot that I can control wirelessly to take my phone and scope out the events? I thought it would also be fun to use the Leap Motion to communicate with the robot so it would be a no touch remote control, which was what I implemented.
## What it does
The robot moves according to certain gestures that the Leap Motion tracks. If you hold your hand flat, the robot goes forward. Index finger extended (only) makes it turn left, and pinky finger extended (only) makes it turn right. When you make a fist, the robot stops.
## How I built it
I used the Hercules robotics kit from Intel, as well as the Intel Edison and Arduino extended boards. These acted as a parallel computer to my own, allowing cross-platform communication. The Intel Edison is connected to my computer through WiFi. The Leap Motion pushes data that it processes through my computer and the Leap Motion API onto a static localhost server run through Flask. The Intel Edison CPU unit then sends continuous GET requests to this server, downloading the output strings and using those as "commands" for its motor functions.
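A minimal sketch of that proxy pattern, with the route name and command vocabulary made up for illustration (this is not the exact code running on my setup):

```python
# Localhost "command proxy": the Leap Motion script POSTs the latest gesture
# command, and the Intel Edison polls the same route with GET requests.
from flask import Flask, request

app = Flask(__name__)
latest_command = "stop"   # forward / left / right / stop

@app.route("/command", methods=["GET", "POST"])
def command():
    global latest_command
    if request.method == "POST":
        latest_command = request.form.get("cmd", "stop")
        return "ok"
    return latest_command   # the Edison reads this string and drives its motors

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```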
## Challenges I ran into
First off, there were several different very specific manuals for building the Hercules, but no comprehensive one, so I just figured out all the wiring and connections on my own. As someone with very little background experience in electronic circuits, this was a difficulty I did not anticipate (as I thought the manual would guide me through it). My second challenge was that I thought that the Intel-Edison already communicated with the computer, hence why it was originally guided by keypresses. However, since the Intel-Edison is its own computer, the keyboard merely acted as a wirelessly connected keyboard instead of the Edison actually retrieving data from the computer. I did not realize there was no communication between the platforms, so setting up a static server as a proxy proved to be the most difficult part of the code by far.
## Accomplishments that I'm proud of
I am proud that I was actually able to build a hack that was very heavy in both hardware and software components, since I usually tend to focus on only one or the other. Additionally, I had never set up a static server and used AJAX for POST/GET requests before, so I'm glad that the "data-transfer" actually worked out.
## What I learned
I learned how to use Flask to set up a static server, how to use AJAX/JSON for data input/output requests, and how to effectively wire an electronics system.
## What's next for Hand Roller
Although the Leap Motion API is a good start, it is not a very extensive API and therefore does not have the best hand mapping functions built in. Next time, I hope to create a wrapper class and define my own gestures, in order to have more accurate tracking for the remote control. | winning |
## Inspiration
Every day, students pay hundreds of dollars for textbooks that they could be getting for lower prices had they spent the time to browse different online stores. This disadvantageous situation forces many students to choose between piracy of textbooks or even going through a course without one. We imagined a way to automate the tedious process of manually searching online stores which offer cheaper prices.
## What it does
Deliber allows users to quickly enter a keyword search or ISBN and find the best prices for books online by consuming book pricing and currency conversion information from several upstream APIs provided by Amazon Web Services, Commission Junction, BooksRun, and Fixer.io.
## How we built it
The backend processing dealing with upstream APIs is done in Go and client-side work is in JavaScript with jQuery. Reverse proxying and serving of our website is done using the Apache web server. The user interface of the site is implemented using Bootstrap.
The Go backend uses many open-source libraries such as the Go Amazon Product API, the Go Fixer.io wrapper, and the Go Validator package.
## Challenges we ran into
Parsing of XML, and to a lesser extent JSON, was a significant challenge that prevented us from using PHP as one of the backend languages. User interface design was also an obstacle in the development of the site.
A setback that befell us in the early stages of our planning was the rejection from the majority of online bookstores that we applied to for API access. Their main reason for rejection was the lack of content on the site since we could not write any code before the competition. We chose to persist in the face of this setback despite the resulting lack of vendors as the future potential of Deliber remained, and remains, high.
## Accomplishments that we're proud of
In 24 hours, we built a practical tool which anybody can use to save money on the internet when buying books. During this short time period, we were able to quickly learn new skills and hone existing ones to tackle the aforementioned challenges we faced.
## What we learned
In facing the challenges we encountered, we learned of the complexity of manipulating data coming from different sources with different schemas; the difficulty of processing this data in PHP in comparison to Go or JavaScript; and the importance of consulting concise resources like the Mozilla Developer Network web documentation. Additionally, the 24-hour time constraint of nwHacks showed us the importance of using open-source libraries to do low-level tasks rather than reinventing the wheel.
## What's next for Deliber
Now that we have a functional site with the required content, we plan to reapply for API access to the online bookstores which previously rejected us. With more vendors comes lower prices for the users of Deliber. Additionally, API access to these vendors is coupled with affiliate status, which is a path towards making Deliber a self-sustaining entity through the commission earned from affiliate links. | ## What it does
InternetVane is an IoT wind/weather vane. Set and calibrate the InternetVane and the arrow will point in the real-time direction of the wind at your location.
First, the arrow is calibrated to the direction of the magnetometer. Then, the GPRS Shield works with Twilio and the Google Maps Geolocation API to retrieve the GPS location of the device. Using the location, the EarthNetworks API delivers real-time wind information to InternetVane. Then, the correct, IRL direction of the wind is calculated. The arrow's orientation is updated and recalculated every 30 seconds to provide the user with the most accurate visualization possible.
## How we built it
We used an Arduino UNO with a GPRS Shield, Twilio functions, the Google Maps Geolocation API, a magnetometer, a stepper motor, and various other hardware.
The elegant, laser-cut enclosure is the cherry on top!
## Challenges we ran into
We were rather unfamiliar with laser cutting and 3d printing. There were many trials and errors to get the perfect, fitted enclosure.
Quite a bit of time was spent deciding how to calibrate the device. The arrow needs to align with the magnetometer before moving to the appropriate direction. Many hardware options were considered before deciding on the specific choice of switch. | Fujifusion is our group's submission for Hack MIT 2018. It is a data-driven application for predicting corporate credit ratings.
## Problem
Scholars and regulators generally agree that credit rating agency failures were at the center of the 2007-08 global financial crisis.
## Solution
* Train a machine learning model to automate the prediction of corporate credit ratings.
* Compare vendor ratings with predicted ratings to identify discrepancies.
* Present this information in a cross-platform application for RBC’s traders and clients.
## Data
Data obtained from RBC Capital Markets consists of 20 features recorded for 27 companies at multiple points in time for a total of 524 samples. Available at <https://github.com/em3057/RBC_CM>
## Analysis
We took two approaches to analyzing the data: a supervised approach to predict corporate credit ratings and an unsupervised approach to try to cluster companies into scoring groups.
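A minimal sketch of the two approaches on a feature matrix of the shape described (524 samples × 20 features); the specific estimators below are illustrative stand-ins, since the write-up does not name the exact models:

```python
# Supervised: predict the encoded credit rating. Unsupervised: look for
# natural scoring groups. Random data stands in for the RBC feature matrix.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

X = np.random.rand(524, 20)               # placeholder for the 20 recorded features
y = np.random.randint(0, 5, size=524)     # placeholder encoded vendor ratings

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))

groups = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)  # scoring groups
```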
## Product
We present a cross-platform application built using Ionic that works with Android, iOS, and PCs. Our platform allows users to view their investments, our predicted credit rating for each company, a vendor rating for each company, and visual cues to outline discrepancies. They can buy and sell stock through our app, while also exploring other companies they would potentially be interested in investing in. | partial |
## Inspiration
Our inspiration for TRACY came from the desire to enhance tennis training through advanced technology. One of our members was a former tennis enthusiast who has always strived to refine their skills. They soon realized that the post-game analysis process took too much time in their busy schedule. We aimed to create a system that not only analyzes gameplay but also provides personalized insights for players to improve their skills.
## What it does and how we built it
TRACY utilizes computer vision algorithms and pre-trained neural networks to analyze tennis footage, tracking player movements, and ball trajectories. The system then employs ChatGPT for AI-driven insights, generating personalized natural language summaries highlighting players' strengths and weaknesses. The output includes dynamic visuals and statistical data using React.js, offering a comprehensive overview and further insights into the player's performance.
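As a rough illustration of how tracked ball positions turn into prompt-ready statistics (the numbers, names, and units here are ours, not TRACY's actual pipeline):

```python
# Turn per-frame ball detections (frame_index, x, y) from the tracker into
# simple speed statistics that can be dropped into a ChatGPT prompt.
import math

def rally_stats(detections, fps=30):
    speeds = []
    for (f0, x0, y0), (f1, x1, y1) in zip(detections, detections[1:]):
        dt = (f1 - f0) / fps
        if dt > 0:
            speeds.append(math.hypot(x1 - x0, y1 - y0) / dt)   # pixels per second
    return {"avg": sum(speeds) / len(speeds), "peak": max(speeds)}

stats = rally_stats([(0, 100, 200), (3, 160, 210), (6, 240, 230)])
prompt = (f"Summarize this player's session: average ball speed "
          f"{stats['avg']:.0f} px/s, peak {stats['peak']:.0f} px/s.")
# `prompt` is the kind of text that would then be sent to ChatGPT for the summary.
```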
## Challenges we ran into
Developing a seamless integration between computer vision, ChatGPT, and real-time video analysis posed several challenges. Ensuring accuracy in 2D ball tracking from a single camera angle, optimizing processing speed, and fine-tuning the algorithm for accurate tracking were key hurdles we overcame during the development process. The depth of the ball became a challenge as we were limited to one camera angle, but we were able to tackle it by using machine learning techniques.
## Accomplishments that we're proud of
We are proud to have successfully created TRACY, a system that brings together state-of-the-art technologies to provide valuable insights to tennis players. Achieving a balance between accuracy, speed, and interpretability was a significant accomplishment for our team.
## What we learned
Through the development of TRACY, we gained valuable insights into the complexities of integrating computer vision with natural language processing. We also enhanced our understanding of the challenges involved in real-time analysis of sports footage and the importance of providing actionable insights to users.
## What's next for TRACY
Looking ahead, we plan to further refine TRACY by incorporating user feedback and expanding the range of insights it can offer. Additionally, we aim to explore potential collaborations with tennis coaches and players to tailor the system to meet the diverse needs of the tennis community. | ## Inspiration
It is nearly a year since the start of the pandemic and going back to normal still feels like a distant dream.
As students, most of our time is spent attending online lectures, reading e-books, listening to music, and playing online games. This forces us to spend immense amounts of time in front of a large monitor, clicking the same monotonous buttons.
Many surveys suggest that this has increased the anxiety levels in the youth.
Basically, we are losing the physical stimulus of reading an actual book in a library, going to an arcade to enjoy games, or playing table tennis with our friends.
## What it does
It does three things:
1) Controls any game using hand/a steering wheel (non-digital) such as Asphalt9
2) Helps you zoom-in, zoom-out, scroll-up, scroll-down only using hand gestures.
3) Helps you browse any music of your choice using voice commands and gesture controls for volume, pause/play, skip, etc.
## How we built it
The three main technologies used in this project are:
1) Python 3
The software suite is built using Python 3 and was initially developed in the Jupyter Notebook IDE.
2) OpenCV
The software uses the OpenCV library in Python to implement most of its gesture recognition and motion analysis tasks.
3) Selenium
Selenium is a web driver that was extensively used to control the web interface interaction component of the software.
## Challenges we ran into
1) Selenium only works with Google Chrome version 81 and is very hard to debug :(
2) Finding the perfect HSV ranges corresponding to different colours was a tedious task and required me to make a special script to make the task easier (see the sketch after this list).
3) Pulling an all-nighter (A coffee does NOT help!)
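Here is a minimal sketch of that HSV-tuning helper from challenge 2, assuming a webcam feed (the trackbar names and defaults are illustrative, not the exact script I wrote):

```python
# Move the trackbars until only the target colour survives in the mask window,
# then read off the HSV range to hard-code into the gesture-detection code.
import cv2

def nothing(_):
    pass

cv2.namedWindow("mask")
for name, maxval in [("H_lo", 179), ("S_lo", 255), ("V_lo", 255),
                     ("H_hi", 179), ("S_hi", 255), ("V_hi", 255)]:
    cv2.createTrackbar(name, "mask", 0, maxval, nothing)

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lo = tuple(cv2.getTrackbarPos(n, "mask") for n in ("H_lo", "S_lo", "V_lo"))
    hi = tuple(cv2.getTrackbarPos(n, "mask") for n in ("H_hi", "S_hi", "V_hi"))
    cv2.imshow("mask", cv2.inRange(hsv, lo, hi))
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```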
## Accomplishments that we're proud of
1) Successfully amalgamated computer vision, speech recognition and web automation to make a suite of software and not just a single software!
## What we learned
1) How to debug selenium efficiently
2) How to use angle geometry for steering a car using computer vision
3) Stabilizing errors in object detection
## What's next for E-Motion
I plan to implement more components in E-Motion that will help to browse the entire computer and make the voice commands more precise by ignoring background noise. | ## Inspiration
The inspiration for the project was thinking back to playing contact sports. Sometimes players would receive head injuries and not undergo a proper concussion examination; a concussion could then go undetected - possibly resulting in cognitive issues in the future.
## What it does
The project is a web page that has a concussion diagnosis algorithm and a concussion diagnosis form. The web page can open the camera window - allowing the user to record video of their pupils. This video undergoes analysis by the concussion algorithm. This compares the radii of the patient's pupils over time. With this data, a concussion diagnosis can be provided. After analysis, a concussion form is provided to the patient to further confirm the diagnosis.
## How we built it
We built the project using OpenCV in Python. The web browser interface was developed using JavaScript in Wix, and the communication between the OpenCV algorithm and the browser is done using Flask.
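A minimal sketch of how pupil radii can be read per frame with OpenCV's Hough circle transform (the parameters below are illustrative, not the exact values our algorithm uses):

```python
# Read a recorded pupil video and collect one radius estimate per frame;
# the concussion algorithm then compares how these radii change over time.
import cv2

def pupil_radii(video_path):
    radii = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.medianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 5)
        circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                                   param1=60, param2=30, minRadius=5, maxRadius=60)
        if circles is not None:
            radii.append(float(circles[0][0][2]))   # radius of the strongest circle
    cap.release()
    return radii
```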
## Challenges we ran into
We ran into a challenge with sending data between Flask and our hosted file. The file would refresh before receiving the Flask data, causing us to be unable to use our calculated concussion variable.
## Accomplishments that we're proud of
We are proud of developing a functional front end and a functional algorithm. We are also proud that we were able to use OpenCV to locate pupils and read their sizes.
## What we learned
We learned how to work with OpenCV, Wix, Flask, and JavaScript. Although some of these skills are still very rudimentary, we have a foundation using frameworks and languages we had never used before.
## What's next for ConcussionMD
Next, we will try to improve our Flask back end to ensure users can upload their own videos, as right now this functionality is not fully implemented. We will also try to implement an option for users who are found to be concussed to see nearby hospitals they can go to.
## What it does
Paste in a text and it will identify the key scenes before turning it into a narrated movie. Favourite book, historical battle, or rant about work. Anything and everything, if you can read it, Lucid.ai can dream it.
## How we built it
Once you hit generate on the home UI, our frontend sends your text and video preferences to the backend, which uses our custom algorithm to cut up the text into key scenes. The backend then uses multithreading to make three simultaneous API calls. First, a call to GPT-3 to condense the chunks into image prompts to be fed into a Stable Diffusion/Deforum AI image generation model. Second, a sentiment keyword analysis using GPT-3, which is then fed to the YouTube API for a fitting background song. Finally, a call to TortoiseTTS generates a convincing narration of your text. Collected back at the frontend, you end up with a movie, all from a simple text.
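A minimal sketch of that fan-out step using a thread pool; the worker functions are stubs standing in for the real GPT-3/Deforum, YouTube, and TortoiseTTS calls:

```python
# Run the three generation tasks in parallel threads and collect the results.
from concurrent.futures import ThreadPoolExecutor

def generate_images(scenes):   return [f"frame for: {s}" for s in scenes]   # stub
def find_soundtrack(scenes):   return "background-track-url"                # stub
def narrate(text):             return "narration.wav"                       # stub

def build_movie(text, scenes):
    with ThreadPoolExecutor(max_workers=3) as pool:
        images = pool.submit(generate_images, scenes)
        music  = pool.submit(find_soundtrack, scenes)
        audio  = pool.submit(narrate, text)
        return images.result(), music.result(), audio.result()

print(build_movie("Once upon a time...", ["a castle at dawn", "a dragon over the sea"]))
```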
## Challenges we ran into
Our main challenge was computing power. With no access to industry-grade GPU power, we were limited to running our models on personal laptop GPUs. External computing power also limited payload sizes, forcing us to find roundabout ways to communicate our data to the front-end.
## Accomplishments that we're proud of
* Extremely resilient commitment to the project, despite repeated technical setbacks
* Fast on-our-feet thinking when things don't go to plan
* A well-laid out front-end development plan
## What we learned
* AWS S3 Cloud Storage
* TortoiseTTS
* How to dockerize a large open-source codebase
## What's next for Lucid.ai
* More complex camera motions beyond simple panning
* More frequent frame generation
* Real-time frame generation alongside video watching
* Parallel cloud computing to handle rendering at faster speeds | ## Inspiration
Ever wish you didn’t need to purchase a stylus to handwrite your digital notes? Everyone, at some point, has been without free hands to touch their keyboard. Whether you are a student learning to type or a parent juggling many tasks, sometimes a keyboard and stylus are not accessible. We believe the future of technology won’t even require touching anything in order to take notes. HoverTouch utilizes touchless drawings and converts your (finger)written notes to typed text! We also have a text-to-speech function that is Google adjacent.
## What it does
Using your index finger as a touchless stylus, you can write new words and undo previous strokes, similar to features on popular note-taking apps like Goodnotes and OneNote. As a result, users can eat a slice of pizza or hold another device in hand while achieving their goal. HoverTouch tackles efficiency, convenience, and retention all in one.
## How we built it
Our pre-trained model from MediaPipe works in tandem with an Arduino Nano, flex sensors, and resistors to track your index finger’s drawings. Once complete, you can tap your pinky to your thumb and HoverTouch captures a screenshot of your notes as a JPG. Afterward, the JPG undergoes a masking process where it is converted to a black and white picture: the blue ink (from the user’s pen strokes) becomes black, and all other components of the screenshot, such as the background, become white. With our game-changing Google Cloud Vision API, custom ML model, and Vertex AI Vision, the image is read and your handwriting is converted to text to be displayed on our web browser application.
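A minimal sketch of the masking step described above (the HSV range for the "blue ink" is illustrative, not the exact values we tuned):

```python
# Keep only the blue ink strokes from the screenshot, render them as black on
# white, and save the result as the image that gets sent for text detection.
import cv2
import numpy as np

def mask_ink(screenshot_path, out_path="masked.jpg"):
    img = cv2.imread(screenshot_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    blue = cv2.inRange(hsv, (100, 80, 80), (140, 255, 255))   # rough blue range
    page = np.full(img.shape[:2], 255, dtype=np.uint8)        # white background
    page[blue > 0] = 0                                        # ink becomes black
    cv2.imwrite(out_path, page)
    return out_path   # this file is what the Cloud Vision OCR step consumes
```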
## Challenges we ran into
Given that this was our first hackathon, we had to make many decisions regarding feasibility of our ideas and researching ways to implement them. In addition, this entire event has been an ongoing learning process where we have felt so many emotions — confusion, frustration, and excitement. This truly tested our grit but we persevered by uplifting one another’s spirits, recognizing our strengths, and helping each other out wherever we could.
One challenge we faced was importing the Google Cloud Vision API. For example, we learned that we were misusing the terminal and our disorganized downloads made it difficult to integrate the software with our backend components. Secondly, while developing the hand tracking system, we struggled with producing functional Python lists. We wanted to make line strokes when the index finger traced thin air, but we eventually transitioned to using dots instead to achieve the same outcome.
## Accomplishments that we're proud of
Ultimately, we are proud to have a working prototype that combines high-level knowledge and a solution with significance to the real world. Imagine how many students, parents, and friends - at home, in the classroom, and in the workplace - could benefit from HoverTouch's hands-free writing technology.
This was the first hackathon for ¾ of our team, so we are thrilled to have undergone a time-bounded competition and all the stages of software development (ideation, designing, prototyping, etc.) toward a final product. We worked with many cutting-edge software and hardware tools despite having zero experience with them before the hackathon.
In terms of technicals, we were able to develop varying thickness of the pen strokes based on the pressure of the index finger. This means you could write in a calligraphy style and it would be translated from image to text in the same manner.
## What we learned
This past weekend we learned that our **collaborative** efforts led to the best outcomes, as our teamwork motivated us to persevere even in the face of adversity. Our continued **curiosity** led to novel ideas and encouraged new ways of thinking given our vastly different skill sets.
## What's next for HoverTouch
In the short term, we would like to develop shape recognition. This is similar to Goodnotes feature where a hand-drawn square or circle automatically corrects to perfection.
In the long term, we want to integrate our software into web-conferencing applications like Zoom. We initially tried to do this using WebRTC, something we were unfamiliar with, but the Zoom SDK had many complexities that were beyond our scope of knowledge and exceeded the amount of time we could spend on this stage.
### [HoverTouch Website](hoverpoggers.tech) | ## Inspiration
We know that reading and writing is an important part of a kid's education, but oftentimes it ends up as a chore, so we wanted to make that process more interactive. Our team focused on this problem to make sure children move forward in their development process, especially during these COVID times.
So we are pushing for a Make-your-Story-book-Adventure, modeling a game with visuals and fill-in-the-blank prompts to fuel a kid's creativity and choices. This is modeled after AI Dungeons and Dragons.
## What it does
We take contextual data and provide users with 3 word choices to create their own adventure, with pictures painted on a web app as they play. Furthermore, additional prompts and storylines are created based on the user's previous choices.
## How we built it
We are building it by using natural language processing neural networks in the form of LSTMs (long short-term memory) and recurrent neural networks that have feedback as well as feed-forward layers.
Additionally, we have two layers: one that has a ReLU activation with an embedded linear function, and another that has a tanh activation function with a corresponding linear function.
Finally, based on these technologies, we were able to output 3-4 predicted scenarios based on the user's input words or sentences, which we treated as input features/context.
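A minimal sketch of the kind of network described, written in PyTorch for brevity (layer sizes are illustrative and this is not our exact training code):

```python
# An embedding feeds an LSTM, followed by a ReLU-activated linear layer and a
# tanh-activated linear layer before projecting to the vocabulary.
import torch
import torch.nn as nn

class StoryModel(nn.Module):
    def __init__(self, vocab_size, embed=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.fc_relu = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.fc_tanh = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh())
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):
        h, _ = self.lstm(self.embed(tokens))
        return self.out(self.fc_tanh(self.fc_relu(h[:, -1])))   # next-token logits

logits = StoryModel(vocab_size=5000)(torch.randint(0, 5000, (1, 12)))
choices = torch.topk(logits, k=3).indices   # 3 candidate continuations to offer the player
```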
## Challenges we ran into
Training on large storybook text files took a long time with minimal improvements in accuracy.
Hosting on Google Cloud with the Functions API made it difficult to hold and train models to ensure they work reliably and continually improve over time.
## Accomplishments that we're proud of
The accomplishment we are proud of was the fact that we were able to generate cohesive sentences and phrases out of several different storybook datasets through careful experimentation with model parameters as well as algorithmic parsing/design of the machine learning networks.
## What we learned
We learned that it was truly difficult to form sentences that were intriguing and storybook-like, and that there needs to be planning when using different GPU and TensorFlow cores.
## What's next for Interactive Stories
We would like to be able to train on more datasets in a much more organized and open-source manner, with multiple forms of media including audio.
## Inspiration
Our team firmly believes that a hackathon is the perfect opportunity to learn technical skills while having fun. Especially because of the hardware focus that MakeUofT provides, we decided to create a game! This electrifying project puts two players' minds to the test, working together to solve various puzzles. Taking heavy inspiration from the hit videogame "Keep Talking and Nobody Explodes", what better way to engage the players than to have them defuse a bomb!
## What it does
The MakeUofT 2024 Explosive Game includes 4 modules that must be disarmed; each module is discrete and can be disarmed in any order. The modules the explosive includes are: a "cut the wire" game where the wires must be cut in the correct order; a "press the button" module where different actions must be taken depending on the given text and LED colour; an 8 by 8 "invisible maze" where players must cooperate in order to navigate to the end; and finally a needy module which requires players to stay vigilant and ensure that the bomb does not leak too much "electronic discharge".
## How we built it
**The Explosive**
The explosive defuser simulation is a modular game crafted using four distinct modules, built using the Qualcomm Arduino Due Kit, LED matrices, Keypads, Mini OLEDs, and various microcontroller components. The structure of the explosive device is assembled using foam board and 3D printed plates.
**The Code**
Our explosive defuser simulation tool is programmed entirely within the Arduino IDE. We utilized the Adafruit BusIO, Adafruit GFX Library, Adafruit SSD1306, Membrane Switch Module, and MAX7219 LED Dot Matrix Module libraries. Built separately, our modules were integrated under a unified framework, showcasing a fun-to-play defusal simulation.
Using the Grove LCD RGB Backlight Library, we programmed the screens for our explosive defuser simulation modules (Capacitor Discharge and the Button). This library was also used for startup time measurements, facilitating timing-based events, and communicating with displays and sensors over the I2C protocol.
The MAX7219 IC is a serial input/output common-cathode display driver that interfaces microprocessors to 64 individual LEDs. Using the MAX7219 LED Dot Matrix Module, we were able to optimize our maze module, controlling all 64 LEDs individually using only 3 pins. Using the Keypad library and the Membrane Switch Module, we used the keypad as a matrix keypad to control the movement of the LEDs on the 8 by 8 matrix. This module further optimizes the maze hardware, minimizing the required wiring and improving signal communication.
## Challenges we ran into
Participating in the biggest hardware hackathon in Canada, using all the various hardware components provided, such as the keypads or OLED displays, posed challenges in terms of wiring and compatibility with the parent code. This forced us to adapt and utilize components that better suited our needs, as well as to be flexible with the hardware provided.
Each of our members designed a module for the puzzle, requiring coordination of the functionalities within the Arduino framework while maintaining modularity and reusability of our code and pins. Therefore, optimizing software and hardware for efficient resource usage was necessary, and it remained a challenge throughout the development process.
Another issue we faced when dealing with a hardware hack was the noise caused by the system; to counteract this, we had to come up with the unique solutions mentioned below:
## Accomplishments that we're proud of
During the Makeathon we often faced the issue of buttons creating noise, and often times the noise it would create would disrupt the entire system. To counteract this issue we had to discover creative solutions that did not use buttons to get around the noise. For example, instead of using four buttons to determine the movement of the defuser in the maze, our teammate Brian discovered a way to implement the keypad as the movement controller, which both controlled noise in the system and minimized the number of pins we required for the module.
## What we learned
* Familiarity with the functionalities of new Arduino components like the "Micro-OLED Display," "8 by 8 LED matrix," and "Keypad" is gained through the development of individual modules.
* Efficient time management is essential for successfully completing the design. Establishing a precise timeline for the workflow aids in maintaining organization and ensuring successful development.
* Enhancing overall group performance is achieved by assigning individual tasks.
## What's next for Keep Hacking and Nobody Codes
* Ensure the elimination of any unwanted noises in the wiring between the main board and game modules.
* Expand the range of modules by developing additional games such as "Morse-Code Game," "Memory Game," and others to offer more variety for players.
* Release the game to a wider audience, allowing more people to enjoy and play it. | ## Inspiration
We really are passionate about hardware; however, many hackers in the community, especially those studying software-focused degrees, miss out on the experience of working on projects involving hardware and on experience in vertical integration.
To remedy this, we came up with modware. Modware provides the toolkit for software-focused developers to branch out into hardware and/or to add some verticality to their current software stack with easy-to-integrate hardware interactions and displays.
## What it does
The modware toolkit is a baseboard that interfaces with different common hardware modules through magnetic power and data connection lines as they are placed onto the baseboard.
Once modules are placed on the board and are detected, the user then has three options with the modules: to create a "wired" connection between an input type module (LCD Screen) and an output type module (knob), to push a POST request to any user-provided URL, or to request a GET request to pull information from any user-provided URL.
These three functionalities together allow a software-focused developer to create their own hardware interactions without ever touching the tedious aspects of hardware (easy hardware prototyping), to use different modules to interact with software applications they have already built (easy hardware interface prototyping), and to use different modules to create a physical representation of events/data from software applications they have already built (easy hardware interface prototyping).
## How we built it
Modware is a very large project with a very big stack: ranging from a fullstack web application with a server and database, to a desktop application performing graph traversal optimization algorithms, all the way down to sending I2C signals and reading analog voltage.
We had to handle the communication protocols between all the levels of modware very carefully. One of the interesting points of communication is using neodymium magnets to conduct power and data for all of the modules to a central microcontroller. Location data is also kept track of as well using a 9-stage voltage divider, a series circuit going through all 9 locations on the modware baseboard.
All of the data gathered at the central microcontroller is then sent to a local database over wifi to be accessed by the desktop application. Here the desktop application uses case analysis to solve the NP-hard problem of creating optimal wire connections, with proper geometry and distance rendering, as new connections are created, destroyed, and modified by the user. The desktop application also handles all of the API communications logic.
The local database is also synced with a database up in the cloud on Heroku, which uses the gathered information to wrap up APIs in order for the modware hardware to be able to communicate with any software that a user may write both in providing data as well as receiving data.
## Challenges we ran into
The neodymium magnets that we used were plated in nickel, a highly conductive material. However, magnets will lose their magnetism when exposed to high heat, and neodymium magnets are no different. So we had to be extremely careful to solder everything correctly on the first try so as to not waste the magnetism in our magnets. These magnets also proved very difficult to actually get solid data, power, and voltage readings across, due to minute differences in laser-cut holes, glue residues, etc. We had to make both hardware and software changes to make sure that the connections behaved ideally.
## Accomplishments that we're proud of
We are proud that we were able to build and integrate such a huge end-to-end project. We also ended up with a fairly robust magnetic interface system by the end of the project, allowing for single or double sized modules of both input and output types to easily interact with the central microcontroller.
## What's next for ModWare
More modules! | ## Inspiration
Since the beginning of the pandemic, social isolation has been a problem for many people. To tackle this, a lot of people decided that it was time for a change and found the most joyful way to fight loneliness: they got dogs. Having these loyal creatures, full of love and energy, always ready to play with you, give you kisses, and bring you their favorite toys, has made quarantine a lot more bearable for many.
However, we know that with great fun comes great responsibility. Dogs need to have playdates and socialize, need to be groomed, and go to regular checkups with their vets, sometimes they need special training for different environments, and, umm, we will eventually need to find partners for them :)
That's how we came up with the idea for Corgeous ("corgi" + "gorgeous"), a mobile app that provides a centralized experience for anything and everything dog-related, with a focus on matching dogs for playdates and mating.
## What it does
Using the application, a user can create an account and register as many dogs as they have. For each dog, they can add the dog’s picture, name, breed, sex, age, bio and they can select whether they are looking for a playdate partner or a mate.
After logging in to the application, a user can see other dog profiles which have been recommended by compatibility and proximity. Based on this, the user can decide if the dog is a good match or not and swipe left/right accordingly. After matching with a dog, we have a chat feature where you can contact the owner and set the time/date/place for a meeting.
The application also has a calendar feature where you can track all your upcoming playdates, dog training session dates, grooming appointments, and more.
Additionally, a user can use our Dog Map feature, where they can find the closest dog parks, dog trainers, vet clinics, pet stores, and all the dogs they got matched with!
## Challenges we ran into
We believe that the most challenging part was the amount of time we had. As we started coding, we wanted to add more and more features but unfortunately, because of the lack of time, we had to limit the scope of the application.
## Accomplishments that we're proud of
Hmm, we would probably say our design and branding. Since we both have much more experience with backend and data, we usually don’t get a chance to work on designing UIs and frontend. We are super proud that, during this short time, we learned React Native from the ground up and created a fully functional application!
## What we learned
We primarily learned React Native, including how it works, the various component libraries available online, the process of debugging, and testing our application on various platforms. We also had to learn to make decisions quickly so we don’t waste a second of this short time!
## What's next for Corgeous
We believe there are many features that we can add to our application to add functionality for the user and make the experience even easier:
* We can track the dates and the amount of dog food the user purchases and then notify them when it’s time to stock up! Of course, we’ll give them the closest pet store locations too! : )
* As for the algorithm side, we believe that the friend of my friend is my friend : ) So we want to keep track of all the successful playdates and if let’s say Marley and Max had so much fun playing with each other and Max and Bailey are basically each other’s Chandler and Joey, we are gonna suggest a meeting of Marley and Bailey!
* We also think that Corgeous will be a perfect platform for small local dog-related business owners to advertise their product/service.
* We also think about adding a tips section to educate and help beginner dog owners.
* But most importantly, we are going to try to keep our application *Corgeous* : ) | winning |
## Inspiration
One day Saaz was sitting at home thinking about his fitness goals and his diet. Looking in his fridge, he realized that, on days when his fridge was only filled with leftovers and leftover ingredients, it was very difficult for him to figure out what he could make that followed his nutrition goals. This dilemma is something Saaz and others like him often encounter, and so we created SmartPalate to solve it.
## What it does
SmartPalate uses AI to scan your fridge and pantry for all the ingredients you have at your disposal. It then comes up with multiple recipes that you can make with those ingredients. Not only can the user view step-by-step instructions on how to make these recipes, but also, by adjusting the nutrition information of the recipe using sliders, SmartPalate caters the recipe to the user's fitness goals without compromising the overall taste of the food.
## How we built it
The scanning and categorization of different food items in the fridge and pantry is done using YOLOv5, a single-shot detection convolutional neural network. These food items are sent as a list of ingredients into the Spoonacular API, which matches the ingredients to recipes that contain them. We then used a modified natural language processing model to split the recipe into 4 distinct parts: the meats, the carbs, the flavoring, and the vegetables. Once the recipe is split, we use the same NLP model to categorize our ingredients into whichever part they are used in, as well as to give us a rough estimate on the amount of ingredients used in 1 serving. Then, using the Spoonacular API and the estimated amount of ingredients used in 1 serving, we calculate the nutrition information for 1 serving of each part. Because the amount of each part can be increased or decreased without compromising the taste of the overall recipe, we are then able to use a Bayesian optimization algorithm to quickly adjust the number of servings of each part (and the overall nutrition of the meal) to meet the user's nutritional demands. User interaction with the backend is done with a cleanly built front end made with a React TypeScript stack through Flask.
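To make the serving-adjustment step concrete, here is a minimal sketch of the objective being optimized; a brute-force grid search stands in for the Bayesian optimizer, and all macro numbers are illustrative:

```python
# Each recipe "part" (meat, carbs, flavoring, vegetables) has per-serving macros;
# we search for serving counts whose totals best match the user's targets.
import itertools
import numpy as np

per_serving = np.array([   # calories, protein (g), carbs (g) per serving - illustrative
    [250, 30,  0],   # meat
    [200,  4, 45],   # carbs
    [ 50,  1,  5],   # flavoring
    [ 40,  3,  8],   # vegetables
])
target = np.array([700, 45, 60])

def error(servings):
    return float(np.sum((per_serving.T @ np.array(servings) - target) ** 2))

grid = np.arange(0.5, 3.1, 0.5)
best = min(itertools.product(grid, repeat=4), key=error)
print("servings per part:", best, "squared error:", error(best))
```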
## Challenges we ran into
One of the biggest challenges was identifying the subgroups in every meal(the meats, the vegetables, the carbs, and the seasonings/sauces). After trying multiple methods such as clustering, we settled on an approach that uses a state-of-the-art natural language model to identify the groups.
## Accomplishments that we're proud of
We are proud of the fact that you can scan your fridge with your phone instead of typing in individual items, allowing for a much easier user experience. Additionally, we are proud of the algorithm that we created to help users adjust the nutrition levels of their meals without compromising the overall taste of the meals.
## What we learned
Using our NLP model taught us just how unstable NLP is, and it showed us the importance of good prompt engineering. We also learned a great deal from our struggle to integrate the different parts of our project together, which required a lot of communication and careful code design.
## What's next for SmartPalate
We plan to allow users to rate and review the different recipes that they create. Additionally, we plan to add a social component to SmartPalate that allows people to share the nutritionally customized recipes that they created. | ## Inspiration
Let’s face it: getting your groceries is hard. As students, we’re constantly looking for food that is healthy, convenient, and cheap. With so many choices for where to shop and what to get, it’s hard to find the best way to get your groceries. Our product makes it easy. It helps you plan your grocery trip, helping you save time and money.
## What it does
Our product takes your list of grocery items and searches an automatically generated database of deals and prices at numerous stores. We collect this data by collecting prices from both grocery store websites directly as well as couponing websites.
We show you the best way to purchase items from stores nearby your postal code, choosing the best deals per item, and algorithmically determining a fast way to make your grocery run to these stores. We help you shorten your grocery trip by allowing you to filter which stores you want to visit, and suggesting ways to balance trip time with savings. This helps you reach a balance that is fast and affordable.
For your convenience, we offer an alternative option where you could get your grocery orders delivered from several different stores by ordering online.
Finally, as a bonus, we offer AI generated suggestions for recipes you can cook, because you might not know exactly what you want right away. Also, as students, it is incredibly helpful to have a thorough recipe ready to go right away.
## How we built it
On the frontend, we used **JavaScript** with **React, Vite, and TailwindCSS**. On the backend, we made a server using **Python and FastAPI**.
In order to collect grocery information quickly and accurately, we used **Cloudscraper** (Python) and **Puppeteer** (Node.js). We processed data using handcrafted text searching. To find the items that most relate to what the user desires, we experimented with **Cohere's semantic search**, but found that an implementation of the **Levenshtein distance string algorithm** works best for this case, largely since the user only provides one to two-word grocery item entries.
To determine the best travel paths, we combined the **Google Maps API** with our own path-finding code. We determine the path using a **greedy algorithm**. This algorithm, though heuristic in nature, still gives us a reasonably accurate result without exhausting resources and time on simulating many different possibilities.
To process user payments, we used the **Paybilt API** to accept Interac E-transfers. Sometimes, it is more convenient for us to just have the items delivered than to go out and buy it ourselves.
To provide automatically generated recipes, we used **OpenAI’s GPT API**.
## Challenges we ran into
Everything.
Firstly, as Waterloo students, we are facing midterms next week. Throughout this weekend, it has been essential to balance working on our project with our mental health, rest, and last-minute study.
Collaborating in a team of four was a challenge. We had to decide on a project idea, scope, and expectations, and get working on it immediately. Maximizing our productivity was difficult when some tasks depended on others. We also faced a number of challenges with merging our Git commits; we tended to overwrite one anothers’ code, and bugs resulted. We all had to learn new technologies, techniques, and ideas to make it all happen.
Of course, we also faced a fair number of technical roadblocks working with code and APIs. However, with reading documentation, speaking with sponsors/mentors, and admittedly a few workarounds, we solved them.
## Accomplishments that we’re proud of
We felt that we put forth a very effective prototype given the time and resource constraints. This is an app that we ourselves can start using right away for our own purposes.
## What we learned
Perhaps most of our learning came from the process of going from an idea to a fully working prototype. We learned to work efficiently even when we didn’t know what we were doing, or when we were tired at 2 am. We had to develop a team dynamic in less than two days, understanding how best to communicate and work together quickly, resolving our literal and metaphorical merge conflicts. We persisted towards our goal, and we were successful.
Additionally, we were able to learn about technologies in software development. We incorporated location and map data, web scraping, payments, and large language models into our product.
## What’s next for our project
We’re very proud that, although still rough, our product is functional. We don’t have any specific plans, but we’re considering further work on it. Obviously, we will use it to save time in our own daily lives. | ## Inspiration
The inspiration behind Pantry Palate was driven by a distinct need in today's digital age. With an abundance of food apps flooding the market, many equipped with fridge scanning capabilities, we recognized a gap that needed addressing. While such apps exist, they often fall short in catering to the specific needs of individuals with allergies and dietary restrictions. This realization spurred us to develop an inclusive solution that caters to everyone's unique dietary requirements. Our goal is to provide a comprehensive app that not only simplifies food management through fridge scanning but also incorporates essential elements to ensure individuals can seamlessly navigate their dietary restrictions and preferences. By combining technology with a commitment to inclusivity, our food scanning fridge aims to empower users to make informed and health-conscious choices, revolutionizing the way we interact with our refrigerated contents
## What it does
Pantry Palate is your 3-in-1 personal cook assistant helping you eliminate food waste, via fridge scan, AI chatbot, and comprehensive filters accommodating allergies and dietary restrictions.
## How we built it
Our Team was able to develop this Progressive Web Application using ReactJS in the frontend. The decision to utilise react was to open the doors of component reusability, while using Tailwind to develop interactive and dynamic components throughout the application. To incorporate functionality within the application, the team opted to use Flask. Not only for its efficiency in backend applications, but also chosen based on the team's experience with python. Once both applications were linked, our team shifted the attention to identifying the elements within the image environment. Using the Ultralytics library, our team was able to develop an Artificial Intelligence model and train it using Google Colab and Roboflow database. Finally, to accurately gather the recipes based on the given ingredients, our team decided to use Spoonacular API for its large dataset with accurate recipes.
## Challenges we ran into
There were numerous challenges our team had to overcome to deliver this application.
* For starters, this was the first time anyone in our team wanted to develop an application related to Artificial Intelligence. As such, there was initially a steep learning curve as the team needed to overcome to leverage its full capabilities.
* Another challenge that was faced was initially establishing a link between the frontend and backend. As the team wanted to incorporate flask for its lightweight. However, this is not very standard as ReactJS applications typically use nodeJS and expressJS to perform backend programming and server applications respectively. As such, our team also had to discover how to combine our familiarities with Python and channel it into a ReactJS application.
* However, none of these challenges would compare to what was faced towards the end development of the process. When the team was ready to incorporate the chatbot, an unforeseen merge conflict occurred. Setting our team backwards in progress as they scrambled for 4 hours trying to reassemble their frameworks.
## Accomplishments that we're proud of
We’re quite proud of the accuracy that our program has in detecting the ingredients available and believe it is the key reason why this program will be so effective. Additionally we feel very pleased about the frontend design and how the user experience turned out.
## What we learned
Our team was able to develop a firm understanding of web development principles coupled with backend practices that should be used to develop a Progressive web application. Additionally, our team was also able to learn how to connect frontend applications designs with backend applications using the Flask library. Finally, we learned how to use Artificial Intelligence libraries in python such as Ultralytics to effectively identify all elements within the environment provided. | partial |
## Inspiration
We wanted to help raise awareness for the segregation of different communities in Brampton
## What it does
* It uses analyzes Google reviews and determines legitimacy through NLP and finally outputs the info onto a map
## How we built it
* By running tests through jupyter notebook and using an API to gather requests directly from Google reviews
## Challenges we ran into
* Displaying the heatmap correctly, filtering out invalid responses, obtaining an accurate NLP api
## Accomplishments that we're proud of
* Finally choosing an appropriate map layout, adding the information onto the Excel file as a database, bringing everything together as a final product
## What we learned
* How to work with APIs, display data on graphs using plotly, basic NLP
## What's next for Flower City School Improvements
* Making an interactive website, using better NLP to provide more accurate data as well as address prominent issues using keywords | # Welcome to CBTree!
Side-note: My computer permanently blue screened with about 9 hours left, and I hadn't pushed yet, so I had to start over... Special thanks to Adi and TreeHacks organizers for giving me a new computer!
Cognitive behavioral therapy (CBT) has been shown to be one of the most effective ways to treat depression.
Unfortunately, there is a shortage of CBT trained professionals, and there are often financial barriers to treatment.
CBTree is created as a way for people to experience something similar to CBT using machine learning, NLP, and LLMs.
I trained a classification model using a Kaggle dataset of patient statements annotated with cognitive distortions (negative thought patterns).
CBTree allows you to enter text and helps you identify and challenge these thought patterns. Additionally, LLMs are used to provide a conversational experience.
## Tech Stack
Python (Reflex, Scikit-learn, Pandas, etc)
Together.AI
MonsterAPI
### Cognitive distortions
1. All-or-nothing thinking
This is a kind of polarized thinking. This involves looking at a situation as either black or white or thinking that there are only two possible outcomes to a situation. An example of such thinking is, "If I am not a complete success at my job; then I am a total failure."
2. Overgeneralization
When major conclusions are drawn based on limited information, or some large group is said to have same behavior or property. For example: “one nurse was rude to me, this means all medical staff must be rude.” or “last time I was in the pool I almost drowned, I am a terrible swimmer and should not go into the water again”.
3. Mental filter
A person engaging in filter (or “mental filtering) takes the negative details and magnifies those details while filtering out all positive aspects of a situation. This means: focusing on negatives and ignoring the positives. If signs of either of these are present, then it is marked as mental filter.
4. Should statements
Should statements (“I should pick up after myself more”) appear as a list of ironclad rules about how a person should behave, this could be about the speaker themselves or other. It is NOT necessary that the word ‘should’ or it’s synonyms (ought to, must etc.) be present in the statements containing this distortion. For example: consider the statement – “I don’t have ups and downs like teenagers are supposed to; everything just seems kind of flat with a few dips”, this suggests that the person believes that a teenager should behave in a certain way and they are not conforming to that pattern, this makes it a should statement cognitive distortion.
5. Labeling
Labeling is a cognitive distortion in which people reduce themselves or other people to a single characteristic or descriptor, like “I am a failure.” This can also be a positive descriptor such as “we were perfect”. Note that the tense in these does not always have to be present tense.
6. Personalization
Personalizing or taking up the blame for a situation which is not directly related to the speaker. This could also be assigning the blame to someone who was not responsible for the situation that in reality involved many factors and was out of your/the person’s control. The first entry in the sample is a good example for this.
7. Magnification
Blowing things way out of proportion. For example: “If I don’t pass this test, I would never be successful in my career”. The impact of the situation here is magnified. You exaggerate the importance of your problems and shortcomings, or you minimize the importance of your desirable qualities. Not to be confused with mental filter, you can think of it only as maximizing the importance or impact of a certain thing.
8. Emotional Reasoning
Basically, this distortion can be summed up as - “If I feel that way, it must be true.” Whatever a person is feeling is believed to be true automatically and unconditionally. One of the most common representation of this is some variation of – ‘I feel like a failure so I must be a failure’. It does not always have to be about the speaker themselves, “I feel like he is not being honest with me, he must be hiding something” is also an example of emotional reasoning.
9. Mind Reading
Any evidence of the speaker suspecting what others are thinking or what are the motivations behind their actions. Statements like “they won’t understand”, “they dislike me” suggest mind reading distortion. However, “she said she dislikes me” is not a distortion, but “I think she dislikes me since she ignored me” is again mind reading distortion (since it is based on assumption that you know why someone behaved in a certain way).
10. Fortune-telling
As the name suggests, this distortion is about expecting things to happen a certain way, or assuming that thing will go badly. Counterintuitively, this distortion does not always have future tense, for example: “I was afraid of job interviews so I decided to start my own thing” here the person is speculating that the interview will go badly and they will not get the job and that is why they decided to start their own business. Despite the tense being past, the error in thinking is still fortune-telling.
### How to use
1. Install reflex via pip then do `reflex run` | ## Inspiration
Many hackers cast their vision forward, looking for futuristic solutions for problems in the present. Instead, we cast our eyes backwards in time, looking to find our change in restoration and recreation. We were drawn to the ancient Athenian Agora -- a marketplace; not one where merchants sold goods, but one where thinkers and orators debated, discussed, and deliberated (with one another?) pressing social-political ideas and concerns. The foundation of community engagement in its era, the premise of the Agora survived in one form or another over the years in the various public spaces that have been focal points for communities to come together -- from churches to community centers.
In recent years, however, local community engagement has dwindled with the rise in power of modern technology and the Internet. When you're talking to a friend on the other side of the world, you're not talking a friend on the other side of the street. When you're organising with activists across countries, you're not organising with activists in your neighbourhood. The Internet has been a powerful force internationally, but Agora aims to restore some of the important ideas and institutions that it has left behind -- to make it just as powerful a force locally.
## What it does
Agora uses users' mobile phone's GPS location to determine the neighbourhood or city district they're currently in. With that information, they may enter a chat group specific to that small area. Having logged-on via Facebook, they're identified by their first name and thumbnail. Users can then chat and communicate with one another -- making it easy to plan neighbourhood events and stay involved in your local community.
## How we built it
Agora coordinates a variety of public tools and services (for something...). The application was developed using Android Studio (Java, XML). We began with the Facebook login API, which we used to distinguish and provide some basic information about our users. That led directly into the Google Maps Android API, which was a crucial component of our application. We drew polygons onto the map corresponding to various local neighbourhoods near the user. For the detailed and precise neighbourhood boundary data, we relied on StatsCan's census tracts, exporting the data as a .gml and then parsing it via python. With this completed, we had almost 200 polygons -- easily covering Hamilton and the surrounding areas - and a total of over 50,000 individual vertices. Upon pressing the map within the borders of any neighbourhood, the user will join that area's respective chat group.
## Challenges we ran into
The chat server was our greatest challenge; in particular, large amounts of structural work would need to be implemented on both the client and the server in order to set it up. Unfortunately, the other challenges we faced while developing the Android application diverted attention and delayed process on it. The design of the chat component of the application was also closely tied with our other components as well; such as receiving the channel ID from the map's polygons, and retrieving Facebook-login results to display user identification.
A further challenge, and one generally unexpected, came in synchronizing our work as we each tackled various aspects of a complex project. With little prior experience in Git or Android development, we found ourselves quickly in a sink-or-swim environment; learning about both best practices and dangerous pitfalls. It was demanding, and often-frustrating early on, but paid off immensely as the hack came together and the night went on.
## Accomplishments that we're proud of
1) Building a functioning Android app that incorporated a number of challenging elements.
2) Being able to make something that is really unique and really important. This is an issue that isn't going away and that is at the heart of a lot of social deterioration. Fixing it is key to effective positive social change -- and hopefully this is one step in that direction.
## What we learned
1) Get Git to Get Good. It's incredible how much of a weight of our shoulders it was to not have to worry about file versions or maintenance, given the sprawling size of an Android app. Git handled it all, and I don't think any of us will be working on a project without it again.
## What's next for Agora
First and foremost, the chat service will be fully expanded and polished. The next most obvious next step is towards expansion, which could be easily done via incorporating further census data. StatsCan has data for all of Canada that could be easily extracted, and we could rely on similar data sets from the U.S. Census Bureau to move international. Beyond simply expanding our scope, however, we would also like to add various other methods of engaging with the local community. One example would be temporary chat groups that form around given events -- from arts festivals to protests -- which would be similarly narrow in scope but not constrained to pre-existing neighbourhood definitions. | losing |
## Inspiration 🌟
Our project was inspired by the critical need for early detection of Surgical Site Infections (SSIs) and the rising concern of Methicillin-resistant Staphylococcus aureus (MRSA). We aimed to create a solution that enhances patient safety by enabling timely alerts for potential wound infections.
## What it does 💡
SurgiSafe is a wearable device designed to monitor wounds by detecting moisture, temperature, and pH levels in real time. This device provides crucial data, enabling early alerts for patients and healthcare providers, which can help prevent serious infections.
## How we built it 🔧
We built the device using Arduino for hardware and various components for data processing and sensor integration. We incorporated sensors to measure moisture, temperature, and pH levels, creating a user-friendly interface to display the information. Our goal was to ensure that the device is comfortable and practical for everyday wear.
## Challenges we ran into ⚙️
Throughout the project, we encountered challenges such as ensuring accurate sensor readings and designing a device that was both functional and comfortable. Integrating the hardware components smoothly required some problem-solving, and thorough testing was essential to validate our approach.
## Accomplishments that we're proud of 🎉
* **Working Prototype:** Developed a functional prototype that effectively monitors wound conditions, showcasing our ability to create practical solutions.
* **Skill Development:** Two of our four members were beginners, and their fresh perspectives enhanced our project, proving that anyone can contribute and learn.
* **Sensor Integration:** Successfully navigated challenges with sensor integration, gaining valuable troubleshooting skills along the way.
* **Collaboration:** Leveraged each team member’s strengths to foster open communication and tackle obstacles efficiently.
## What we learned 📚
This experience highlighted the importance of collaboration and adaptability in engineering projects. We gained insights into sensor integration, data processing, and the significance of user-centered design in creating health tech solutions.
## What's next for SurgiSafe 🚀
Looking ahead, we plan to refine our prototype based on user feedback and conduct further testing in clinical environments. We’re excited to enhance the device's accuracy and explore partnerships with healthcare providers for real-world implementation. Our vision includes expanding features to include data analytics and remote monitoring capabilities. | ## Inspiration
Have you ever wanted to listen to music based on how you’re feeling? Now, all you need to do is message MoodyBot a picture of yourself or text your mood, and you can listen to the Spotify playlist MoodyBot provides. Whether you’re feeling sad, happy, or frustrated, MoodyBot can help you find music that suits your mood!
## What it does
MoodyBot is a Cisco Spark Bot linked with Microsoft’s Emotion API and Spotify’s Web API that can detect your mood from a picture or a text. All you have to do is click the Spotify playlist link that MoodyBot sends back.
## How we built it
Using Cisco Spark, we created a chatbot that takes in portraits and gives the user an optimal playlist based on his or her mood. The chatbot itself was implemented on built.io which controls feeding image data through Microsoft’s Emotion API. Microsoft’s API outputs into a small Node.js server in order to compensate for the limited features of built.io. like it’s limitations when importing modules. From the external server we use moods classified by Microsoft’s API to select a Spotify playlist using Spotify’s Web API which is then sent back to the user on Cisco Spark.
## Challenges we ran into
Spotify’s Web API requires a new access token every hour. In the end, we were not able to find a solution to this problem. Our inexperience with Node.js also led to problems with concurrency. We had problems with built.io having limited APIs that hindered our project.
## Accomplishments that we're proud of
We were able to code around the fact that built.io would not encoding our images correctly. Built.io also was not able to implement other solutions to this problem that we tried to use.
## What we learned
Sometimes, the short cut is more work, or it won't work at all. Writing the code ourselves solved all the problems we were having with built.io.
## What's next for MoodyBot
MoodyBot has the potential to have its own app and automatically open the Spotify playlist it suggests. It could also connect over bluetooth to a speaker. | ## Problem Statement
As the number of the elderly population is constantly growing, there is an increasing demand for home care. In fact, the market for safety and security solutions in the healthcare sector is estimated to reach $40.1 billion by 2025.
The elderly, disabled, and vulnerable people face a constant risk of falls and other accidents, especially in environments like hospitals, nursing homes, and home care environments, where they require constant supervision. However, traditional monitoring methods, such as human caregivers or surveillance cameras, are often not enough to provide prompt and effective responses in emergency situations. This potentially has serious consequences, including injury, prolonged recovery, and increased healthcare costs.
## Solution
The proposed app aims to address this problem by providing real-time monitoring and alert system, using a camera and cloud-based machine learning algorithms to detect any signs of injury or danger, and immediately notify designated emergency contacts, such as healthcare professionals, with information about the user's condition and collected personal data.
We believe that the app has the potential to revolutionize the way vulnerable individuals are monitored and protected, by providing a safer and more secure environment in designated institutions.
## Developing Process
Prior to development, our designer used Figma to create a prototype which was used as a reference point when the developers were building the platform in HTML, CSS, and ReactJs.
For the cloud-based machine learning algorithms, we used Computer Vision, Open CV, Numpy, and Flask to train the model on a dataset of various poses and movements and to detect any signs of injury or danger in real time.
Because of limited resources, we decided to use our phones as an analogue to cameras to do the live streams for the real-time monitoring.
## Impact
* **Improved safety:** The real-time monitoring and alert system provided by the app helps to reduce the risk of falls and other accidents, keeping vulnerable individuals safer and reducing the likelihood of serious injury.
* **Faster response time:** The app triggers an alert and sends notifications to designated emergency contacts in case of any danger or injury, which allows for a faster response time and more effective response.
* **Increased efficiency:** Using cloud-based machine learning algorithms and computer vision techniques allow the app to analyze the user's movements and detect any signs of danger without constant human supervision.
* **Better patient care:** In a hospital setting, the app could be used to monitor patients and alert nurses if they are in danger of falling or if their vital signs indicate that they need medical attention. This could lead to improved patient care, reduced medical costs, and faster recovery times.
* **Peace of mind for families and caregivers:** The app provides families and caregivers with peace of mind, knowing that their loved ones are being monitored and protected and that they will be immediately notified in case of any danger or emergency.
## Challenges
One of the biggest challenges have been integrating all the different technologies, such as live streaming and machine learning algorithms, and making sure they worked together seamlessly.
## Successes
The project was a collaborative effort between a designer and developers, which highlights the importance of cross-functional teams in delivering complex technical solutions. Overall, the project was a success and resulted in a cutting-edge solution that can help protect vulnerable individuals.
## Things Learnt
* **Importance of cross-functional teams:** As there were different specialists working on the project, it helped us understand the value of cross-functional teams in addressing complex challenges and delivering successful results.
* **Integrating different technologies:** Our team learned the challenges and importance of integrating different technologies to deliver a seamless and effective solution.
* **Machine learning for health applications:** After doing the research and completing the project, our team learned about the potential and challenges of using machine learning in the healthcare industry, and the steps required to build and deploy a successful machine learning model.
## Future Plans for SafeSpot
* First of all, the usage of the app could be extended to other settings, such as elderly care facilities, schools, kindergartens, or emergency rooms to provide a safer and more secure environment for vulnerable individuals.
* Apart from the web, the platform could also be implemented as a mobile app. In this case scenario, the alert would pop up privately on the user’s phone and notify only people who are given access to it.
* The app could also be integrated with wearable devices, such as fitness trackers, which could provide additional data and context to help determine if the user is in danger or has been injured. | partial |
## Inspiration
In a fast-paced Agile work environment, scrum meetings are organized to maximize efficiency. However, meetings that are supposed to last less than 15 minutes can often drag on for much longer, consequently draining valuable time.
## What it does
Kong is a voice-enabled project management assistant. It acts as the voice-automated scrum master by leading the daily meeting in a concise, efficient and logical manner. Furthermore, it dynamically modifies tasks by connecting to an Agile project management software maintained through the cloud.
## How we built it
Using Google Cloud Natural Languages API, our team built a program that transcribes the users' speech to text, converts text to JSON commands, and finally transforms JSON commands into information that you see on the screen. Ultimately, Kong can filter through the noise to collect the most relevant data to be transformed into instantaneous updates for users enrolled in the project to view on their personal devices.
## Challenges we ran into
Our team had to pivot many times when we realized that certain frameworks we wanted to work with were not compatible for the ideas that we had planned with the time frame we were working with. For example, SOX, the Google Cloud Speech-to-Text API and the Google Cloud Natural Language Syntax API were used instead of DialogFlow.
In addition, every single person purposely decided to tackle new challenges. Individuals who had little experience with coding attempted to learn front-end design through React. Meanwhile, those who have built projects using React in the past looked towards back-end development using Firebase. Moreover, all of our members ventured into voice-enabled technology for the first time.
## Accomplishments that we're proud of
We are most proud of our willingness to take risks and try new challenges. When obstacles were encountered, we relied on each other for expertise outside of our own familiar domain. Challenges and teamwork inspired us to not only improve the product but ourselves as well.
## What we learned
Within these 36 hours, we obtained the fundamental knowledge of new technologies even if they were not all applied n the final product (e.g. CSS, HTML, JavaScript, Express, Python, Google Cloud API, DialogFlow, Twilio, etc.). In addition, we learned how to quickly find reliable and comprehensive resources. More over, we learned to use our network and reach out for help from the right people.
## What's next for Kong
Some possible next steps for Kong is to find a way to export the information through APIs. Additionally, our team could reduce the response time and expand the length of the recording. We may also expand our software to support voice recognition for individual users and delve into integration with personal notification systems (e.g. google calendars). Finally, it should be a long term goal to improve the security and storage of Kong. | ## Inspiration
As software engineers, we constantly seek ways to optimize efficiency and productivity. While we thrive on tackling challenging problems, sometimes we need assistance or a nudge to remember that support is available. Our app assists engineers by monitoring their states and employs Machine Learning to predict their efficiency in resolving issues.
## What it does
Our app leverages LLMs to predict the complexity of GitHub issues based on their title, description, and the stress level of the assigned software engineer. To gauge the stress level, we utilize a machine learning model that examines the developer’s sleep patterns, sourced from TerraAPI. The app provides task completion time estimates and periodically checks in with the developer, suggesting when to seek help. All this is integrated into a visually appealing and responsive front-end that fits effortlessly into a developer's routine.
## How we built it
A range of technologies power our app. The front-end is crafted with Electron and ReactJS, offering compatibility across numerous operating systems. On the backend, we harness the potential of webhooks, Terra API, ChatGPT API, Scikit-learn, Flask, NodeJS, and ExpressJS. The core programming languages deployed include JavaScript, Python, HTML, and CSS.
## Challenges we ran into
Constructing the app was a blend of excitement and hurdles due to the multifaceted issues at hand. Setting up multiple webhooks was essential for real-time model updates, as they depend on current data such as fresh Github issues and health metrics from wearables. Additionally, we ventured into sourcing datasets and crafting machine learning models for predicting an engineer's stress levels and employed natural language processing for issue resolution time estimates.
## Accomplishments that we're proud of
In our journey, we scripted close to 15,000 lines of code and overcame numerous challenges. Our preliminary vision had the front end majorly scripted in JavaScript, HTML, and CSS — a considerable endeavor in contemporary development. The pinnacle of our pride is the realization of our app, all achieved within a 3-day hackathon.
## What we learned
Our team was unfamiliar to one another before the hackathon. Yet, our decision to trust each other paid off as everyone contributed valiantly. We honed our skills in task delegation among the four engineers and encountered and overcame issues previously uncharted for us, like running multiple webhooks and integrating a desktop application with an array of server-side technologies.
## What's next for TBox 16 Pro Max (titanium purple)
The future brims with potential for this project. Our aspirations include introducing real-time stress management using intricate time-series models. User customization options are also on the horizon to enrich our time predictions. And certainly, front-end personalizations, like dark mode and themes, are part of our roadmap. | ## Inspiration
Imagine: A major earthquake hits. Thousands call 911 simultaneously. In the call center, a handful of operators face an impossible task. Every line is ringing. Every second counts. There aren't enough people to answer every call.
This isn't just hypothetical. It's a real risk in today's emergency services. A startling **82% of emergency call centers are understaffed**, pushed to their limits by non-stop demands. During crises, when seconds mean lives, staffing shortages threaten our ability to mitigate emergencies.
## What it does
DispatchAI reimagines emergency response with an empathetic AI-powered system. It leverages advanced technologies to enhance the 911 call experience, providing intelligent, emotion-aware assistance to both callers and dispatchers.
Emergency calls are aggregated onto a single platform, and filtered based on severity. Critical details such as location, time of emergency, and caller's emotions are collected from the live call. These details are leveraged to recommend actions, such as dispatching an ambulance to a scene.
Our **human-in-the-loop-system** enforces control of human operators is always put at the forefront. Dispatchers make the final say on all recommended actions, ensuring that no AI system stands alone.
## How we built it
We developed a comprehensive systems architecture design to visualize the communication flow across different softwares.

We developed DispatchAI using a comprehensive tech stack:
### Frontend:
* Next.js with React for a responsive and dynamic user interface
* TailwindCSS and Shadcn for efficient, customizable styling
* Framer Motion for smooth animations
* Leaflet for interactive maps
### Backend:
* Python for server-side logic
* Twilio for handling calls
* Hume and Hume's EVI for emotion detection and understanding
* Retell for implementing a voice agent
* Google Maps geocoding API and Street View for location services
* Custom-finetuned Mistral model using our proprietary 911 call dataset
* Intel Dev Cloud for model fine-tuning and improved inference
## Challenges we ran into
* Curated a diverse 911 call dataset
* Integrating multiple APIs and services seamlessly
* Fine-tuning the Mistral model to understand and respond appropriately to emergency situations
* Balancing empathy and efficiency in AI responses
## Accomplishments that we're proud of
* Successfully fine-tuned Mistral model for emergency response scenarios
* Developed a custom 911 call dataset for training
* Integrated emotion detection to provide more empathetic responses
## Intel Dev Cloud Hackathon Submission
### Use of Intel Hardware
We fully utilized the Intel Tiber Developer Cloud for our project development and demonstration:
* Leveraged IDC Jupyter Notebooks throughout the development process
* Conducted a live demonstration to the judges directly on the Intel Developer Cloud platform
### Intel AI Tools/Libraries
We extensively integrated Intel's AI tools, particularly IPEX, to optimize our project:
* Utilized Intel® Extension for PyTorch (IPEX) for model optimization
* Achieved a remarkable reduction in inference time from 2 minutes 53 seconds to less than 10 seconds
* This represents a 80% decrease in processing time, showcasing the power of Intel's AI tools
### Innovation
Our project breaks new ground in emergency response technology:
* Developed the first empathetic, AI-powered dispatcher agent
* Designed to support first responders during resource-constrained situations
* Introduces a novel approach to handling emergency calls with AI assistance
### Technical Complexity
* Implemented a fine-tuned Mistral LLM for specialized emergency response with Intel Dev Cloud
* Created a complex backend system integrating Twilio, Hume, Retell, and OpenAI
* Developed real-time call processing capabilities
* Built an interactive operator dashboard for data summarization and oversight
### Design and User Experience
Our design focuses on operational efficiency and user-friendliness:
* Crafted a clean, intuitive UI tailored for experienced operators
* Prioritized comprehensive data visibility for quick decision-making
* Enabled immediate response capabilities for critical situations
* Interactive Operator Map
### Impact
DispatchAI addresses a critical need in emergency services:
* Targets the 82% of understaffed call centers
* Aims to reduce wait times in critical situations (e.g., Oakland's 1+ minute 911 wait times)
* Potential to save lives by ensuring every emergency call is answered promptly
### Bonus Points
* Open-sourced our fine-tuned LLM on HuggingFace with a complete model card
(<https://huggingface.co/spikecodes/ai-911-operator>)
+ And published the training dataset: <https://huggingface.co/datasets/spikecodes/911-call-transcripts>
* Submitted to the Powered By Intel LLM leaderboard (<https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard>)
* Promoted the project on Twitter (X) using #HackwithIntel
(<https://x.com/spikecodes/status/1804826856354725941>)
## What we learned
* How to integrate multiple technologies to create a cohesive, functional system
* The potential of AI to augment and improve critical public services
## What's next for Dispatch AI
* Expand the training dataset with more diverse emergency scenarios
* Collaborate with local emergency services for real-world testing and feedback
* Explore future integration | losing |
## Inspiration
Have you ever wondered if your outfit looks good on you? Have you ever wished you did not have to spend so much time trying on your whole closet, taking a photo of yourself and sending it to your friends for some advice? Have you ever wished you had worn a jacket because it was much windier than you thought? Then MIR will be your new best friend - all problems solved!
## What it does
Stand in front of your mirror. Then ask Alexa for fashion advice. A photo of your outfit will be taken, then analyzed to detect your clothing articles, including their types, colors, and logo (bonus point if you are wearing a YHack t-shirt!). MIR will simply let you know if your outfits look great, or if there are something even better in your closet. Examples of things that MIR takes into account include types and colors of the outfit, current weather, logos, etc.
## How I built it
### Frontend
React Native app for the smart mirror display. Amazon Lambda for controlling an Amazon Echo to process voice commands.
### Backend
Google Cloud Vision for identifying features and colors on a photo. Microsoft Cognitive Services for detecting faces and estimating where clothing would be. Scipy for template matching. Forecast.io for weather information.
Runs on Flask on Amazon EC2.
## Challenges I ran into
* Determining a good way to isolate clothing in an image - vision networks get distracted by things easily.
* React Native is amazing when it does work, but is just a pain when it doesn't.
* Our original method of using Google's Reverse Image Search for matching logos did not work as consistently.
## Accomplishments that I'm proud of
It works!
## What I learned
It can be done!
## What's next for MIR
MIR can be further developed and used in many different ways!
## Another video demo:
<https://youtu.be/CwQPjmIiaMQ> | ## Inspiration
As young adults, we're navigating the new waves of independence and university life, juggling numerous responsibilities and a busy schedule. Amidst the hustle, we often struggle to keep track of everything, including our groceries. It's all too common for food to get pushed to the back of the fridge, only to be rediscovered when it's too late and has gone bad. That’s how we came up with preservia - a personal grocery smart assistant designed to help you save money, reduce food waste, and enjoy fresher meals.
## What it does
**Catalogue food conveniently:** preservia.tech allows grocery shoppers to keep track of their purchased food, ensuring less goes to waste. Users take photos of their receipts and the app will identify the food items bought, estimate reasonable expiry timeframes, and catalogue them within a user-friendly virtual inventory. Users also have the option of directly photographing their grocery items and the app will add them to the database as well, or even manually enter items.
**Inventory:** The user interface offers intuitive control, allowing users to delete items from the inventory at their will once items are used. Users can also request the application to reevaluate expiry dates if they suspect any mistakes in the AI predictions.
**Recipes:** Additionally, users can select food items in their grocery inventory and prompt the application to suggest a recipe based on selected ingredients.
## How we built it
Preservia.tech is built around leveraging Large Language Models (**Cohere**) as flexible databases and answer engines, here to give nuanced answers about expiration, for even the **most specific food!** This allows us to enter any possible food item, and the AI systems will do their best to understand and classify them. The predictive power of Preservia.tech will only expand as LLMs grow.
**OpenAI’s GPT-4** was also used as a flexible system to accurately decipher cryptic and generally unstandardized receipts, a task probably impossible without such models. GPT-4 is also the engine generating recipes.
We employed Google’s **MediaPipe** for food item classification, and converted images to text with **API Ninjas** to read the receipts.
Our app is primarily built on a Python backend for computation, with Flask to handle the web app, and mySQL as a database to track items. The web pages are written in HTML with some CSS and JavaScript.
We can connect it to a smartphone through a local network to take pictures more easily.
## Challenges we ran into
Working with cutting edge APIs and AI was a brand-new challenge for the entire team, so we had to navigate different types of models and documentations, overcoming integration hell to eventually arrive at a successful project. We also found prompt engineering hard, especially trying to get the most accurate results possible.
It was all of our first times working with Flask, so there was a learning curve there. Deploying our app to online services like Replit or Azure also posed a major challenge.
## Accomplishments that we're proud of
Our team is especially proud of successfully integrating such a broad range of AI features we had never worked with before. From image classification to Optical Character Recognition, and leveraging LLMs in novel ways as flexible databases and parsers.
For our team members, this marked the beginning of our deep dive into the realm of APIs and AI, making the experience all the more exciting. We were impressed with our quick progress in bringing the project to life. Finally, We’re proud that our vision was realized in the app and our brand, preservia.tech, a clever play on the words — preserve [food] via technology.
## What we learned
Our team learned how to use different kinds of **APIs**, the functionings and **applications of LLMs and image models**, as well as **flask** and **mySQL** principles to build future projects with easy web interfaces.
Our team was new to working with APIs and image-to-text models like MediaPipe. To integrate the image-to-text, text classification, image classification, and text interpretation features into our project, we strengthened our fundamental coding skills and learned how to weave APIs in to create a viable product.
## What's next for preservia.tech
In the future, we hope to enhance our image recognition software to recognize multiple food items within a single image, and with better accuracy, surpassing the current capability of one at a time. Additionally, we’re looking into other AI LLM models that can exhibit high precision in estimating food expiry dates. We may even be able to train machine-learning models ourselves to elevate the accuracy of our backend expiry date prediction system. It’ll also be interesting to build a mobile app to make uploading content even easier, as well as accelerating the LLMs we are using. | ## Inspiration
Inspired by the learning incentives offered by Duolingo, and an idea from a real customer (Shray's 9 year old cousin), we wanted to **elevate the learning experience by integrating modern technologies**, incentivizing students to learn better and teaching them about different school subjects, AI, and NFTs simultaneously.
## What it does
It is an educational app, offering two views, Student and Teacher. On Student view, compete with others in your class through a leaderboard by solving questions correctly and earning points. If you get questions wrong, you have the chance to get feedback from Together.ai's Mistral model. Use your points to redeem cool NFT characters and show them off to your peers/classmates in your profile collection!
For Teachers, manage students and classes and see how each student is doing.
## How we built it
Built using TypeScript, React Native and Expo, it is a quickly deployable mobile app. We also used Together.ai for our AI generated hints and feedback, and CrossMint for verifiable credentials and managing transactions with Stable Diffusion generated NFTs
## Challenges we ran into
We had some trouble deciding which AI models to use, but settled on Together.ai's API calls for its ease of use and flexibility. Initially, we wanted to do AI generated questions but understandably, these had some errors so we decided to use AI to provide hints and feedback when a student gets a question wrong. Using CrossMint and creating our stable diffusion NFT marketplace was also challenging, but we are proud of how we successfully incorporated it and allowed each student to manage their wallets and collections in a fun and engaging way.
## Accomplishments that we're proud of
Using Together.ai and CrossMint for the first time, and implementing numerous features, such as a robust AI helper to help with any missed questions, and allowing users to buy and collect NFTs directly on the app.
## What we learned
Learned a lot about NFTs, stable diffusion, how to efficiently prompt AIs, and how to incorporate all of this into an Expo React Native app.
Also met a lot of cool people and sponsors at this event and loved our time at TreeHacks!
## What's next for MindMint: Empowering Education with AI & NFTs
Our priority is to incorporate a spaced repetition-styled learning algorithm, similar to what Anki does, to tailor the learning curves of various students and help them understand difficult and challenging concepts efficiently.
In the future, we would want to have more subjects and grade levels, and allow the teachers to input questions for the student to solve. Another interesting idea we had was to create a mini real-time interactive game for students to play among themselves, so they can encourage each other to play between themselves. | losing |
## Inspiration
In 2012 in the U.S infants and newborns made up 73% of hospitals stays and 57.9% of hospital costs. This adds up to $21,654.6 million dollars. As a group of students eager to make a change in the healthcare industry utilizing machine learning software, we thought this was the perfect project for us. Statistical data showed an increase in infant hospital visits in recent years which further solidified our mission to tackle this problem at its core.
## What it does
Our software uses a website with user authentication to collect data about an infant. This data considers factors such as temperature, time of last meal, fluid intake, etc. This data is then pushed onto a MySQL server and is fetched by a remote device using a python script. After loading the data onto a local machine, it is passed into a linear regression machine learning model which outputs the probability of the infant requiring medical attention. Analysis results from the ML model is passed back into the website where it is displayed through graphs and other means of data visualization. This created dashboard is visible to users through their accounts and to their family doctors. Family doctors can analyze the data for themselves and agree or disagree with the model result. This iterative process trains the model over time. This process looks to ease the stress on parents and insure those who seriously need medical attention are the ones receiving it. Alongside optimizing the procedure, the product also decreases hospital costs thereby lowering taxes. We also implemented a secure hash to uniquely and securely identify each user. Using a hyper-secure combination of the user's data, we gave each patient a way to receive the status of their infant's evaluation from our AI and doctor verification.
## Challenges we ran into
At first, we challenged ourselves to create an ethical hacking platform. After discussing and developing the idea, we realized it was already done. We were challenged to think of something new with the same amount of complexity. As first year students with little to no experience, we wanted to tinker with AI and push the bounds of healthcare efficiency. The algorithms didn't work, the server wouldn't connect, and the website wouldn't deploy. We persevered and through the help of mentors and peers we were able to make a fully functional product. As a team, we were able to pick up on ML concepts and data-basing at an accelerated pace. We were challenged as students, upcoming engineers, and as people. Our ability to push through and deliver results were shown over the course of this hackathon.
## Accomplishments that we're proud of
We're proud of our functional database that can be accessed from a remote device. The ML algorithm, python script, and website were all commendable achievements for us. These components on their own are fairly useless, our biggest accomplishment was interfacing all of these with one another and creating an overall user experience that delivers in performance and results. Using sha256 we securely passed each user a unique and near impossible to reverse hash to allow them to check the status of their evaluation.
## What we learned
We learnt about important concepts in neural networks using TensorFlow and the inner workings of the HTML code in a website. We also learnt how to set-up a server and configure it for remote access. We learned a lot about how cyber-security plays a crucial role in the information technology industry. This opportunity allowed us to connect on a more personal level with the users around us, being able to create a more reliable and user friendly interface.
## What's next for InfantXpert
We're looking to develop a mobile application in IOS and Android for this app. We'd like to provide this as a free service so everyone can access the application regardless of their financial status. | ## The Problem
Gift giving plays a significant role in society. From busy seasons like the winter holidays, to birthdays, individuals are always asked: what can I get you? Communicating these wants clearly and simply is not easy. There are lists across multiple online retailers, but beyond that there was no simple, secure, and well designed system to share wish lists with friends and family.
## What it does
Our platform fills this gap affording users a clean, well designed system on which they can collect their desired items and securely invite friends to view them.
## How We built it
The platform is built upon a PHP and mySQL backend and driven by a responsive jQuery and AJAX front end, which we designed and coded ourselves from scratch. We took great care in crafting the design of the site to maximize its usability for our users.
## Challenges We ran into
We had some difficulties getting started with an AJAX driven frontend, but by the end of the project we had significantly improve our abilities.
## Accomplishments that We're proud of
When creating this project we aimed to deploy an easy account system that was also very secure. We decided to use a passwordless system that uses uniquely generated and expiring login links to securely sign users into our system without the need of a password. This session is then maintain using browser cookies. We also were able to use live AJAX calls to smartly populate our item entry form as the user entered its name as well as pull a photo for each item.
## What We learned
This hackathon our team learned a lot about web security; we had the opportunity to work with a few industry professionals who gave us some great information and support.
## What's next for WishFor (wishfor.xyz)
The platform will be expanded to allow further collaboration between friends in claiming and splitting the costs of items.
## Functionality Demo
(dev post GIF images not displaying properly)
<https://gyazo.com/18ed9bd881265342853d59692fa00e4d>
<https://gyazo.com/75f904f287b6780dd90c6976e4ede9e8> | ## Inspiration
We wanted to make the world a better place by giving patients control over their own data and allow easy and intelligent access of patient data.
## What it does
We have built an intelligent medical record solution where we provide insights on reports saved in the application and also make the personal identifiable information anonymous before saving it in our database.
## How we built it
We used the Amazon Textract service as OCR to extract text from report images. Then we use Amazon Comprehend Medical to redact (mask) sensitive personally identifiable information (PII) before using the Groq API to extract inferences that explain the medical document to the user in layman's terms. We used React, Node.js, Express, DynamoDB, and Amazon S3 to implement our project.
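The project's backend runs on Node.js/Express; the following is an illustrative Python/boto3 sketch of the same OCR-then-redact flow. The region, the bracket-style masking, and the function names are assumptions for the example.

```python
# Illustrative Python/boto3 sketch of the OCR -> PHI-redaction pipeline (the
# actual service is Node/Express; region and masking style are assumptions).
import boto3

textract = boto3.client("textract", region_name="us-east-1")
comprehend_med = boto3.client("comprehendmedical", region_name="us-east-1")

def extract_text(image_bytes: bytes) -> str:
    """OCR a report image with Amazon Textract and join the detected lines."""
    response = textract.detect_document_text(Document={"Bytes": image_bytes})
    lines = [b["Text"] for b in response["Blocks"] if b["BlockType"] == "LINE"]
    return "\n".join(lines)

def redact_phi(text: str) -> str:
    """Mask every PHI entity Comprehend Medical finds before the text is stored."""
    entities = comprehend_med.detect_phi(Text=text)["Entities"]
    # Replace from the end of the string so earlier offsets stay valid.
    for e in sorted(entities, key=lambda e: e["BeginOffset"], reverse=True):
        text = text[: e["BeginOffset"]] + f"[{e['Type']}]" + text[e["EndOffset"]:]
    return text  # the redacted text is what gets explained by the LLM and saved
```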
## Challenges we ran into
## Accomplishments that we're proud of
We were able to fulfill most of our objectives in this brief period of time. We also got to talk to a lot of interesting people and were able to bring our project to completion despite a team member not showing up. We also got to learn about a lot of cool stuff that companies like Groq, Intel, Hume, and You.com are working on, and we had fun talking to everyone.
## What we learned
## What's next for Pocket AI | winning |
## Inspiration
University gets students really busy and really stressed, especially during midterms and exams. We would normally want to talk to someone about how we feel and how our mood is, but due to the pandemic, therapists have often been closed or fully online. Since people will be seeking therapy online anyway, swapping a real therapist with a chatbot trained in giving advice and guidance isn't a very big leap for the person receiving therapy, and it could even save them money. Further, since all the conversations could be recorded if the user chooses, they could track their thoughts and goals, and have the bot respond to them. This is the idea that drove us to build Companion!
## What it does
Companion is a full-stack web application that allows users to be able to record their mood and describe their day and how they feel to promote mindfulness and track their goals, like a diary. There is also a companion, an open-ended chatbot, which the user can talk to about their feelings, problems, goals, etc. With realtime text-to-speech functionality, the user can speak out loud to the bot if they feel it is more natural to do so. If the user finds a companion conversation helpful, enlightening or otherwise valuable, they can choose to attach it to their last diary entry.
## How we built it
We leveraged many technologies such as React.js, Python, Flask, Node.js, Express.js, Mongodb, OpenAI, and AssemblyAI. The chatbot was built using Python and Flask. The backend, which coordinates both the chatbot and a MongoDB database, was built using Node and Express. Speech-to-text functionality was added using the AssemblyAI live transcription API, and the chatbot machine learning models and trained data was built using OpenAI.
## Challenges we ran into
Some of the challenges we ran into were being able to connect between the front-end, back-end and database. We would accidentally mix up what data we were sending or supposed to send in each HTTP call, resulting in a few invalid database queries and confusing errors. Developing the backend API was a bit of a challenge, as we didn't have a lot of experience with user authentication. Developing the API while working on the frontend also slowed things down, as the frontend person would have to wait for the end-points to be devised. Also, since some APIs were relatively new, working with incomplete docs was sometimes difficult, but fortunately there was assistance on Discord if we needed it.
## Accomplishments that we're proud of
We're proud of the ideas we've brought to the table, as well the features we managed to add to our prototype. The chatbot AI, able to help people reflect mindfully, is really the novel idea of our app.
## What we learned
We learned how to work with different APIs and create various API end-points. We also learned how to work and communicate as a team. Another thing we learned is how important the planning stage is, as it can really help with speeding up our coding time when everything is nice and set up with everyone understanding everything.
## What's next for Companion
The next steps for Companion are:
* Ability to book appointments with a live therapists if the user needs it. Perhaps the chatbot can be swapped out for a real therapist for an upfront or pay-as-you-go fee.
* Machine learning model that adapts to what the user has written in their diary that day, that works better to give people sound advice, and that is trained on individual users rather than on one dataset for all users.
## Sample account
If you can't register your own account for some reason, here is a sample one to log into:
Email: [[email protected]](mailto:[email protected])
Password: password | ## Inspiration
We were inspired by hard working teachers and students. Although everyone was working hard, there was still a disconnect with many students not being able to retain what they learned. So, we decided to create both a web application and a companion phone application to help target this problem.
## What it does
The app connects students with teachers in a whole new fashion. Students can provide live feedback to their professors on various aspects of the lecture, such as the volume and pace. Professors, on the other hand, get an opportunity to receive live feedback on their teaching style and also give students a few warm-up exercises with a built-in clicker functionality.
The web portion of the project ties the classroom experience to the home. Students receive live transcripts of what the professor is currently saying, along with a summary at the end of the lecture which includes key points. The backend will also generate further reading material based on keywords from the lecture, which will further solidify the students’ understanding of the material.
## How we built it
We built the mobile portion using react-native for the front-end and firebase for the backend. The web app is built with react for the front end and firebase for the backend. We also implemented a few custom python modules to facilitate the client-server interaction to ensure a smooth experience for both the instructor and the student.
## Challenges we ran into
One major challenge we ran into was getting and processing live audio and giving a real-time transcription of it to all students enrolled in the class. We were able to solve this issue through a python script that would help bridge the gap between opening an audio stream and doing operations on it while still serving the student a live version of the rest of the site.
## Accomplishments that we’re proud of
Being able to process text data to the point that we were able to get a summary and information on tone/emotions from it. We are also extremely proud of the
## What we learned
We learned more about React and its usefulness when coding in JavaScript. Especially when there were many repeating elements in our Material Design. We also learned that first creating a mockup of what we want will facilitate coding as everyone will be on the same page on what is going on and all thats needs to be done is made very evident. We used some API’s such as the Google Speech to Text API and a Summary API. We were able to work around the constraints of the API’s to create a working product. We also learned more about other technologies that we used such as: Firebase, Adobe XD, React-native, and Python.
## What's next for Gradian
The next goal for Gradian is to implement a grading system for teachers that will automatically integrate with their native grading platform so that clicker data and other quiz material can instantly be graded and imported without any issues. Beyond that, we can see the potential for Gradian to be used in office scenarios as well so that people will never miss a beat thanks to the live transcription that happens. | ## Inspiration
At the heart of our creation lies a profound belief in the power of genuine connections. Our project was born from the desire to create a safe space for authentic conversations, fostering empathy, understanding, and support. In a fast-paced, rapidly changing world, it is important for individuals to identify their mental health needs, and receive the proper support.
## What it does
Our project is an innovative chatbot leveraging facial recognition technology to detect and respond to users' moods, with a primary focus on enhancing mental health. By providing a platform for open expression, it aims to foster emotional well-being and create a supportive space for users to freely articulate their feelings.
## How we built it
We use OpenCV to analyze live video and its built-in Haar cascade face detection to draw a bounding box around each face it identifies. Then, using the deepface library with a pre-trained emotion recognition model, we assign each detected face an emotion in real time. Using this information, we call Cohere's large language model to generate a message corresponding to the emotion the individual is feeling. The LLM is also used to chat back and forth between the user and the bot. All of this information is then displayed on the website, which uses Flask for the backend and HTML, CSS, React, and Node.js for the frontend.
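A condensed sketch of that webcam loop is below; the cascade file, window handling, and the lack of error handling are assumptions made for brevity.

```python
# Condensed sketch of the detection loop described above (cascade file and
# display details are assumptions; the emotion label is what gets sent to the LLM).
import cv2
from deepface import DeepFace

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        analysis = DeepFace.analyze(frame[y:y + h, x:x + w],
                                    actions=["emotion"], enforce_detection=False)
        analysis = analysis[0] if isinstance(analysis, list) else analysis
        emotion = analysis["dominant_emotion"]  # e.g. "happy", "sad"
        cv2.putText(frame, emotion, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("My Therapy Bot", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```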
## Challenges we ran into
The largest adversity encountered was dealing with the front-end portion of it. Our group lacked experience with developing websites, and found many issues connecting the back-end to the front-end.
## Accomplishments that we're proud of
Creating the bounding boxes around detected faces, incorporating a machine learning model into the image processing, and attempting front-end work for the first time with no experience.
## What we learned
We learned how to create a project with a team and divide up tasks and combine them together to create one cohesive project. We also learned new technical skills (i.e how to use API's, machine learning, and front-end development)
## What's next for My Therapy Bot
Try and design a more visually appealing and interactive webpage to display our chat-bot. Ideally it would include live video feed of themselves with cascades and emotions while chatting. It would be nice to train our own machine learning model to determine expressions as well. | partial |
## Inspiration
Whenever you want to learn something new, there's always an initial wall, a barrier, that needs to be surpassed. How? How do you approach learning this thing? What part of it do you learn first? What do you need to know, in order to be proficient?
We've all faced this problem when we've become interested in learning something new. There's an abundance of guides out there, but nothing that can concretely, aesthetically, and clearly give you a visual breakdown of how your learning should proceed.
Personally, the 4 of us, have struggled with this in the past, and we know others have too. Getting started is one of the biggest problems in learning - and we aim to break down that wall.
## What it does
Skilltree is a unique web app designed using Svelte, Vite, and Tailwind CSS, and powered by an incredible neural network that takes what you want to learn and constructs a personalised skill tree, dividing this goal into sub-goals - giving you milestones to achieve on your learning path.
Inspired by how appealing and satisfying skill trees are in most video games, we've applied this concept to real-world learning. By tracking your progress through the wide array of pre-requisites and components of the topic you want to learn, Skilltree optimizes your learning and gives you an early boost towards your goal.
## How we built it
We developed Skilltree using the Svelte javascript framework, configured with Vite, with Tailwind CSS, to design an aesthetically pleasing and satisfying front-end interface for the user.
By mixing local persistent storage functionality as well as queries and responses in our routes, we built the entire application to be lightweight (using server-side processing) and portable (functional on all platforms).
We studied and constructed an algorithm using a neural network to generate lists of sub-topics within the larger topic the user wants to learn and laid out the results in the form of the tree data structure, which we then visualised using TreeVSN and a complex hierarchical traversal algorithm.
## Challenges we ran into
Many of our team members were unfamiliar with the Svelte framework, and thus, development was a slow start.
Additionally, we faced issues with server-side vs client-side processing for fast response times from the neural network, which required a lot of brainpower to overcome as generation is a resource-heavy task.
The visualisation of the skill tree required the testing of many different packages and options available to us, and displaying the tree in an appealing and interactive manner proved to be a much more difficult task than initially predicted.
## Accomplishments that we're proud of
We're proud of the end result, the application we have developed is something that we believe will help each of us greatly, and hopefully others too. Overcoming the technical challenges mentioned above were all also major milestones for all of the team members, and we're proud of getting through them together.
## What we learned
Some of our team members learnt a new framework! In addition to that we all learnt how important communication is, and how much a deep learning web app needs to be optimised in order to be a feasible solution for people to use (seriously don't know why we didn't realise this).
## What's next for SkillTree
We hope to improve upon our generation algorithm as well as develop more interactive behaviour for the actual skill tree itself. We would like to recommend resources to learn the topics we suggest as well as build a mobile app for better native support for skill tree on mobile.
We also want to add in user accounts to allow people to access these trees from other devices.
**Discord Usernames**
* desgroup#0675
* Talfryn#1377
* testname\_plsignor#9652
* rcordi#0621 | ## Inspiration
Our team was inspired specifically by our school's Intelligent Systems textbook, "Statistical Machine Learning, A Unified Framework" by Richard Golden. The book has amazing insights on building machine learning models from the ground up, but the first time we opened it we were immediately overwhelmed by the sheer amount of information presented. So we created a solution that extracts key ideas and concepts from textbooks and research papers and shows how those concepts build upon each other, ultimately helping you learn.
## What it does
The backbone of our application is our algorithm that strips keywords from text based on importance before building a graph that represents all of the key ideas someone has to understand before moving on to the next section. We then display this to the user in the frontend with various tools to help the user understand that concept through embedding summarization and specialized chat bot assistance. This is all built on top of our gamification that encourages setting up a regular weekly schedule to keep up in studying a particular material.
## How we built it
The algorithm is a combination of two other algorithms: keyword extraction and TopicRank, an offshoot of the PageRank algorithm that learns how keywords are related to each other. This works well for generating a directed graph from the beginning to the end of the book, where future chapters build off of preceding concepts. The processing of data is done in our FastAPI backend, which also holds our data for textbooks that have already been processed. The frontend is built in SvelteKit, a frontend JavaScript framework.
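To make the graph-building step concrete, here is a deliberately simplified stand-in: a frequency-based keyword extractor and a naive "earlier chapter feeds later chapter" edge rule, built with networkx. The scoring and the edge rule are assumptions for illustration and are much simpler than the TopicRank-style weighting described above.

```python
# Simplified stand-in for the concept-graph step (frequency-based keywords and
# a naive chapter-order edge rule are assumptions; the real pipeline uses a
# TopicRank-style weighting).
import re
from collections import Counter
import networkx as nx

STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for", "that", "we"}

def top_keywords(chapter_text: str, k: int = 5) -> list[str]:
    words = re.findall(r"[a-z]{3,}", chapter_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [w for w, _ in counts.most_common(k)]

def build_concept_graph(chapters: list[str]) -> nx.DiGraph:
    """Link each chapter's keywords forward to the next chapter's keywords."""
    graph = nx.DiGraph()
    previous: list[str] = []
    for text in chapters:
        keywords = top_keywords(text)
        graph.add_nodes_from(keywords)
        graph.add_edges_from((p, c) for p in previous for c in keywords if p != c)
        previous = keywords
    return graph  # the frontend walks this graph from prerequisites to later concepts
```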
## Challenges we ran into
One challenge we ran into is that the keyword extraction is generally optimized for smaller datasets of paragraphs, but due to hardware limitations we had to process larger chunks at once, which led to less desirable results. With a stronger GPU we could easily offload that computation and achieve a much higher standard for the concepts that are generated. We attempted to achieve this by using Docker and a loaned GPU online, but struggled to get the computation to work as expected.
## Accomplishments that we're proud of
This was an entirely new project for us! We're very proud of the progress we've made and the material we've learned along the way. We are very proud of the fact that the application is fully deployed to <https://main-deployment.deilcwp6iri9p.amplifyapp.com/> Feel free to view and upload pdf files of textbooks or research papers to get a feeling of how it works.
## What we learned
We learned a lot about different ways to parse and extract data from pdf's before using that data in a multi-stage pipeline of machine learning models to work towards a singular goal. We also learned a lot about accessibility and how to implement it in a graphical application using key binds and alt text to ensure that anyone can maneuver through the application.
## What's next for Redwood
We would love to see new features added to Redwood to further enrich the learning experience and help students comprehend textbooks and research papers. We've discussed one day including flashcard generation and daily quizzes that implement the Ebbinghaus learning curve.
### Discords
Abigail Smith - MisfitTea
Eric Shields - hammyhampster
Nicholas Zolton - syzygyn
Nia Anderson - nia7anderson | ## \*\* Internet of Things 4 Diabetic Patient Care \*\*
## The Story Behind Our Device
One team member, from his foot doctor, heard of a story of a diabetic patient who almost lost his foot due to an untreated foot infection after stepping on a foreign object. Another team member came across a competitive shooter who had his lower leg amputated after an untreated foot ulcer resulted in gangrene.
A symptom in diabetic patients is diabetic nephropathy which results in loss of sensation in the extremities. This means a cut or a blister on a foot often goes unnoticed and untreated.
Occasionally, these small cuts or blisters don't heal properly due to poor blood circulation, which exacerbates the problem and leads to further complications. These further complications can result in serious infection and possibly amputation.
We decided to make a device that helped combat this problem. We invented IoT4DPC, a device that detects abnormal muscle activity caused by either stepping on potentially dangerous objects or caused by inflammation due to swelling.
## The technology behind it
A muscle sensor attaches to the Nucleo-L496ZG board, which feeds data to an Azure IoT Hub. The IoT Hub, through Trillo, can notify the patient (or a physician, depending on the situation) via SMS that a problem has occurred and that they need to get their feet checked or come in to see the doctor.
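The readings themselves come from the Nucleo board's firmware; as a hedged sketch of what a telemetry message to the IoT Hub could look like, here is a Python example using the azure-iot-device SDK. The connection string, field names, and the "abnormal activity" threshold are assumptions, not the team's actual values.

```python
# Hedged sketch of an abnormal-muscle-activity telemetry message sent to the
# IoT Hub (field names, the threshold, and the connection string are assumptions).
import json
import time
from azure.iot.device import IoTHubDeviceClient, Message

CONNECTION_STRING = "HostName=...;DeviceId=...;SharedAccessKey=..."  # placeholder
EMG_THRESHOLD = 600  # assumed ADC threshold for "abnormal" muscle activity

def send_reading(client: IoTHubDeviceClient, emg_value: int) -> None:
    payload = {
        "deviceId": "iot4dpc-prototype",
        "timestamp": time.time(),
        "emg": emg_value,
        "alert": emg_value > EMG_THRESHOLD,  # IoT Hub routing can trigger the SMS
    }
    client.send_message(Message(json.dumps(payload)))

if __name__ == "__main__":
    client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
    send_reading(client, emg_value=742)
    client.shutdown()
```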
## Challenges
While the team was successful in prototyping data acquisition with an Arduino, we were unable to build a working prototype with the Nucleo board. We also came across serious hurdles with uploading any sensible data to the Azure IoT Hub.
## What we did accomplish
We were able to set up an Azure IoT Hub and connect the Nucleo board to send JSON packages. We were also able to aquire test data in an excel file via the Arduino | losing |
## Inspiration 🤔
The brain, the body's command center, orchestrates every function, but damage to this vital organ in contact sports often goes unnoticed. Studies show that 99% of football players are diagnosed with CTE, 87% of boxers have experienced at least one concussion, and 15-30% of hockey injuries are brain-related. If only there were a way for players and coaches to monitor the brain health of players before any long-term damage can occur.
## Our Solution💡
Impactify addresses brain health challenges in contact sports by integrating advanced hardware into helmets used in sports like hockey, boxing, and football. This hardware records all impacts sustained during training or games, capturing essential data from each session. The collected data provides valuable insights into an athlete's brain health, enabling them to monitor and assess their cognitive well-being. By staying informed about potential head injuries or concussion risks, athletes can take proactive measures to protect their health. Whether you're a player who wants to track their own brain health or a coach who wants to track all their players' brain health, Impactify has a solution for both.
## How we built it 🛠️
Impactify leverages a mighty stack of technologies to optimize its development and performance. React was chosen for the front end due to its flexibility in building dynamic, interactive user interfaces, allowing for a seamless and responsive user experience. Django powers the backend, providing a robust and scalable framework for handling complex business logic, API development, and secure authentication. PostgreSQL was selected for data storage because of its reliability, advanced querying capabilities, and easy handling of large datasets. Last but not least, Docker was employed to manage dependencies across multiple devices. This helped maintain uniformity in the development and deployment processes, reducing the chances of environment-related issues.
On the hardware side, we used an ESP32 microprocessor connected to our team member's mobile hotspot, allowing the microprocessor to send data over the internet. The ESP32 was then connected to 4 pressure sensors and an accelerometer, where it reads the data at fixed intervals. The data is sent over the internet to our web server for further processing. The parts were then soldered together and neatly packed into our helmet, and we replaced all the padding to make the helmet wearable again. The hardware was powered with a 9V battery, and then LEDs and a power switch were added to the helmet so the user could turn it on and off. The LEDs served as a visual indicator of whether or not the ESP32 had an internet connection.
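On the server side, a minimal sketch of a Django endpoint that could ingest those readings is shown below. The route, payload fields, and severity threshold are assumptions; the production view would persist each sample to PostgreSQL and, for severe impacts, queue the SMS notification mentioned elsewhere in this writeup.

```python
# Hedged sketch of a Django view that ingests the ESP32's readings (route name,
# payload fields, and the severity threshold are assumptions; persistence and
# SMS dispatch are only noted in comments).
import json
from django.http import HttpRequest, JsonResponse
from django.views.decorators.csrf import csrf_exempt

SEVERE_ACCEL_G = 60.0  # assumed threshold for a concerning head impact

@csrf_exempt
def ingest_impact(request: HttpRequest) -> JsonResponse:
    if request.method != "POST":
        return JsonResponse({"error": "POST required"}, status=405)
    data = json.loads(request.body)
    pressures = data.get("pressures", [])      # the four pressure-sensor readings
    accel_g = float(data.get("accel_g", 0.0))  # accelerometer magnitude in g
    severe = accel_g >= SEVERE_ACCEL_G
    # A real implementation would save the sample and, if severe, notify the coach.
    return JsonResponse({"stored": True, "severe": severe,
                         "max_pressure": max(pressures, default=0)})
```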
## Challenges we ran into 💥
The first challenge we had was getting all the sensors and components positioned in the correct locations within the helmet such that the data will be read accurately. On top of getting the correct positioning, the wiring and all the components must be put in place in such a way that it does not detract from the protective aspect of the helmet. Getting all the components hidden properly and securely was a great challenge and took hours of tinkering.
Another challenge we faced was making sure that the data being read was accurate. We took a long time to calibrate the pressure sensors inside the helmet, because when the helmet is worn, your head naturally exerts some pressure on the sides of the helmet. Making sure that our data input was reliable was a big challenge to overcome, because we had to iterate multiple times on tinkering with the helmet, collecting data, and plotting it on a graph to visually inspect it before we were satisfied with the result.
## Accomplishments that we're proud of 🥂
We are incredibly proud of how we turned our vision into a reality. Our team successfully implemented key features such as pressure and acceleration tracking within the helmet, and our software stack is robust and scalable with a React frontend and Django backend. We support individual user sessions and coach user management for sports teams, and have safety features such as sending an SMS to a coach if their player takes excessive damage. We developed React components that visualize the collected data, making the website easy to use, visually appealing and interactive. The hardware design was compact and elegant, seamlessly fitting into the helmet without compromising its structure.
## What we learned 🧠
Throughout this project, we learned a great deal about hardware integration, data visualization, and balancing safety with functionality. We also gained invaluable insights into optimizing the development process and managing complex technical challenges.
## What's next for Impactify 🔮
Moving forward, we aim to enhance the system by incorporating more sophisticated data analysis, providing even deeper insights into brain health aswell as fitting our hardware into a larger array of sports gear. We plan to expand the use of Impactify into more sports and further improve its ease of use for athletes and coaches alike. Additionally, we will explore ways to miniaturize the hardware even further to make the integration even more seamless. | ## Inspiration
We built an AI-powered physical trainer/therapist that provides real-time feedback and companionship as you exercise.
With the rise of digitization, people are spending more time indoors, leading to increasing trends of obesity and inactivity. We wanted to make it easier for people to get into health and fitness, ultimately improving lives by combating these the downsides of these trends. Our team built an AI-powered personal trainer that provides real-time feedback on exercise form using computer vision to analyze body movements. By leveraging state-of-the-art technologies, we aim to bring accessible and personalized fitness coaching to those who might feel isolated or have busy schedules, encouraging a more active lifestyle where it can otherwise be intimidating.
## What it does
Our AI personal trainer is a web application compatible with laptops equipped with webcams, designed to lower the barriers to fitness. When a user performs an exercise, the AI analyzes their movements in real-time using a pre-trained deep learning model. It provides immediate feedback in both textual and visual formats, correcting form and offering tips for improvement. The system tracks progress over time, offering personalized workout recommendations and gradually increasing difficulty based on performance. With voice guidance included, users receive tailored fitness coaching from anywhere, empowering them to stay consistent in their journey and helping to combat inactivity and lower the barriers of entry to the great world of fitness.
## How we built it
To create a solution that makes fitness more approachable, we focused on three main components:
Computer Vision Model: We utilized MediaPipe and its Pose Landmarks to detect and analyze users' body movements during exercises. MediaPipe's lightweight framework allowed us to efficiently assess posture and angles in real-time, which is crucial for providing immediate form correction and ensuring effective workouts.
Audio Interface: We initially planned to integrate OpenAI’s real-time API for seamless text-to-speech and speech-to-text capabilities, enhancing user interaction. However, due to time constraints with the newly released documentation, we implemented a hybrid solution using the Vosk API for speech recognition. While this approach introduced slightly higher latency, it enabled us to provide real-time auditory feedback, making the experience more engaging and accessible.
User Interface: The front end was built using React with JavaScript for a responsive and intuitive design. The backend, developed in Flask with Python, manages communication between the AI model, audio interface, and user data. This setup allows the machine learning models to run efficiently, providing smooth real-time feedback without the need for powerful hardware.
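As a concrete illustration of the computer-vision component above, here is a small sketch that turns MediaPipe pose landmarks into a joint angle for form feedback. The squat example and the angle thresholds are assumptions, not the app's exact rules.

```python
# Illustration of pose landmarks -> joint angle -> feedback (the squat example
# and thresholds are assumptions, not the app's exact logic).
import math
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def angle(a, b, c) -> float:
    """Angle ABC in degrees, where each landmark exposes normalized .x and .y."""
    ang = math.degrees(math.atan2(c.y - b.y, c.x - b.x) -
                       math.atan2(a.y - b.y, a.x - b.x))
    return abs(ang) if abs(ang) <= 180 else 360 - abs(ang)

def squat_feedback(frame_bgr) -> str:
    with mp_pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
        if not results.pose_landmarks:
            return "No person detected"
        lm = results.pose_landmarks.landmark
        knee = angle(lm[mp_pose.PoseLandmark.LEFT_HIP],
                     lm[mp_pose.PoseLandmark.LEFT_KNEE],
                     lm[mp_pose.PoseLandmark.LEFT_ANKLE])
        if knee > 160:
            return "Standing detected - begin your squat"
        return "Good depth!" if knee < 90 else "Go a little lower"
```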
## Challenges we ran into
One of the major challenges was integrating the real-time audio interface. We initially planned to use OpenAI’s real-time API, but due to the recent release of the documentation, we didn’t have enough time to fully implement it. This led us to use the Vosk API in conjunction with our system, which introduced increased codebase complexity in handling real-time feedback.
## Accomplishments that we're proud of
We're proud to have developed a functional AI personal trainer that combines computer vision and audio feedback to lower the barriers to fitness. Despite technical hurdles, we created a platform that can help people improve their health by making professional fitness guidance more accessible. Our application runs smoothly on various devices, making it easier for people to incorporate exercise into their daily lives and address the challenges of obesity and inactivity.
## What we learned
Through this project, we learned that sometimes you need to take a "back door" approach when the original plan doesn’t go as expected. Our experience with OpenAI’s real-time API taught us that even with exciting new technologies, there can be limitations or time constraints that require alternative solutions. In this case, we had to pivot to using the Vosk API alongside our real-time system, which, while not ideal, allowed us to continue forward. This experience reinforced the importance of flexibility and problem-solving when working on complex, innovative projects.
## What's next for AI Personal Trainer
Looking ahead, we plan to push the limits of the OpenAI real-time API to enhance performance and reduce latency, further improving the user experience. We aim to expand our exercise library and refine our feedback mechanisms to cater to users of all fitness levels. Developing a mobile app is also on our roadmap, increasing accessibility and convenience. Ultimately, we hope to collaborate with fitness professionals to validate and enhance our AI personal trainer, making it a reliable tool that encourages more people to lead healthier, active lives. | ## Inspiration
As avid cyclists who are interested in wearable tech, this idea came naturally to us. After stumbling upon a news article, we were shocked to discover that nearly 8000 Canadian cyclists are seriously injured each year, and that 1 in 3 of all cyclist deaths occur in poor lighting conditions. We knew that we could create a product to help reduce the number of cycling injuries. From there, we thought of Helmetx and many more features that could optimize the cycling experience.
## What it does
Helmetx is a multifunctional cycling helmet with many safety features. The helmet’s main feature is a stunning LCD screen, which allows cyclists to view biking metrics, such as speed and acceleration. Cyclists can cycle through the metrics by pressing the button at the side of the helmet. The helmet uses an accelerometer to measure the acceleration and from there, the speed was able to be derived and displayed. Additionally, if the helmet’s ultrasonic detects an oncoming vehicle, the LCD will display a warning message with the vehicle's proximity to the user. The lights around the helmet will also flash to alert the vehicle. To manually control the lights, cyclists simply have to press one of the side buttons on the helmet. The LCD can also display directions to the cyclist’s location on the screen.
## How we built it
We divided our team into two separate teams: hardware and software. The hardware team was responsible for all electronics related aspects of the project and the software team worked on our code and the Google Maps direction API. On the hardware side, after brainstorming our idea, we did a lot of research into what features this helmet should have and which devices would be best for the task at hand. Then, we got started by testing out different sensors individually and then combined them into our main program. We did a lot of tests along the way and once all our features worked, we assembled the electronics onto the helmet. On the software side, we researched the Google Maps API and how to use it. We wrote code which extracts the directions and then outputs an integer representing the type of turn. Then, the code on the helmet converts the integer back to the appropriate direction and displays it on the LCD screen.
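The "turn type as an integer" idea described above can be sketched as follows; the specific integer codes, travel mode, and API-key handling are assumptions for the example.

```python
# Sketch of encoding Google Directions steps as turn-type integers (the code
# mapping and travel mode are assumptions).
import requests

TURN_CODES = {"turn-left": 1, "turn-right": 2, "straight": 0}  # assumed encoding

def turn_codes(origin: str, destination: str, api_key: str) -> list[int]:
    resp = requests.get(
        "https://maps.googleapis.com/maps/api/directions/json",
        params={"origin": origin, "destination": destination,
                "mode": "bicycling", "key": api_key},
    )
    resp.raise_for_status()
    steps = resp.json()["routes"][0]["legs"][0]["steps"]
    # Each step may carry a "maneuver" such as "turn-left"; default to straight.
    return [TURN_CODES.get(step.get("maneuver", "straight"), 0) for step in steps]

# The helmet's code maps each integer back to the text ("Turn left", "Turn right",
# ...) that is shown on the 16x2 LCD.
```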
## Challenges we ran into
Building Helmetx came with many challenges, but we were always able to solve them! Our biggest challenge as a group was learning how to use the Raspberry Pi and the Google Maps API, which was new to each member in the group. As such, we had to do an immense amount of research. On the hardware side, we had an issue with the LCD screens. When we first started to prototype with the hardware, we attempted to use 128x128 pixel LCD screens. After researching troubleshooting methods to fix the screens, we swapped to using a 16x2 LCD screen. On the software side, we had many technical issues with using certain APIs. After much trial and error, we switched to using the Google Maps API.
## Accomplishments that we’re proud of
Overall, we are proud of how we were able to take on a project that was out of our comfort zones, face the challenges that we approached, and create a working end product. As a team, we entered the project with many uncertainties but we were able to come out victorious. We are most proud of how well we all worked together and how our skills complemented one another.
## What we learned
Our team learned how to use the Google Maps API to direct the Raspberry Pi. Both of these systems were quite new to us, and required a significant amount of research before we could start playing around with them. At the end, we were able to develop a method which involved extracting directions and outputting them as an integer which represented a specific type of turn. We also learnt how to incorporate different sensors into our main program.
## What’s next
Going forward with this project, we would like to improve on the screen and look into the possibilities of incorporating a transparent display. We definitely want to incorporate additional sensors to provide the cyclists with more metrics and explore the possibility of having an onboard camera to record rides. We would also make use of a 3D printer to print a proper casing for our electronic components. In the future, we will make use of bluetooth or internet connectivity with the phone and Raspberry Pi to have accurate and updated directions based on GPS data. | winning |
Live Demo Link: <https://www.youtube.com/live/I5dP9mbnx4M?si=ESRjp7SjMIVj9ACF&t=5959>
## Inspiration
We all fall victim to impulse buying and online shopping sprees... especially in the first few weeks of university. A simple budgeting tool or promising ourselves to spend less just doesn't work anymore. Sometimes we need someone, or someone's, to physically stop us from clicking the BUY NOW button and talk us through our purchase based on our budget and previous spending. By drawing on the courtroom drama of legal battles, we infuse an element of fun and accountability into doing just this.
## What it does
Dime Defender is a Chrome extension built to help you control your online spending to your needs. Whenever the extension detects that you are on a Shopify or Amazon checkout page, it will lock the BUY NOW button and take you to court! You'll be interrupted by two lawyers, the defence attorney explaining why you should steer away from the purchase 😒 and a prosecutor explains why there still are some benefits 😏. By giving you a detailed analysis of whether you should actually buy based on your budget and previous spendings in the month, Dime Defender allows you to make informed decisions by making you consider both sides before a purchase.
The lawyers are powered by VoiceFlow using their dialog manager API as well as Chat-GPT. They have live information regarding the descriptions and prices of the items in your cart, as well as your monthly budget, which can be easily set in the extension. Instead of just saying no, we believe the detailed discussion will allow users to reflect and make genuine changes to their spending patterns while reducing impulse buys.
## How we built it
We created the Dime Defender Chrome extension and frontend using Svelte, Plasma, and Node.js for an interactive and attractive user interface. The Chrome extension then makes calls using AWS API gateways, connecting the extension to AWS lambda serverless functions that process queries out, create outputs, and make secure and protected API calls to both VoiceFlow to source the conversational data and ElevenLabs to get our custom text-to-speech voice recordings. By using a low latency pipeline, with also AWS RDS/EC2 for storage, all our data is quickly captured back to our frontend and displayed to the user through a wonderful interface whenever they attempt to check out on any Shopify or Amazon page.
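To show the shape of that pipeline, here is a hedged sketch of a Lambda handler behind API Gateway that keeps the VoiceFlow key server-side. The Dialog Manager URL, payload shape, and environment-variable names follow VoiceFlow's public docs but should be treated as assumptions here, not as the team's exact implementation.

```python
# Hedged sketch of the API Gateway -> Lambda proxy that keeps the VoiceFlow
# API key out of the extension (endpoint URL, payload shape, and env-var names
# are assumptions).
import json
import os
import urllib.request

VOICEFLOW_API_KEY = os.environ.get("VOICEFLOW_API_KEY", "")
DM_URL = "https://general-runtime.voiceflow.com/state/user/{user_id}/interact"

def handler(event, context):
    body = json.loads(event.get("body") or "{}")
    user_id = body.get("userId", "anonymous")
    req = urllib.request.Request(
        DM_URL.format(user_id=user_id),
        data=json.dumps({"request": {"type": "text",
                                     "payload": body.get("cart", "")}}).encode(),
        headers={"Authorization": VOICEFLOW_API_KEY,
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        traces = json.loads(resp.read())
    # Return only what the extension needs to render the two lawyers' lines.
    return {"statusCode": 200,
            "headers": {"Access-Control-Allow-Origin": "*"},
            "body": json.dumps(traces)}
```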
## Challenges we ran into
Using chrome extensions poses the challenge of making calls to serverless functions effectively and making secure API calls using secret api\_keys. We had to plan a system of lambda functions, API gateways, and code built into VoiceFlow to create a smooth and low latency system to allow the Chrome extension to make the correct API calls without compromising our api\_keys. Additionally, making our VoiceFlow AIs arguing with each other with proper tone was very difficult. Through extensive prompt engineering and thinking, we finally reached a point with an effective and enjoyable user experience. We also faced lots of issues with debugging animation sprites and text-to-speech voiceovers, with audio overlapping and high latency API calls. However, we were able to fix all these problems and present a well-polished final product.
## Accomplishments that we're proud of
Something that we are very proud of is our natural conversation flow within the extension as well as the different lawyers having unique personalities which are quite evident after using our extension. Having your cart cross-examined by 2 AI lawyers is something we believe to be extremely unique, and we hope that users will appreciate it.
## What we learned
We had to create an architecture for our distributed system and learned about connection of various technologies to reap the benefits of each one while using them to cover weaknesses caused by other technologies.
Also.....
Don't eat the 6.8 million Scoville hot sauce if you want to code.
## What's next for Dime Defender
The next thing we want to add to Dime Defender is the ability to work on even more e-commerce and retail sites and go beyond just Shopify and Amazon. We believe that Dime Defender can make a genuine impact helping people curb excessive online shopping tendencies and help people budget better overall. | ## Inspiration
One of our team members was stunned by the number of colleagues who became self-described "shopaholics" during the pandemic. Understanding their wishes to return to normal spending habits, we thought of a helper extension to keep them on the right track.
## What it does
Stop impulse shopping at its core by incentivizing saving rather than spending with our Chrome extension, IDNI aka I Don't Need It! IDNI helps monitor your spending habits and gives recommendations on whether or not you should buy a product. It also suggests if there are local small business alternatives so you can help support your community!
## How we built it
React front-end, MongoDB, Express REST server
## Challenges we ran into
Most popular extensions have company deals that give them more access to product info; we researched and found the Rainforest API instead, which gives us the essential product info that we needed in our decision algorithm. However, this proved costly, as each API call took upwards of 5 seconds to return a response. As such, we opted to process each product page manually to gather our metrics.
## Completion
In its current state IDNI is able to perform CRUD operations on our user information (allowing users to modify their spending limits and blacklisted items on the settings page) with our custom API, recognize Amazon product pages and pull the required information for our pop-up display, and dynamically provide recommendations based on these metrics.
## What we learned
Nobody on the team had any experience creating Chrome extensions, so it was a lot of fun to learn how to do that. Creating our extension's UI using React.js was also a new experience for everyone. A few members of the team were also able to spend the weekend learning how to create an Express.js API with a MongoDB database, all from scratch!
## What's next for IDNI - I Don't Need It!
We plan to look into banking integration, compatibility with a wider array of online stores, cleaner integration with small businesses, and a machine learning model to properly analyze each metric individually with one final pass of these various decision metrics to output our final verdict. Then finally, publish to the Chrome Web Store! | ## Inspiration
Lyft's round up and donate system really inspired us here.
We wanted to find a way to benefit both users and help society. We all want to give back somehow, but don't know how sometimes or maybe we want to donate, but don't really know how much to give back or if we could afford it.
We wanted an easy way incorporated into our lives and spending habits.
This would allow us to reach a wider range of users and harness the power of consumer society.
## What it does
With a Chrome extension like "Heart of Gold", every purchase is rounded up to the nearest dollar (for example, a purchase of $9.50 rounds up to $10, so $0.50 gets tracked as the "round up") and the difference accumulates for the user. The user gets to choose when they want to donate and which organization gets the money.
## How I built it
We built a web app/chrome extension using Javascript/JQuery, HTML/CSS.
The Firebase JavaScript SDK helped us store the accumulated round-up totals.
We make an AJAX call to the PayPal API, which takes care of payments for us.
## Challenges I ran into
For all of the team, it was our first time creating a chrome app extension. For most of the team, it was our first time heavily working with javascript let alone using technologies like Firebase and the Paypal API.
Choosing which technology/platform would make the most sense was tough, but a Chrome extension offered the most relevance, since a lot of people make online purchases nowadays and an extension can run in the background and feel ever-present.
So we picked up the javascript language to start creating the extension. Lisa Lu integrated the PayPal API to handle donations and used HTML/CSS/JavaScript to create the extension pop-up. She also styled the user interface.
Firebase was also completely new to us, but we chose to use it because it didn't require us to have a two step process: a server (like Flask) + a database (like mySQL or MongoDB). It also helped that we had a mentor guide us through. We learned a lot about the Javascript language (mostly that we haven't even really scratched the surface of it), and the importance of avoiding race conditions. We also learned a lot about how to strategically structure our code system (having a background.js to run firebase database updates
## Accomplishments that I'm proud of
Veni, vidi, vici.
We came, we saw, we conquered.
## What I learned
We all learned that there are multiple ways to create a product to solve a problem.
## What's next for Heart of Gold
Heart of Gold has a lot of possibilities: partnering with companies that want to advertise to users and social good organizations, making recommendations to users on charities as well as places to shop, game-ify the experience, expanding capabilities of what a user could do with the round up money they accumulate. Before those big dreams, cleaning up the infrastructure would be very important too. | winning |
## Inspiration
We wanted to create a tool that would improve the experience of all users of the internet, not just technically proficient ones. Once fully realized, our system could allow groups such as blind people and the elderly a great deal of utility, as it allows for navigation by simple voice command with only minor setup from someone else. We could also implement function sharing to allow users to share their ideas and tools with everyone else or allow companies to define defaults on their own webpages to improve the user experience.
Created with the intention of making browsing the web easier, if not possible at all, for the visually impaired. Access to the internet may soon be considered a fundamental human right; it should be made accessible to everyone regardless of their abilities.
## What it does
A chrome extension that allows a user to speak a phrase(e.g. "Search for guitar songs" on Youtube), then enter a series of actions. Our extension will store the instructions entered, then use a combination of STT and natural language processing to generalize the query for the site and all permutations of the same structure. We use Microsoft LUIS, which at a baseline allows for synonyms. The more it is used, the better this becomes, so it could expand to solve "find piano music" as well. We are also in the process of developing a simple interface to allow users to easily define their custom instruction sets.
## How we built it
We used webkit Speech to Text in a chrome plugin to create sentences from recordings. We also created a system to track mouse and keyboard inputs in order to replicate them. This info was passed to a StdLib staging area that processes data and manages Microsoft LUIS to interpret the inputted sentence. This is then passed back to the plugin so it knows which sequence of actions to perform. Our project has a system to generalize "entities" i.e. the variables in an instruction (i.e. "guitar songs").
## Challenges we ran into
* No native APIs for several UX/UI elements. Forced to create workarounds and hack bits of code together.
* Making the project functions easy for users to follow and understand.
## Accomplishments that we're proud of
Our team learned to use an unfamiliar system with an entirely different paradigm from traditional web hosting, and how to manage its advantages and disadvantages on the fly while integrating with several other complex systems
## What we learned
It is a better strategy to iterate on outwards from the simplest core of your system rather than aim big. We had to cut features, which meant we sunk unnecessary time into development initially. We also learned all about FAAS and serverless hosting, and about natural language processing.
## What's next for Quack - Voice Controlled Action Automation for Chrome | ## Inspiration
After growing up learning programming concepts from great tools such as Scratch, Logo and Processing, we realised the power but also the limitations of using visual feedback to make learning programming more rewarding.
Tools like these and many others, rely on visual mediums to communicate core programming ideas to those who are new to programming.
However, people with visual impairments are locked out of learning to program in this intuitive, rewarding manner.
We wanted to break down this barrier, and bring them the power of instant feedback learning techniques, vastly increasing the accessibility of programming education.
## What it does
Our project provides an environment for people with visual impairments to learn core computer science concepts.
Our project contains a fully featured online editor which uses text to speech technologies to make text editing accessible by speaking the code the user is interacting with back to them in an intuitive way.
The editor allows users to program in a novel, innovative language designed specifically to provide an excellent learning experience, with no setup required.
With this language users can use a fun, intuitive and beginner-friendly API to create all manner of sounds and music.
This language brings the instant feedback used in visual beginner languages, such as Scratch, to the domain of audio.
Users can write code in their browser, and our custom interpreter and audio engine will compile and play whatever sounds and music their hearts desire.
## How we built it
Our back-end is written in Python; it uses the textX PEG parser generator and the pydub audio manipulation library.
Our online editor is written in Angular.
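A tiny illustration of how textX and pydub combine is given below; the grammar and the "play <freq> for <ms>" syntax are invented for this example and are not Muse's actual language.

```python
# Tiny illustration of the textX + pydub combination (the grammar and the
# "play <freq> for <ms>" syntax are invented for the example, not Muse's language).
from textx import metamodel_from_str
from pydub import AudioSegment
from pydub.generators import Sine

GRAMMAR = """
Program: statements+=Play;
Play: 'play' freq=INT 'for' ms=INT;
"""

def compile_to_audio(source: str) -> AudioSegment:
    meta = metamodel_from_str(GRAMMAR)
    program = meta.model_from_str(source)
    track = AudioSegment.silent(duration=0)
    for stmt in program.statements:
        track += Sine(stmt.freq).to_audio_segment(duration=stmt.ms)
    return track

if __name__ == "__main__":
    audio = compile_to_audio("play 440 for 500 play 660 for 500")
    audio.export("out.wav", format="wav")  # in Muse, this audio is streamed to the browser
```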
## Challenges we ran into
It turns out that designing and implementing a language from scratch is a lot harder than we predicted.
We also did not expect that the audio manipulation would be as challenging as it was.
Our final major hurdle was streaming the generated sound back to the user's browser.
## Accomplishments that we're proud of
We are very proud of our final product, and we firmly believe that it can be a very useful accessibility tool for visually impaired learners in the future.
Our language syntax was specifically designed to be as friendly to text to speech editing as possible, minimising use of special characters, and matching English speech as closely as possible. We are extremely proud of how this turned out and how readable (literally) it is.
## What we learned
Each of our team members learned much about their respective areas of the project. Between language design, audio manipulation, and dev tool design we were forced to familiarise ourselves quickly with many areas of computer science in which we had no prior experience.
There was a lot of co-operation and pair programming so the new knowledge was well distributed among the team.
## What's next for Muse
We have great plans to continue our development of this project throughout the coming years in the hope that this project will enable as many people as possible to enter the world of computer science that otherwise may not have had the opportunity. | ## Inspiration
Informa's customers want to understand which new technologies will be most relevant to their businesses. There is also more "hype" around technologies than ever. Therefore, it is increasingly important for companies to stay informed about emerging technologies.
## What it does
Maple Grapes displays the most relevant technologies for each of Informa's 6 industry-specific clients.
## How we built it
We developed a neural network to algorithmically predict the estimated “noise” of a technology.
This information is then displayed in a dynamic dashboard for Informa’s market analysts.
## Challenges we ran into
Time constraints were a significant problem. There was limited accessibility of meaningful data. There were also minor syntax issues with Javascript ES6.
## Accomplishments that we're proud of
Interviewing Informa, and understanding the problem in a deep way. We're also proud of developing a website that is intuitive to use.
## What we learned
We learned that it is hard to access meaningful data, despite having a good solution in mind. We also learned that 4 young adults can eat a surprising amount of grapes in a short period of time.
## What's next for Maple Grapes
We'd like to improve the accuracy of the algorithm by increasing the body of historical data on technological successes and failures. We'd also like to account for a social media impact score by doing sentiment analysis.
## Team
Faith Dennis (UC Berkeley), Shekhar Kumar (University of Toronto), Peter Zheng (City University of New York), Avkash Mukhi (University of Toronto) | losing |
## Inspiration
The three of us all love music and podcasts. Coming from very diverse backgrounds, we all enjoy listening to content from a variety of places all around the globe. We wanted to design a platform where users can easily find new content from anywhere to enable cultural interconnectivity.
## What it does
TopCharts allows you to place a pin anywhere in the world using our interactive map and shows you the top songs and podcasts in that region. You can then follow the link directly to Spotify and listen!
## How we built it
We used the MapBox API to display an interactive map and to reverse-geocode the region where the pin is dropped. We then used the Spotify API to query chart data based on that geolocation. The app itself is built in React and hosted through Firebase!
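The production app does this client-side in React; the Python sketch below only illustrates the equivalent two-request flow. The `types=country` filter, the placeholder tokens, and the "Top 50 - <Country>" playlist search are assumptions about one plausible way to fetch a regional chart, not the team's confirmed approach.

```python
# Rough sketch of the pin -> region -> top-chart flow (tokens, the country
# filter, and the playlist-search strategy are assumptions).
import requests

MAPBOX_TOKEN = "YOUR_MAPBOX_TOKEN"    # placeholder
SPOTIFY_TOKEN = "YOUR_SPOTIFY_TOKEN"  # placeholder OAuth access token

def region_for_pin(lon: float, lat: float) -> str:
    """Reverse-geocode a dropped pin to a country name via the MapBox Geocoding API."""
    url = f"https://api.mapbox.com/geocoding/v5/mapbox.places/{lon},{lat}.json"
    resp = requests.get(url, params={"access_token": MAPBOX_TOKEN, "types": "country"})
    resp.raise_for_status()
    features = resp.json()["features"]
    return features[0]["text"] if features else "Global"

def top_chart_playlist(country: str) -> dict:
    """Search Spotify for the regional 'Top 50' chart playlist."""
    resp = requests.get(
        "https://api.spotify.com/v1/search",
        headers={"Authorization": f"Bearer {SPOTIFY_TOKEN}"},
        params={"q": f"Top 50 - {country}", "type": "playlist", "limit": 1},
    )
    resp.raise_for_status()
    items = resp.json()["playlists"]["items"]
    return items[0] if items else {}

if __name__ == "__main__":
    country = region_for_pin(-79.38, 43.65)  # a pin dropped near Toronto
    playlist = top_chart_playlist(country)
    print(country, playlist.get("external_urls", {}).get("spotify"))
```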
## Challenges we ran into
Getting the MapBox API customized to our needs!
## Accomplishments that we're proud of
Making a fully functional website with clean UI/UX within ~30 hours of ideation. We also got to listen to a lot of cool podcasts and songs from around the world while testing!
## What we learned
How robust the MapBox API is. It is so customizable, which we love! We also learned some great UI/UX tips from Grace Ma (Meta)!
## What's next for TopCharts
Getting approval from Spotify for an API quota extension so anyone across the world can use TopCharts!
Team #18 - Ben (benminor#5721), Graham (cracker#4700), Cam (jeddy#1714) | ## Inspiration
As college students learning to be socially responsible global citizens, we realized that it's important for all community members to feel a sense of ownership, responsibility, and equal access toward shared public spaces. Often, our interactions with public spaces inspire us to take action to help others in the community by initiating improvements and bringing up issues that need fixing. However, these issues don't always get addressed efficiently, in a way that empowers citizens to continue feeling that sense of ownership, or sometimes even at all! So, we devised a way to help FixIt for them!
## What it does
Our app provides a way for users to report Issues in their communities with the click of a button. They can also vote on existing Issues that they want Fixed! This crowdsourcing platform leverages the power of collective individuals to raise awareness and improve public spaces by demonstrating a collective effort for change to the individuals responsible for enacting it. For example, city officials who hear in passing that a broken faucet in a public park restroom needs fixing might not perceive a significant sense of urgency to initiate repairs, but they would get a different picture when 50+ individuals want them to FixIt now!
## How we built it
We started out by brainstorming use cases for our app and discussing the populations we wanted to target. Next, we discussed the main features we needed to serve these populations with full functionality. We collectively decided to use Android Studio to build an Android app and the Google Maps API for an interactive map display.
## Challenges we ran into
Our team had little to no prior exposure to the Android SDK, so we experienced a steep learning curve while developing a functional prototype in 36 hours. The Google Maps API took a lot of patience to get working, as did figuring out certain UI elements. We are very happy with our end result and all the skills we learned in 36 hours!
## Accomplishments that we're proud of
We are most proud of what we learned, how we grew as designers and programmers, and what we built with limited experience! As we were designing this app, we not only learned more about app design and gained technical expertise with the Google Maps API, but also explored our roles as engineers who are also citizens. Empathizing with our user group showed us a clear way to lay out the key features of the app that we wanted to build and helped us create an efficient design and clear display.
## What we learned
As we mentioned above, this project helped us learn more about the design process, Android Studio, the Google Maps API, and also what it means to be a global citizen who wants to actively participate in the community! The technical skills we gained put us in an excellent position to continue growing!
## What's next for FixIt
An Issue's Perspective:

* Progress bar, fancier rating system
* Crowdfunding

A Finder's Perspective:

* Filter Issues, badges/incentive system

A Fixer's Perspective:

\* Filter Issues off scores, Trending Issues | 
## Inspiration
Social anxiety affects hundreds of thousands of people and can negatively impact social interaction and mental health. Around campuses and schools, we were inspired by bulletin boards with encouraging anonymous messages, and we felt that these anonymous message boards were an inspiring source of humanity. With Bulletin, we aim to bring this public yet anonymous way of spreading words of wisdom to as many people as possible. Previous studies have even shown that online interaction decreased social anxiety in people with high levels of anxiety or depression.
## What it does
Bulletin is a website for posting anonymous messages. Bulletin's various boards are virtual reality spaces for users to enter messages. Bulletin uses speech-to-text to create a sense of community within the platform, as everything you see has been spoken by other users. To ensure anonymity, Bulletin does not store any of its users data, and only stores a number of recent messages. Bulletin uses language libraries to detect and filter negative words and profanity. To try Bulletin (<https://bulletinvr.online>), simply enter one of the bulletin boards and double tap or press the enter key to start recording your message.

## What is WebVR?
WebVR, or Web-based virtual reality, allows users to experience a VR environment within a web browser. As a WebVR app, Bulletin can also be accessed on the Oculus Rift, Oculus Go, HTC Vive, Windows Mixed Reality, Samsung Gear VR, Google Cardboard, and your computer or mobile device. As the only limit is having an internet connection, Bulletin is available to all and seeks to bring people together through the power of simple messages.
## How we built it
We use the A-Frame JavaScript framework to create WebVR experiences. Voice recognition is handled with the HTML Speech Recognition API.
The back-end service is written in Python. Our JS scripts use AJAX to make requests to the Flask-powered server, which queries the database and returns the messages that the WebVR front-end should display. When the user submits a message, we run it through the Python `fuzzy-wuzzy` library, which uses the Levenshtein metric to make sure it is appropriate and then save it to the database.
## Challenges we ran into
**Integrating A-Frame with our back-end was difficult**. A-Frame is simple of itself to create very basic WebVR scenes, but creating custom JavaScript components which would communicate with the Flask back-end proved time-consuming. In addition, many of the community components we tried to integrate, such as an [input mapping component](https://github.com/fernandojsg/aframe-input-mapping-component), were outdated and had badly-documented code and installation instructions. Kevin and Hamilton had to resort to reading GitHub issues and pull requests to get some features of Bulletin to work properly.
## Accomplishments that we're proud of
We are extremely proud of our website and how our WebVR environment turned out. It's exceeded all expectations, and features such as multiple bulletin boards and recording by voice were never initially planned, but work consistently well. Integrating the back-end with the VR front-end took time, but was extremely satisfying; when a user sends a message, other users will near-instantaneously see their bulletin update.
We are also proud of using a client-side speech to text service, which improves security and reduces website bandwith and allows for access via poor internet connection speeds.
Overall, we're all proud of building an awesome website.
## What we learned
Hamilton learned about the A-Frame JavaScript library (and JavaScript itself), which he had no experience with previously. He developed the math involved with rendering text in the WebVR environment.
Mykyta and Kevin learned how to use the HTML speech to text API and integrate the WebVR scenes with the AJAX server output.
Brandon learned to use the Google App Engine to host website back-ends, and learned about general web deployment.
## What's next for Bulletin
We want to add more boards to Bulletin, and expand possible media to also allowing images to be sent. We're looking into more sophisticated language libraries to try and better block out hate speech.
Ultimately, we would like to create an adaptable framework to allow for anyone to include a private Bulletin board in their own website. | winning |
## What inspired you?
The inspiration behind this project came from my grandmother, who has struggled with poor vision for years. Growing up, I witnessed firsthand how her limited sight made daily tasks and walking in unfamiliar environments increasingly difficult for her. I remember one specific instance when she tripped on an uneven curb outside the grocery store. Though she wasn’t hurt, the fall shook her confidence, and she became hesitant to go on walks or run errands by herself. This incident helped spark the idea of creating something that could help people like her feel safer and more secure while navigating their surroundings.
I wanted to develop a solution that would give visually impaired individuals not just mobility but confidence. The goal became clear: to create a product that could intuitively guide users through their environment, detecting obstacles like curbs, steps, and uneven terrain, and providing feedback they could easily understand. By incorporating haptic feedback, pressure-based sensors, and infrared technology, this system is designed to give users more control and awareness over their movements, helping them move through the world with greater independence and assurance. My hope is that this technology can empower people like my grandmother to reclaim their confidence and enjoy everyday activities without fear.
## What it does
This project is a smart shoe system designed to help visually impaired individuals safely navigate their surroundings by detecting obstacles and terrain changes. It uses infrared sensors located on both the front and bottom of the shoe to detect the distance to obstacles like curbs and stairs. When the user approaches an obstacle, the system provides real-time feedback through 5 servos. 3 servos are responsible for haptic feedback related to the distance from the ground, and distance in front of them, while the remaining servos are related to help guiding the user through navigation. The infrared sensors detect how far the foot is off the ground, and the servos respond accordingly. The vibrational motors, labeled 1a and 2a, are used when the distance exceeds 6 inches, delivering pulsating signals to inform the user of upcoming terrain changes. This real-time feedback ensures users can sense potential dangers and adjust their steps to prevent falls or missteps. Additionally the user would connect to the shoe based off of bluetooth.
The shoe system operates using three key zones of detection: the walking range (0-6 inches), the far walking range (6-12 inches), and the danger zone (12+ inches). In the walking range, the haptic feedback is minimal but precise, giving users gentle vibrations when the shoe detects small changes, such as a flat surface or minor elevation shifts. As the shoe moves into the far walking range (6-12 inches), where curbs or stairs may appear, the intensity of the feedback increases, and the vibrational motors start to pulse more frequently. This alert serves as a warning that the user is approaching a significant elevation change. When the distance exceeds 12 inches—the danger zone—the vibrational motors deliver intense, rapid feedback to indicate a drop-off or large obstacle, ensuring the user knows to take caution and adjust their step. These zones are carefully mapped to provide a seamless understanding of the terrain without overwhelming the user.
The system also integrates seamlessly with a mobile app, offering GPS-based navigation via four directional haptic feedback sensors that guide the user forward, backward, left, or right. Users can set their route through voice commands, unfortunately we had trouble integrating Deepgram AI, which would assist by understanding speech patterns, accents, and multiple languages, making it accessible to people who are impaired lingually. Additionally we had trouble integrating Skylo, which the idea would be to serve areas where Wi-Fi is unavailable, or connection unstable, the system automatically switches to Skylo via their Type1SC circuit board and antenna, a satellite backup technology, to ensure constant connectivity. Skylo sends out GPS updates every 1-2 minutes, preventing the shoe from losing its route data. If the user strays off course, Skylo triggers immediate rerouting instructions through google map’s api in the app which we did set up in our app, ensuring that they are safely guided back on track. This combination of sensor-driven feedback, haptic alerts, and robust satellite connectivity guarantees that visually impaired users can move confidently through diverse environments.
## How we built it
We built this project using a combination of hardware components and software integrations. To start, we used infrared sensors placed at the front and bottom of the shoe to detect distance and obstacles. We incorporated five servos into the design: three for haptic feedback based on distance sensing 3 and two for GPS-related feedback. Additionally, we used vibrational motors (1a and 2a) to provide intense feedback when larger drops or obstacles were detected. The app we developed integrates the Google Maps API for route setting and navigation. To ensure connectivity in areas with limited Wi-Fi, we integrated Skylo’s Type 1SC satellite hardware, allowing for constant GPS data transmission even in remote areas.
For the physical prototype, we constructed a 3D model of a shoe out of cardboard. Attached to this model are two 5000 milliamp-hour batteries, providing a total of 10000 mAh to power the system. We used an ESP32 microcontroller to manage the various inputs and outputs, along with a power distribution board to efficiently allocate power to the servos, sensors, and vibrational motors. All components were securely attached to the cardboard shoe prototype to create a functional model for testing.
## Challenges we ran into
One of the main challenges we encountered was working with the Skylo Type 1SC hardware. While the technology itself was impressive, the PDF documentation and schematics were quite advanced, requiring us to dive deeper into understanding the technical details. We successfully established communication between the Arduino and the Type 1SC circuit but faced difficulties in receiving a response back from the modem, which required further troubleshooting. Additionally, distinguishing between the different components on the circuit, such as data pins and shorting components, proved challenging, as the labeling was intricate and required careful attention. These hurdles allowed us to refine our skills in circuit analysis and deepen our knowledge of satellite communication systems.
On the software side, we had to address several technical challenges. Matching the correct Java version for our app development was more complex than expected, as version discrepancies affected performance. We also encountered difficulties creating a Bluetooth hotspot that could seamlessly integrate with the Android UI for smooth user interaction. On the hardware end, ensuring reliable connections was another challenge; we found that some of our solder joints for the pins weren’t as stable as needed, leading to occasional issues with connectivity. Through persistent testing and adjusting our approaches, we were able to resolve most of these challenges while gaining valuable experience in both hardware and software integration.
## Accomplishments that we're proud of
One of the accomplishments we’re most proud of is successfully setting up Skylo services and establishing satellite connectivity, allowing the system to access LTE data in areas with low or no Wi-Fi. This was a key component of the project, and getting the hardware to communicate with satellites smoothly was a significant milestone. Despite the initial challenges with understanding the complex schematics, we were able to wire the Arduino to the Type 1SC board correctly, ensuring that the system could relay GPS data and maintain consistent communication. The experience gave us a deeper appreciation for satellite technology and its role in enhancing connectivity for projects like ours.
Additionally, we’re proud of how well the array of sensors was set up and how all the hardware components functioned together. Each sensor, whether for terrain detection or obstacle awareness, worked seamlessly with the servos and haptic feedback system, resulting in better-than-expected performance. The responsiveness of the hardware components was more precise and reliable than we had originally anticipated, which demonstrated the strength of our design and implementation. This level of integration and functionality validates our approach and gives us confidence in the potential impact this project can have for the visually impaired community.
## What we learned
Throughout this project, we gained a wide range of new skills that helped bring the system to life. One of our team members learned to solder, which was essential for securing the hardware components and making reliable connections. We also expanded our knowledge in React system programming, which allowed us to create the necessary interactions and controls for the app. Additionally, learning to use Flutter enabled us to develop a smooth and responsive mobile interface that integrates with the hardware components.
On the hardware side, we became much more familiar with the ESP32 microcontroller, particularly its Bluetooth connectivity functions, which were crucial for communication between the shoe and the mobile app. We also had the opportunity to dive deep into working with the Type 1SC board, becoming comfortable with its functionality and satellite communication features. These new skills not only helped us solve the challenges within the project but also gave us valuable experience for future work in both hardware and software integration.
## What's next for PulseWalk
Next for PulseWalk, we plan to enhance the system's capabilities by refining the software to provide even more precise feedback and improve user experience. We aim to integrate additional features, such as obstacle detection for more complex terrains and improved GPS accuracy with real-time rerouting. Expanding the app’s functionality to include more languages and customization options using Deepgram AI will ensure greater accessibility for a diverse range of users. Additionally, we’re looking into optimizing battery efficiency and exploring more durable materials for the shoe design, moving beyond the cardboard prototype. Ultimately, we envision PulseWalk evolving into a fully commercialized product that offers a seamless, dependable mobility aid for the visually impaired which partners with shoe brands to bring a minimalist approach to the brand and make it look less like a medical device and more like an everyday product. | ## Inspiration
The current landscape of data aggregation for ML models relies heavily on centralized platforms, such as Roboflow and Kaggle. This causes an overreliance on invalidated human-volunteered data. Billions of dollars worth of information is unused, resulting in unnecessary inefficiencies and challenges in the data engineering process. With this in mind, we wanted to create a solution.
## What it does
**1. Data Contribution and Governance**
DAG operates as a decentralized and autonomous organization (DAO) governed by smart contracts and consensus mechanisms within a blockchain network. DAG also supports data annotation and enrichment activities, as users can participate in annotating and adding value to the shared datasets.
Annotation involves labeling, tagging, or categorizing data, which is increasingly valuable for machine learning, AI, and research purposes.
**2. Micropayments in Cryptocurrency**
In return for adding datasets to DAG, users receive micropayments in the form of cryptocurrency. These micropayments act as incentives for users to share their data with the community and ensure that contributors are compensated based on factors such as the quality and usefulness of their data.
**3. Data Quality Control**
The community of users actively participates in data validation and quality assessment. This can involve data curation, data cleaning, and verification processes. By identifying and reporting data quality issues or errors, our platform encourages everyone to actively participate in maintaining data integrity.
## How we built it
DAG was used building Next.js, MongoDB, Cohere, Tailwind CSS, Flow, React, Syro, and Soroban. | ## Inspiration
After a brutal Achilles tear and weeks in bed, one of our teammate’s moms re-tore her Achilles after a fall from a mobility scooter, similar to our walker. Luckily, she was at home when the re-injury occurred and could get help from family. Not everyone is so lucky, and this incident got us thinking about accessibility tools such as scooters, wheelchairs, and walkers and how they could be improved to assist when no one else is around.
When someone who is physically impaired falls, they are often unable or severely restricted in their ability to get back to their physical aid. Surely, it would be helpful if their walker came back for them?
## What it does
This walker is equipped with a camera, a large motor for power, and two smaller motors to pull the brakes. Using a computer vision algorithm that we trained using YOLOv8, the walker detects when someone has fallen and raised their hand, requesting assistance. The walker then motors toward the fallen user to help them.
## How we built it
This project combined complex **hardware**, **mechanical design**, and **software components**,
* On the hardware side, a **Raspberry Pi** handles our hand-detection algorithm and communicates with an ESC controlling the motors.
* The main motor spins a wheel for power, while two smaller motors apply pressure as brakes, enabling the walker to stop and turn in the direction of its user.
* On the mechanical design side, key mounts, brakes, and housing for the servo motors were all custom-designed and **3D printed** over the weekend!
* For software, we trained a **YOLOv8** computer vision model using hand-annotated pictures of raised hands. The model was hosted on a web server, and the **Raspberry Pi** communicated via **UART** with the ESC. The Pi also communicated with the servos via IO.
## Challenges we ran into
1. After training our model, we found that it wasn't running as we wanted on the Raspberry Pi due to power limitations. We had to quickly pivot and upload the model to a web server, developing our own API for the Raspberry Pi to upload images and receive the necessary information.
2. **UART** was really finicky to work with, after writing code that worked literally hours before, our team was disappointed to find that it not longer worked and that we had to fix it.
3. None of our team had worked with a Raspberry Pi before, making setup and configuration quite challenging.
4. We didn’t get all the crucial hardware we requested, but our team adapted and altered the project accordingly.
## Accomplishments that we're proud of
* Training the model in such a short time frame was a huge win, especially with the help of **roboflow**.
* Writing the **UART** was difficult but extremely satisfying once completed.
* Our team maintained a resourceful and positive attitude, overcoming every obstacle.
## What we learned
We learned a lot of new **technical skills**: new Python libraries, Linux commands, CAD designs, 3D printing settings, and hardware techniques. Working with a **Raspberry Pi** was an eye-opening experience, especially with UART communication.
Additionally, we gained valuable **teamwork** lessons, including how much sleep deprivation can affect problem-solving, but also the extent to which persistence can lead to solutions!
## What's next for Moonwalk?
Who knows? We may continue working on this after the weekend. It’s too early to say on this very sleep-deprived Sunday morning... | partial |
## Inspiration
Have you ever wanted to search something, but aren't connected to the internet? Data plans too expensive, but you really need to figure something out online quick? Us too, and that's why we created an application that allows you to search the internet without being connected.
## What it does
Text your search queries to (705) 710-3709, and the application will text back the results of your query.
Not happy with the first result? Specify a result using the `--result [number]` flag.
Want to save the URL to view your result when you are connected to the internet? Send your query with `--url` to get the url of your result.
Send `--help` to see a list of all the commands.
## How we built it
Built on a **Nodejs** backend, we leverage **Twilio** to send and receive text messages. When receiving a text message, we send this information using **RapidAPI**'s **Bing Search API**.
Our backend is **dockerized** and deployed continuously using **GitHub Actions** onto a **Google Cloud Run** server. Additionally, we make use of **Google Cloud's Secret Manager** to not expose our API Keys to the public.
Internally, we use a domain registered with **domain.com** to point our text messages to our server.
## Challenges we ran into
Our team is very inexperienced with Google Cloud, Docker and GitHub Actions so it was a challenge needing to deploy our app to the internet. We recognized that without deploying, we would could not allow anybody to demo our application.
* There was a lot of configuration with permissions, and service accounts that had a learning curve. Accessing our secrets from our backend, and ensuring that the backend is authenticated to access the secrets was a huge challenge.
We also have varying levels of skill with JavaScript. It was a challenge trying to understand each other's code and collaborating efficiently to get this done.
## Accomplishments that we're proud of
We honestly think that this is a really cool application. It's very practical, and we can't find any solutions like this that exist right now. There was not a moment where we dreaded working on this project.
This is the most well planned project that we've all made for a hackathon. We were always aware how our individual tasks contribute to the to project as a whole. When we felt that we were making an important part of the code, we would pair program together which accelerated our understanding.
Continuously deploying is awesome! Not having to click buttons to deploy our app was really cool, and it really made our testing in production a lot easier. It also reduced a lot of potential user errors when deploying.
## What we learned
Planning is very important in the early stages of a project. We could not have collaborated so well together, and separated the modules that we were coding the way we did without planning.
Hackathons are much more enjoyable when you get a full night sleep :D.
## What's next for NoData
In the future, we would love to use AI to better suit the search results of the client. Some search results have a very large scope right now.
We would also like to have more time to write some tests and have better error handling. | ## Inspiration
Our mission is to foster a **culture of understanding**. A culture where people of diverse backgrounds get to truly *connect* with each other. But, how can we reduce the barriers that exists today and make the world more inclusive?
Our solution is to bridge the communication gap of **people with different races and cultures** and **people of different physical abilities**.
## What we built
In 36 hours, we created a mixed reality app that allows everyone in the conversation to communicate using their most comfortable method:
You want to communicate using your mother tongue?
Your friend wants to communicate using sign language?
Your aunt is hard of hearing and she wants to communicate without that back-and-forth frustration?
Our app enables everyone to do that.
## How we built it
VRbind takes in speech and coverts it into text using Bing Speech API. Internally, that text is then translated into your mother tongue language using Google Translate API, and given out as speech back to the user through the built-in speaker on Oculus Rift. Additionally, we also provide a platform where the user can communicate using sign language. This is detected using the leap motion controller and interpreted as an English text. Similarly, the text is then translated into your mother tongue language and given out as speech to Oculus Rift.
## Challenges we ran into
We are running our program in Unity, therefore the challenge is in converting all our APIs into C#.
## Accomplishments that we are proud of
We are proud that we were able complete with all the essential feature that we intended to implement and troubleshoot the problems that we had successfully throughout the competition.
## What we learned
We learn how to code in C# as well as how to select, implement, and integrate different APIs onto the unity platform.
## What's next for VRbind
Facial, voice, and body language emotional analysis of the person that you are speaking with. | ## Inspiration
You don't have Internet connection, but you want to use Bing anyway cuz you're a hopeless Internet addict.
Or you want to find cheap hotel deals. HotlineBing can help.
## What it does
Allows you to use Bing via text messaging, search for hotel deals using natural language (that's right no prompts) using HP-HavenOnDemand Extract Entity API
## How I built it
Nodejs, Express, HP-HavenOnDemand, BrainTree API, Bing API, PriceLine API
## Challenges I ran into
Twilio Voice API requires an upgraded account so this is not really a hotline (\*sighs Drake), general asynchronous JS flow conflicts with our expectation of how our app should flow, integrating Braintree API is also hard given the nature of our hack
## Accomplishments that I'm proud of
Get some functionality working
## What I learned
JS is a pain to work with
## What's next for HotlineBing
More polished version: improve natural language processing, allow for more dynamic workflow, send Drake gifs via MMS?? | winning |
## Inspiration
I dreamed about the day we would use vaccine passports to travel long before the mRNA vaccines even reached clinical trials. I was just another individual, fortunate enough to experience stability during an unstable time, having a home to feel safe in during this scary time. It was only when I started to think deeper about the effects of travel, or rather the lack thereof, that I remembered the children I encountered in Thailand and Myanmar who relied on tourists to earn $1 USD a day from selling handmade bracelets. 1 in 10 jobs are supported by the tourism industry, providing livelihoods for many millions of people in both developing and developed economies. COVID has cost global tourism $1.2 trillion USD and this number will continue to rise the longer people are apprehensive about travelling due to safety concerns. Although this project is far from perfect, it attempts to tackle vaccine passports in a universal manner in hopes of buying us time to mitigate tragic repercussions caused by the pandemic.
## What it does
* You can login with your email, and generate a personalised interface with yours and your family’s (or whoever you’re travelling with’s) vaccine data
* Universally Generated QR Code after the input of information
* To do list prior to travel to increase comfort and organisation
* Travel itinerary and calendar synced onto the app
* Country-specific COVID related information (quarantine measures, mask mandates etc.) all consolidated in one destination
* Tourism section with activities to do in a city
## How we built it
Project was built using Google QR-code APIs and Glideapps.
## Challenges we ran into
I first proposed this idea to my first team, and it was very well received. I was excited for the project, however little did I know, many individuals would leave for a multitude of reasons. This was not the experience I envisioned when I signed up for my first hackathon as I had minimal knowledge of coding and volunteered to work mostly on the pitching aspect of the project. However, flying solo was incredibly rewarding and to visualise the final project containing all the features I wanted gave me lots of satisfaction. The list of challenges is long, ranging from time-crunching to figuring out how QR code APIs work but in the end, I learned an incredible amount with the help of Google.
## Accomplishments that we're proud of
I am proud of the app I produced using Glideapps. Although I was unable to include more intricate features to the app as I had hoped, I believe that the execution was solid and I’m proud of the purpose my application held and conveyed.
## What we learned
I learned that a trio of resilience, working hard and working smart will get you to places you never thought you could reach. Challenging yourself and continuing to put one foot in front of the other during the most adverse times will definitely show you what you’re made of and what you’re capable of achieving. This is definitely the first of many Hackathons I hope to attend and I’m thankful for all the technical as well as soft skills I have acquired from this experience.
## What's next for FlightBAE
Utilising GeoTab or other geographical softwares to create a logistical approach in solving the distribution of Oyxgen in India as well as other pressing and unaddressed bottlenecks that exist within healthcare. I would also love to pursue a tech-related solution regarding vaccine inequity as it is a current reality for too many. | ## Inspiration
It took us a while to think of an idea for this project- after a long day of zoom school, we sat down on Friday with very little motivation to do work. As we pushed through this lack of drive our friends in the other room would offer little encouragements to keep us going and we started to realize just how powerful those comments are. For all people working online, and university students in particular, the struggle to balance life on and off the screen is difficult. We often find ourselves forgetting to do daily tasks like drink enough water or even just take a small break, and, when we do, there is very often negativity towards the idea of rest. This is where You're Doing Great comes in.
## What it does
Our web application is focused on helping students and online workers alike stay motivated throughout the day while making the time and space to care for their physical and mental health. Users are able to select different kinds of activities that they want to be reminded about (e.g. drinking water, eating food, movement, etc.) and they can also input messages that they find personally motivational. Then, throughout the day (at their own predetermined intervals) they will receive random positive messages, either through text or call, that will inspire and encourage. There is also an additional feature where users can send messages to friends so that they can share warmth and support because we are all going through it together. Lastly, we understand that sometimes positivity and understanding aren't enough for what someone is going through and so we have a list of further resources available on our site.
## How we built it
We built it using:
* AWS
+ DynamoDB
+ Lambda
+ Cognito
+ APIGateway
+ Amplify
* React
+ Redux
+ React-Dom
+ MaterialUI
* serverless
* Twilio
* Domain.com
* Netlify
## Challenges we ran into
Centring divs should not be so difficult :(
Transferring the name servers from domain.com to Netlify
Serverless deploying with dependencies
## Accomplishments that we're proud of
Our logo!
It works :)
## What we learned
We learned how to host a domain and we improved our front-end html/css skills
## What's next for You're Doing Great
We could always implement more reminder features and we could refine our friends feature so that people can only include selected individuals. Additionally, we could add a chatbot functionality so that users could do a little check in when they get a message. | ## Inspiration
Our team identified two intertwined health problems in developing countries:
1) Lack of easy-to-obtain medical advice due to economic, social and geographic problems, and
2) Difficulty of public health data collection in rural communities.
This weekend, we built SMS Doc, a single platform to help solve both of these problems at the same time. SMS Doc is an SMS-based healthcare information service for underserved populations around the globe.
Why text messages? Well, cell phones are extremely prevalent worldwide [1], but connection to the internet is not [2]. So, in many ways, SMS is the *perfect* platform for reaching our audience in the developing world: no data plan or smartphone necessary.
## What it does
Our product:
1) Democratizes healthcare information for people without Internet access by providing a guided diagnosis of symptoms the user is experiencing, and
2) Has a web application component for charitable NGOs and health orgs, populated with symptom data combined with time and location data.
That 2nd point in particular is what takes SMS Doc's impact from personal to global: by allowing people in developing countries access to medical diagnoses, we gain self-reported information on their condition. This information is then directly accessible by national health organizations and NGOs to help distribute aid appropriately, and importantly allows for epidemiological study.
**The big picture:** we'll have the data and the foresight to stop big epidemics much earlier on, so we'll be less likely to repeat crises like 2014's Ebola outbreak.
## Under the hood
* *Nexmo (Vonage) API* allowed us to keep our diagnosis platform exclusively on SMS, simplifying communication with the client on the frontend so we could worry more about data processing on the backend. **Sometimes the best UX comes with no UI**
* Some in-house natural language processing for making sense of user's replies
* *MongoDB* allowed us to easily store and access data about symptoms, conditions, and patient metadata
* *Infermedica API* for the symptoms and diagnosis pipeline: this API helps us figure out the right follow-up questions to ask the user, as well as the probability that the user has a certain condition.
* *Google Maps API* for locating nearby hospitals and clinics for the user to consider visiting.
All of this hosted on a Digital Ocean cloud droplet. The results are hooked-through to a node.js webapp which can be searched for relevant keywords, symptoms and conditions and then displays heatmaps over the relevant world locations.
## What's next for SMS Doc?
* Medical reports as output: we can tell the clinic that, for example, a 30-year old male exhibiting certain symptoms was recently diagnosed with a given illness and referred to them. This can allow them to prepare treatment, understand the local health needs, etc.
* Epidemiology data can be handed to national health boards as triggers for travel warnings.
* Allow medical professionals to communicate with patients through our SMS platform. The diagnosis system can be continually improved in sensitivity and breadth.
* More local language support
[1] <http://www.statista.com/statistics/274774/forecast-of-mobile-phone-users-worldwide/>
[2] <http://www.internetlivestats.com/internet-users/> | partial |
## Inspiration
We are a team of goofy engineers and we love making people laugh. As Western students (and a stray Waterloo engineer), we believe it's important to have a good time. We wanted to make this game to give people a reason to make funny faces more often.
## What it does
We use OpenCV to analyze webcam input and initiate signals using winks and blinks. These signals control a game that we coded using PyGame.
See it in action here: <https://youtu.be/3ye2gEP1TIc>
## How to get set up
##### Prerequisites
* Python 2.7
* A webcam
* OpenCV
1. [Clone this repository on Github](https://github.com/sarwhelan/hack-the-6ix)
2. Open command line
3. Navigate to working directory
4. Run `python maybe-a-game.py`
## How to play
**SHOW ME WHAT YOU GOT**
You are playing as Mr. Poopybutthole who is trying to tame some wild GMO pineapples. Dodge the island fruit and get the heck out of there!
##### Controls
* Wink left to move left
* Wink right to move right
* Blink to jump
**It's time to get SssSSsssSSSssshwinky!!!**
## How we built it
Used haar cascades to detect faces and eyes. When users' eyes disappear, we can detect a wink or blink and use this to control Mr. Poopybutthole movements.
## Challenges we ran into
* This was the first game any of us have ever built, and it was our first time using Pygame! Inveitably, we ran into some pretty hilarious mistakes which you can see in the gallery.
* Merging the different pieces of code was by-far the biggest challenge. Perhaps merging shorter segments more frequently could have alleviated this.
## Accomplishments that we're proud of
* We had a "pineapple breakthrough" where we realized how much more fun we could make our game by including this fun fruit.
## What we learned
* It takes a lot of thought, time and patience to make a game look half decent. We have a lot more respect for game developers now.
## What's next for ShwinkySwhink
We want to get better at recognizing movements. It would be cool to expand our game to be a stand-up dance game! We are also looking forward to making more hacky hackeronis to hack some smiles in the future. | ## Inspiration
The beginnings of this idea came from long road trips. When driving having good
visibility is very important. When driving into the sun, the sun visor never seemed
to be able to actually cover the sun. When driving at night, the headlights of
oncoming cars made for a few moments of dangerous low visibility. Why isn't there
a better solution for these things? We decided to see if we could make one, and
discovered a wide range of applications for this technology, going far beyond
simply blocking light.
## What it does
EyeHUD is able to track objects on opposite sides of a transparent LCD screen in order to
render graphics on the screen relative to all of the objects it is tracking. i.e. Depending on where the observer and the object of interest are located on the each side of the screen, the location of the graphical renderings are adjusted
Our basic demonstration is based on our original goal of blocking light. When sitting
in front of the screen, eyeHUD uses facial recognition to track the position of
the users eyes. It also tracks the location of a bright flash light on the opposite side
of the screen with a second camera. It then calculates the exact position to render a dot on the screen
that completely blocks the flash light from the view of the user no matter where
the user moves their head, or where the flash light moves. By tracking both objects
in 3D space it can calculate the line that connects the two objects and then where
that line intersects the monitor to find the exact position it needs to render graphics
for the particular application.
## How we built it
We found an LCD monitor that had a broken backlight. Removing the case and the backlight
from the monitor left us with just the glass and liquid crystal part of the display.
Although this part of the monitor is not completely transparent, a bright light would
shine through it easily. Unfortunately we couldn't source a fully transparent display
but we were able to use what we had lying around. The camera on a laptop and a small webcam
gave us the ability to track objects on both sides of the screen.
On the software side we used OpenCV's haar cascade classifier in python to perform facial recognition.
Once the facial recognition is done we must locate the users eyes in their face in pixel space for
the user camera, and locate the light with the other camera in its own pixel space. We then wrote
an algorithm that was able to translate the two separate pixel spaces into real 3D space, calculate
the line that connects the object and the user, finds the intersection of this line and the monitor,
then finally translates this position into pixel space on the monitor in order to render a dot.
## Challenges we ran Into
First we needed to determine a set of equations that would allow us to translate between the three separate
pixel spaces and real space. It was important not only to be able to calculate this transformation, but
we also needed to be able to calibrate the position and the angular resolution of the cameras. This meant
that when we found our equations we needed to identify the linearly independent parts of the equation to figure
out which parameters actually needed to be calibrated.
Coming up with a calibration procedure was a bit of a challenge. There were a number of calibration parameters
that we needed to constrain by making some measurements. We eventually solved this by having the monitor render
a dot on the screen in a random position. Then the user would move their head until the dot completely blocked the
light on the far side of the monitor. We then had the computer record the positions in pixel space of all three objects.
This then told the computer that these three pixel space points correspond to a straight line in real space.
This provided one data point. We then repeated this process several times (enough to constrain all of the degrees of freedom
in the system). After we had a number of data points we performed a chi-squared fit to the line defined by these points
in the multidimensional calibration space. The parameters of the best fit line determined our calibration parameters to use
in the transformation algorithm.
This calibration procedure took us a while to perfect but we were very happy with the speed and accuracy we were able to calibrate at.
Another difficulty was getting accurate tracking on the bright light on the far side of the monitor. The web cam we were
using was cheap and we had almost no access to the settings like aperture and exposure which made it so the light would
easily saturate the CCD in the camera. Because the light was saturating and the camera was trying to adjust its own exposure,
other lights in the room were also saturating the CCD and so even bright spots on the white walls were being tracked as well.
We eventually solved this problem by reusing the radial diffuser that was on the backlight of the monitor we took apart.
This made any bright spots on the walls diffused well under the threshold for tracking. Even after this we had a bit of trouble
locating the exact center of the light as we were still getting a bit of glare from the light on the camera lens. We were
able to solve this problem by applying a gaussian convolution to the raw video before trying any tracking. This allowed us
to accurately locate the center of the light.
## Accomplishments that we are proud of
The fact that our tracking display worked at all we felt was a huge accomplishments. Every stage of this project felt like a
huge victory. We started with a broken LCD monitor and two white boards full of math. Reaching a well working final product
was extremely exciting for all of us.
## What we learned
None of our group had any experience with facial recognition or the OpenCV library. This was a great opportunity to dig into
a part of machine learning that we had not used before and build something fun with it.
## What's next for eyeHUD
Expanding the scope of applicability.
* Infrared detection for pedestrians and wildlife in night time conditions
* Displaying information on objects of interest
* Police information via license plate recognition
Transition to a fully transparent display and more sophisticated cameras.
General optimization of software. | ## Inspiration
The inspiration for OdinCare AI comes from the high costs and scheduling challenges associated with traditional healthcare. The goal is to provide an AI-powered doctor that allows users to report their day-to-day health status and receive AI-based recommendations and diagnostics.
## What it does
OdinCare AI serves as a health visual assistant that collects users' day-to-day health status and provides them with AI-based diagnosis and recommendations. It's designed to alleviate the burden of high healthcare expenses and the constraints and cost of scheduling doctor visits. It is also a tool for doctors to use to monitor their patients health status and monitor drug efficiency since patiences/users provide their day-to-day health status to the AI.
## How we built it
OdinCare AI is currently built in JavaScript and utilizes speech recognition APIs, text-to-speech recognition, and the ChatGPT 3.5 Turbo API. The application collects users' daily health status and provides diagnostic information in the form of reports, recommendations, and more.
## Challenges we ran into
The original plan involved collecting previous health records from users, which has not been fully implemented yet. There may have been technical or data-related challenges in implementing this feature.
Also, we tried using Terra's Odin AI, but we could not get enough data and support to proceed.
## Accomplishments that we're proud of
One of the accomplishments of the project is the successful collection of users' day-to-day health status and the provision of AI-based diagnosis. Additionally, the team learned how to set up and implement the ChatGPT API.
## What we learned
The team learned how to implement the ChatGPT API and gained a deeper understanding of the healthcare sector during the project.
## What's next for OdinCare Ai
The next steps for OdinCare AI include further development to build a prototype that can be tested to assess its feasibility. This implies a continued commitment to improving and expanding the capabilities of the AI-based healthcare assistant. | winning |
## Inspiration 💡
The inspiration behind Songsnap came from the desire to combine the nostalgia of photos with the emotional resonance of music. We wanted to create a tool that could analyze the visual elements of a photo and curate a Spotify playlist that captures the essence of the scene depicted.
## What it does 🎵
Songsnap utilizes **GPT-4 Vision** to extract detailed information from images such as time period, mood, cultural elements, and location. This data is then summarized using **Cohere** into concise bullet points. Users can upload photos to the platform, which then generates a curated playlist based on the image description using a **fine-tuned GPT-3.5** model. The generated playlists can be accessed and played directly through the website thanks to integration with the **Spotify API**. Additionally, users can manage and access their playlists through a library page powered by Supabase, a **PostgreSQL database** API. We also have an **Auth0** login system set up.
## How we built it 🔨
We built Songsnap using a combination of advanced AI models, APIs, and web development technologies. Here's the breakdown:
* Image Analysis: GPT-4 Vision for extracting image information.
* Summarization: Cohere for summarizing image data into bullet points.
* Playlist Generation: Fine-tuned GPT-3.5 model to generate playlists based on image descriptions.
* Web Development: Flask framework for building the website.
* Authentication: Auth0 for user authentication.
* Spotify Integration: Spotify API for playlist generation and playback.
* Database Management: Supabase for storing and managing user playlists using PostgreSQL.
## Challenges we ran into 🚩
* Model Integration: Integrating multiple AI models and APIs seamlessly was challenging and required thorough testing and debugging.
* Data Handling: Managing and processing large amounts of image data and playlist information efficiently was a significant challenge.
* API Limitations: Working within the limitations of third-party APIs, especially Spotify and Auth0, posed challenges during development and integration.
* User Experience: Ensuring a smooth and intuitive user experience across the website, especially with playlist management and playback, required careful design and implementation.
## Accomplishments that we're proud of ⭐
* Successfully integrating multiple AI models and APIs to create a cohesive platform.
* Building a user-friendly website with seamless authentication and playlist management features.
* Generating accurate and relevant playlists based on image descriptions using AI-driven analysis.
* Overcoming technical challenges and delivering a functional prototype within the hackathon timeframe.
## What we learned 📚
* Deepened our understanding of AI model integration and API usage.
* Enhanced our skills in web development, particularly with Flask and database management.
* Learned effective strategies for handling and processing diverse data types, including images and text.
* Gained insights into user experience design and optimization for web applications.
## What's next for Songsnap 🚀
* Enhanced Image Analysis: Further refining the image analysis capabilities to extract even more detailed and nuanced information.
* Improved Playlist Generation: Fine-tuning the playlist generation algorithm to provide more accurate and diverse song recommendations.
* Community Engagement: Implementing social features to allow users to share and discover playlists created by others.
* Mobile App Development: Expanding Songsnap's reach by developing mobile applications for iOS and Android platforms.
* Pitching: Seeing if Spotify is interested in the idea. | ## Inspiration : A Picture Is Worth A Thousand Words (Songs)?
Our team wanted to connect to the intangible parts of the music listening experience.
We were inspired by the feeling of walking down the street on a particularly beautiful day and want some music to match the mood you're in. Do you ever feel like a picture is worth a thousand songs?
With the help of web frameworks, computer vision, and some music APIs, we were able to capture the essence of the experience. **Audioscape** captures the musical essence of your favorite scenery.
## What it does
**Audioscape** captures the image that the user takes with their phone or mobile device. It then parses the image using the [KMeans Algorithms](https://en.wikipedia.org/wiki/K-means_clustering) for statistical analysis.
It uses the result of this analysis to create Spotify playlists matching the (mathematically decided!) "mood" of the picture.
## How I built it
We split the major components into tasks
## Challenges I ran into
**Simon**: I connected all the different parts of our app together to make them work cohesively. The challenging parts were to implement working login and file upload mechanisms that worked across both the front-end and back-end. There were several tutorials and guides online which were a bit over-engineered for the scope of our project, so I had to spend some time figuring out exactly which parts we would need and implement just those parts.
**Stephan**: I worked primarily on the mechanics of connecting the image values to song features, creating the playlist, and getting user data from the Spotify API. Authetication and understanding how the Spotify API returned information was difficult at first, but I was able to figure it out through lots of trial and error.
**Lisette**: While initially trying to build a python image analysis back-end using only the `Pillow` and `Image` libraries, it quickly became clear an elegant solution required turning the images in `numpy` arrays, iterating through them, and finding means of every pixel RGB in the section (or cluster) and finding means inside each cluster. I needed to use a computer vision library, along with `scikit-learn` and scikit image for image analysis.
**Nicha**: Challenges I ran into: I had a lot of trouble running the backend, so it was really hard to test whether the API requests correctly returned what was needed from the backend.
## Accomplishments that I'm proud of
**Lisette**:I'm proud of being able to use a nuanced algorithm to accurately determine the "true mean" of the image from the user.
**Nicha**: Accomplishments that I’m proud of: I am proud that I was able to figure about how to use React/HTML/CSS and design the frontend in such a short amount of time. I learnt React using a tutorial and this is the first real project I’ve done with it.
**Simon**: I was happy to be able to connect all the parts so seamlessly, because usually that's what troubles me the most in my projects. It was nice to make sure that the frontend and backend could communicate with no weird bugs.
**Stephan**: Being able to see a complete Spotify playlist after working on the API functions was very rewarding. I was also very surprised to see how closely those songs matched each other in terms of mood.
## What I learned
**Stephan**: Working on a project like this is ultimately about adapting and good communication! Knowing that there will be things that are new to learn as well as having teammates you can ask for help were crucial for me in succeeding on my individual part.
**Lisette**: I learned about working with a successful developer team!
**Nicha**: I learned to design a responsive front-end using React for both web apps and mobile apps. i learned how to connect fundamentals of web design to create a great user experience.
**Simon**: This is the smoothest that a hackathon has gone for me and it was really because of how we all communicated well and clearly defined tasks for ourselves. In my future projects, I will try to replicate that and make sure we're not held up by uncertainty.
## What's next for Audioscape
We'd love to develop a smoother front-end for the app - we think this could utimately tie in the experience of connecting music to beautiful images. | # Nexus, **Empowering Voices, Creating Connections**.
## Inspiration
The inspiration for our project, Nexus, comes from our experience as individuals with unique interests and challenges. Often, it isn't easy to meet others with these interests or who can relate to our challenges through traditional social media platforms.
With Nexus, people can effortlessly meet and converse with others who share these common interests and challenges, creating a vibrant community of like-minded individuals.
Our aim is to foster meaningful connections and empower our users to explore, engage, and grow together in a space that truly understands and values their uniqueness.
## What it Does
In Nexus, we empower our users to tailor their conversational experience. You have the flexibility to choose how you want to connect with others. Whether you prefer one-on-one interactions for more intimate conversations or want to participate in group discussions, our application Nexus has got you covered.
We allow users to either get matched with a single person, fostering deeper connections, or join one of the many voice chats to speak in a group setting, promoting diverse discussions and the opportunity to engage with a broader community. With Nexus, the power to connect is in your hands, and the choice is yours to make.
## How we built it
We built our application using a multitude of services/frameworks/tool:
* React.js for the core client frontend
* TypeScript for robust typing and abstraction support
* Tailwind for a utility-first CSS framework
* DaisyUI for animations and UI components
* 100ms live for real-time audio communication
* Clerk for a seamless and drop-in OAuth provider
* React-icons for drop-in pixel perfect icons
* Vite for simplified building and fast dev server
* Convex for vector search over our database
* React-router for client-side navigation
* Convex for real-time server and end-to-end type safety
* 100ms for real-time audio infrastructure and client SDK
* MLH for our free .tech domain
## Challenges We Ran Into
* Navigating new services and needing to read **a lot** of documentation -- since this was the first time any of us had used Convex and 100ms, it took a lot of research and heads-down coding to get Nexus working.
* Being **awake** to work as a team -- since this hackathon is both **in-person** and **through the weekend**, we had many sleepless nights to ensure we can successfully produce Nexus.
* Working with **very** poor internet throughout the duration of the hackathon, we estimate it cost us multiple hours of development time.
## Accomplishments that we're proud of
* Finishing our project and getting it working! We were honestly surprised at our progress this weekend and are super proud of our end product Nexus.
* Learning a ton of new technologies we would have never come across without Cal Hacks.
* Being able to code for at times 12-16 hours straight and still be having fun!
* Integrating 100ms well enough to experience bullet-proof audio communication.
## What we learned
* Tools are tools for a reason! Embrace them, learn from them, and utilize them to make your applications better.
* Sometimes, more sleep is better -- as humans, sleep can sometimes be the basis for our mental ability!
* How to work together on a team project with many commits and iterate fast on our moving parts.
## What's next for Nexus
* Make Nexus rooms only open at a cadence, ideally twice each day, formalizing the "meeting" aspect for users.
* Allow users to favorite or persist their favorite matches to possibly re-connect in the future.
* Create more options for users within rooms to interact with not just their own audio and voice but other users as well.
* Establishing a more sophisticated and bullet-proof matchmaking service and algorithm.
## 🚀 Contributors 🚀
| | | | |
| --- | --- | --- | --- |
| [Jeff Huang](https://github.com/solderq35) | [Derek Williams](https://github.com/derek-williams00) | [Tom Nyuma](https://github.com/Nyumat) | [Sankalp Patil](https://github.com/Sankalpsp21) | | losing |
# Pose-Bot
### Inspiration ⚡
**In these difficult times, where everyone is forced to work remotely and with schools and colleges going digital, students are
spending more time in front of the screen than ever before. This not only affects students but also employees who have to sit for hours in front of a screen. Prolonged exposure to computer screens and sitting in a bad posture can cause severe health problems like postural dysfunction and affect one's eyes. Therefore, we present to you Pose-Bot**
### What it does 🤖
We created this application to help users maintain a good posture, protect them from early signs of postural imbalance, and protect their vision. The application uses an
image classifier from Teachable Machine, a **Google API**, to detect the user's posture and notify them to correct it or move away
from the screen when they may not notice it themselves. It notifies the user whenever they are sitting in a bad position or are too close to the screen.
We first trained the model on the Google API to detect good posture/bad posture and whether the user is too close to the screen, then integrated the model into our application.
We created a notification service so that the user can use any other site and simultaneously get notified if their posture is bad. We have also included **EchoAR models to educate** children about the harms of sitting in a bad position and the importance of healthy eyes 👀.
### How We built it 💡
1. The website UI/UX was designed using Figma and then developed using HTML, CSS and JavaScript. Tensorflow.js was used to detect pose, and the browser's JavaScript notification API was used to send notifications.
2. We used the Google Tensorflow.js API to train our model to classify the user's pose, proximity to the screen, and whether the user is holding a phone.
3. For training our model we used our own images as the training data and tested it in different settings.
4. This model is then used to classify the user's video feed to assess their pose and detect if they are slouching, too close to the screen, or sitting in a generally bad pose.
5. If the user sits in a bad posture for a few seconds, the bot sends a notification to the user to correct their posture or move away from the screen.
### Challenges we ran into 🧠
* Creating a model with good accuracy in a general setting.
* Reverse engineering the Teachable Machine's Web Plugin snippet to aggregate data and then display notification at certain time interval.
* Integrating the model into our website.
* Embedding EchoAR models to educate children about the harms of sitting in a bad position and the importance of healthy eyes.
* Deploying the application.
### Accomplishments that we are proud of 😌
We created a completely functional application which can make a small difference in our everyday health. We successfully made the application display
system notifications which can be viewed across the system, even in different apps. We are proud that we could shape our idea into a functioning application which can be used by
any user!
### What we learned 🤩
We learned how to integrate Tensorflow.js models into an application. The most exciting part was learning how to train a model on our own data using the Google API.
We also learned how to create a notification service for an application. And best of all, **playing with EchoAR models** to create functionality which could
actually benefit students and help them understand the severity of the cause.
### What's next for Pose-Bot 📈
#### ➡ Creating a chrome extension
So that the user can use the functionality on their web browser.
#### ➡ Improve the pose detection model.
The accuracy of the pose detection model can be increased in the future.
#### ➡ Create more classes to help students concentrate better.
Include more functionality, like screen time tracking and detecting if the user is holding their phone, so we can help users concentrate.
### Help File 💻
* Clone the repository to your local directory
* `git clone https://github.com/cryptus-neoxys/posture.git`
* Install live-server to run it locally
* `npm i -g live-server`
* Go to the project directory and launch the website using live-server
* `live-server .`
* Voilà, the site is up and running on your PC.
* Ctrl + C to stop the live-server!!
### Built With ⚙
* HTML
* CSS
* Javascript
+ Tensorflow.js
+ Web Browser API
* Google API
* EchoAR
* Google Poly
* Deployed on Vercel
### Try it out 👇🏽
* 🤖 [Tensorflow.js Model](https://teachablemachine.withgoogle.com/models/f4JB966HD/)
* 🕸 [The Website](https://pose-bot.vercel.app/)
* 🖥 [The Figma Prototype](https://www.figma.com/file/utEHzshb9zHSB0v3Kp7Rby/Untitled?node-id=0%3A1)
### 3️⃣ Cheers to the team 🥂
* [Apurva Sharma](https://github.com/Apurva-tech)
* [Aniket Singh Rawat](https://github.com/dikwickley)
* [Dev Sharma](https://github.com/cryptus-neoxys) | ## Overview
People today are as connected as they've ever been, but there are still obstacles in communication, particularly for people who are deaf/mute and can not communicate by speaking. Our app allows bi-directional communication between people who use sign language and those who speak.
You can use your device's camera to talk using ASL, and our app will convert it to text for the other person to view. Conversely, you can also use your microphone to record your audio which is converted into text for the other person to read.
## How we built it
We used **OpenCV** and **Tensorflow** to build the Sign to Text functionality, using over 2500 frames to train our model. For the Text to Sign functionality, we used **AssemblyAI** to convert audio files to transcripts. Both of these functions are written in **Python**, and our backend server uses **Flask** to make them accessible to the frontend.
For the frontend, we used **React** (JS) and MaterialUI to create a visual and accessible way for users to communicate.
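As a rough illustration of the Text to Sign audio step (not our exact code), a Flask route can forward an uploaded audio file to AssemblyAI and poll until the transcript is ready. The endpoints follow AssemblyAI's public v2 REST API; the route name, form field, and API key below are placeholders:

```python
import time
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
ASSEMBLYAI_KEY = "YOUR_API_KEY"  # placeholder
HEADERS = {"authorization": ASSEMBLYAI_KEY}

@app.route("/transcribe", methods=["POST"])
def transcribe():
    # Upload the raw audio bytes sent by the React frontend
    audio_bytes = request.files["audio"].read()
    upload = requests.post("https://api.assemblyai.com/v2/upload",
                           headers=HEADERS, data=audio_bytes).json()

    # Request a transcript for the uploaded file
    job = requests.post("https://api.assemblyai.com/v2/transcript",
                        headers=HEADERS,
                        json={"audio_url": upload["upload_url"]}).json()

    # Poll until the transcript finishes (simplified; no error handling)
    while True:
        result = requests.get(f"https://api.assemblyai.com/v2/transcript/{job['id']}",
                              headers=HEADERS).json()
        if result["status"] in ("completed", "error"):
            break
        time.sleep(1)

    return jsonify({"text": result.get("text", "")})
```

The returned text is then displayed in the React UI for the other person to read.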
## Challenges we ran into
* We had to re-train our models multiple times to get them to work well enough.
* We switched from running our applications entirely on Jupyter (using Anvil) to a React App last-minute
## Accomplishments that we're proud of
* Using so many tools, languages and frameworks at once, and making them work together :D
* submitting on time (I hope? 😬)
## What's next for SignTube
* Add more signs!
* Use AssemblyAI's real-time API for more streamlined communication
* Incorporate account functionality + storage of videos | ## Inspiration
The pandemic has affected the health of billions worldwide, and not just through COVID-19. Studies have shown a worrying decrease in physical activity due to quarantining and the closure of clubs, sports, and gyms. Not only does this discourage an active lifestyle, but it can also lead to serious injuries from working out alone at home. Without a gym partner or professional trainer to help spot and correct errors in movements, one can continue to perform inefficient and often damaging exercises without even being aware themselves.
## What it does
Our solution to this problem is **GymLens**, a virtual gym trainer that allows anyone to workout at home with *personal rep tracking* and *correct posture guidance*. During the development of our Minimum Viable Product, we implemented a pose tracker using TensorFlow to track the movement of the person’s key body points. The posture during exercises such as pushups can then be evaluated by processing the data points through a linear regression algorithm. Based on the severity of the posture, a hint is provided on the screen to re-correct the posture.
## How we built it
We used a Tensorflow MoveNet model to detect the positions of body parts. These positions were used as inputs for our machine learning algorithm, which we trained to identify specific stances. Using this, we were able to identify repetitions between each pose.
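For reference, a minimal sketch of pulling MoveNet keypoints from a single frame with TensorFlow Hub (our real pipeline adds the stance classifier and rep counting on top; the model URL is the public single-pose Lightning variant):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Load the single-pose MoveNet Lightning model from TF Hub
model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
movenet = model.signatures["serving_default"]

def get_keypoints(frame):
    """Return 17 (y, x, score) keypoints for one RGB frame of shape (H, W, 3)."""
    # Lightning expects a 192x192 int32 image with a batch dimension
    img = tf.image.resize_with_pad(tf.expand_dims(frame, axis=0), 192, 192)
    outputs = movenet(tf.cast(img, dtype=tf.int32))
    return outputs["output_0"].numpy()[0, 0]  # shape (17, 3)
```

The 17 keypoints per frame are what we feed into the stance classifier; counting transitions between an "up" stance and a "down" stance is what gives the rep count.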
## Challenges we ran into
From the beginning, our team had to navigate the code editor of Sublime Text and Floobits, which proved to be more difficult than we imagined since members could not log in and sign in to Github. Our front end members who were coding with HTML and CSS ran into problems with margins and paddings with divs and buttons. Aligning and making sure the elements were proportional caused a lot of frustration, but with moral support and many external sources, we were able to get a sleek website with which we could host our project.
Incompatibilities between machine learning tools and the library used for pose detection were a major hurdle. We were able to solve this issue by using our own custom-coded machine learning library with a simple feed forward neural network.
Lastly, our struggles with Floobits ended up being one of our biggest setbacks. Our entire team soon realized that when two people were on the same file, the lines of code would severely glitch out, causing uncontrollable chaos when typing. Due to the separate nature of front end and back end programmers, it was inevitable that members would step on each other's toes in the same file and accidentally undo, delete, or add too many characters into one line of code. We ended up having to code cautiously for fear of deleting valuable code, but we had many laughs over the numerous errors that transpired due to this glitch.

Furthermore, Floobits' ability to overwrite code turned out to be both an asset and a liability. Although we were able to work on the same files in real time, destruction from one member of the team turned out to be collateral. On the last evening of the hackathon, one of our team members accidentally overwrote the remote files that the rest of the team had worked hours on instead of the local ones. In a frantic effort to get our code back, our group tried to press Ctrl-Z to get to the point where the deletion occurred, but it was too late. Unfortunately, there was nothing we could do to get about 3 hours of work back. Luckily, with our excellent team morale, we separated into groups to repair what had been lost.

However, our problems with this code editor did not end there. Nearing the end of the hackathon, our front end and back end duos came together triumphantly as we presented our accomplishments to each other. This final step turned out to be an unsuspecting hurdle once more. As the back end merged their final product into the website, many errors with GitHub pushes and the integration with Floobits became apparent. Progress had not been saved from branch to branch, and the front end code ended up being set back another 2 hours. Having dealt with this problem before, our team put our heads down, pushed away the frustration of restarting, and began to mend the lost progress once more.
## Accomplishments that we're proud of
One significant milestone within our project was the successful alignment of the canvas-drawn posture overlay with the body of a user. Its occurrence brought the team to a video call, where we offered congratulations while hiding our faces behind our freshly made overlay. Its successful tracking later became the main highlight of multiple demonstrations of exercises and jokes surrounding bad posture.
The front end development team enjoyed the challenge of coding in unfamiliar territory. Their encounters with unfamiliar functions as well as their first attempts at Javascript to create a stopwatch with working buttons all resulted in greater feelings of pride as they were incorporated into the site. They are proud to have designed a self-reassured, visually-appealing website by applying the knowledge they’ve gained over the last 36 hours.
Personally, we're proud to have stayed awake for 21 hours.
## What we learned
This hackathon became a giant learning experience for all of our members, despite the range of coding abilities and experience. Our more experienced back end members tackled Tensorflow’s MoveNet pose detection library for the first time. Meanwhile, the members responsible for the user interface, design, and website navigated Floobits and Sublime text as beginners. Our understanding of the different css functions greatly increased, most notably with layering and element positioning with inspect element. Additionally, members ventured into the world of Javascript for the first time, and realized its handiness with HTML and CSS.
Overall, our team surprised ourselves with our mental fortitude when numerous obstacles were thrown our way, and our ability to learn different functions, languages, software, and platforms so quickly. We worked cohesively and efficiently, all the while discovering the true extent of our capabilities under time pressure and during late nights calls.
## What's next for GymLens
In the future, we hope to expand the functionality of our project by making greater use of its movement correction and movement recognition capabilities. We plan to offer a greater library of exercises sourced from online trainers’ videos used as training data. This extra variety will allow users to take total command of their workout regimes, all while ensuring the safety and efficacy of all exercises. | winning |
## Inspiration
As the demand for developers increases worldwide, providing high quality learning resources is critical for beginners in coding. However, the majority of programming resources are written in English, which may introduce a potential language barrier for individuals that do not have English as a first language. Incode was designed to make programming resources more inclusive and accessible to the community of developers around the globe, whether they are beginners or seasoned professionals.
## How it works
Powered by OpenAI's GPT-3, Incode utilizes language translation in combination with code generation to provide programming material in just a few seconds. You can provide a command in any spoken language and our application will generate code in response while providing helpful comments and variable names in the same language that you prompted with!
### Example
We want to generate C code to solve the Dining Philosophers problem. The below generates C code in response to the following prompt in French:
Écrire un programme pour résoudre le problème des philosophes qui dînent.
*Response*
```
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>
#include <semaphore.h>
#define N 5
/* Une fourchette (sémaphore binaire) entre chaque paire de philosophes */
sem_t fourchettes[N];
void *philosophe(void *num)
{
    int i = *(int *)num;
    int gauche = (i + N - 1) % N;
    int droite = (i + 1) % N;
    while (1)
    {
        /* Le dernier philosophe prend les fourchettes dans l'ordre inverse
           pour éviter l'interblocage (deadlock) */
        if (i == N - 1)
        {
            sem_wait(&fourchettes[droite]);
            sem_wait(&fourchettes[gauche]);
        }
        else
        {
            sem_wait(&fourchettes[gauche]);
            sem_wait(&fourchettes[droite]);
        }
        printf("Le philosophe %d mange\n", i);
        sleep(1);
        sem_post(&fourchettes[gauche]);
        sem_post(&fourchettes[droite]);
    }
    return NULL;
}
int main()
{
    pthread_t tid[N];
    int ids[N];
    int i;
    for (i = 0; i < N; i++)
        sem_init(&fourchettes[i], 0, 1);
    for (i = 0; i < N; i++)
    {
        /* Chaque thread reçoit son propre identifiant (pas de course sur i) */
        ids[i] = i;
        pthread_create(&tid[i], NULL, philosophe, &ids[i]);
    }
    for (i = 0; i < N; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```
## Tech Stack
### Frontend
The Frontend was developed using React and Typescript.
### Backend
The Backend was developed using Flask and Python. In addition, the models that we used for language translation and code generation are from [OpenAI's GPT-3](https://beta.openai.com/docs/models/gpt-3). Finally, we deployed using Microsoft Azure. | ## Inspiration
Coding in Spanish is hard. Specifically, Carol had to work in Spanish while contributing to One Laptop per Child, but non-English speakers have this barrier every day. Around the world, many international collaborators work on open and closed source projects but in many cases, this language barrier can pose an additional obstacle to contributing for such projects, especially since English is the only widely supported language for programming. Thus, we aimed to solve this by allowing collaborators to easily translate source code files into their desired language locally, while maintaining the ability to commit in the original language of the project.
## What it does
Polycode is a developer command-line tool, which is also available as an Atom plugin, that lets you translate code to your language. Currently, it supports Python and Javascript in addition to any language that Google Translate supports for translating functionality, with plans in place to support more coding languages in the future.
## How I built it
* Polycode tokenizes identifiers and objects within the source files, then it finds out which strings can be translated
* Backend stdlib API interacts with the Translate API from the Google Cloud Platform
* Local maps are built to ensure 1:1 translations, so that translations do not change over time and cause breaking changes to the code (a simplified sketch of this flow is shown below)
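A simplified sketch of that flow (not the actual Polycode implementation): pull NAME tokens out of Python source, translate each identifier once through Google Cloud Translate, and cache the result in a local map so repeated runs always produce the same 1:1 mapping. The map file name and helper names are illustrative, and a real tool would also skip builtins and imported names:

```python
import io
import json
import keyword
import tokenize
from pathlib import Path
from google.cloud import translate_v2 as translate

client = translate.Client()
MAP_FILE = Path("polycode_map.json")  # local 1:1 map so translations never drift (illustrative name)
id_map = json.loads(MAP_FILE.read_text()) if MAP_FILE.exists() else {}

def translate_identifier(name, target="es"):
    """Translate an identifier once and reuse the cached result afterwards."""
    if name not in id_map:
        words = name.replace("_", " ")  # snake_case -> plain words for the API
        text = client.translate(words, target_language=target)["translatedText"]
        id_map[name] = text.lower().replace(" ", "_")
        MAP_FILE.write_text(json.dumps(id_map, ensure_ascii=False, indent=2))
    return id_map[name]

def translate_source(code, target="es"):
    """Rewrite NAME tokens in Python source, leaving keywords and strings alone."""
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(code).readline):
        if tok.type == tokenize.NAME and not keyword.iskeyword(tok.string):
            out.append((tokenize.NAME, translate_identifier(tok.string, target)))
        else:
            out.append((tok.type, tok.string))
    # untokenize in "compat" mode: formatting is approximate, but the code stays valid
    return tokenize.untokenize(out)
```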
## Challenges I ran into
* Parsing source code files and finding identifiers that should be translated i.e. primarily variable and function names
* Handling asynchronous calls to the Translate API within the API created by us in stdlib
## Accomplishments that we're proud of
* Figuring out how to create a pip package to allow for easy installation of command line tools
* Integrating with Atom
## What I learned
* Parsing and overwriting source files is hard
* Google Translate is weird
## What's next for Polycode
* Support more programming languages
* Deploying for the world to use! | ## AI, AI, AI...
The number of projects using LLMs has skyrocketed with the wave of artificial intelligence. But what if you *were* the AI, tasked with fulfilling countless orders and managing requests in real time? Welcome to chatgpME, a fast-paced, chaotic game where you step into the role of an AI who has to juggle multiple requests, analyzing input, and delivering perfect responses under pressure!
## Inspired by games like Overcooked...
chatgpME challenges you to process human queries as quickly and accurately as possible. Each round brings a flood of requests—ranging from simple math questions to complex emotional support queries—and it's your job to fulfill them quickly with high-quality responses!
## How to Play
Take Orders: Players receive a constant stream of requests, represented by different "orders" from human users. The orders vary in complexity—from basic facts and math solutions to creative writing and emotional advice.
Process Responses: Quickly scan each order, analyze the request, and deliver a response before the timer runs out.
Get analyzed - our built-in AI checks how similar your answer is to what a real AI would say :)
## Key Features
Fast-Paced Gameplay: Just like Overcooked, players need to juggle multiple tasks at once. Keep those responses flowing and maintain accuracy, or you’ll quickly find yourself overwhelmed.
Orders with a Twist: The more aware the AI becomes, the more unpredictable it gets. Some responses might start including strange, existential musings—or it might start asking you questions in the middle of a task!
## How We Built It
Concept & Design: We started by imagining a game where the player experiences life as ChatGPT, but with all the real-time pressure of a time management game like Overcooked. Designs were created in Procreate and our handy notebooks.
Tech Stack: Using Unity, we integrated a system where mock requests are sent to the player, each with specific requirements and difficulty levels. A template was generated using defang, and we also used it to sanitize user inputs. Answers are then evaluated using the fantastic Cohere API!
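One way to picture the scoring step (the game itself runs in Unity, so this Python snippet is only an illustration of the idea): embed the player's answer and a reference AI answer with Cohere and compare them by cosine similarity. The helper names are ours, and the threshold logic for awarding points is left out:

```python
import math
import cohere

co = cohere.Client("YOUR_COHERE_API_KEY")  # placeholder key

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def score_answer(player_answer: str, reference_answer: str) -> float:
    """Return a rough 0-1 score for how close the player's reply is to the 'real AI' reply."""
    embeddings = co.embed(texts=[player_answer, reference_answer]).embeddings
    return max(0.0, cosine(embeddings[0], embeddings[1]))
```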
Playtesting: Through multiple playtests, we refined the speed and unpredictability of the game to keep players engaged and on their toes. | partial |
## Inspiration
Recently, character experiences powered by LLMs have become extremely popular. Platforms like Character.AI, boasting 54M monthly active users and a staggering 230M monthly visits, are a testament to this trend. Yet, despite these figures, most experiences in the market offer text-to-text interfaces with little variation.
We wanted to take chatting with characters to the next level. Instead of a simple, standard text-based interface, we wanted an intricate visualization of your character with a 3D model viewable in your real-life environment, actual low-latency, immersive, realistic spoken dialogue with your character, and a really fun dynamic (generated on-the-fly) 3D graphics experience - seeing objects appear as they are mentioned in conversation - a novel innovation only made possible recently.
## What it does
An overview: CharactAR is a fun, immersive, and **interactive** AR experience where you get to speak your character's personality into existence, upload an image of your character or take a selfie, pick their outfit, and bring your custom character to life in an AR world, where you can chat using your microphone or type a question, and even have your character run around in AR! As an additional super cool feature, we compiled, hosted, and deployed the open source OpenAI Shap-E model (by ourselves, on Nvidia A100 GPUs from Google Cloud) to do text-to-3D generation, meaning your character is capable of generating 3D objects (mid-conversation!) and placing them in the scene. Imagine the Terminator generating robots, or a marine biologist generating fish and other wildlife! Our combination and intersection of these novel technologies enables experiences like those to now be possible!
## How we built it

*So how does CharactAR work?*
To begin, we built <https://charactar.org>, a web application that utilizes Assembly AI (State of the Art Speech-To-Text) to do real time speech-to-text transcription. Simply click the “Record” button, speak your character’s personality into existence, and click the “Begin AR Experience” button to enter your AR experience. We used HTML, CSS, and Javascript to build this experience, and bought the domain using GoDaddy and hosted the website on Replit!
In the background, we’ve already used OpenAI Function Calling, a novel OpenAI product offering, to choose voices for your custom character based on the original description that you provided. Once we have the voice and description for your character, we’re ready to jump into the AR environment.
The AR platform that we chose is 8th Wall, an AR deployment platform built by Niantic, which focuses on web experiences. Due to the emphasis on web experiences, any device can use CharactAR, from mobile devices, to laptops, or even VR headsets (yes, really!).
In order to power our customizable character backend, we employed the Ready Player Me player avatar generation SDK, providing us a responsive UI that enables our users to create any character they want, from taking a selfie, to uploading an image of their favorite celebrity, or even just choosing from a predefined set of models.
Once the model is loaded into the 8th Wall experience, we then use a mix of OpenAI (Character Intelligence), InWorld (Microphone Input & Output), and ElevenLabs (Voice Generation) to create an extremely immersive character experience from the get go. We animated each character using the standard Ready Player Me animation rigs, and you can even see your character move around in your environment by dragging your finger on the screen.
Each time your character responds to you, we make an API call to our self-hosted OpenAI Shap-E API, running on an NVIDIA A100 on Google Cloud. A short prompt based on the conversation between you and your character is sent to this novel text-to-3D model to be turned into a 3D object that is automatically inserted into your environment.
For example, if you are talking with Barack Obama about his time in the White House, our Shap-E API will generate a 3D object of the White House, and it’s really fun (and funny!) in game to see what Shap-E will generate.
## Challenges we ran into
One of our favorite parts of CharactAR is the automatic generation of objects during conversations with the character. However, the addition of these objects also led to an unfortunate spike in triangle count, which quickly builds up lag. So when designing this pipeline, we worked on reducing unnecessary detail in model generation. One of these methods is the selection of the number of inference steps prior to generating 3D models with Shap-E.
The other is to compress the generated 3D model, which ended up being more difficult to integrate than expected. At first, we generated the 3D models in the .ply format, but realized that .ply files are a nightmare to work with in 8th Wall. So we decided to convert them into .glb files, which would be more efficient to send through the API and better to include in AR. The .glb files could get quite large, so we used Google’s Draco compression library to reduce file sizes by 10 to 100 times. Getting this to work required quite a lot of debugging and package dependency resolving, but it was awesome to see it functioning.
Below, we have “banana man” renders from our hosted Shap-E model.


*Even after transcoding the .glb file with Draco compression, the banana man still stands gloriously (1 MB → 78 KB).*
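A rough sketch of that conversion step (our actual pipeline differs slightly): load the generated .ply with trimesh, export it as .glb, then shrink it with Draco via the gltf-pipeline CLI. The `-d` flag is the Draco switch as we remember it and is worth double-checking against the gltf-pipeline docs:

```python
import subprocess
import trimesh

def compress_mesh(ply_path: str, glb_path: str) -> None:
    """Convert a Shap-E .ply mesh into a Draco-compressed .glb for 8th Wall."""
    mesh = trimesh.load(ply_path)   # load the raw generated mesh
    mesh.export(glb_path)           # write an uncompressed .glb
    # Draco-compress in place with the gltf-pipeline Node CLI
    subprocess.run(["gltf-pipeline", "-i", glb_path, "-o", glb_path, "-d"], check=True)
```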
Although 8th Wall made development much more streamlined, AR Development as a whole still has a ways to go, and here are some of the challenges we faced. There were countless undefined errors with no documentation, many of which took hours of debugging to overcome. Working with the animated Ready Player Me models and the .glbs generated by our Open AI Shap-e model imposed a lot of challenges with model formats and dynamically generating models, which required lots of reading up on 3D model formats.
## Accomplishments that we're proud of
There were many small challenges in each of the interconnected portions of the project that we are proud to have persevered through the bugs and roadblocks. The satisfaction of small victories, like seeing our prompts come to 3D or seeing the character walk around our table, always invigorated us to keep on pushing.
Running AI models is computationally expensive, so it made sense for us to allocate this work to be done on Google Cloud's servers. This allowed us to access the powerful A100 GPUs, which made Shap-E model generation thousands of times faster than would be possible on CPUs. This also provided a great opportunity to work with FastAPI to create a convenient and extremely efficient method of inputting a prompt and receiving a compressed 3D representation of the query.
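A minimal sketch of what such an endpoint can look like (the real service wraps the full Shap-E generation and compression pipeline; `generate_glb` below is a placeholder for that step):

```python
from fastapi import FastAPI
from fastapi.responses import FileResponse
from pydantic import BaseModel

app = FastAPI()

class Prompt(BaseModel):
    text: str

def generate_glb(prompt: str) -> str:
    """Placeholder: run Shap-E on the GPU, compress the mesh, return the .glb path."""
    raise NotImplementedError

@app.post("/generate")
def generate(prompt: Prompt):
    # Blocking GPU work; the 8th Wall scene downloads the finished .glb when ready
    glb_path = generate_glb(prompt.text)
    return FileResponse(glb_path, media_type="model/gltf-binary")
```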
We integrated AssemblyAI's real-time transcription services to transcribe live audio streams with high accuracy and low latency. This capability was crucial for our project as it allowed us to convert spoken language into text that could be further processed by our system. The WebSocket API provided by AssemblyAI was secure, fast, and effective in meeting our requirements for transcription.
The function calling capabilities of OpenAI's latest models were an exciting addition to our project. Developers can now describe functions to these models, and the models intelligently output a JSON object containing the arguments for those functions. This feature enabled us to integrate GPT's capabilities seamlessly with external tools and APIs, offering a new level of functionality and reliability.
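For instance, the voice-selection step described earlier can be expressed as a function schema so the model has to return structured JSON. This sketch uses the 2023-era `openai` Python SDK; the function name and voice options are made up for illustration:

```python
import json
import openai

def pick_voice(character_description: str) -> str:
    """Ask the model to choose a voice and return it as structured JSON."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=[{"role": "user", "content": character_description}],
        functions=[{
            "name": "set_character_voice",  # hypothetical function name
            "description": "Choose the best voice for this character.",
            "parameters": {
                "type": "object",
                "properties": {
                    "voice": {"type": "string",
                              "enum": ["warm_female", "deep_male", "energetic_kid"]},
                },
                "required": ["voice"],
            },
        }],
        function_call={"name": "set_character_voice"},  # force a structured reply
    )
    args = json.loads(response["choices"][0]["message"]["function_call"]["arguments"])
    return args["voice"]
```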
For enhanced user experience and interactivity between our website and the 8th Wall environment, we leveraged the URLSearchParams interface. This allowed us to send the information of the initial character prompt seamlessly.
## What we learned
For the majority of the team, it was our first AR project using 8th Wall, so we learned the ins and outs of building with AR, the A-Frame library, and deploying a final product that can be used by end-users. We also had never used Assembly AI for real-time transcription, so we learned how to use websockets for Real-Time transcription streaming.
We also learned so many of the intricacies to do with 3D objects and their file types, and really got low level with the meshes, the object file types, and the triangle counts to ensure a smooth rendering experience.
Since our project required so many technologies to be woven together, there were many times where we had to find unique workarounds, and weave together our distributed systems. Our prompt engineering skills were put to the test, as we needed to experiment with countless phrasings to get our agent behaviors and 3D model generations to match our expectations. After this experience, we feel much more confident in utilizing the state-of-the-art generative AI models to produce top-notch content. We also learned to use LLMs for more specific and unique use cases; for example, we used GPT to identify the most important object prompts from a large dialogue conversation transcript, and to choose the voice for our character.
## What's next for CharactAR
Using 8th Wall technology like Shared AR, we could potentially have up to 250 players in the same virtual room, meaning you could play with your friends, no matter how far away they are from you. These kinds of collaborative, virtual, and engaging experiences are the types of environments that we want CharactAR to enable.
While each CharactAR custom character is animated with a custom rigging system, we believe there is potential for using the new OpenAI Function Calling schema (which we used several times in our project) to generate animations dynamically, meaning we could have endless character animations and facial expressions to match endless conversations. | ## Inspiration
Our inspiration comes from the idea that the **Metaverse is inevitable** and will impact **every aspect** of society.
The Metaverse has recently gained lots of traction with **tech giants** like Google, Facebook, and Microsoft investing into it.
Furthermore, the pandemic has **shifted our real-world experiences to an online environment**. During lockdown, people were confined to their bedrooms, and we were inspired to find a way to basically have **access to an infinite space** while in a finite amount of space.
## What it does
* Our project utilizes **non-Euclidean geometry** to provide a new medium for exploring and consuming content
* Non-Euclidean geometry allows us to render rooms that would otherwise not be possible in the real world
* Dynamically generates personalized content, and supports **infinite content traversal** in a 3D context
* Users can use their space effectively (they're essentially "scrolling infinitely in 3D space")
* Offers new frontier for navigating online environments
+ Has **applicability in endless fields** (business, gaming, VR "experiences")
+ Changing the landscape of working from home
+ Adaptable to a VR space
## How we built it
We built our project using Unity. Some assets were used from the Echo3D Api. We used C# to write the game. jsfxr was used for the game sound effects, and the Storyblocks library was used for the soundscape. On top of all that, this project would not have been possible without lots of moral support, timbits, and caffeine. 😊
## Challenges we ran into
* Summarizing the concept in a relatively simple way
* Figuring out why our Echo3D API calls were failing (it turned out that we had to edit some of the security settings)
* Implementing the game. Our "Killer Tetris" game went through a few iterations and getting the blocks to move and generate took some trouble. Cutting back on how many details we add into the game (however, it did give us lots of ideas for future game jams)
* Having a spinning arrow in our presentation
* Getting the phone gif to loop
## Accomplishments that we're proud of
* Having an awesome working demo 😎
* How swiftly our team organized ourselves and work efficiently to complete the project in the given time frame 🕙
* Utilizing each of our strengths in a collaborative way 💪
* Figuring out the game logic 🕹️
* Our cute game character, Al 🥺
* Cole and Natalie's first in-person hackathon 🥳
## What we learned
### Mathias
* Learning how to use the Echo3D API
* The value of teamwork and friendship 🤝
* Games working with grids
### Cole
* Using screen-to-gif
* Hacking google slides animations
* Dealing with unwieldly gifs
* Ways to cheat grids
### Natalie
* Learning how to use the Echo3D API
* Editing gifs in photoshop
* Hacking google slides animations
* Exposure to how Unity is used to render 3D environments, how assets and textures are edited in Blender, and what goes into sound design for video games
## What's next for genee
* Supporting shopping
+ Trying on clothes on a 3D avatar of yourself
* Advertising rooms
+ E.g. as your switching between rooms, there could be a "Lululemon room" in which there would be clothes you can try / general advertising for their products
* Custom-built rooms by users
* Application to education / labs
+ Instead of doing chemistry labs in-class where accidents can occur and students can get injured, a lab could run in a virtual environment. This would have a much lower risk and cost.
…the possibilities are endless
As a video game lover and someone that's been working with Gen AI and LLMs for a while, I really wanted to see what combining both in complex and creative ways could lead to. I truly believe that not too far in the future we'll be able to explore worlds in RPGs where the non-playable-characters feel immersively alive, and part of their world. Also I was sleep-deprived and wanted to hack something silly :3
## What it does
I leveraged generative AI (Large Language Models), as well as Vector Stores and Prompt Chaining to 'train' an NPC without having to touch the model itself. Everything is done in context, and through external memory using the Vector Store. Furthermore, a seperate model is concurrently analyzing the conversation as it goes to calculate conversation metrics (familiarity, aggresivity, trust, ...) to trigger events and new prompts dynamically! Sadly there is no public demo for it, because I didn't want to force anyone to create their own api key to use my product, and the results just wouldn't be the same on small hostable free tier llms.
## How we built it
For the frontend, I wanted to challenge myself and not use any framework or library, so this was all done through good old HTML and vanilla JS with some Tailwind here and there. For the backend, I used the Python FastAPI framework to leverage async workflows and websockets for token streaming to the frontend. I use OpenAI models combined together using Langchain to create complex pipelines of prompts that work together to keep the conversation going and update its course dynamically depending on user input.

Vector Stores serve as external memory for the LLM, which can query them through similarity search (or other algorithms) in real time to supplement its in-context conversation memory through two knowledge sources. 'Global' knowledge can be made up of thousands of words or small text documents, sources that can be shared by NPCs inhabiting the same 'world': things the NPC should know about the world around them, its history, its geography, etc. The other source is 'local' knowledge, which is mostly unique to the NPC: personal history, friends, daily life, hobbies, occupations, etc. The combination of both, accessible in real time and easily enhanceable through other LLMs (more on this in 'what's next'), leads us to a chatbot that's been essentially gaslit into a whole new virtual life!

Furthermore, heuristically determined conversation 'metrics' are dynamically analyzed by a separate LLM on the side to trigger pre-determined events based on their evolution. Each NPC can have pre-set values for these metrics, along with their own metric-triggered events, which can lead to complex storylines and give way to cool applications (quest giving, ...)
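As a stripped-down sketch of the two-store memory lookup (using the Langchain APIs of the time; FAISS, the sample facts, and the `recall` helper are illustrative choices rather than my exact implementation):

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()

# 'Global' knowledge shared by every NPC in the world, 'local' knowledge unique to one NPC
world_lore = ["The kingdom of Vael is recovering from a long drought.",
              "The royal guard patrols the market district at night."]
npc_bio = ["Mira runs the apothecary and distrusts the royal guard.",
           "Mira's brother left for the capital three winters ago."]

global_store = FAISS.from_texts(world_lore, embeddings)
local_store = FAISS.from_texts(npc_bio, embeddings)

def recall(user_message: str, k: int = 2) -> str:
    """Fetch the most relevant memories to splice into the NPC's prompt."""
    docs = (global_store.similarity_search(user_message, k=k)
            + local_store.similarity_search(user_message, k=k))
    return "\n".join(doc.page_content for doc in docs)
```

The retrieved snippets are prepended to the conversation prompt, which is what lets the NPC "remember" far more than fits in its context window.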
## Challenges we ran into
I wanted to do this project solo, so I ran out of time on a few features. The token streaming for the frontend was somehow impossible to make work correctly. It was my first time coding a 'raw' API like this, so that was also quite a challenge, but got easier once I got the hang of it. I could say a similar thing for the frontend, but I had so much fun coding it that I wouldn't even count it as a challenge!
Working with LLMs is always quite a challenge, as trying to get correctly formatted outputs can be compared to asking a toddler to follow precise instructions.
## Accomplishments that we're proud of
I'm proud of the idea and the general concept and design, as well as all the features and complexities I noted down that I couldn't implement! I'm also proud to have dedicated so much effort to such a useless, purely-for-fun scatter-brained 3-hours-of-sleep project in a way that I really haven't done before. I guess that's the point of hackathons! Despite a few things not working, I'm proud to have architectured quite a complex program in very little time, by myself, starting from nothing but sleep-deprivation-fueled jotted-down notes on my phone.
## What we learned
I learned a surprising amount of HTML, CSS and JS from this, elements of programming I always pushed away because I am a spoiled brat. I got to implement technologies I hadn't tried before as well, like Websockets and Vector Stores. As with every project, I learned about feature creep and properly organising my ideas in a way that something, anything can get done. I also learned that there is such a thing as too much caffeine, which I duely noted and will certainly regret tonight.
## What's next for GENPC
There are a lot of features I wanted to work on but didn't have time for, and also a lot of potential for future additions. One I mentioned earlier is to automatically extend global or local knowledge through a separate LLM: given keywords or short phrases, a ton of text can be added to complement the existing data and further fine-tune the NPC.
There's also an 'improvement mode' I wanted to add, where you can directly write data into static memory through the chat mode. I also didn't have time to completely finish the vector store or conversation metric graph implementations, although at the time I'm writing this devpost I still have 2 more hours to grind >:)
There's a ton of stuff that can arise from this project in the future: this could become a scalable web-app, where NPCs can be saved and serialized to be used elsewhere. Conversations could be linked to voice-generation and facial animation AIs to further boost the immersiveness. A ton of heuristic optimizations can be added around the metric and trigger systems, like triggers influencing different metrics. The prompt chaining itself could become much more complex, with added layers of validation and analysis. The NPCs could be linked to other agentic models and perform complex actions in simulated worlds! | winning |
## Inspiration
The upcoming election season is predicted to be drowned out by a mass influx of fake news. Deepfakes are a new method of impersonating famous figures saying fictional things, and could be particularly influential in the outcome of this and future elections. With international misinformation becoming more common, we wanted to develop a level of protection and reality for users. Facebook's Deepfake Detection Challenge, which aims to crowdsource ideas, inspired us to approach this issue.
## What it does
Our Chrome extension processes the video the user is currently watching. The video is first scanned using AWS to identify the politician/celebrity in question. Knowing the public figure allows the app to choose the model trained by machine learning to identify Deepfakes targeting that specific celebrity. The results of the deepfake analysis are then shown to the user through the Chrome extension, allowing the user to see in the moment whether a video might be authentic or forged.
Our Web app offers the same service, and includes a prompt that describes this issue to users. Users are able to upload a link to any video they are concerned may be tampered with, and receive a result regarding the authenticity of the video.
## How we built it
We used PCA (Principal Component Analysis) to build the model by hand, and broke the video down into one-second frames.
Previous research has evaluated the mouths in deepfake videos and noticed that these are altered, as are the general facial movements of a person. For example, previous algorithms have looked at mannerisms of politicians and detected when these mannerisms differed in a deepfake video. However, this is computationally a very costly process, and current methods are only utilized in research settings.
Each frame is then analyzed with six different concavity values that train the model. Testing data is then used to try the model, which was trained with a linear kernel (Support Vector Machine). The Chrome extension is done in JS and HTML, and the other algorithms are in Python.
The testing data set is composed of frames from the user's current browser video, and the training set is composed of Google Images of the celebrity.
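In outline, the training step looks something like the following simplified scikit-learn sketch (the extraction of the six concavity values per frame happens upstream, and the file names here are placeholders):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# X: one row of six mouth-concavity features per frame; y: 0 = real, 1 = deepfake
X = np.load("frame_features.npy")   # illustrative file names
y = np.load("frame_labels.npy")

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y)

# Project the six features with PCA, then separate the clusters with a linear-kernel SVM
model = make_pipeline(PCA(n_components=2), SVC(kernel="linear"))
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```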
## Challenges we ran into
Finding a dataset of DeepFakes large enough to train our model was difficult. We ended up splitting the frames into a 70% training set and a 30% testing set (all frames are different, however). Automatically exporting data from JavaScript to Python was also difficult, as JavaScript cannot write to external files.
Therefore, we utilized a server and were able to successfully coordinate the machine learning with our web application and Chrome extension!
## Accomplishments that we're proud of
We are proud to have created our own model and make the ML algorithm work! It was very satisfying to see the PCA clusters, and graph the values into groups within the Support Vector Machine. Furthermore, getting each of the components ie, Flask and Chrome extension to work was gratifying, as we had little prior experience in regards to transferring data between applications.
We are able to successfully determine if a video is a deep fake, and notify the person in real time if they may be viewing tampered content!
## What we learned
We learned how to use AWS and configure the credentials, and SDK's to work. We also learned how to configure and utilize our own machine learning algorithm, and about the dlib/OpenCV python Libraries!
Furthermore, we understood the importance of the misinformation issue, and how it is possible to utilize a conjuction of a machine learning model with an effective user interface to appeal to and attract internet users of all ages and demographics.
## What's next for DeFake
Train the model with more celebrities and get the Chrome extension to output whether the videos in your feed are DeepFakes or not as you scroll. Specifically, this would be done by decreasing the run time of the machine learning algorithm in the background. Although the algorithm is not as computationally costly as conventional algorithms created by experts, the run time bars exact real-time feedback within seconds.
We would also like to use the Facebooks DeepFake datasets when they are released.
Specifically, deepfakes are more likely to become a potent tool of cyber-stalking and bullying in the short term, says Henry Ajder, an analyst at Deeptrace. We hope to utilize larger data bases of more diverse deepfakes outside of celebrity images to also prevent this future threat. | ## Inspiration
We wanted to allow financial investors and people of political backgrounds to save valuable time reading financial and political articles by showing them what truly matters in the article, while highlighting the author's personal sentimental/political biases.
We also wanted to promote objectivity and news literacy in the general public by making them aware of syntax and vocabulary manipulation. We hope that others are inspired to be more critical of wording and truly see the real news behind the sentiment -- especially considering today's current events.
## What it does
Using Indico's machine learning textual analysis API, we created a Google Chrome extension and web application that allows users to **analyze financial/news articles for political bias, sentiment, positivity, and significant keywords.** Based on a short glance on our visualized data, users can immediately gauge if the article is worth further reading in their valuable time based on their own views.
The Google Chrome extension allows users to analyze their articles in real-time, with a single button press, popping up a minimalistic window with visualized data. The web application allows users to more thoroughly analyze their articles, adding highlights to keywords in the article on top of the previous functions so users can get to reading the most important parts.
Though there is a possibility of opening this to the general public, we see tremendous opportunity in the financial and political sectors in optimizing time and wording.
## How we built it
We used Indico's machine learning textual analysis API, React, NodeJS, JavaScript, MongoDB, HTML5, and CSS3 to create the Google Chrome Extension, web application, back-end server, and database.
## Challenges we ran into
Surprisingly, one of the more challenging parts was implementing a performant Chrome extension. Design patterns we knew had to be put aside to follow a specific one, which we gradually aligned with. It was overall a good experience using Google's APIs.
## Accomplishments that we're proud of
We are especially proud of being able to launch a minimalist Google Chrome Extension in tandem with a web application, allowing users to either analyze news articles at their leisure, or in a more professional degree. We reached more than several of our stretch goals, and couldn't have done it without the amazing team dynamic we had.
## What we learned
Trusting your teammates to tackle goals they never did before, understanding compromise, and putting the team ahead of personal views was what made this Hackathon one of the most memorable for everyone. Emotional intelligence played just as an important a role as technical intelligence, and we learned all the better how rewarding and exciting it can be when everyone's rowing in the same direction.
## What's next for Need 2 Know
We would like to consider what we have now as a proof of concept. There is so much growing potential, and we hope to further work together in making a more professional product capable of automatically parsing entire sites, detecting new articles in real-time, working with big data to visualize news sites differences/biases, topic-centric analysis, and more. Working on this product has been a real eye-opener, and we're excited for the future. | ## Inspiration
Everybody eats, and in college, if you are taking difficult classes, it is often challenging to hold a job. Therefore, as college students, we have no income during the year. Our inspiration came as we moved off campus this year to live in apartments with full kitchens but lacked the funds to make complete meals at a reasonable price. So along came the thought that we couldn't be the only ones with this issue, so..."what if we made an app where all of us could connect via a social media platform and share and post our meals with the price range attached to it so that we don't have to come up with good cost-effective meals on our own".
## What it does
Our app connects college students, or anyone that is looking for a great way to find good cost effective meals and doesn't want to come up with the meals on their own, by allowing everyone to share their meals and create an abundant database of food.
## How we built it
We used android studio to create the application and tested the app using the built in emulator to see how the app was coming along when viewed on the phone. Specifically we used an MVVM design to interweave the functionality with the graphical display of the app.
## Challenges we ran into
The backend that we were familiar with ended up not working well for us, so we had to transition to another backend provider called Back4App. We were also challenged by the user's personal view and by being able to save the user's data reliably.
## Accomplishments that we're proud of
We are proud of all the work that we put into the application in a very short amount of time, and learning how to work with a new backend during the same time so that everything worked as intended. We are proud of the process and organization we had throughout the project, beginning with a wire frame and building our way up part by part until the finished project.
## What we learned
We learned how to work with drop-down menus that hold multiple values of possible data for the user to choose from. And one of our group members learned how to work with app development at full scale for the first time.
## What's next for Forkollege
In version 2.0 we plan on implementing a better settings page that allows the user to change their password. We also plan on fixing the For You page: for each recipe displayed, we were not able to come up with a way to showcase the number of $ signs and instead opted for using stars again. As an outside user this is a little confusing, so updating this aspect is of the utmost importance.
## Inspiration
Ethan: I have a fiery passion for the fields of AI, Nutrition, and Neuroscience, which led to an attempt to work together to create a tool for those who may not share the same passion, but share the same, well... anatomy & biology, and who can also benefit.
I brought my expertise from my current work as a Science Research Fellow @ CUNY Research Foundation, neuroscience lab Research Assistant @ SUNY Downstate (where we are running clinical trials on a vitamin medical therapy for Autism), and Nutrition lab Research Assistant @ Brooklyn College (where we do research on the epigenetic effects of prenatal and postnatal Choline supplementation & diet).
**Problem**
Personal Trainers: 60-120$ / hr
Dietitians: Insurance + can be upwards of $1,000
Appointment with Doctor to discuss diagnosis: Not accessible to all & wait times are insane.
**Common Issue**
A common reservation everyone in the group had was the current culture of referees vs. doers.
More people are willing to post/tell you what needs to be done, but they are not providing something the average busy American with a family can actually implement. This leads to guilt, analysis paralysis, and self-pity. This is why we created a toolbox, rather than a booklet.
Another common experience was a lack of education in the fields of nutrition, fitness, and neuroscience. Without great knowledge, comes great levels of powerlessness; we've all spent countless hours looking up the right diet/routine, just to get sucked into the vortex of online fitness influencers providing unsound advice.
Ethan: The gut-brain axis, epigenetics, nutritional psychiatry, and healthspan are some of the main drivers of much of what I do, and it was an honor to have an amazing team that ran with the idea and built something of substantial value that anyone can use.
## What it does
**TheOpen Health app calculates your:**
* TDEE - Total Daily Energy Expenditure. This is the total amount of calories you burn on a daily basis, based on your height, weight, age, sex, bodyfat, and activity level.
* BMR - Basal Metabolic Rate. This is the total amount of calories you burn by simply being alive based on your height, weight, age, sex, and body fat.
* BMI - Body Mass Index.
* Calorie Deficit or Surplus Suggestions based on your stated goal.
**The Open Health app provides:**
* an AI Computer Vision feature that allows you to do 2 big things:
+ Track your reps without thinking
+ Track your calories by simply TAKING A PICTURE OF YOUR FOOD
* detailed + personalized workout routines that adhere to your current equipment, fitness, and motivational state.
* detailed nutritional tips, with the overarching goal of decreasing inflammation, and increasing healthspan + vitality!
* Motivational pieces of advice from Arnold, Mike Mentzer, David Goggins, Dietitians, and more!!
* a tool for you to track calories, macronutrients, and vitamins
## How we built it
We built this city on Rock & Roll
In all seriousness though, we first iterated our ideas off of problems we and our family members face.
Considering our diverse backgrounds, we found overlap in the realm of fitness; whether that be through our troubles, lack of knowledge, or relatives with metabolic & mental issues.
We then figured out the modalities by which we wanted to make a tool to assist folks with their fitness journey, whether that be from starting, years down the line, working on lowering blood sugar or mood, or anything else!
After much trouble with the initial APIs we found, we filtered through and found some functional ones, which we implemented into the app.
We then fine-tuned several models, ranging from old-school bodybuilders to dietitians, on several conversations and prompts, to get our desired outputs.
Afterward, we wrote code implementing the Mifflin-St Jeor Equation to calculate TDEE & BMR, as well as a BMI formula.
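In essence, those calculations boil down to a few lines; here is a rough sketch (the activity multipliers are the commonly used ranges, not values specific to our app):

```python
def bmr_mifflin_st_jeor(weight_kg: float, height_cm: float, age: int, sex: str) -> float:
    """Basal Metabolic Rate via the Mifflin-St Jeor equation."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age
    return base + 5 if sex == "male" else base - 161

ACTIVITY = {"sedentary": 1.2, "light": 1.375, "moderate": 1.55,
            "active": 1.725, "very_active": 1.9}

def tdee(bmr: float, activity_level: str) -> float:
    """Total Daily Energy Expenditure = BMR scaled by an activity multiplier."""
    return bmr * ACTIVITY[activity_level]

def bmi(weight_kg: float, height_cm: float) -> float:
    return weight_kg / (height_cm / 100) ** 2

# Example: 25-year-old male, 80 kg, 180 cm, moderately active
b = bmr_mifflin_st_jeor(80, 180, 25, "male")
print(round(b), round(tdee(b, "moderate")), round(bmi(80, 180), 1))  # 1805, 2798, 24.7
```

The suggested calorie deficit or surplus is then applied on top of the TDEE number based on the user's stated goal.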
Finally, we worked on the UI/UX in order to make the app (1) more accessible (2) more enticing to... access.
## Challenges we ran into
Faulty APIs.
Several of the APIs we planned to use wound up not working; we found out long after we tried implementing them, big L.
## Accomplishments that we're proud of
We wanted to use Computer Vision from the start, but we gave up on the second day being that none of the APIs worked. Garv then came in clutch and found one that worked!! We had to pivot from a form checker/rep counter to a food identifier and calorie input model.
## What we learned
To a great extent, we learned that tools rule(s). Information and Content production are great for sharing a message, but when you want a mission to be implemented, you must provide the tools.
We all utilized the app and have learned more about our bodies, what to do, and how to improve than we would have from any amount of time spent reading articles or watching videos.
Of course, it is great to expand your knowledge, but vanity creeps up when you have no plans or strategies to implement what is learned.
To quote Fight Club "Self-improvement is {you know what they said}. Now self-destruction {construction\*} is the answer."
We all now have tools to take with us in our lifelong fitness endeavors.
## What's next for Open Health
We plan to continue building Open Health and running small-scale research projects where we get participants to utilize CV, Calorie Counters, and AI-generated advice to better their health, and compare them against those who tried to navigate the vast health space on their own.
Another goal is to utilize our platform to expand the field of Nutritional Psychiatry; we believe it is going to be a prevailing field
1) because of its legitimacy and efficacy.
2) because the mental health crisis isn't easing up, and we don't want to continue losing our generation to what starts as surface mental health issues and progressively compounds into a neurodegenerative disease.
## Evidence-Based
* "Earlier studies have shown that symptoms of depression are linked to an increased risk of Alzheimer's disease." - Alzinfo
* "Many studies have found a link between anxiety-prone personality and shortened lifespan." - VeryWellMind
* "What do diabetes and dementia have in common? Studies have shown that type 2 diabetes can be a risk factor for Alzheimer's disease, vascular dementia and other types of dementia. This is because the same cardiovascular problems that increase the risk of type 2 diabetes also increase the risk of dementia." - Alzheimers.Ca
* "You may be surprised to learn that being overweight or having obesity are linked with a higher risk of getting 13 types of cancer. These cancers make up 40% of all cancers diagnosed in the United States each year." - CDC
* "Several small human trials have suggested a time-limited benefit of ketosis in delaying cognitive decline in older adults in various stages of dementia. The benefit seems to be particularly true in people without the apolipoprotein E4 allele." - SagePub
## Previous works on the topic from our Hackers
Food & Depression - Exhaustive List
Happiness, Depression, Nutritional Psychiatry
<https://ethancastro.substack.com/p/food-and-mood-exhaustive-list>
A lil' Gut-Brain Axis and Suicide Parasites.
<https://ethancastro.substack.com/p/a-lil-gut-brain-axis-and-suicide> | ## Inspiration
In our busy lives, many of us forget to eat, overeat, or eat just enough - but not necessarily with a well-balanced composition of food groups. Researching precise nutrition values can be a hassle - so we set out to make an app that helps people who are too busy with their careers balance their diets.
At any time of day, a user can log a meal by taking photos of the food items they eat. The app then uses the Google Vision API (a call out to Google Cloud) to confirm the identity of the food item(s) and cross-references the food with the MyFitnessPal API to retrieve detailed nutrition information. This data is aggregated with the meal timestamp into a MongoDB database and displayed on the Caloric calendar.
## How we built it
We built a front-end in Node.js and React, which connects to a MongoDB backend via ExpressJS and Mongoose that stores the user's data (and eating habits).
The front-end additionally contains all the external API calls to Google Vision API and MyFitnessPal. We also have Twilio integration to send messages to users about their diet data, which we plan to extend in our next steps.
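For a rough idea of the flow, here is a minimal Python sketch of the classify-and-log step (illustrative only: the real app makes these calls from the React front end in JavaScript, and the MyFitnessPal lookup is stubbed out):

```python
# Minimal sketch of the classify-and-log step (illustrative only; the real app
# makes these calls from the React front end, and the nutrition lookup is stubbed).
import datetime

from google.cloud import vision   # pip install google-cloud-vision
from pymongo import MongoClient   # pip install pymongo

def classify_food(image_path: str) -> str:
    """Return the most likely food label for a photo via the Vision API."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    labels = client.label_detection(image=image).label_annotations
    return labels[0].description if labels else "unknown"

def lookup_nutrition(food: str) -> dict:
    """Placeholder: the real app cross-references MyFitnessPal here."""
    return {"calories": None}

def log_meal(image_path: str, mongo_uri: str = "mongodb://localhost:27017") -> None:
    """Classify a meal photo and store it with a timestamp for the Caloric calendar."""
    food = classify_food(image_path)
    meals = MongoClient(mongo_uri)["caloric"]["meals"]
    meals.insert_one({
        "food": food,
        "nutrition": lookup_nutrition(food),
        "timestamp": datetime.datetime.utcnow(),
    })
```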
## Challenges we ran into
Mostly npm dependency conflicts!
## Accomplishments that we're proud of
We integrated many services, namely Google Vision API.
This integration brings a new perspective on diet tracking and tailoring: it doesn't have to be a laborious process, and we make it easy and simple for users. Our MongoDB integration also makes the experience fast and seamless: data is quickly available and calorie counting is responsive. Moreover, we take advantage of a tool the user already has: their own device's camera!
## What we learned
We learned about the differences between making API calls from the front end and the back end, namely where that data can get routed and which cases are better for the user experience. We also learned about the power of using React in the browser, a much more powerful paradigm than simple HTML generation.
## What's next for Caloric
Integrating InterSystems to get health-data of a particular user, and then tailor health analytics and suggestions so that a user can see ways that they can improve their diet, depending on their goals and needs. | ## Inspiration
Gun violence is a dire problem in the United States. When looking at case studies of mass shootings in the US, there is often surveillance footage of the shooter *with their firearm* **before** they started to attack. That's both the problem and the solution. Right now, surveillance footage is used as an "after-the-fact" resource. It's used to *look back* at what transpired during a crisis. This is because even the biggest of surveillance systems only have a handful of human operators who simply can't monitor all the incoming footage. But think about it: most schools, malls, etc. have security cameras in almost every hallway and room. It's a wasted resource. What if we could use surveillance footage as an **active and preventive safety measure**? That's why we turned *surveillance* into **SmartVeillance**.
## What it does
SmartVeillance is a system of security cameras with *automated firearm detection*. Our system simulates a CCTV network that can intelligently classify and communicate threats for a single operator to easily understand and act upon. When a camera in the system detects a firearm, the camera number is announced and is displayed on every screen. The screen associated with the camera gains a red banner for the operator to easily find. The still image from the moment of detection is displayed so the operator can determine if a firearm is actually present or if it was a false positive. Lastly, the history of detections among cameras is displayed at the bottom of the screen so that the operator can understand the movement of the shooter when informing law enforcement.
## How we built it
Since we obviously can't have real firearms here at TreeHacks, we used IBM's Cloud Annotation tool to train an object detection model in TensorFlow for *printed cutouts of guns*. We integrated this into a React.js web app to detect firearms visible in the computer's webcam. We then used PubNub to communicate between computers in the system when a camera detected a firearm, the image from the moment of detection, and the recent history of detections. Lastly, we built onto the React app to add features like object highlighting, sounds, etc.
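To illustrate the camera-to-operator hand-off, here is a rough Python sketch of what a camera node does on a detection (our actual implementation is in React with the PubNub JavaScript SDK; keys and channel names are placeholders):

```python
# Sketch of what a camera node publishes on a detection (our real implementation
# is in React with the PubNub JavaScript SDK; keys and channel names are placeholders).
import base64
import time

from pubnub.pnconfiguration import PNConfiguration
from pubnub.pubnub import PubNub

def make_pubnub(camera_id: str) -> PubNub:
    config = PNConfiguration()
    config.publish_key = "pub-c-..."      # replace with real keys
    config.subscribe_key = "sub-c-..."
    config.uuid = camera_id
    return PubNub(config)

def report_detection(pubnub: PubNub, camera_id: str, frame_jpeg: bytes) -> None:
    """Broadcast the camera number, timestamp, and still image so every
    operator screen subscribed to the channel can show the red banner."""
    pubnub.publish().channel("detections").message({
        "camera": camera_id,
        "timestamp": time.time(),
        "image_b64": base64.b64encode(frame_jpeg).decode("ascii"),
    }).sync()
```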
## Challenges we ran into
Our biggest challenge was creating our gun detection model. It was really poor the first two times we trained it, and it basically recognized everything as a gun. However, after some guidance from some lovely mentors, we understood the different angles, lightings, etc. that go into training a good model. On our third attempt, we were able to take that advice and create a very reliable model.
## Accomplishments that we're proud of
We're definitely proud of having excellent object detection at the core of our project despite coming here with no experience in the field. We're also proud of figuring out to transfer images between our devices by encoding and decoding them from base64 and sending the String through PubNub to make communication between cameras almost instantaneous. But above all, we're just proud to come here and build a 100% functional prototype of something we're passionate about. We're excited to demo!
## What we learned
We learned A LOT during this hackathon. At the forefront, we learned how to build a model for object detection and what kinds of data we should train it on to get the best results. We also learned how we can use data streaming networks like PubNub to have our devices communicate with each other without having to build a whole backend.
## What's next for SmartVeillance
Real cameras and real guns! Legitimate surveillance cameras are much better quality than our laptop webcams, and they usually capture a wider range too. We would love to see the extent of our object detection when run through these cameras. And obviously, we'd like to see how our system fares when trained to detect real firearms. Paper guns are definitely appropriate for a hackathon, but we have to make sure SmartVeillance can detect the real thing if we want to save lives in the real world :) | losing |
## Inspiration
Our inspiration emanated from a collective belief in the power of sustainable fashion to bring about impactful change and contribute to the preservation of our planet. The core idea was rooted in the profound understanding that each decision we make, especially in the realm of fashion, holds the potential for positive environmental impact. This inspiration was driven by the urgent need to protect our planet by slowing down global warming and mitigating the adverse effects of climate change.
## What it does
EcoSphere is a web application designed to promote the use of pre-owned Patagonia products, emphasizing their quality, value, and above all the environmental impact. The platform showcases a range of second-hand clothing items, encouraging users to make eco-friendly choices by purchasing reused apparel while seeing each article's carbon emission coefficient. The core feature is a Sustainability Tracker, based on Greenhouse Gas (GHG) Protocol, that allows Patagonia to accurately and comprehensively understand, quantify, and manage its GHG emissions. By knowing its emissions measurements, via our sustainability tracker, Patagonia can set its GHG emission reduction targets, track progress over time, and identify opportunities for emissions reductions and cost savings.
## How we built it
Technology Stack:
* ReactJS and NextJS: For an intuitive, responsive, and visually appealing interface.
* HTML and CSS: Providing structure and style for a user-friendly environment.
* Bootstrap: Streamlining development with a responsive grid system and pre-designed components.
## Challenges we ran into
Finalizing the idea proved to be a time-consuming challenge. Navigating through various concepts and aligning them with the overarching theme of sustainability required thorough discussion and iteration. However, overcoming this challenge allowed us to refine our vision and strengthen our commitment to the project's goals.
## Accomplishments that we're proud of
Having a forward-thinking vision that includes continuous refinement, expansion, and an emphasis on shared responsibility. EcoSphere is not a static project; it's a dynamic journey towards a greener and more sustainable future.
## What we learned
The Value of Time: The time limit of the Hackathon pushed us to the limits of our creativity and programming skills. Learning to hack and program within tight constraints was a skill in itself. It taught us to prioritize, make quick decisions, and find innovative solutions to unforeseen challenges. The pressure of the deadline became a catalyst for efficiency and collaboration.
Collaborating in Diverse Teams: One of the most enriching aspects of the experience was the opportunity to connect with people from diverse backgrounds and cultures.
The Value of Sustainability: Exploring the positive impact of reused clothing on the environment became a driving force behind our project. We uncovered the transformative potential of sustainable choices in the fashion industry and how they can contribute to a positive planetary impact.
## What's next for EcoSphere
To provide users with a tangible measure of their impact, we plan to further build out our Sustainability Tracker within EcoSphere. This feature will allow Patagonia to be able to calculate its greenhouse gas (GHG) emissions through an accurate, comprehensive and auditable emissions measurement on which they can base their climate strategy. This feature will offer real-time data on carbon footprint and water footprint reductions achieved through the use of pre-owned Patagonia products. In our vision, our app is not just a marketplace and a dashboard but a thriving community hub. Eventually, we intend to create a dedicated space within the platform for community engagement. Users will be encouraged to share their inspiring stories about sustainability, creating a collaborative space where experiences, tips, and challenges are openly discussed. | ## Inspiration
Our inspiration was the recent heatwave, a byproduct of the global warming that has been happening over the past few decades. As an individual, it is difficult to make substantial change in the fight against climate change. The issues are highly systemic and influenced by large corporations whose energy usage overshadows the individual's. However, once connected to a large enough community, people can join together and make waves, which is what this app is all about.
## What it does
Users can select a region of the world with unique environmental issues, such as water shortages, plastic pollution, or deforestation, and complete daily tasks that help contribute to resolving the issue. Some tasks include researching information about questions or reducing your carbon footprint by eating vegan for a meal. It aims to create a community of eco-conscious members, allowing them to connect to organize wide-scale events. You can also compete with them through the leaderboard system.
## How we built it
We built EcoTracker completely with SwiftUI. From the start, we built one feature at a time and added on top of them.
## Challenges we ran into
Our lack of familiarity with Swift and iOS development made the entire project pretty challenging. Debugging was incredibly difficult and time-consuming, especially while sleep-deprived, leading to some frustrating moments and features that never got implemented. Getting the layout right for the daily tasks was difficult as well. Finally, managing all of the Views and how everything connected was very confusing, especially because we did almost everything in a single file.
## Accomplishments that we're proud of
We are proud of getting a functioning app done within the time period with only two people on the team. We didn't have much experience with Swift beforehand, so it was a struggle at first; the learning curve was fairly steep. The whole UI turned out better than expected as well, which was a nice bonus.
## What we learned
To use multiple files! It makes life much easier and finding things quicker, something that would have been helpful when tired. We also learned a lot about iOS development in general, ranging from how VStack, HStack, and ZStack work to the different Views available. With this being the first or second in-person hackathon for our team members, we also learned to fit in naps to optimize performance.
## What's next for EcoTasker
There are many features we'd like to implement in the future given more time. By selling some advertising space to companies, we would be able to further incentivize users to complete tasks by offering monetary rewards. There could also be coins users earn and then donate toward resolving an issue.
We also want to have a chatbot using the OpenAI API that allows users to ask questions about environmental stewardship (eg: what bin do glass bottles go into?). We could let them customize the avatar based on areas they have completed/saved, like a coral reef pet for Australia. | ## Inspiration
The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand.
## What it does
Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked.
## How we built it
To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process.
## Challenges we ran into
One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process.
## Accomplishments that we're proud of
Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives.
## What we learned
We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application.
## What's next for Winnur
Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur. | losing |
## Inspiration
In recent times, we have witnessed indescribable tragedy occur during the war in Ukraine. The way of life of many citizens has been forever changed. In particular, the attacks on civilian structures have left cities covered in debris and people searching for their missing loved ones. As a team of engineers, we believe we could use our skills and expertise to facilitate the process of search and rescue of Ukrainian citizens.
## What it does
Our solution integrates hardware and software development in order to locate, register, and share the exact position of people who may be stuck or lost under rubble/debris. The team developed a rover prototype that navigates through debris and detects humans using computer vision. A picture and the geographical coordinates of the person found are sent to a database and displayed on a web application. The team plans to use a fleet of these rovers to make the process of mapping the area out faster and more efficient.
## How we built it
On the frontend, the team used React and the Google Maps API to map out the markers where missing humans were found by our rover. On the backend, we had a Python script that used computer vision to detect humans and capture an image.
For the rover, we 3D-printed the top and bottom chassis specifically for this design. After 3D printing, we integrated the Arduino and attached the sensors and motors. We then calibrated the sensors to get accurate values.
To control the rover autonomously, we used an obstacle-avoider algorithm coded in embedded C.
While the rover is moving and avoiding obstacles, the phone attached to the top is continuously taking pictures. A computer vision model performs face detection on the video stream and stores the result in a local directory. If a face is detected, the image is stored on IPFS using Estuary's API, and the GPS coordinates and CID are stored in a Firestore database. On the user side of the app, the database is monitored for any new markers on the map. If a new marker has been added, the corresponding image is fetched from IPFS and shown on a map using the Google Maps API.
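A rough Python sketch of that rover-side pipeline looks something like this (the Estuary endpoint and field names follow their docs at the time; paths and collection names are illustrative):

```python
# Rough sketch of the rover-side pipeline (field names and the Estuary endpoint
# follow their docs at the time; paths and collection names are illustrative).
import cv2                      # pip install opencv-python
import requests
from google.cloud import firestore

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def has_face(frame) -> bool:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return len(FACE_CASCADE.detectMultiScale(gray, 1.1, 5)) > 0

def upload_to_estuary(image_path: str, api_key: str) -> str:
    """Pin the image to IPFS via Estuary and return its CID."""
    with open(image_path, "rb") as f:
        resp = requests.post(
            "https://api.estuary.tech/content/add",
            headers={"Authorization": f"Bearer {api_key}"},
            files={"data": f},
        )
    resp.raise_for_status()
    return resp.json()["cid"]

def report_person(image_path: str, lat: float, lng: float, api_key: str) -> None:
    """Store the CID and GPS coordinates so the web app can drop a map marker."""
    cid = upload_to_estuary(image_path, api_key)
    firestore.Client().collection("markers").add({"cid": cid, "lat": lat, "lng": lng})
```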
## Challenges we ran into
As the team attempted to use the CID from the Estuary database to retrieve the file by using the IPFS gateway, the marker that the file was attached to kept re-rendering too often on the DOM. However, we fixed this by removing a function prop that kept getting called when the marker was clicked. Instead of passing in the function, we simply passed in the CID string into the component attributes. By doing this, we were able to retrieve the file.
Moreover, our rover was initially designed to work with three 9V batteries (one to power the Arduino, and two for two different motor drivers). Those batteries would allow us to keep our robot as light as possible, so that it could travel at faster speeds. However, we soon realized that the motor drivers actually ran on 12V, which caused them to run slowly and burn through the batteries too quickly. Therefore, after testing different options and researching solutions, we decided to use a lithium-polymer battery, which supplied 12V. Since we only had one of those available, we connected both motor drivers in parallel.
## Accomplishments that we're proud of
We are very proud of the integration of hardware and software in our Hackathon project. We believe that our hardware and software components would be complete projects on their own, but the integration of both makes us believe that we went above and beyond our capabilities. Moreover, we were delighted to have finished this extensive project under a short period of time and met all the milestones we set for ourselves at the beginning.
## What we learned
The main technical learning we took from this experience was implementing the Estuary API, considering that none of our team members had used it before. This was our first experience using blockchain technology to develop an app that could benefit from the use of public, decentralized data.
## What's next for Rescue Ranger
Our team is passionate about this idea and we want to take it further. The ultimate goal of the team is to actually deploy these rovers to save human lives.
The team identified areas for improvement and possible next steps. Listed below are objectives we would have loved to achieve but were not possible due to the time constraint and the limited access to specialized equipment.
* Satellite Mapping -> This would be more accurate than GPS.
* LIDAR Sensors -> Can create a 3D render of the area where the person was found.
* Heat Sensors -> We could detect people stuck under debris.
* Better Cameras -> Would enhance our usage of computer vision technology.
* Drones -> Would navigate debris more efficiently than rovers. | ## Inspiration
We realized how difficult it is for visually impaired people to perceive objects coming toward them, whether they are out on the road or inside a building. They encounter potholes and stairs, and things get really hard for them. We decided to tackle the issue of accessibility to support the Government of Canada's initiative to make workplaces and public places completely accessible!
## What it does
This is an IoT device designed to be wearable or attachable to any visual aid being used. It uses depth perception to perform obstacle detection, and integrates Google Assistant for outdoor navigation and all the other "smart activities" that the assistant can do. The assistant provides voice directions (which pair easily with Bluetooth audio devices), and the sensors help in avoiding obstacles, which increases the user's awareness of their surroundings. Another beta feature was to identify moving obstacles and play sounds so the person can recognize them (e.g., barking sounds for a dog).
## How we built it
It's a Raspberry Pi-based device, and we integrated the Google Cloud SDK to be able to use the Vision API, the Assistant, and all the other features offered by GCP. We have sensors for depth perception and buzzers to play alert sounds, as well as a camera and microphone.
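As a simplified illustration of the obstacle-alert loop, here is a sketch using gpiozero (the pins and the HC-SR04-style ultrasonic sensor are assumptions for this example; our real wiring differs):

```python
# Simplified obstacle-alert loop (pins and the HC-SR04-style ultrasonic sensor
# are assumptions for this example; our real wiring differs).
from time import sleep

from gpiozero import Buzzer, DistanceSensor   # pip install gpiozero

sensor = DistanceSensor(echo=24, trigger=23, max_distance=2.0)  # metres
buzzer = Buzzer(17)

ALERT_DISTANCE_M = 0.5   # beep when an obstacle is closer than this

while True:
    distance = sensor.distance
    if distance < ALERT_DISTANCE_M:
        # Shorter pauses (faster beeping) as the obstacle gets closer.
        buzzer.beep(on_time=0.1, off_time=max(0.05, distance), n=1)
    sleep(0.1)
```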
## Challenges we ran into
It was hard for us to set up the Raspberry Pi, having had no background with it. We had to learn how to integrate cloud platforms with embedded systems and understand how microcontrollers work, especially since we are not from an engineering background and two members are high school students. Also, multi-threading in an embedded architecture was a challenge for us.
## Accomplishments that we're proud of
After hours of grinding, we were able to get the Raspberry Pi working, as well as implement depth perception, location tracking using Google Assistant, and object recognition.
## What we learned
Working with hardware is tough: even though you can see what is happening, it is hard to interface software with hardware.
## What's next for i4Noi
We want to explore more ways where i4Noi can help make things more accessible for blind people. Since we already have Google Cloud integration, we could add another feature where we play sounds for living obstacles so the user can take special care: for example, when a dog comes in front, we produce barking sounds to alert the person. We would also like to implement multi-threading for our two processes and make this device as wearable as possible, so it can make a difference in the lives of the people. | ## Inspiration
While on our way to McHacks, we watched the movie Lion (2016). This showed us the huge problems refugees and their families experience when they are constantly separated with little to no way of reconnecting with one another.
Past data shows that a lot of kids get lost and are separated from their families for a very long time. There are Amber Alerts, but the success rate is not that high.
## What it does
A web app that takes a photograph of a user through the computer's webcam. It then uses the facial recognition API to find potential or similar faces stored in our database.
If it finds a match, it will output the known information of the individual in the photo; if not, it will prompt the user to input information about the person in the picture, which will then be stored in the database for future use. The app is open to the public, not restricted to government use, to help find the lost ones.
## How I built it
The program uses the computer's webcam and the Kairos facial recognition API; we train on the dataset in the Kairos database to recognize various faces. It then runs ML algorithms through the Kairos API to detect similarities between different photos. On the back end, we used the Django MVC framework to fetch results from the database and store the information in MySQL. On the front end, we used JavaScript and AJAX to send the images and required data to be stored in the database.
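For reference, the Kairos calls behind this flow look roughly like the sketch below (our real code lives inside Django views; the header and field names follow the Kairos docs as we used them, and may have changed since):

```python
# Sketch of the Kairos calls behind the flow above (header and field names follow
# the Kairos docs as we used them; keys and gallery names are placeholders).
import base64
import requests

KAIROS_HEADERS = {
    "app_id": "YOUR_APP_ID",
    "app_key": "YOUR_APP_KEY",
    "Content-Type": "application/json",
}

def _encode(image_path: str) -> str:
    with open(image_path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

def recognize(image_path: str, gallery: str = "missing_people") -> dict:
    """Ask Kairos whether the webcam photo matches anyone already enrolled."""
    payload = {"image": _encode(image_path), "gallery_name": gallery}
    return requests.post("https://api.kairos.com/recognize",
                         headers=KAIROS_HEADERS, json=payload).json()

def enroll(image_path: str, subject_id: str, gallery: str = "missing_people") -> dict:
    """No match: enroll the person so future uploads can find them."""
    payload = {"image": _encode(image_path), "subject_id": subject_id,
               "gallery_name": gallery}
    return requests.post("https://api.kairos.com/enroll",
                         headers=KAIROS_HEADERS, json=payload).json()
```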
## Challenges I ran into
Initially, the application was hard to architect. Most of the debugging took place while taking a photo from the webcam in the browser and sending it to the Kairos API for recognition. It was hard to grasp base64 encoding and decoding for image verification and for sending the image to the database. We learned AJAX on the spot to send the image and required data to the server.
## Accomplishments that I'm proud of
We made an app that can help reduce a big problem and help save some lives.
We were able to teach ourselves various frameworks and technologies over a short period of time to make this project work. We learned team management and how to cooperate with team members for the success of the project, and we got more exposure to the field of machine learning.
## What I learned
We learned about REST APIs, JSON, the Kairos facial recognition API, MySQL, and the Django MVC framework. And most importantly, we learned DEBUGGING.
## What's next for Find.ai
Use of other APIs, such as the Facebook Graph API, to help fetch more data (with more data, more lives can be saved), increasing the possible scope of the ML training as well as the overall usefulness of the app.
Use of the Twilio SMS API to send reports and updates over SMS to potential parents or guardians of the lost ones. | winning |
## Inspiration
One issue that we all seem to face is interpreting large sensor-based datasets. Whether it be financial or environmental data, we saw an opportunity to use LLMs to allow for better understanding of big data. As a proof of concept, taking care of a house plant or gardens was interesting because we could collect data and take actions based on physical metrics like soil moisture and sunlight. We were then inspired to take managing plants to the next level by being able to talk to the data you just collected in a fun way, like asking how each of your plants are doing. This is how RoBotany came to be.
## What it does
Through our web app, you can ask RoBotany about how your plants are doing - whether they need more water, have been overwatered, need to be in the shade, and many more questions. Each of your plants has a name, and you can ask specifically how your plant Jerry is faring for example. When you ask for a status update on your plants, our web app fetches data stored in our database, which gets a constant feed of information from the light and soil moisture sensors. If your plants are in need of water, you can even ask RoBotany to water your plants autonomously!
## How we built it
**Hardware**
The hardware portion uses an Arduino, a photoresistor, and a soil moisture sensor to measure the quintessential environmental conditions of any indoor or outdoor plants. We even 3D-printed a flower pot specially made to hold a small plant and the Arduino system!
**Frontend**
Our frontend was built with React and uses the Chat UI Kit from chatscope.
**Backend**
Our project requires the use of two CockroachDB databases. One of the databases is continuously read and updated with the soil moisture and light level, while the other is updated less frequently to toggle the plant sprinkler system. Our simple yet practical endpoints allow us to seamlessly send information back and forth between our Arduino and our AWS EC2 instance, using technologies such as pm2 and nginx to keep the API up and running.
**NLP**
To process user requests via our chatbot, we used a combination of a classification model on *BentoML* to categorize requests, as well as Cohere Generation for entity extraction and responding to more generic requests.
The process goes as follows (a rough code sketch of steps 2-4 appears after the list):
1. The user enters a prompt.
2. The prompt gets sent to be categorized via BentoML.
3. The input and category get sent to Cohere Generation, along with some training datasets, to extract entities.
4. The category and entity get sent to a small class that processes and queries our CockroachDB via a Flask mini api.
5. The response gets forwarded back to the user that sent the initial prompt.
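Here is that rough Python sketch of steps 2-4 (the endpoints, category names, and prompt wording are illustrative, not our exact deployment):

```python
# Rough sketch of steps 2-4 (endpoints, category names, and prompt wording are
# illustrative, not our exact deployment).
import cohere     # pip install cohere
import requests

co = cohere.Client("COHERE_API_KEY")

def classify(prompt: str) -> str:
    """Step 2: the BentoML classification service categorizes the request."""
    resp = requests.post("http://localhost:3000/classify", json={"text": prompt})
    return resp.json()["category"]          # e.g. "status", "water", "chitchat"

def extract_entity(prompt: str, category: str) -> str:
    """Step 3: Cohere Generation pulls out the entity (e.g. the plant's name)."""
    generation = co.generate(
        prompt=f"Category: {category}\nMessage: {prompt}\nPlant name:",
        max_tokens=5)
    return generation.generations[0].text.strip()

def answer(prompt: str) -> str:
    """Step 4: query the Flask mini-API, which reads our CockroachDB."""
    category = classify(prompt)
    plant = extract_entity(prompt, category)
    resp = requests.get("http://localhost:5000/query",
                        params={"category": category, "plant": plant})
    return resp.json()["response"]
```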
## Challenges we ran into
One of the main challenges that we struggled with was working with LLMs, something none of our team was very familiar with. Despite being extremely challenging, we were glad we dove into the subject as deep as we did because it was equally rewarding to finally get it working.
In addition, given that our electronic system was handling water, we wanted to make sure that our packaging protected our ESP32 and sensor boards. We started by designing a 3D-printed compartment that would house everything from the electronics to the motor to the plant itself. We quickly discovered a compartment that size would take well over 12 hours to print (we were at T-minus 10 hours at that point). We modified our design to make it more compact and were able to get a beautiful package done in time!
Finally, from CockroachDB to Cohere, our group was managing a couple of different authentication systems. Between refreshing tokens and group members constantly hopping on and off different components, we quickly ran into the issue of how to share the tokens. The solution was to use Syro's secret management software.
## Accomplishments that we're proud of
Our project had over a dozen unique technologies as our team looked to develop new skills and use new tech stacks during this hackathon.
## What we learned
* Large Language Models (LLMs)
* How to connect multiple distinct technologies together in a single project
* Using a strongly-consistent database in CockroachDB for the first time
* Using object-relational mapping
## What's next for RoBotany
Some possible next steps include diversifying our plant sensor data, as well as making it more scalable, allowing users to potentially talk to an entire crop field!
In addition, our system was designed with modularity in mind. Expanding to new, very different avenues of monitoring shouldn't be a complex task. RoBotany lays the groundwork for a smart platform to talk to variable sensor data. | ## Inspiration
Greenhouses require increased disease control and need to closely monitor their plants to ensure they're healthy. In particular, the project aims to capitalize on the recent cannabis interest.
## What it Does
It's a sensor system composed of cameras and temperature and humidity sensors, layered with smart analytics, that allows users to tell when plants in their greenhouse are diseased.
## How We built it
We used the Telus IoT Dev Kit to build the sensor platform, along with Twilio to send emergency texts (pending installation of the IoT Edge runtime as of 8 am today).
Then we used Azure to do transfer learning on VGGNet to identify diseased plants and flag them to the user. The model is deployed to be used with IoT Edge. Moreover, there is a web app that can be used to display the results.
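The transfer-learning setup itself is conceptually simple; here is a sketch of the approach (hyperparameters are illustrative):

```python
# Sketch of the transfer-learning setup (hyperparameters are illustrative).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2   # healthy vs. diseased

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT)   # ImageNet weights

# Freeze the convolutional feature extractor; only the classifier is retrained.
for param in model.features.parameters():
    param.requires_grad = False

# Swap the final fully connected layer for our greenhouse classes.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```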
## Challenges We Ran Into
The datasets for greenhouse plants are in fairly short supply, so we had to use an existing network to help with saliency detection. Moreover, the low-light conditions in our data were in direct contrast (pun intended) to the PlantVillage dataset used to train for diseased plants. As a result, we had to implement a few image preprocessing methods, including something that's been used for plant health detection in the past: Eulerian magnification.
## Accomplishments that We're Proud of
Training a pytorch model at a hackathon and sending sensor data from the STM Nucleo board to Azure IoT Hub and Twilio SMS.
## What We Learned
When your model doesn't do what you want it to, hyperparameter tuning shouldn't always be the go-to option. There might be (and in this case, was) some intrinsic aspect of the model that needed to be looked over.
## What's next for Intelligent Agriculture Analytics with IoT Edge | ## Inspiration
Our inspiration for this project stemmed from our deep-rooted passion for cryptocurrency trading and the insatiable curiosity surrounding artificial intelligence (AI) in finance. With the cryptocurrency market's volatility, we saw an opportunity to marry data science and trading to create a robust, AI-powered trading assistant. Our goal was to build a cutting-edge solution that would empower traders with personalized insights and forecasts, and we believed that the MindsDB platform could be the perfect foundation for this endeavor.
## What it does
Our project, the AI-Enhanced Crypto Trading Forecast, is a complex synergy of data collection, AI model training, and predictive analytics, all geared towards optimizing cryptocurrency trading strategies. Here's a closer look at how it works:
**Handler Development**: We embarked on the formidable task of crafting a custom handler for the Kraken cryptocurrency exchange. This handler would serve as the gateway to access traders' historical transaction data, a treasure trove of insights waiting to be unlocked.
**Data Collection**: With our custom handler in place, we ventured into the world of cryptocurrency trading data. We retrieved detailed trade histories from Kraken, encompassing countless transactions and intricate market dynamics.
**AI Model Training**: The heart of our project lies in our AI model, meticulously trained using MindsDB (a short sketch of this step follows the list below). We designed this model to be an astute observer of traders' behavior. It analyzed patterns, considered risk tolerance, and discerned preferences, such as the inclination toward altcoins or stablecoins.
**Unit Testing**: Ensuring the robustness of our solution was paramount. To this end, we meticulously implemented unit tests that scrutinized every aspect of our handler, model, and data pipeline.
**User Interface**: To make our insights accessible, we crafted a user-friendly interface. It ingested data from Kraken, fed it to our AI model for analysis, and displayed personalized trading recommendations. This elegant interface served as the portal to our AI-powered trading assistant.
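For a flavor of the MindsDB side, here is a sketch of how the training and prediction queries can be driven from Python (the table and column names are illustrative, not our exact schema, and the local HTTP endpoint is the default install's):

```python
# Sketch of driving MindsDB from Python over its local HTTP SQL endpoint
# (table and column names are illustrative, not our exact schema).
import requests

MINDSDB_SQL = "http://127.0.0.1:47334/api/sql/query"

def run_sql(query: str) -> dict:
    return requests.post(MINDSDB_SQL, json={"query": query}).json()

# Train a predictor on trade history pulled through our custom Kraken handler.
run_sql("""
    CREATE MODEL crypto.trade_forecast
    FROM kraken (SELECT * FROM trades)
    PREDICT next_action;
""")

# Ask for a personalized recommendation for one trader.
print(run_sql("""
    SELECT next_action, next_action_explain
    FROM crypto.trade_forecast
    WHERE trader_id = 'example_user';
"""))
```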
## Challenges we ran into
Our journey was marked by numerous formidable challenges, each met with determination and technical prowess:
**Handler Development**: Crafting a custom handler for the Kraken API required a deep dive into the intricacies of cryptocurrency data retrieval. It was a formidable challenge that we tackled head-on.
**Environment Setup**: Setting up MindsDB on our local host was no walk in the park, but our relentless research and collaborative effort ultimately paid off.
**Data Scarcity**: Cryptocurrency trading data is not easily available, and open data endpoints are a rarity. Our quest for suitable datasets tested our resourcefulness.
**Custom AI Model Integration**: The hurdle of integrating our meticulously crafted AI model with MindsDB proved to be our Achilles' heel, as it necessitated the use of the paid version.
**Personalization**: Creating a personalized trading prediction model was no trivial task. It involved a complex dance of fine-tuning and feature engineering, akin to deciphering the enigma of individual trading preferences.
## Accomplishments that we're proud of
Our journey was filled with learning and growth, and we take pride in several notable achievements:
1. We successfully engineered a custom Kraken handler, enabling seamless access to trading data.
2. The creation of an AI model capable of offering personalized trading recommendations was a significant milestone.
3. Our commitment to reliability led us to implement comprehensive unit tests that ensured the functionality of every component.
4. Our user-friendly interface showcased the tangible results of our AI-powered trading predictions.
## What we learned
The journey was as much about learning as it was about innovation:
* We honed our skills in developing custom handlers for external APIs, making data retrieval and integration a seamless process.
* Our mastery over MindsDB for AI model development and training expanded significantly.
* Challenges related to data scarcity, model personalization, and environment setup sharpened our problem-solving abilities.
* The complexities of cryptocurrency trading and the nuances of building AI models for financial forecasting deepened our understanding.
## What's next for AI-Enhanced Crypto Trading Forecast
Though our project faced limitations due to the requirement of the paid version for custom model integration with MindsDB, we are eager to pursue the following avenues in our quest for AI-enhanced trading:
* Advanced Personalization: We envision further refinement of our AI model, incorporating even more nuanced user-specific factors and preferences.
* Diverse Data Sources: Exploring additional data sources beyond Kraken to improve model accuracy and diversify the dataset.
* Multi-Exchange Support: Expanding our custom handler to encompass multiple cryptocurrency exchanges, providing users with a broader array of data and trading options.
* Deployment and Monetization: Strategically planning the deployment of our AI-powered trading platform and exploring monetization strategies, such as subscription models or exchange partnerships.
* Community Engagement: Seeking feedback and insights from the cryptocurrency trading community to continuously enhance our AI model and user interface based on real-world needs and feedback. | winning |
## Inspiration
As first-year students, we have experienced the difficulties of navigating our way around our new home. We wanted to facilitate the transition to university by helping students learn more about their university campus.
## What it does
A social media app for students to share daily images of their campus and earn points by accurately guessing the locations of their friends' images. After guessing, students can explore the location in full with detailed maps, including the interiors of university buildings.
## How we built it
The Mappedin SDK was used to display user locations in relation to surrounding buildings and to help identify different campus areas. React was used to build a mobile website, as the SDK was unavailable for native mobile. Express and Node power the backend, and MongoDB Atlas serves as the database for its flexible data types.
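For illustration, one plausible distance-based scoring rule looks like the sketch below (the app's exact formula may differ):

```python
# One plausible distance-based scoring rule (illustrative; the app's exact
# formula may differ).
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    r = 6371000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def score(guess, actual, max_points=100, campus_radius_m=1000):
    """Full points for a perfect guess, decaying to zero across campus."""
    distance = haversine_m(*guess, *actual)
    return max(0, round(max_points * (1 - distance / campus_radius_m)))
```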
## Challenges we ran into
* Developing in an unfamiliar framework (React) after learning that Android Studio was not feasible
* Bypassing CORS permissions when accessing the user's camera
## Accomplishments that we're proud of
* Using a new SDK purposely to address an issue that was relevant to our team
* Going through the development process, and gaining a range of experiences over a short period of time
## What we learned
* Planning time effectively and redirecting our goals accordingly
* How to learn by collaborating with everyone from team members to SDK experts, as well as by reading documentation.
* Our tech stack
## What's next for LooGuessr
* creating more social elements, such as a global leaderboard/tournaments to increase engagement beyond first years
* considering freemium components, such as extra guesses, 360-view, and interpersonal wagers
* showcasing 360-picture view by stitching together a video recording from the user
* addressing privacy concerns with image face blur and an option for delaying showing the image | ## Inspiration
Every day, more than 130 people in the United States alone die from an overdose of opioids, which includes prescription painkillers and other addictive drugs. The rate of opioid abuse has been rising steadily since the 1990s and has developed into a serious public health problem since. Roughly 21-29 percent of patients prescribed painkillers end up misusing them, and 4-6 percent of those transition to heroin. Not all of this misuse is intentional, some people simply forget when they took their pills or how many to take and end up hurting themselves by accident. Additionally, due to the addictive nature of the drugs and the chronic pain they seek to solve, many people take more than necessary due to a dependency on them. The U.S. Department of Health and Human Services has taken steps to help this crisis, but by and large, misuse of opioids is still a rapidly growing problem in the United States. Project Bedrock seeks to help solve some of these problems.
Our team was inspired by previous hackathon-built automated pill dispensers, but we wanted to take the idea a step further with a tamper-proof system. Our capsule is pressurized, so if anyone breaks through to access their pills at the wrong time, the change in pressure is detected and emergency services are notified. Our convenient app allows for scheduling, dosages, and data analytics from the perspective of a health care administrator.
## Components
* ABS plastic
* Acetone-ABS slurry - used as internal sealant
* Silicone sealant - used for the cap and as a failsafe
* Pneumatics: ball valves, PEX pipe, 1/8 and 1/4 inch NPT pipe
* Raspberry Pi 3
* BMC180 barometer
* Standard servo motors
## What it does
Bedrock can be explained in two parts: the security system and the dispenser system.
The security:
The chassis is made of Acrylonitrile butadiene styrene (ABS) plastic. We chose ABS because of its high tensile strength and excellent resistance to physical impact and chemical corrosion. Its high durability rating makes it difficult to physically break into the system in the event that an addict wants access to their pills at the wrong time.
The main compartment is kept at a pressure of 20 PSI, compared to atmospheric pressure of 14.7 PSI. Inside the compartment is a barometric (pressure) sensor that constantly reads the internal pressure of the container. If a user were to attempt to break the dispenser to gain access to their pills, the sealed compartment would be exposed to atmospheric pressure and drop in pressure. Once the barometer detects this pressure drop, it immediately contacts emergency services to investigate the potential overdose so the person can be treated as soon as possible.
The dispenser:
The dispenser can be timed with the interval a doctor sets based on the medication. To maintain the internal air pressure of the compartment, there is a two-part dispenser to release a pill. There are two ball valves that can shut to be airtight. First, the innermost valve opens and releases a pill into a chamber. Then, that valve closes and the outermost valve opens. The pill is now accessible, and the compartment has never lost any pressure throughout the process.
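A simplified sketch of the watchdog and the two-valve dispense sequence is below (pins, thresholds, and the alert hook are placeholders; the barometer is assumed to be a BMP180-class I2C sensor):

```python
# Simplified watchdog + two-valve dispense sequence (pins, thresholds, and the
# alert hook are placeholders; the barometer is assumed to be a BMP180-class sensor).
import time

import Adafruit_BMP.BMP085 as BMP085        # works with BMP180-class I2C barometers
from gpiozero import AngularServo

PSI = 6894.76                                # pascals per PSI
ALARM_THRESHOLD = 18 * PSI                   # well above atmospheric ~14.7 PSI

barometer = BMP085.BMP085()
inner_valve = AngularServo(17)               # between the pill hopper and the airlock
outer_valve = AngularServo(27)               # between the airlock and the user

def notify_emergency_services() -> None:
    print("ALERT: pressure drop detected - contacting emergency services")  # placeholder

def tamper_watchdog() -> None:
    """If the sealed compartment falls toward atmospheric pressure, someone broke in."""
    while True:
        if barometer.read_pressure() < ALARM_THRESHOLD:
            notify_emergency_services()
        time.sleep(1)

def dispense_pill() -> None:
    """Two-stage airlock so the main compartment never vents."""
    inner_valve.angle = 90                   # drop one pill into the airlock
    time.sleep(1)
    inner_valve.angle = 0                    # reseal the pressurized compartment
    time.sleep(1)
    outer_valve.angle = 90                   # expose the pill to the user
    time.sleep(3)
    outer_valve.angle = 0
```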
## How we built it
We used a 3-D printer to make all ABS parts including the main enclosure and part of the release mechanism. We used an Acetone ABS slurry to seal the inside of the enclosure to make it airtight and ensure there is minimal fluctuation in pressure during the lifetime of the unit. Other than that, most parts are stock.
## What's next for Bedrock
We hope to take Bedrock further on the software side and utilize IoT and wireless software to wirelessly control dosages and timing. Additionally we would like to utilize data analytics with user permission to see what proportions of people are taking their proper dosages at the right times, attempting to consume medication at incorrect time intervals, forget their medication, or attempt to break into their Bedrock device. With this data we would be able to communicate with the pharmaceutical industry and optimize concentrations of medicine for different people's memory periods. Through this, we can work with people's memory timing and patience to ensure proper consumption of potentially dangerous drugs. | ## Inspiration
Do you remember what’s in your fridge and why you bought it?
Fridges have remained a "white box" for centuries; even with IoT and image classification technologies available, they have still remained the same, despite some unsuccessful attempts like the expensive Samsung Family Hub.
We want to create a cheap IoT camera that's attached to your ordinary scanner. By recognizing what's in your fridge, we can remind you about expiration dates, offer recipe advice, and, most importantly, advise you on your daily nutrition.
## How we built it/What it does
We created a Node.js server that receives images from a laptop webcam. It then spawns a python process that queries the Google Vision API to determine the type of food that the user wants to put in the fridge. We then used socket.io to push the label to our React web app, where we display the current contents of the fridge and expiry dates to the user. We also have a sqlite database of nutritional information that we could display.
## Challenges we ran into
Our project idea was slightly derailed when we couldn't acquire a Raspberry Pi camera at the hackathon (there was a lottery and the organizers ran out) and the Raspberry Pi kit we received was non-functional. We pivoted by deciding to use the laptop webcam instead to create a proof of concept.
We originally planned to use PaddlePaddle and VGG-16 for food classification. However, PaddlePaddle is not well documented, and there was no image classification model we could use. After switching to TensorFlow, we found VGG-16's architecture was not suitable for identifying the distinctions between our categories.
## Accomplishments that we're proud of
Team cooperation. We all got along well together. :)
## What we learned
1. Bring your own hardware
2. Bring your own wifi
3. Bring an umbrella
## What's next for Smartest FridgeCam
The main thing is to redo this project using Raspberry Pi and the Pi camera instead of the laptop webcam. We plan on developing this project on the side beyond this hackathon because we believe this idea is both feasible to implement and useful for users. | partial |
## Inspiration
The inspiration for this project likely comes from the need to create a music platform that addresses **privacy**, **safety**, and plagiarism concerns. By storing music on a **blockchain database**, the platform ensures that users' music is safe and secure and cannot be tampered with or stolen. In addition, the platform likely addresses plagiarism concerns by calculating the similarity of uploaded tracks and ensuring that they are not plagiarized. The use of blockchain technology ensures that the platform is decentralized and there is no single point of failure, making it more resistant to hacking and other security threats. The comparison algorithm helps users discover similar tracks and explore new music without compromising their privacy or the security of their data.
## What it does
Our project is aimed at creating a website where users can upload their music tracks to a blockchain database. The website will use advanced algorithms to compare the uploaded tracks with all the music tracks already stored on the blockchain database. The website will then display the highest similarity rate of the uploaded track with the music stored on the blockchain.
## How we built it
We are using the provided APIs and tools like **Estuary**. We also created an advanced comparison algorithm to compute the similarity rate against all the music stored on the blockchain. We have implemented the following features: Music Upload, Blockchain Database, Music Comparison, Similarity Rate, Music Player, and User Dashboard.
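To give a flavor of what a "similarity rate" can look like, here is a simplified sketch using chroma features (our production comparison is more involved; this is illustrative only):

```python
# Illustrative similarity rate between two tracks using chroma features
# (our production comparison is more involved; this shows the general idea).
import librosa      # pip install librosa
import numpy as np

def track_fingerprint(path: str) -> np.ndarray:
    """Summarize a track as its mean chroma vector (a rough melodic/harmonic profile)."""
    y, sr = librosa.load(path, mono=True, duration=60)
    chroma = librosa.feature.chroma_cqt(y=y, sr=sr)
    return chroma.mean(axis=1)

def similarity_rate(path_a: str, path_b: str) -> float:
    """Cosine similarity of the two fingerprints, reported as a 0-100 rate."""
    a, b = track_fingerprint(path_a), track_fingerprint(path_b)
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return round(100 * max(0.0, cos), 2)
```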
## Challenges we ran into
There are several challenges that we encountered when building Musichain, which utilizes blockchain and advanced algorithms.
One of the main challenges is ensuring that the platform is scalable and can handle large amounts of data. Storing music on a blockchain database can be resource-intensive, and as more users upload tracks, the platform must be able to handle the increased load. Thankfully, we use **Estuary** as our blockchain database, which avoids a lot of unnecessary problems and significantly improves read and write speeds.
Another challenge is ensuring that the comparison algorithm is accurate and effective. The algorithm must be able to analyze a large amount of data quickly and accurately. While there are many similar applications on the market, our focus is on providing reliable comparisons rather than recommendations. To achieve this, we have streamlined and simplified the music extraction feature, resulting in a higher accuracy rate and faster program performance. By prioritizing simplicity and efficiency, we aim to provide a superior user experience compared to other applications with similar features.
Additionally, ensuring that the platform is secure and free from hacking or other security threats is critical. With sensitive user data and intellectual property at stake, the platform must be designed with security in mind, and appropriate measures must be taken to ensure that the platform is protected from external threats.
Overall, building a music platform that utilizes blockchain and advanced algorithms is a complex undertaking that requires careful consideration of scalability, accuracy, security, and copyright issues.
## Accomplishments that we're proud of
The following are the main features of Musichain:
1. Music Upload: Users will be able to upload their music tracks to the website. The website will accept various file formats such as MP3, WAV, and FLAC.
2. Blockchain Database: The music tracks uploaded by the users will be stored on a blockchain database. This will ensure the security and immutability of the music tracks.
3. Music Comparison: The website will use advanced algorithms to compare the uploaded music track with all the music tracks already stored on the blockchain database. The comparison algorithm will look for similarities in various parameters such as rhythm, melody, and harmonies.
4. Similarity Rate: The website will display the highest similarity rate of the uploaded track with the music stored on the blockchain. This will help users identify similar tracks and explore new music.
5. Music Player: The website will have a built-in music player that will allow users to play the uploaded music tracks. The music player will have various features such as volume control, playback speed, and equalizer.
6. User Dashboard: The website will have a user dashboard where users can manage their uploaded tracks, view their play count, and see their similarity rates.
In conclusion, our project aims to create a music sharing platform that leverages the power of blockchain technology to ensure the security and immutability of the uploaded tracks. The music comparison feature will allow users to discover new music and connect with other artists.
## What we learned
There are several things that we have learned from building the Musichain platform, which utilizes blockchain and advanced algorithms:
1. The power of blockchain: Using blockchain technology to store and manage music content provides a high level of security and immutability, making it a powerful tool for data management and storage.
2. The importance of privacy and security: When dealing with sensitive user data and intellectual property, privacy and security must be prioritized to ensure that user data is protected and secure.
3. The benefits of advanced algorithms: Advanced algorithms can be used to analyze large amounts of data quickly and accurately, providing meaningful recommendations to users and helping them discover new music. In the meantime, we need to streamline our algorithm to achieve pinpoint accuracy and efficiency.
## What's next for Musichain
After two days of hard work, there are still some improvements to be made to Musichain:
1. Refining and improving the compression algorithm: Developing a more efficient and effective compression algorithm could help to reduce the size of music files and improve the platform's storage capabilities.
2. Integrating artificial intelligence: Incorporating artificial intelligence could help to improve the accuracy of the platform's ability to analyze music, as well as enhance the platform's search capabilities.
3. Building partnerships and collaborations: As the platform grows, building partnerships with other companies and organizations in the music industry, as well as with AI and compression algorithm experts, could help to further expand the platform's capabilities. | ## Inspiration
Over the past year I'd encountered plenty of Spotify related websites that would list your stats, but none of them allowed me to compare my taste with my friends, which I found to be the most fun aspect of music. So, for this project I set out to make a website that would allow users to compare their music tastes with their friends.
## What it does
Syncify will analyze your top tracks and artists and then convert that into a customized image for you to share on social media with your friends.
## How we built it
The main technology is a Node.js server that runs the website and interacts with the Spotify API. The information is then sent to a Python script, which takes your unique Spotify data and generates an image personalized to you, containing that information and a QR code that encodes further detail.
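Roughly, the Python side does something like the sketch below (fonts, colors, and layout values here are illustrative, not my exact design):

```python
# Sketch of the image-generation step (fonts, colors, and layout are illustrative).
import qrcode                      # pip install qrcode[pil]
from PIL import Image, ImageDraw   # pip install Pillow

def build_share_image(top_artists, top_tracks, share_url, out_path="syncify.png"):
    card = Image.new("RGB", (1080, 1920), "#121212")
    draw = ImageDraw.Draw(card)

    # Write the user's top artists and tracks onto the card.
    y = 140
    for line in ["Your Syncify Wrap-Up", ""] + top_artists[:5] + [""] + top_tracks[:5]:
        draw.text((80, y), line, fill="#1DB954")
        y += 70

    # QR code encoding the link a friend scans to compare tastes.
    qrcode.make(share_url).save("qr.png")
    card.paste(Image.open("qr.png"), (660, 1500))

    card.save(out_path)
    return out_path
```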
## Challenges we ran into
* Installing Node.js took too long due to various compatibility issues.
* Getting the Spotify API to work was a major challenge because it didn't play well with our Node.js setup.
* Generating the QR code, as well as manipulating the image to include personalized text, required multiple Python packages and approaches.
* Putting the site online was incredibly difficult because there were so many compatibility and package installation issues, in addition to my inexperience with hosting sites, so I had to learn that from scratch.
## Accomplishments that we're proud of
Everything I did today was completely new to me, and I'm proud that I was able to learn the skills I did and not give up, despite how tempting it was. Being able to utilize the APIs, learn Node.js, and develop some skills with web hosting felt really impressive because of how much I struggled with them throughout the hackathon.
## What we learned
I learned a lot about documenting code, how to search for help, what approach to take to the workflow, and, of course, some of the technical skills.
## What's next for Syncify
I plan on uploading Syncify online so it's available for everyone and finishing the feature of allowing users to determine how compatible their music tastes are, as well as redesigning the shareable image so that the QR Code is less obtrusive to the design. | ## Inspiration
Each year, art forgery causes over **$6 billion in losses**. Museums, for example, cannot afford such detrimental costs. In an industry that has spanned centuries, it is crucial that transactions of art pieces can be completed securely and confidently.
Introducing **Artful, a virtual marketplace for physical art that connects real-life artworks to secure NFTs on our own private blockchain**. With the power of the blockchain, the legitimacy of high-value real-world art pieces can be verified through an elegant and efficient web application, which, with its scalability and decentralized framework, also serves as an efficient and secure medium for art dealing for artists worldwide.
## What it does
To join our system, art owners can upload a picture of their art through our portal. Once authenticated in person by our team of art verification experts, the art piece is automatically generated into an NFT and uploaded on the Eluv.io Ethereum blockchain. In essence, ownership of the NFT represents ownership of the real life artwork.
From this point on, prospective buyers no longer need to consult expensive consultants, who charge hundreds of thousands to millions of dollars – they can simply visit the piece on our webapp and purchase it with full confidence in its legitimacy.
Artful serves a second purpose, namely for museums. According to the Museum Association, museum grant funding has dropped by over 20% over the last few years. As a result, museums have been forced to drop collections entirely, preventing public citizens from appreciating their beauty.
Artful enables museums to create NFT and experiential bundles, which can be sold to the public as a method of fundraising. Through the Eluv.io fabric, experiences ranging from AR trips to games can be easily deployed on the blockchain, allowing for museums to sustain their offerings for years to come.
## How we built it
We built a stylish and sleek frontend with Next.js, React, and Material UI. For our main backend, we utilized node.js and cockroachDB. At the core of our project is Eluv.io, powering our underlying Ethereum blockchain and personal marketplace.
## Challenges we ran into
Initially, our largest challenge was developing a private blockchain that we could use for development. We tested out various services and ensuring the packages worked as expected was a common obstacle. Additionally, we were attempting to develop a custom NFT transfer smart contract with Solana, which was quite difficult. However, we soon found Eluv.io which eliminated these challenges and allowed us to focus development on our own platform.
Overall, our largest challenge was automation. Specifically, the integration of automatically processing an uploaded image, utilizing the Eluv.io content fabric to create marketplaces and content objects in a manner that worked well with our existing frontend modules, generating the NFTs using an automated workflow, and publishing the NFT to the blockchain proved to be quite difficult due to the number of moving parts.
## Accomplishments that we're proud of
We’re incredibly proud of the scope and end-to-end completion of our website and application. Specifically, we’ve made a functional, working system which users can (today!) use to upload and purchase art through NFTs on our marketplace on an automated, scalable basis, including functionality for transactions, proof of ownership, and public listings.
While it may have been possible to quit in the face of relentless issues in the back-end coding and instead pursue a more theoretical approach (in which we suggest functionality rather than implement it), we chose to persist, and it paid off. The whole chain of commands which previously required manual input through command line and terminal has been condensed into an automated process and contained to a single file of new code.
## What we learned
Through initially starting with a completely from scratch Ethereum private blockchain using geth and smart contracts in Solidity, we developed a grounding in how blockchains actually work and the extensive infrastructure that enables decentralization. Moreover, we recognized the power of APIs in using Eluv.io’s architecture after learning it from the ground up. The theme of our project was fundamentally integration—finding ways to integrate our frontend user authentication and backend Eluv.io Ethereum blockchain, and seeing how to integrate the Eluv.io interface with our own custom web app. There were many technical challenges along the way in learning a whole new API, but through this journey, we feel much more comfortable with both our ability as programmers and our understanding of blockchain, a topic which before this hackathon, none of us had really developed familiarity with. By talking a lot with the Eluv.io CEO and founder who helped us tremendously with our project too, we learned a lot about their own goals and aspirations, and we can safely say that we’ve emerged from this hackathon with a much deeper appreciation and curiosity for blockchain and use cases of dapps.
## What's next for Artful
Artful plans to incorporate direct communications with museums by building a more robust fundraising network, where donators can contribute to the restoration of art or the renovation of an exhibit by purchasing one of the many available copies of a specific NFT. Also, we have begun implementing a database and blockchain tracking system, which museums can purchase to streamline their global collaboration as they showcase especially-famous pieces on a rotating basis. Fundamentally, we hope that our virtual center can act as a centralized hub for high-end art transfer worldwide, which through blockchain’s security, ensures the solidity (haha) of transactions will redefine the art buying/selling industry. Moreover, our website also acts as a proof of credibility as well — by connecting transactions with corresponding NFTs on the blockchain, we can ensure that every transaction occurring on our website is credible, and so as it scales up, the act of not using our website and dealing art underhandedly represents a loss of credibility. And most importantly, by integrating decentralization with the way high-end art NFTs are stored, we hope to bring the beautiful yet esoteric world of art to more people, further creating a platform for up and coming artists to establish their mark on a new age of digital creators. | losing |
## Inspiration
A deep and unreasonable love of xylophones
## What it does
An air xylophone right in your browser!
Play such classic songs as twinkle twinkle little star, ba ba rainbow sheep and the alphabet song or come up with the next club banger in free play.
We also added an air guitar mode where you can play any classic 4 chord song such as Wonderwall
## How we built it
We built a static website using React which utilised Posenet from TensorflowJS to track the users hand positions and translate these to specific xylophone keys.
We then extended this by creating Xylophone Hero, a fun game that lets you play your favourite tunes without requiring any physical instruments.
## Challenges we ran into
Fine tuning the machine learning model to provide a good balance of speed and accuracy
## Accomplishments that we're proud of
I can get 100% on Never Gonna Give You Up on XylophoneHero (I've practised since the video)
## What we learned
We learnt about fine tuning neural nets to achieve maximum performance for real time rendering in the browser.
## What's next for XylophoneHero
We would like to:
* Add further instruments including a ~~guitar~~ and drum set in both freeplay and hero modes
* Allow for dynamic tuning of Posenet based on individual hardware configurations
* Add new and exciting songs to Xylophone
* Add a multiplayer jam mode | ## Intro and Idea
For our team of First Year UofT Engineering Science students, this was our first Makeathon and first project as a team. We have varying ranges of experience with software and hardware within our team and decided to approach this competition as both a challenge, and a learning experience.
After a couple hours of brainstorming based on our collective interests, our team arrived on an idea we were all excited about: An interactive orchestra experience to allow players to more easily play together. Jazz ensembles are easily able to improvise together, because they usually play in the same keys. Classical musicians on the other hand, are often not able to predict key changes.
Our design provides a platform for conductors to change the orchestra in real-time according to their vision. By playing chords on a Midi keyboard, they can “play the orchestra” by transposing and transmitting the chords to the members of the orchestra through the use of wireless connectivity to individual displays powered by Raspberry Pi’s.
## Planning and Summary
As briefly identified in our introduction, our primary stakeholders for this project are:
Ourselves (a team of first year engineering students attempting their first Makeathon)
Orchestra performers and conductors
MakeUofT Organizers, Sponsors, and Judges
Based on this, we were able to develop some rough objectives to keep us on track for 24 hours:
To create a unique but achievable product
To improve our software and hardware integration skills
To incorporate sponsor innovations and technologies
To ensure we had something to show after 24 hours, we decided to aim for a minimum viable project (MVP) before adding any bells and whistles. We were able to reach our goals of Midi communication to the serving computer and having note name communication to the player displays. After we reached our MVP, we expanded on our design to have visualization of the notes on the staff, differing transposition options, and automated chord analysis. Finally, we implemented chord suggestion and prediction using Azure Machine Learning.
## Features:
Real time note communication between Midi and player displays
Visual display of notes on staff
Automated Chord Analysis
Chord suggestion and predictions using Azure Machine Learning
Multiple differently transposed sections available to accommodate a variety of instruments simultaneously
## Applications:
Large group improvisation and composition
Teaching and training
Creating new pieces using Azure Machine Learning
## The Process
## Raspberry Pi Setup and Enclosure:
To act as the receiving devices, we use four Raspberry Pi’s in our project. Each Pi is set up with Raspbian Stretch version 4.14.
A 7” touch display screen is attached to one of our Pi’s, and monitors to the remaining three. Initially, we were going to use small LCD graphic displays, but ruled these out due to size. A 7” touch display was offered to us to borrow, but to keep our cost down, we opted to use monitors for the remaining Pi’s. Ideally all the Pi’s would have 7” touch display screens, but we decided for a MVP prototype, one was sufficient.
Beyond our MVP prototype we added a push-button that allows the player to cycle through the available sections (more on sections to follow) on the Raspberry Pi. A simple python script was created to map the GPIO pin connected to the button to a keyboard stroke (the F5 refresh button for web pages). We left room in the box for more features to be added.
We initially hoped to modify the open source PIvena Raspberry Pi enclosure, but after beginning our modifications, we realized laser cutting was not being offered as a service at MakeUofT. Based on limited fabrication lab hours available to us, we opted to design our enclosure out of foamcore.
The display is set at an angle in order to allow notes to be read easily while playing an instrument. Hardware is located in a box behind the display, making it discrete but easily accessible.
## Network/IoT Setup:
The Network setup remained simple throughout the project. The basic concept was to have one computer that acted as a master, then any number of displays of all shapes and sizes that could receive instructions from the master and join in on the joys of music. To accomplish this, we used a Node.js framework and wrote predominantly JavaScript to manage the interactivity of different clients. As a result of our objectives, the final product that we have produced is capable of being run on any platform with a web browser, making it highly accessible and scalable. Furthermore, as the result works off of a wireless network, it is capable of accepting a high volume of hosts without added latency, as well as being very easy to connect to. The network setup has been geared towards the IoT model by connecting devices in a collaborative way to enable people to help and support each other in harmony.
Technically speaking, the socket works using Socket.io, integrating native HTTPS for requests to the Azure Cloud for learned suggestions of chords based on machine intelligence. The socketing breaks the network into a series of sections, which may all operate and play in different keys, requiring transposition for accessible harmony. The number of sections that can be created is theoretically undefined, though for our basic demonstration we make use of four different sections operating in different keys.They are all shown the classic chord data and have available to them the key in which the orchestra is playing giving the player further liberty for experimentation within the piece.
## Machine Learning:
The Azure Machine Learning framework was the center of a feature for the master controller of the project. Provided with historical data of chords played, it would recommend a good follow-up chord to harmonize. Our machine learning algorithm was fed by approximately 3 million data points from pop songs. Though in retrospect we should have trained it with music that finds more standard and less repetitive chords, the concept still worked well enough, though at our hand there was slight underfitting. The structure that worked well for our means was the feeding of 3 points of historical data predicting a fourth point of data to a fair degree of reason. This is a useful feature of our design for those who would use this concept to collaborate with others or create something new, as it would support them further in a desire for a good, harmonic sound. | ## Inspiration
Like most university students, we understand and experience the turbulence that comes with relocating every 4 months due to coop sequences while keeping personal spendings to a minimum. It is essential for students to be able to obtain an affordable and effortless way to release their stresses and have hobbies during these pressing times of student life.
## What it does
AirDrum uses computer vision to mirror the standard drum set without the need for heavy equipment, high costs and is accessible in any environment.
## How we built it
We used python (NumPy, OpenCV, MatPlotLib, PyGame, WinSound) to build the entire project.
## Challenges we ran into
The documentation for OpenCV is less robust than what we wanted, which lead to a lot of deep dives on Stack Overflow.
## Accomplishments that we're proud of
We're really happy that we managed to actually get something done.
## What we learned
It was our first time ever trying to do anything with OpenCV, so we learned a lot about the library, and how it works in conjunction with NumPy.
## What's next for AirDrums
The next step for AirDrums is to add more functionality, allowing the user to have more freedom with choosing which drums parts they would like and to be able to save beats created by the user. We also envision a guitar hero type mode where users could try to play the drum part of a song or two. We could also expand to different instruments. | winning |
## Inspiration
Jeremy, one of our group members, always buys new house plants with excitement and confidence that he will take care of them this time.. He unfortunately disregards his plant every time, though, and lets it die within three weeks. We decided to give our plant a persona, and give him frequent reminders whenever the soil does not have enough moisture, and also through personalized conversations whenever Jeremy walks by.
## What it does
Using four Arduino sensors, including soil moisture, temperature, humidity, and light, users can see an up-to-date overview of how their plant is doing. This is shown on the display and bar graph with an animal of choice's emotions! Using the webcam which is built-in into the the device, your pet will have in-depth conversations with you using ChatGPT and image recognition.
For example, if you were holding a water bottle and the soil moisture levels were low, your sassy cat plant might ask if the water is for them since they haven't been watered in so long!
## How we built it
The project is comprised of Python and C++. The 4 sensors and 2 displays on the front are connected through an Arduino and monitor the stats of the plant and also send them to our Python code. The Python code utilizes chatGPT API, openCV, text-to-speech, speech-to-text, as well as data from the sensors to have a conversation with the user based on their mood.
## Challenges we ran into
Our project consisted of two very distinct parts. The software was challenging as it was difficult to tame an AI like chatGPT and get it to behave like we wanted. Figuring out the exact prompt to give it was a meticulous process. Additionally, the hardware posed a challenge as we were working with new IO parts. Another challenge was combining these two distinct but complex components to send and receive data in a smooth manner.
## Accomplishments that we're proud of
We're very proud of how sleek the final product looks as well as how smoothly the hardware and software connect. Most of all we're proud of how the plant really feels alive and responds to its environment.
## What we learned
Making this project, we definitely learned a lot about sending and receiving messages from chatGPT API, TTS, STT, configuring different Arduino IO methods, and communicating between the Arduino and Python code using serial.
## What's next for Botanical Bestie
We have many plans for the future of Botanical Bestie. We'd like to make the product more diverse and include different language options to be applicable to international markets. We'd also like to collab with big brands to include their characters as AI plant personalities (Batman plant? Spongebob plant?). On the hardware side, we'd obviously want to put speakers and microphones on the plant/plant pot itself, since we used the laptop speaker and phone microphone for this hackathon. We also have plans for the plant pot to detect what kind of plant is in it, and change its personality accordingly. | ## Inspiration
Typically plants in nature have little control over their own fate; organism growth is constrained by physical and environmental factors. Based on ambient sensor data as well as plant-specific information, we can better understand the state of the organism and adjust its spatial position and water level.
## Implementation
#### Overview
The system consists of multiple parts, including:
* Arduino Nano, which receives and interprets sensor data
* Motor and wheel assembly, to move the plant to a suitable light source
* Photodiodes to measure ambient light levels at four equidistant points around the plant
* Moisture sensor to measure soil moisture levels, determining when the plant should receive water
* Ultrasonic sensors to determine the spatial position of the plant as well as to avoid collisions with stationary or incoming objects
* Wireless receiver and transmitter to send and receive plant sensor data for data analytics and analysis (not internet connected, although we have the option of doing so in the future)
* A React-based dashboard which can receive data from multiple plants and run data analytics to better understand current plant state at a glance as well as past trends
* Motion-sensitive RGB LED
#### Circuit Design
We took care to lay out the design of the circuit to accommodate the sensors we wanted, although what we created is a rough prototype. Photos are available below.
## What We Learned
As students primarily focused on studying software engineering, this project helped us push our boundaries and get a better understanding of both hardware and low-level software. We learned about circuit design and designing systems for various types of sensors.
## Challenges Faced
We weren't able to fit everything onto one single breadboard because we were lacking in some hardware resources required to make the circuit design more compact and efficient.
## Future
We're excited by a future where plants can integrate themselves more usefully into our man-made environment, and wish to improve the capabilities of the system, including using plants natively as sensors themselves. | ## Inspiration
We wanted to build a sustainable project which gave us the idea to plant crops on a farmland in a way that would give the farmer the maximum profit. The program also accounts for crop rotation which means that the land gets time to replenish its nutrients and increase the quality of the soil.
## What it does
It does many things, It first checks what crops can be grown in that area or land depending on the weather of the area, the soil, the nutrients in the soil, the amount of precipitation, and much more information that we have got from the APIs that we have used in the project. It then forms a plan which accounts for the crop rotation process. This helps the land regain its lost nutrients while increasing the profits that the farmer is getting from his or her land. This means that without stopping the process of harvesting we are regaining the lost nutrients. It also gives the farmer daily updates on the weather in that area so that he can be prepared for severe weather.
## How we built it
For most of the backend of the program, we used Python.
For the front end of the website, we used HTML. To format the website we used CSS. we have also used Javascript for formates and to connect Python to HTML.
We used the API of Twilio in order to send daily messages to the user in order to help the user be ready for severe weather conditions.
## Challenges we ran into
The biggest challenge that we faced during the making of this project was the connection of the Python code with the HTML code. so that the website can display crop rotation patterns after executing the Python back end script.
## Accomplishments that we're proud of
While making this each of us in the group has accomplished a lot of things. This project as a whole was a great learning experience for all of us. We got to know a lot of things about the different APIs that we have used throughout the project. We also accomplished making predictions on which crops can be grown in an area depending on the weather of the area in the past years and what would be the best crop rotation patterns. On the whole, it was cool to see how the project went from data collection to processing to finally presentation.
## What we learned
We have learned a lot of things in the course of this hackathon. We learned team management and time management, Moreover, we got hands on experience in Machine Learning. We got to implement Linear Regression, Random decision trees, SVM models. Finally, using APIs became second nature to us because of the number of them we had to use to pull data.
## What's next for ECO-HARVEST
For now, the data we have is only limited to the United States, in the future we plan to increase it to the whole world and also increase our accuracy in predicting which crops can be grown in the area. Using the crops that we can grow in the area we want to give better crop rotation models so that the soil will gain its lost nutrients faster. We also plan to give better and more informative daily messages to the user in the future. | losing |
## Inspiration
We were inspired by the fact that **diversity in disability is often overlooked** - individuals who are hard-of-hearing or deaf and use **American Sign Language** do not have many tools that support them in learning their language. Because of the visual nature of ASL, it's difficult to translate between it and written languages, so many forms of language software, whether it is for education or translation, do not support ASL. We wanted to provide a way for ASL-speakers to be supported in learning and speaking their language.
Additionally, we were inspired by recent news stories about fake ASL interpreters - individuals who defrauded companies and even government agencies to be hired as ASL interpreters, only to be later revealed as frauds. Rather than accurately translate spoken English, they 'signed' random symbols that prevented the hard-of-hearing community from being able to access crucial information. We realized that it was too easy for individuals to claim their competence in ASL without actually being verified.
All of this inspired the idea of EasyASL - a web app that helps you learn ASL vocabulary, translate between spoken English and ASL, and get certified in ASL.
## What it does
EasyASL provides three key functionalities: learning, certifying, and translating.
**Learning:** We created an ASL library - individuals who are learning ASL can type in the vocabulary word they want to learn to see a series of images or a GIF demonstrating the motions required to sign the word. Current ASL dictionaries lack this dynamic ability, so our platform lowers the barriers in learning ASL, allowing more members from both the hard-of-hearing community and the general population to improve their skills.
**Certifying:** Individuals can get their mastery of ASL certified by taking a test on EasyASL. Once they start the test, a random word will appear on the screen and the individual must sign the word in ASL within 5 seconds. Their movements are captured by their webcam, and these images are run through OpenAI's API to check what they signed. If the user is able to sign a majority of the words correctly, they will be issued a unique certificate ID that can certify their mastery of ASL. This certificate can be verified by prospective employers, helping them choose trustworthy candidates.
**Translating:** EasyASL supports three forms of translation: translating from spoken English to text, translating from ASL to spoken English, and translating in both directions. EasyASL aims to make conversations between ASL-speakers and English-speakers more fluid and natural.
## How we built it
EasyASL was built primarily with **typescript and next.js**. We captured images using the user's webcam, then processed the images to reduce the file size while maintaining quality. Then, we ran the images through **Picsart's API** to filter background clutter for easier image recognition and host images in temporary storages. These were formatted to be accessible to **OpenAI's API**, which was trained to recognize the ASL signs and identify the word being signed. This was used in both our certification stream, where the user's ASL sign was compared against the prompt they were given, and in the translation stream, where ASL phrases were written as a transcript then read aloud in real time. We also used **Google's web speech API** in the translation stream, which converted English to written text. Finally, the education stream's dictionary was built using typescript and a directory of open-source web images.
## Challenges we ran into
We faced many challenges while working on EasyASL, but we were able to persist through them to come to our finished product. One of our biggest challenges was working with OpenAI's API: we only had a set number of tokens, which were used each time we ran the program, meaning we couldn't test the program too many times. Also, many of our team members were using TypeScript and Next.js for the first time - though there was a bit of a learning curve, we found that its similarities with JavaScript helped us adapt to the new language. Finally, we were originally converting our images to a UTF-8 string, but got strings that were over 500,000 characters long, making them difficult to store. We were able to find a workaround by keeping the images as URLs and passing these URLs directly into our functions instead.
## Accomplishments that we're proud of
We were very proud to be able to integrate APIs into our project. We learned how to use them in different languages, including TypeScript. By integrating various APIs, we were able to streamline processes, improve functionality, and deliver a more dynamic user experience. Additionally, we were able to see how tools like AI and text-to-speech could have real-world applications.
## What we learned
We learned a lot about using Git to work collaboratively and resolve conflicts like separate branches or merge conflicts. We also learned to use Next.js to expand what we could do beyond JavaScript and HTML/CSS. Finally, we learned to use APIs like Open AI API and Google Web Speech API.
## What's next for EasyASL
We'd like to continue developing EasyASL and potentially replacing the Open AI framework with a neural network model that we would train ourselves. Currently processing inputs via API has token limits reached quickly due to the character count of Base64 converted image. This results in a noticeable delay between image capture and model output. By implementing our own model, we hope to speed this process up to recreate natural language flow more readily. We'd also like to continue to improve the UI/UX experience by updating our web app interface. | Currently, about 600,000 people in the United States have some form of hearing impairment. Through personal experiences, we understand the guidance necessary to communicate with a person through ASL. Our software eliminates this and promotes a more connected community - one with a lower barrier entry for sign language users.
Our web-based project detects signs using the live feed from the camera and features like autocorrect and autocomplete reduce the communication time so that the focus is more on communication rather than the modes. Furthermore, the Learn feature enables users to explore and improve their sign language skills in a fun and engaging way. Because of limited time and computing power, we chose to train an ML model on ASL, one of the most popular sign languages - but the extrapolation to other sign languages is easily achievable.
With an extrapolated model, this could be a huge step towards bridging the chasm between the worlds of sign and spoken languages. | ## Inspiration
A week or so ago, Nyle DiMarco, the model/actor/deaf activist, visited my school and enlightened our students about how his experience as a deaf person was in shows like Dancing with the Stars (where he danced without hearing the music) and America's Next Top Model. Many people on those sets and in the cast could not use American Sign Language (ASL) to communicate with him, and so he had no idea what was going on at times.
## What it does
SpeakAR listens to speech around the user and translates it into a language the user can understand. Especially for deaf users, the visual ASL translations are helpful because that is often their native language.

## How we built it
We utilized Unity's built-in dictation recognizer API to convert spoken word into text. Then, using a combination of gifs and 3D-modeled hands, we translated that text into ASL. The scripts were written in C#.
## Challenges we ran into
The Microsoft HoloLens requires very specific development tools, and because we had never used the laptops we brought to develop for it before, we had to start setting everything up from scratch. One of our laptops could not "Build" properly at all, so all four of us were stuck using only one laptop. Our original idea of implementing facial recognition could not work due to technical challenges, so we actually had to start over with a **completely new** idea at 5:30PM on Saturday night. The time constraint as well as the technical restrictions on Unity3D reduced the number of features/quality of features we could include in the app.
## Accomplishments that we're proud of
This was our first time developing with the HoloLens at a hackathon. We were completely new to using GIFs in Unity and displaying it in the Hololens. We are proud of overcoming our technical challenges and adapting the idea to suit the technology.
## What we learned
Make sure you have a laptop (with the proper software installed) that's compatible with the device you want to build for.
## What's next for SpeakAR
In the future, we hope to implement reverse communication to enable those fluent in ASL to sign in front of the HoloLens and automatically translate their movements into speech. We also want to include foreign language translation so it provides more value to **deaf and hearing** users. The overall quality and speed of translations can also be improved greatly. | partial |
## Inspiration
The inspiration for Sabr ignited by the increasing prices of food. With food security becoming more prevalent in our society, we were determined to create an application that would help tie together local restaurants as well as the communities local to them in a way that corporate food delivery apps cannot. Through ensuring that both restaurants and customers get paid fairly and lowering the reliance on eternal sources, we are able to minimize costs for both consumers and businesses.
## What it does
Sabr is a platform where businesses are able to offer discounted S-Dollars which would be used to pay for meals or products from their stores.
## How we built it
This application was built using react-native, this is so that this app would be easily cross compatible between iOS and Android devices. We developed for iOS, but if we decided to expand the scope of this project, it would not be an arduous task.
## Challenges we ran into
Some challenges we ran into were figuring out how to set up the environment for one of the laptops, as it seemed to have some issues. Moreover, not everyone on the team was familiar with React-Native, and as such, required us to learn on the fly.
## Accomplishments that we're proud of
Something that we are proud of is that we have a relatively visual front-end that reflects what our intentions with the app are.
## What we learned
We learned how to use React-Native as well as developing with an iOS simulator
## What's next for sabr
We will polish the application and make it work fluidly | ## What Inspired Us
As four young men navigating the complexities of modern life, we are empathetic to stark realities of the mental health challenges facing society today. As four students passionate about the power of data and technology, we are also well aware of the potential of the Internet of Things to inform psychiatric intervention — a potential which has been validated by clinical literature [1]. On one hand, the inspiration for our project emerged from a deeply personal place—a realization that each of us, or someone we know, could silently be on the brink of despair. On the other hand, our idea emerged from the growing body of literature about the potential for digital phenotyping in psychology, whereby our digital data can inform diagnoses. We decided we wanted to tackle the issue of reactive psychological intervention, inspring us to develop a more proactive, data-based method to prevent the many horrible consequences of mental illness, specifically depression. Our project is more than an app or a service; it's a movement toward creating a safe, inclusive, and accurate mental-health tool that furthers trust, accessibility, and feasibility in .
## What We Learned
One of the most enlightening aspects of our journey was the discovery and understanding of digital phenotypes and their potential in identifying mental health issues. This concept quickly became a cornerstone of our project. Digital phenotypes refer to the collection of digital data that relates to an individual's behavior and interactions with digital devices and platforms [1]. We learned that these data points, when analyzed thoughtfully, offer a wealth of quantitative information that can help in the early detection of mental health issues. Moreover research in this space is showing staggeringly promising results in how good digital phenotypes are as predictors of mental illness.
## The Power of Quantitative Data
Our exploration into the realm of digital phenotypes revealed the vast potential of using quantitative data to gain insights into an individual's mental health. This data isn't just numbers and statistics; it's a reflection of behavior patterns, social interactions, and even changes in mood and mental state over time. For instance, variations in movement, sleep, and patterns of device usage can all serve as indicators of an individual's psychological well-being [1]. Numerous instances of research pointed to the fact we could find concrete, validated ways to utilize available quantitative data such as location to create metrics on key mental illness predictors like sociability. For instance, one group was able to employ machine learning on GPS- and phone usage derived user features to predict depression with 80% accuracy. This study reported population means for these features (such as location variance, location entropy, and mean screen time per day), which informed the statistical methods that we utilized in our app [2].
## Gamifying Mental Health: A Creative Fusion
Harnessing the power of data science and innovative algorithms, we embarked on a journey to redefine mental health support by integrating the concept of gamification. Our aim was to transform the daunting task of managing mental health into an engaging and motivating experience. By gamifying mental health, we sought to break down the barriers of traditional mental health care and create a more accessible and appealing approach for users through a simple daily score that could be tracked. We want to create a positive reinforcement mechanism for behavior associated with good mental health and provide personalized motivational feedback as soon as any increase in negative behavior occurs
## The Culminating Solution: CURA
We monitor several key digital data markers of our users such as location, screen time, sleep and more, all of which were chosen due to their strong peer-reviewed researched correlation with positive or negative increased risk of depression [2]. We used many data science methods such as advanced unsupervised learning algorithms to cluster user data such as location to get actionable insights such as the number of places the user has visited in the last week, which have been shown to correlate to mental state. We then present the user with an aggregated score on how good their habits have been in the past week based on 8 key metrics, as well as their habits generally compared to the population means of depressed and non-depressed individuals [2]. We present these two data points to the user in a very gamified manner to reward increases in positive habits. To also leverage the benefits of personalized based medical interventions [3], we have carefully prompted and created a chat to support the user and give personalized motivational and informative messages to the user. If the user exhibits abnormal negative habits and behaviors we alert them to resources where they can get help.
## How We Built Our Project
Team Structure and Roles
Data Science and Algorithms Expert: One of us took on the challenge of diving deep into data science and algorithms. This role was crucial for developing the scoring system that underpins our gamification approach. By analyzing user interactions and behaviors, we created a sophisticated model that personalizes the user experience, offering tailored challenges and feedback.
## Application Development Trio:
API Integration Specialist: Focused on connecting our app with various external services and data sources. This role involved making API calls to fetch and send data specifically using Apple’s API calls for phone data and GPT API for our feedback. This ensured our app could interact seamlessly with third-party services and our backend infrastructure.
Frontend Developer: Dedicated to crafting the user interface and experience. This involved using Swift to create intuitive and engaging layouts that would keep our users motivated and engaged in their mental health journey.
Full Stack Architect: Served as the bridge between frontend and backend development. This role entailed overseeing the app's overall architecture, ensuring that both the client-facing and server-side components worked harmoniously together.
Technical Architecture
Our project's backbone was a carefully planned technical architecture that emphasized efficiency, scalability, and user experience.
Swift for Frontend and Backend: We chose Swift as our primary development language due to its robustness, performance, and seamless integration with Apple's ecosystem. Swift allowed us to create a fluid and responsive interface while also handling backend logic with efficiency.
Firebase Database: For our database needs, we utilized Firebase. Its real-time database capabilities and easy integration with Swift made it an ideal choice for storing and retrieving user data quickly and securely.
Google Cloud Functions: To incorporate our data science models into the app, we leveraged Google Cloud Functions. This allowed us to execute our algorithm-driven scoring system in the cloud on a timer to receive scores daily while seamlessly updating the firebase data.
## Development Process
Our development process was iterative and agile, focusing first on establishing a solid architecture that would support both the immediate and future needs of the project.
Architecture Planning: We started by laying out the architecture, ensuring that our choice of technologies would allow us to build a scalable and maintainable app.
Division of Labor: With our architecture in place, we divided the workload according to our individual strengths and areas of expertise. This division allowed us to work efficiently, with each team member focusing on their respective domain.
Integration and Testing: As the app began to take shape, we prioritized integration and testing, ensuring that each component functioned as expected and that the user experience was smooth and engaging.
Incorporating Data Science: The final step was to integrate the data science algorithms via Google Cloud Functions, allowing our app to offer a truly personalized and gamified mental health experience.
## Challenges We Faced
Ideation: Finding the Sweet Spot
The initial phase of ideation was perhaps the most daunting challenge we faced. Our goal was not only to innovate but also to ensure that our project would have a real impact on mental health. Striking the right balance between novelty, technical feasibility, and genuine usefulness took countless brainstorming sessions, research, and discussions. We aimed to create something that wasn't just another app but a revolutionary approach to mental health support.
Navigating API and Privacy Constraints with Apple
Integrating with third-party services through APIs while adhering to Apple's stringent privacy guidelines presented a significant hurdle. Apple's ecosystem prioritizes user privacy and security, which, while beneficial for users, imposed limitations on how we could collect and process data. We had to meticulously plan our API calls and data handling processes to ensure compliance with these guidelines without compromising the functionality and user experience of our app. One of the drawbacks of our app is that it requires user data that they might prefer to keep private, so we are sure to notify them what it is for and when we are using said data.
Algorithm Creation and Data Integration
Developing the algorithm that underpins our scoring and recommendation system was a complex task. Balancing the desire to provide valuable, tailored advice with the understanding that we are not the actual individuals experiencing these mental states required a thoughtful approach to algorithm design. We leveraged extensive studies and data science techniques to analyze patterns and behaviors while ensuring our recommendations remained sensitive to the nuances of individual experiences.
Personalization vs. Generalization
A critical challenge in our development process was finding the right balance between personalizing the user experience and acknowledging the vast differences in mental health conditions and responses to treatment. We recognized that while our algorithm could identify patterns and suggest actions, the subjective nature of mental health meant that one size does not fit all. This was perhaps the most difficult. The last thing we wanted to do was tell someone how they were feeling, or what mental state they were in. To mitigate this, we focused solely on quantitative data that we could present in a way to show them how they might be changing their behavior – and if it was for the worse by our measures – we notified them of this.
Facing and overcoming these challenges was a pivotal part of our project's journey. Each obstacle provided us with valuable lessons on the importance of flexibility, user-centered design, and the ethical considerations of developing health-related technology. Further, the technical challenges forced us all to take an open to learning approach in order to adapt to every part of the project.
* Built with
What languages, frameworks, platforms, cloud services, databases, APIs, or other technologies did you use?
Citations:
[1] Montag, C., Sindermann, C., & Baumeister, H. (2020). Digital phenotyping in psychological and medical sciences: A reflection about necessary prerequisites to reduce harm and increase benefits. Current Opinion in Psychology, 36, 19–24. <https://doi.org/10.1016/j.copsyc.2020.03.013>
[2] Opoku Asare, K., Moshe, I., Terhorst, Y., Vega, J., Hosio, S., Baumeister, H., Pulkki-Råback, L., & Ferreira, D. (2022). Mood ratings and digital biomarkers from smartphone and wearable data differentiates and predicts depression status: A longitudinal data analysis. Pervasive and Mobile Computing, 83, 101621. <https://doi.org/10.1016/j.pmcj.2022.101621>
[3] Teeny, J. D., Siev, J. J., Briñol, P., & Petty, R. E. (2021). A Review and Conceptual Framework for Understanding Personalized Matching Effects in Persuasion. Journal of Consumer Psychology, 31(2), 382–414. <https://doi.org/10.1002/jcpy.1198> | ## Inspiration
We wanted to explore what GCP has to offer more in a practical sense, while trying to save money as poor students
## What it does
The app tracks you, and using Google Map's API, calculates a geofence that notifies the restaurants you are within vicinity to, and lets you load coupons that are valid.
## How we built it
React-native, Google Maps for pulling the location, python for the webscraper (*<https://www.retailmenot.ca/>*), Node.js for the backend, MongoDB to store authentication, location and coupons
## Challenges we ran into
React-Native was fairly new, linking a python script to a Node backend, connecting Node.js to react-native
## What we learned
New exposure APIs and gained experience on linking tools together
## What's next for Scrappy.io
Improvements to the web scraper, potentially expanding beyond restaurants. | losing |
## 💡 Inspiration
**Waiting on the phone** can be a painful and tedious experience. Nothing is more annoying than hearing ***the same elevator song*** restart over and over... Why does a **mundane activity** that we all have to go through have to be so unproductive?
With the onset of the pandemic, people are calling in for consultations, advisors and support now more than ever before. Already stressed parents, students and seniors are being forced to wait upwards of several hours to get the help they need.
We all agreed that this time would be better spent in a more **enjoyable** and **productive** way. With so many individuals now unemployed and tight on money, choosing the right financial service is crucial. This is where we thought, **what better opportunity to improve the financial literacy of individuals in need than while on hold?**
By improving the financial literacy of customers while on hold, we realized we could help them become aware of critical services that they can discuss with representatives. This helps customers capitalize on their time spent with representatives. As well, we enable financial institutions to deliver more services and collect helpful information about their customers to shape their marketing and **improve future experiences**.
Through a **better understanding** of customers, their needs, and **financial literacy**, we believe that banks will become better equipped with tools and resources to **serve everyone**.
## 🤔 What it does
Happy Holdings is a **caller interface** that helps users **test their knowledge** in components of personal finance in a **fun, trivia-like** way. After users respond with their answers, Happy Holdings **stores this information in a database** for banks to analyze and better understand the needs and knowledge of customers. We then provide a beautiful interface that visualizes and aggregates the data from these responses for financial institutions to analyze.
Moreover, the test serves as a customer diagnostic. Questions test a **variety of topics** such as savings, retirement and investing. Users can answer through **selecting keys on their dial pad**. Once an agent is available to speak, the user will be taken off Happy Holdings and **transferred to their call**.
Additionally, an **SMS text message** is sent to users after their experience with Happy Holdings, to **summarize their performance** and give them something to read over in case they missed anything during the call.
You might be wondering, why would users participate? Considering the large and highly informative dataset banks could gain, for **questions answered correctly, users could earn ballots** for a monthly raffle prize from the bank! Moreover, we think a quick game of trivia is **much more fun** than listening to an incessant loop of "calming" (more like agitating!) music.
## 🧰 How we built it
Using **Python and Twilio's APIs**, we created a backend interface with a registered phone number that users can call. To store user input and provide questions, we created a **database using sqlite3** that we linked up through **Flask** to serve phone requests. We then used *ngrok* to serve these requests to the public internet. For our dashboard, we used **HTML, CSS, JSX and React** to build elements of the front-end. **Figma** was used to create prototypes and map out **screen mockups**.
## 😅 Challenges we ran into
This hack was something new to all of us. Building **a voice interface** was something completely different from any previous projects we've tackled, which made it all the more exciting. Actually, most of us have **never worked with a terminal**, so configuring Python and pip was also a bit of a blocker at first.
With **5 hours left** of our hackathon, we decided to **switch from JSON to sqlite3** for storing our information, and a new learning curve was thrown at us. It took quite a bit of time and online tutorials to figure out how to use sqlite3 to access our database of questions and user responses in a voice interface.
As we worked on the project, another unexpected challenge that we ran into was the time needed to test the voice interface. It quickly became very time consuming to have to call our test number and navigate through our entire voice menu to tweak a small fix (trust us, we have 8 hours of outgoing call logs to prove it).
## 💜 Accomplishments that we're proud of
Something we are really proud of is definitely that we **pushed ourselves out of our comfort zone** into areas we had never worked with before. This includes working with the backend in sqlite3, learning completely foreign Twilio APIs, as well with using React in our front-end.
This was our first time working with Twilio, and our first time ever attempting to create a phone call interface. Through **trial and error**, and reading tons of documentation, we are extremely proud of what we were able to accomplish in the given time frame, and that we managed to sneak in our own original music 😊
**Teaching one another** is an incredibly valuable component to teamwork, and we are so grateful that one member was able to help us get set up with Python, running terminals and debugging. Through her **expertise and knowledge**, we were all able to get up on our feet and work together.
To prepare for our project, we **reached out to our bank sponsors** to understand the industry and see what information they would find valuable. We even **called a few bank lines** to see how long the wait queues are in the middle of a pandemic and have a clear sense of the current customer journey. Using this **market research**, we are proud of the product that we designed and confident it **addresses needs** on both the bank and users' ends.
## 🎓 What we learned
Since this was our **first time ever working with Twilio and creating a phone call interface**, we had a lot to learn, in almost every aspect – whether that be with brand-new APIs, switching databases, working on a terminal, or even brushing up on our Python knowledge.
We took this **brand new venture** as an **opportunity** to broaden our scope of abilities and **try something new**. Although many challenges arose, we found the process of **resolving and debugging** extremely rewarding. We're super proud of how we all managed to build new skills in vastly different areas.
## ⏩ What's next for Happy Holding
We hope to **further develop our database of questions** and provide more insightful information to banks about their users. Moreover, we hope to implement a **tracking system** that allows us to know the frequency of calls from users and try to address recurring issues that lead them to call.
Moreover, we’d love to create an interface that allows banks to **easily add or remove questions** from their current queues and even **conduct A/B testing** to analyze which questions are most effective in educating and **maximizing customer satisfaction**.
To make Happy Holdings even more productive, we want to **partner with banks** to actually see what their wait queues look like and **receive feedback** on how we can tailor our product to better suit their needs.
We would love to create a **live component** of Happy Holdings, where we can group people together in a game of trivia. Different users can **compete against each other**, which could get interesting 👀 | ## Inspiration
Climate change, deforestation. It is important to be able to track animals. This provides them with another layer of security and helps us see when there's something wrong. A change in migration patterns or sudden movements can indicate illegal activity.
## What it does
It is able to track movement and animals efficiently through a forest. These nodes can be placed at different points to track the movement of different species
## How we built it
works with 2 esp8266's. They are attached to a PIR motion sensor. When activated it sends location data to the database. It is then retrieved and plotted onto google maps.
## Challenges we ran into
Many issues were encountered during this project. Mainly firebase, and the esp32 cam. We were having extreme difficulties pulling the right data at the right time from the firebase database. The esp32 was also causing many issues. It was hard to set up and was hard to program. Due to this, we could no longer implement the object detection model and were forced to use only an esp8266 and no camera.
## Accomplishments that we're proud of
This was our first time using Firebase, and we figured it out
## What we learned
Firebase, IoT things gmaps api
## What's next for Animal Tracker
Integrate esp32cam and image classification model. Add more features to the site. | ## Inspiration
In today's fast-paced world, the average person often finds it challenging to keep up with the constant flow of news and financial updates. With demanding schedules and numerous responsibilities, many individuals simply don't have the time to sift through countless news articles and financial reports to stay informed about stock market trends. Despite this, they still desire a way to quickly grasp which stocks are performing well and make informed investment decisions.
Moreover, the sheer volume of news articles, financial analyses and market updates is overwhelming. For most people finding the time to read through and interpret this information is not feasible. Recognizing this challenge, there is a growing need for solutions that distill complex financial information into actionable insights. Our solution addresses this need by leveraging advanced technology to provide streamlined financial insights. Through web scraping, sentiment analysis, and intelligent data processing we can condense vast amounts of news data into key metrics and trends to deliver a clear picture of which stocks are performing well.
Traditional financial systems often exclude marginalized communities due to barriers such as lack of information. We envision a solution that bridges this gap by integrating advanced technologies with a deep commitment to inclusivity.
## What it does
This website automatically scrapes news articles from the domain of the user's choosing to gather the latests updates and reports on various companies. It scans the collected articles to identify mentions of the top 100 companies. This allows users to focus on high-profile stocks that are relevant to major market indices. Each article or sentence mentioning a company is analyzed for sentiment using advanced sentiment analysis tools. This determines whether the sentiment is positive, negative, or neutral. Based on the sentiment scores, the platform generates recommendations for potential stock actions such as buying, selling, or holding.
## How we built it
Our platform was developed using a combination of robust technologies and tools. Express served as the backbone of our backend server. Next.js was used to enable server-side rendering and routing. We used React to build the dynamic frontend. Our scraping was done with beautiful-soup. For our sentiment analysis we used TensorFlow, Pandas and NumPy.
## Challenges we ran into
The original dataset we intended to use for training our model was too small to provide meaningful results so we had to pivot and search for a more substantial alternative. However, the different formats of available datasets made this adjustment more complex. Also, designing a user interface that was aesthetically pleasing proved to be challenging and we worked diligently to refine the design, balancing usability with visual appeal.
## Accomplishments that we're proud of
We are proud to have successfully developed and deployed a project that leverages web scrapping and sentiment analysis to provide real-time, actionable insights into stock performances. Our solution simplifies complex financial data, making it accessible to users with varying levels of expertise. We are proud to offer a solution that delivers real-time insights and empowers users to stay informed and make confident investment decisions.
We are also proud to have designed an intuitive and user-friendly interface that caters to busy individuals. It was our team's first time training a model and performing sentiment analysis and we are satisfied with the result. As a team of 3, we are pleased to have developed our project in just 32 hours.
## What we learned
We learned how to effectively integrate various technologies and acquired skills in applying machine learning techniques, specifically sentiment analysis. We also honed our ability to develop and deploy a functional platform quickly.
## What's next for MoneyMoves
As we continue to enhance our financial tech platform, we're focusing on several key improvements. First, we plan to introduce an account system that will allow users to create personal accounts, view their past searches, and cache frequently visited websites. Second, we aim to integrate our platform with a stock trading API to enable users to buy stocks directly through the interface. This integration will facilitate real-time stock transactions and allow users to act on insights and make transactions in one unified platform. Finally, we plan to incorporate educational components into our platform which could include interactive tutorials, and accessible resources. | losing |
## Inspiration
We were inspired to build Schmart after researching pain points within Grocery Shopping. We realized how difficult it is to stick your health goals or have a reduced environmental impact while grocery shopping. Inspired by innovative technlogy that exists, we wanted to create an app which would conveniently allow anyone to feel empowered to shop by reaching their goals and reducing friction.
## What it does
Our solution, to gamify the grocery shopping experience by allowing the user to set goals before shopping and scan products in real time using AR and AI to find products that would meet their goals and earn badges and rewards (PC optimum points) by doing so.
## How we built it
This product was designed on Figma, and we built the backend using Flask and Python, with the database stored using SQLite3. We then built the front end with React Native.
## Challenges we ran into
Some team members had school deadlines during the hackathon, so we could not be fully concentrated on the Hackathon coding. In addition, our team was not too familiar with React Native, so development of the front end took longer than expected.
## Accomplishments that we're proud of
We are extremely proud that we were able to build an deployed an end-to-end product in such a short timeframe. We are happy to empower people while shopping and make the experience so much more enjoyable and problem solve areas that exist while shopping.
## What we learned
Communication is key. This project would not have been possible without the relentless work of all our team members striving to make the world a better place with our product. Whether it be using technology we have never used before or sharing our knowledge with the rest of the group, we all wanted to create a product that would have a positive impact and because of this we were successful in creating our product.
## What's next for Schmart
We hope everyone can use Schmart in the future on their phones as a mobile app. We can see it being used in Grocery (and hopefully all stores) in the future. Leaders. Meeting health and environmental goals should be barrier-free, and being an app that anyone can use, this makes this possible. | ## Inspiration
One of the greatest challenges facing our society today is food waste. From an environmental perspective, Canadians waste about *183 kilograms of solid food* per person, per year. This amounts to more than six million tonnes of food a year, wasted. From an economic perspective, this amounts to *31 billion dollars worth of food wasted* annually.
For our hack, we wanted to tackle this problem and develop an app that would help people across the world do their part in the fight against food waste.
We wanted to work with voice recognition and computer vision - so we used these different tools to develop a user-friendly app to help track and manage food and expiration dates.
## What it does
greenEats is an all in one grocery and food waste management app. With greenEats, logging your groceries is as simple as taking a picture of your receipt or listing out purchases with your voice as you put them away. With this information, greenEats holds an inventory of your current groceries (called My Fridge) and notifies you when your items are about to expire.
Furthermore, greenEats can even make recipe recommendations based off of items you select from your inventory, inspiring creativity while promoting usage of items closer to expiration.
## How we built it
We built an Android app with Java, using Android studio for the front end, and Firebase for the backend. We worked with Microsoft Azure Speech Services to get our speech-to-text software working, and the Firebase MLKit Vision API for our optical character recognition of receipts. We also wrote a custom API with stdlib that takes ingredients as inputs and returns recipe recommendations.
## Challenges we ran into
With all of us being completely new to cloud computing it took us around 4 hours to just get our environments set up and start coding. Once we had our environments set up, we were able to take advantage of the help here and worked our way through.
When it came to reading the receipt, it was difficult to isolate only the desired items. For the custom API, the most painstaking task was managing the HTTP requests. Because we were new to Azure, it took us some time to get comfortable with using it.
To tackle these tasks, we decided to all split up and tackle them one-on-one. Alex worked with scanning the receipt, Sarvan built the custom API, Richard integrated the voice recognition, and Maxwell did most of the app development on Android studio.
## Accomplishments that we're proud of
We're super stoked that we offer 3 completely different grocery input methods: Camera, Speech, and Manual Input. We believe that the UI we created is very engaging and presents the data in a helpful way. Furthermore, we think that the app's ability to provide recipe recommendations really puts us over the edge and shows how we took on a wide variety of tasks in a small amount of time.
## What we learned
For most of us this is the first application that we built - we learned a lot about how to create a UI and how to consider mobile functionality. Furthermore, this was also our first experience with cloud computing and APIs. Creating our Android application introduced us to the impact these technologies can have, and how simple it really is for someone to build a fairly complex application.
## What's next for greenEats
We originally intended this to be an all-purpose grocery-management app, so we wanted to have a feature that could allow the user to easily order groceries online through the app, potentially based off of food that would expire soon.
We also wanted to implement a barcode scanner, using the Barcode Scanner API offered by Google Cloud, thus providing another option to allow for a more user-friendly experience. In addition, we wanted to transition to Firebase Realtime Database to refine the user experience.
These tasks were considered outside of our scope because of time constraints, so we decided to focus our efforts on the fundamental parts of our app. | ## Inspiration
For many college students, finding time to socialize and make new friends is hard. Everyone's schedule seems perpetually busy, and arranging a dinner chat with someone you know can be a hard and unrewarding task. At the same time, however, having dinner alone is definitely not a rare thing. We've probably all had the experience of having social energy on a particular day, but it's too late to put anything on the calendar. Our SMS dinner matching project exactly aims to **address the missed socializing opportunities in impromptu same-day dinner arrangements**. Starting from a basic dining-hall dinner matching tool for Penn students only, we **envision an event-centered, multi-channel social platform** that would make organizing events among friend groups, hobby groups, and nearby strangers effortless and sustainable in the long term for its users.
## What it does
Our current MVP, built entirely within the timeframe of this hackathon, allows users to interact with our web server via **SMS text messages** and get **matched to other users for dinner** on the same day based on dining preferences and time availabilities.
### The user journey:
1. User texts anything to our SMS number
2. Server responds with a welcome message and lists out Penn's 5 dining halls for the user to choose from
3. The user texts a list of numbers corresponding to the dining halls the user wants to have dinner at
4. The server sends the user input parsing result to the user and then asks the user to choose between 7 time slots (every 30 minutes between 5:00 pm and 8:00 pm) to meet with their dinner buddy
5. The user texts a list of numbers corresponding to the available time slots
6. The server attempts to match the user with another user. If no match is currently found, the server sends a text to the user confirming that matching is ongoing. If a match is found, the server sends the matched dinner time and location, as well as the phone number of the matched user, to each of the two users matched
7. The user can either choose to confirm or decline the match
8. If the user confirms the match, the server sends the user a confirmation message; and if the other user hasn't confirmed, notifies the other user that their buddy has already confirmed the match
9. If both users in the match confirm, the server sends a final dinner arrangement confirmed message to both users
10. If a user decides to decline, a message will be sent to the other user that the server is working on making a different match
11. 30 minutes before the arranged time, the server sends each user a reminder
###Other notable backend features
12. The server conducts user input validation for each user text to the server; if the user input is invalid, it sends an error message to the user asking the user to enter again
13. The database maintains all requests and dinner matches made on that day; at 12:00 am each day, the server moves all requests and matches to a separate archive database
## How we built it
We used the Node.js framework and built an Express.js web server connected to a hosted MongoDB instance via Mongoose.
We used Twilio Node.js SDK to send and receive SMS text messages.
We used Cron for time-based tasks.
Our notable abstracted functionality modules include routes and the main web app to handle SMS webhook, a session manager that contains our main business logic, a send module that constructs text messages to send to users, time-based task modules, and MongoDB schema modules.
## Challenges we ran into
Writing and debugging async functions poses an additional challenge. Keeping track of potentially concurrent interaction with multiple users also required additional design work.
## Accomplishments that we're proud of
Our main design principle for this project is to keep the application **simple and accessible**. Compared with other common approaches that require users to download an app and register before they can start using the service, using our tool requires **minimal effort**. The user can easily start using our tool even on a busy day.
In terms of architecture, we built a **well-abstracted modern web application** that can be easily modified for new features and can become **highly scalable** without significant additional effort (by converting the current web server into the AWS Lambda framework).
## What we learned
1. How to use asynchronous functions to build a server - multi-client web application
2. How to use posts and webhooks to send and receive information
3. How to build a MongoDB-backed web application via Mongoose
4. How to use Cron to automate time-sensitive workflows
## What's next for SMS dinner matching
### Short-term feature expansion plan
1. Expand location options to all UCity restaurants by enabling users to search locations by name
2. Build a light-weight mobile app that operates in parallel with the SMS service as the basis to expand with more features
3. Implement friend group features to allow making dinner arrangements with friends
###Architecture optimization
4. Convert to AWS Lamdba serverless framework to ensure application scalability and reduce hosting cost
5. Use MongoDB indexes and additional data structures to optimize Cron workflow and reduce the number of times we need to run time-based queries
### Long-term vision
6. Expand to general event-making beyond just making dinner arrangements
7. Create explore (even list) functionality and event feed based on user profile
8. Expand to the general population beyond Penn students; event matching will be based on the users' preferences, location, and friend groups | winning |
## Inspiration
Building and maintaining software is complex, time-consuming, and can quickly become expensive, especially as your application scales. Developers, particularly those in startups, often overspend on tools, cloud services, and server infrastructure without realizing it. In fact, nearly 40% of server costs are wasted due to inefficient resource allocation, and servers often remain idle for up to 80% of their runtime.
As your traffic and data grow, so do your expenses. Managing these rising costs while ensuring your application's performance is critical—but it's not easy. This is where Underflow comes in. It automates the process of evaluating your tech stack and provides data-driven recommendations for cost-effective services and infrastructure. By analyzing your codebase and optimizing for traffic, Underflow helps you save money while maintaining the same performance and scaling capabilities.
## What it does
Underflow is a **command-line tool** that helps developers optimize their tech stack by analyzing the codebase and identifying opportunities to reduce costs while maintaining performance. With a single command, developers can input a **GitHub repository** and the number of monthly active users, and Underflow generates a detailed report comparing the current tech stack with an optimized version. The report highlights potential cost savings, performance improvements, and suggests more efficient external services. The tool also provides a clear breakdown of why certain services were recommended, making it easier for developers to make informed decisions about their infrastructure.
## How we built it

Underflow is a command-line tool designed for optimizing software architecture and minimizing costs based on projected user traffic. It is executed with a single command and two arguments:
```
underflow <github-repository-identifier> <monthly-active-users>
```
Upon execution, Underflow leverages the **OpenAI API** to analyze the provided codebase, identifying key third-party services integrated into the project. The extracted service list and the number of monthly active users are then sent to a **FastAPI backend** for further processing.
The backend queries an **AWS RDS**-hosted **MySQL** database, which contains a comprehensive inventory of external service providers, including cloud infrastructure, CI/CD platforms, container orchestration tools, distributed computing services, and more. The database stores detailed information such as pricing tiers, traffic limits, service categories, and performance characteristics. The backend uses this data to identify alternative services that provide equivalent functionality at a lower cost while supporting the required user traffic.
The results of this optimization process are cached, and a comparison report is generated using the OpenAI API. This report highlights the cost and performance differences between the original tech stack and the proposed optimized stack, along with a rationale for selecting the new services.
Finally, Underflow launches a GUI build with **Next.js** that presents a detailed analytics report comparing the original and optimized tech stacks. The report provides key insights into cost savings, performance improvements, and the reasoning behind service provider selections.
This technical solution offers developers a streamlined way to evaluate and optimize their tech stacks based on real-world cost and performance considerations.
## Accomplishments that we're proud of
We’re proud of creating a tool that simplifies the complex task of optimizing tech stacks while reducing costs for developers. Successfully integrating multiple components, such as the OpenAI API for codebase analysis, a FastAPI backend for processing, and an AWS-hosted MySQL database for querying external services, was a significant achievement. Additionally, building a user-friendly command-line interface that provides clear, data-driven reports about tech stack performance and cost optimization is something we're excited about. We also managed to create a streamlined workflow that allows developers to assess cost-saving opportunities without needing deep knowledge of infrastructure or services.
## What's next for Underflow
* Generating a better database containing more comprehensive list of available external services and dependencies
* Migrate traffic determination from user manual input to be based on server-level architecture, such as using elastic search on server logs to determine the true amount of third party service usages | # Inspiration
We’ve all been in a situation where collaborators’ strengths all lie in different areas, and finding “the perfect” team to work with is more difficult than expected. We wanted to make something that could help us find people with similar strengths without painstakingly scouring dozens of github accounts.
# What it does
MatchMadeIn.Tech is a platform where users are automatically matched with other github users who share similar commit frequencies, language familiarity, and more!
# How we built it
We used a modern stack that includes React for the front end and python Flask for the back end. Our model is a K-Means Cluster model, and we implemented it using scikit-learn, storing the trained model in a PickleDB. We leveraged GitHub's API to pull user contribution data and language preferences data for over 3 thousand users, optimizing our querying using GraphQL.
# Challenges we ran into
A big issue we faced was how to query the Github API to get full representation of all the users on the platform. Because there are over 100 million registered users on Github, many of which are bots and accounts that have no contribution history, we needed a way to parse these users.
Another obstacle we ran into was implementing the K-Means Cluster model. This was our first time using any machine learning algorithms other than Chat-GPT, so it was a very big learning curve. With multiple people working on the querying of data and training the model, our documentation regarding storing the data in code needed to be perfect, especially because the model required all the data to be in the same format.
# Accomplishments that we're proud of
Getting the backend to actually work! We decided to step out of our comfort zone and train our own statistical inference model. There were definitely times we felt discouraged, but we’re all proud of each other for pushing through and bringing this idea to life!
# What we learned
We learned that there's a real appetite for a more meaningful, niche-focused dating app in the tech community. We also learned that while the tech is essential, user privacy and experience are just as crucial for the success of a platform like this.
# What's next for MatchMadeIn.Tech
We’d love to add more metrics to determine user compatibility such as coding style, similar organizations, and similar feature use (such as the project board!). | The kidneys are a pair of bean-shaped organs on either side of your spine, below your ribs and behind your belly. Each kidney is about 4 or 5 inches long, roughly the size of a large fist. The kidneys' job is to filter your blood.They remove wastes, control the body's fluid balance, and keep the right levels of electrolytes. All of the blood in your body passes through them several times a day. Kidney disorders varies from Urinary tract infections,Kidney stones or even chronic kidney disorders. the aim of this project was to build an web enabled system to predict whether a patient is suffering from kidney disorder or not based on features such as Haemoglobin level, Pus cells, Blood pressure, Age etc.
I build this project using Machine learning techniques. the task was basically a binary classification problem statement.
I chose Random forest classifier for my problem statement and it worked quite well. I used sklearn library to apply random forest on my dataset. project is deployed on Heroku Cloud using Flask API.
I faced challenge of data cleaning the dataset initially was very messy and it required a lot of preprocessing to be model ready.
In building this project i gained a lot of experience of working with medical dataset along with that i read a lot about all the features of the dataset and thereby it helped my enhance my knowledge and get some domain knowledge about medical datasets.
Next i am looking for making an android app out of this project so that it can be used by large mass of people !! | partial |
## Inspiration
A cultural obsession with cute animal videos and fluffy creatures doing human things drastically contrasts with the amount of irresponsible pet ownership, animal rights violations, and significant populations of abandoned pets in American cities. As we head into a future with an widening wealth gap, it becomes increasingly apparent that responsible ownership of living creatures is a privilege for those who already have their necessities met and possess the financial bandwidth to care and provide - and to LEARN how to care and provide - for a furry dependent. If that wasn't bad enough, there is a pandemic of loneliness that is hitting people all over the world due to poor work life balance, urbanization creating smaller and smaller living spaces, and advancing technology that is distancing us from even the people next to us. That's where Petsi comes in - a way to connect people with the professionals with the know-how in an innovative way. Petsi is meant for people who are interested in a caring for an animal companion and want to connect with the professionals that are looking to home animals that are in need of love and safety. However, its not an easy process since people are imperfect and pet ownership is a big commitment, not to mention picky potential owners.
## What it does
Petsi is a React multi-page webapp that securely logs you in after the landing page and takes you to available animals in the form of swipe cards that you can swipe through. Then once you find a potential pet and swipe right, Petsi sends you to the decentralized chat app portion of the site to communicate your interest in your chosen animal and learn about the adoption and ownership process with a partnered rescue or shelter. People can use Petsi to quickly navigate through available animals in need of a home and learn about what it takes to make them apart of the family in an accessible, less time-consuming, and less intimidating platform. Petsi a way to adopt for the next generation of pet owners.
## How we built it
We used JavaScript, Next.js, React, Chakra UI, Auth0, IPFS, and CSS to design, build, and serve Petsi.
## Challenges we ran into
For the entire team, we had to learn what a decentralized app is and how it works, build one that works, and finally deploy a chat app within a React app. Additionally, React and creating authentication was brand new to many members. Half of our team was international which required adapting ourselves to different time zones.
## Accomplishments that we're proud of
We are proud that we came together as a group of strangers and were able to learn new technologies, make new friends across cultures and distance, and create an awesome project that has the potential to impact society positively and bring society closer together.
## What we learned
We learned how to design a multi-page web app using React and implement interactive features. Utilizing JavaScript, HTML, CSS, and React, we were able learn how to design a customize web applications that are easy and intuitive to use. Additionally, we learned how to incorporate a IPFS to connect users of our web app to places that have the knowledge, the professionals and the animals they are looking for. Lastly, we learned how to capitalize on Auth0 to create an easy way for people to log into our app with their preexisting email.
## What's next for Petsi
So, we've made a drop in the bucket in addressing the crisis of abandoned animals across the nation. Doomed to suffering fates, unsightly and dangerous to our health and our communities, and a financial burden to people and to local government, that frankly, is difficult to take on when there are what seem like more pressing issues everywhere. So the next step for Petsi would be to add a two pronged revenue producing plan to the existing app. The first would be monetize the app with Coil and replace images of animals with a quick video of that animal so as to keep the attention of the perusing viewer that perhaps isn't ready to go through with a connection, but the time they spend on the site with help the medical, training, and maintenance expenses of these animals. Contingent on the cooperation of animal rescues and animal protection agencies, a booking and appointments feature would be added to Petsi to allow for the leasing services of furry companions on a sliding scale. This would fill the void in market of single, young professionals or financially strained families or lonely people who are all seeking that endorphin rush that comes from a furry member of the family but aren't in a position to make a lifelong commitment. By reducing the barriers to adoption, reducing the costs of animal care and maintenance, and striving to make a self-sustaining model, we would want Petsi to help alleviate animal suffering and human mental health struggles well into the future. | ## Inspiration
When we heard about using food as a means of love and connection from Otsuka x VALUENEX’s Opening Ceremony presentation, our team was instantly inspired to create something that would connect Asian American Gen Z with our cultural roots and immigrant parents. Recently, there has been a surge of instant Asian food in American grocery stores. However, the love that exudes out of our mother’s piping hot dishes is irreplaceable, which is why it’s important for us, the loneliest demographic in the U.S., to cherish our immigrant parents’ traditional recipes. As Asian American Gen Z ourselves, we often fear losing out on beloved cultural dishes, as our parents have recipes ingrained in them out of years of repetition and thus, neglected documenting these precious recipes. As a result, many of us don’t have access to recreating these traditional dishes, so we wanted to create a web application that encourages sharing of traditional, cultural recipes from our immigrant parents to Asian American Gen Z. We hope that this will reinforce cross-generational relationships, alleviate feelings of disconnect and loneliness (especially in immigrant families), and preserve memories and traditions.
## What it does
Through this web application, users have the option to browse through previews of traditional Asian recipes, posted by Asian or Asian American parents, featured on the landing page. If choosing to browse through, users can filter (by culture) through recipes to get closer to finding their perfect dish that reminds them of home. In the previews of the dishes, users will find the difficulty of the dish (via the number of knives – greater is more difficult), the cultural type of dish, and will also have the option to favorite/save a dish. Once they click on the preview of a dish, they will be greeted by an expanded version of the recipe, featuring the name and image of the dish, ingredients, and instructions on how to prepare and cook this dish. For users that want to add recipes to *yumma*, they can utilize a modal box and input various details about the dish. Additionally, users can also supplement their recipes with stories about the meaning behind each dish, sparking warm memories that will last forever.
## How we built it
We built *yumma* using ReactJS as our frontend, Convex as our backend (made easy!), Material UI for the modal component, CSS for styling, GitHub to manage our version set, a lot of helpful tips and guidance from mentors and sponsors (♡), a lot of hydration from Pocari Sweat (♡), and a lot of love from puppies (♡).
## Challenges we ran into
Since we were all relatively beginners in programming, we initially struggled with simply being able to bring our ideas to life through successful, bug-free implementation. We turned to a lot of experienced React mentors and sponsors (shoutout to Convex) for assistance in debugging. We truly believe that learning from such experienced and friendly individuals was one of the biggest and most valuable takeaways from this hackathon. We additionally struggled with styling because we were incredibly ambitious with our design and wanted to create a high-fidelity functioning app, however HTML/CSS styling can take large amounts of time when you barely know what a flex box is. Additionally, we also struggled heavily with getting our app to function due to one of its main features being in a popup menu (Modal from material UI). We worked around this by creating an extra button in order for us to accomplish the functionality we needed.
## Accomplishments that we're proud of
This is all of our first hackathon! All of us also only recently started getting into app development, and each has around a year or less of experience–so this was kind of a big deal to each of us. We were excitedly anticipating the challenge of starting something new from the ground up. While we were not expecting to even be able to submit a working app, we ended up accomplishing some of our key functionality and creating high fidelity designs. Not only that, but each and every one of us got to explore interests we didn’t even know we had. We are not only proud of our hard work in actually making this app come to fruition, but that we were all so open to putting ourselves out of our comfort zone and realizing our passions for these new endeavors. We tried new tools, practiced new skills, and pushed our necks to the most physical strain they could handle. Another accomplishment that we were proud of is simply the fact that we never gave up. It could have been very easy to shut our laptops and run around the Main Quadrangle, but our personal ties and passion for this project kept us going.
## What we learned
On the technical side, Erin and Kaylee learned how to use Convex for the first time (woo!) and learned how to work with components they never knew could exist, while Megan tried her hand for the first time at React and CSS while coming up with some stellar wireframes. Galen was a double threat, going back to her roots as a designer while helping us develop our display component. Beyond those skills, our team was able to connect with some of the company sponsors and reinvigorate our passions on why we chose to go down the path of technology and development in the first place. We also learned more about ourselves–our interests, our strengths, and our ability to connect with each other through this unique struggle.
## What's next for yumma
Adding the option to upload private recipes that can only be visible to you and any other user you invite to view it (so that your Ba Ngoai–grandma’s—recipes stay a family secret!)
Adding more dropdown features to the input fields so that some will be easier and quicker to use
A messaging feature where you can talk to other users and connect with them, so that cooking meetups can happen and you can share this part of your identity with others
Allowing users to upload photos of what they make from recipes they make and post them, where the most recent of photos for each recipe will be displayed as part of a carousel on each recipe component.
An ingredients list that users can edit to keep track of things they want to grocery shop for while browsing | ## Inspiration
Every year, 7.6 million animals enter shelters, and approximately 2.7 million animals are euthanized. In addition, a large barrier to pet adoptions is the set of hidden fees and costs that are required to properly adopt (e.g. splaying and neutering, veterinary visits, microchipping, etc). We wanted to increase adoption rates by minimizing the barriers to entry to doing so.
## What it does
Welcome to PawBank, where generous individuals can contribute small amounts of money to help build credit for pets.
## How I built it
We built an iOS app in Swift that provided the donation front-end and connected it to a Flask backend with an SQL Alchemy database. Information was transitioned between Swift and Flask using JSON. Overall, the process allows for information transfer between the front + backend while maintaining transferability to other platforms should the need arise.
## Challenges I ran into
This was my first time working with substantial Python, as well as any backend, networking, and JSON. It was my partner’s first time working with Swift. We spent the weekend running into quite a few knowledge-related questions that slowed us down, but overall we were able to push through these roadblocks to create a functional application!
## Accomplishments that I'm proud of
We managed to build a functioning app with an SQL database serving our application. We were able to digest and comprehend new concepts, as well as turn those ideas into tangible code over the course of a weekend. The feeling I got when my database first got hit for information retrieval was awesome!
## What I learned
I learned that oftentimes the bottleneck in our project was general comprehension of industry applications of code. I didn’t really think about how much was out there until I started reading into all the different database formats, backend options, and code interactions.
## What's next for PawBank
The next steps are to take our prototype and turn it into a full-fledged app capable of accepting payments. Additionally, I want to move the server side onto a provider like AWS or GCP so that we can have consistent uptime. Finally, I’d love to pursue this further by looking into the Los Angeles pet adoption market and exploring potential opportunities to bring this kind of product to the forefront of the pet adoption landscape. | partial |
# InstaQuote
InstaQuote is an SMS based service that allows users to get a new car insurance quote without the hassle of calling their insurance provider and waiting in a long queue.
# What Inspired You
We wanted a more convenient way to get a quote on auto-insurance in the event of a change within your driver profile (i.e. demerit point change, license class increase, new car make, etc...)
Since insurance rates are not something that change often we found it appropriate to create an SMS based service, thus saving the hassle of installing an app that would rarely be used as well as the time of calling your insurance provider to get a simple quote. As a company, this service would be useful for clients because it gives them peace of mind that there is an overarching service which can be texted anytime for an instant quote.
# What We Learned
We learned how to connect API's using Standard Library and we also learned JavaScript. Additionally, we learned how to use backend databases to store information and manipulate that data within the database.
# Challenges We Faced
We had some trouble with understanding and getting used to JavaScript syntax | ## Inspiration
We were heavily focused on the machine learning aspect and realized that we lacked any datasets which could be used to train a model. So we tried to figure out what kind of activity which might impact insurance rates that we could also collect data for right from the equipment which we had.
## What it does
Insurity takes a video feed from a person driving and evaluates it for risky behavior.
## How we built it
We used Node.js, Express, and Amazon's Rekognition API to evaluate facial expressions and personal behaviors.
## Challenges we ran into
This was our third idea. We had to abandon two major other ideas because the data did not seem to exist for the purposes of machine learning. | ## Inspiration
After conducting extensive internal and external market research, our team discovered that customer experience is one of the biggest challenges the insurance industry faces. With the rapid increase in digitalization, **customers are seeking faster and higher quality services** where they can find answers, personalize their products and manage their policies instantly online.
## What it does
**Insur-AI** is a fully functional chatbot that mimics the role of an insurance broker through human-like conversation and provides an accurate insurance quote within minutes!
## You can check out a working version of our website on: insur-AI.tech
## How we built it
We used **ReactJS**, **Bootstrap** along with some basic **HTML & CSS** for our project! Some of the design elements were created using Photoshop and Canva.
.
## Accomplishments that we're proud of
Creation of a full personalized report of an **Intact** insurance premium estimate including graphical analysis of price, ways to reduce insurance premium costs, in a matter of minutes!
## What's next for Insur-Al
One of the things we could work on is the integration of Insur-AI into <https://www.intact.ca/> , so prospective customers can have a quick and easy way to get a home insurance quote! Moreover, the idea of a chatbot can be expanded into other kinds of insurance as well, allowing insurance companies to reach a broader customer base.
**NOTE:** There have been some domain issues due to configuration errors. If insur-AI.tech does not work, please try a (slightly) older copy here: aryamans.me/insur-AI
<https://www.youtube.com/watch?v=YEU5eBp_Um4&feature=youtu.be> | winning |
## Inspiration
Ethiscan was inspired by a fellow member of our Computer Science club here at Chapman who was looking for a way to drive social change and promote ethical consumerism.
## What it does
Ethiscan reads a barcode from a product and looks up the manufacturer and information about the company to provide consumers with information about the product they are buying and how the company impacts the environment and society as a whole. The information includes the parent company of the product, general information about the parent company, articles related to the company, and an Ethics Score between 0 and 100 giving a general idea of the nature of the company. This Ethics Score is created by using Sentiment Analysis on Web Scraped news articles, social media posts, and general information relating to the ethical nature of the company.
## How we built it
Our program is two parts. We built an android application using Android Studio which takes images of a barcode on a product and send that to our server. Our server processes the UPC (Universal Product Code) unique to each barcode and uses a sentiment analysis neural network and web scraping to populate the android client with relevant information related to the product's parent company and ethical information.
## Challenges we ran into
Android apps are significantly harder to develop than expected, especially when nobody on your team has any experience. Alongside this we ran into significant issues finding databases of product codes, parent/subsidiary relations, and relevant sentiment data.
The Android App development process was significantly more challenging than we anticipated. It took a lot of time and effort to create functioning parts of our application. Along with that, web scraping and sentiment analysis are precise and diligent tasks to accomplish. Given the time restraint, the accuracy of the Ethics Score is not as accurate as possible. Finally, not all barcodes will return accurate results simply due to the lack of relevant information online about the ethical actions of companies related to products.
## Accomplishments that we're proud of
We managed to load the computer vision into our original android app to read barcodes on a Pixel 6, proving we had a successful proof of concept app. While our scope was ambitious, we were able to successfully show that the server-side sentiment analysis and web scraping was a legitimate approach to solving our problem, as we've completed the production of a REST API which receives a barcode UPC and returns relevant information about the company of the product. We're also proud of how we were able to quickly turn around and change out full development stack in a few hours.
## What we learned
We have learned a great deal about the fullstack development process. There is a lot of work that needs to go into making a working Android application as well as a full REST API to deliver information from the server side. These are extremely valuable skills that can surely be put to use in the future.
## What's next for Ethiscan
We hope to transition from the web service to a full android app and possibly iOS app as well. We also hope to vastly improve the way we lookup companies and gather consumer scores alongside how we present the information. | ## Inspiration
One of the greatest challenges facing our society today is food waste. From an environmental perspective, Canadians waste about *183 kilograms of solid food* per person, per year. This amounts to more than six million tonnes of food a year, wasted. From an economic perspective, this amounts to *31 billion dollars worth of food wasted* annually.
For our hack, we wanted to tackle this problem and develop an app that would help people across the world do their part in the fight against food waste.
We wanted to work with voice recognition and computer vision - so we used these different tools to develop a user-friendly app to help track and manage food and expiration dates.
## What it does
greenEats is an all in one grocery and food waste management app. With greenEats, logging your groceries is as simple as taking a picture of your receipt or listing out purchases with your voice as you put them away. With this information, greenEats holds an inventory of your current groceries (called My Fridge) and notifies you when your items are about to expire.
Furthermore, greenEats can even make recipe recommendations based off of items you select from your inventory, inspiring creativity while promoting usage of items closer to expiration.
## How we built it
We built an Android app with Java, using Android studio for the front end, and Firebase for the backend. We worked with Microsoft Azure Speech Services to get our speech-to-text software working, and the Firebase MLKit Vision API for our optical character recognition of receipts. We also wrote a custom API with stdlib that takes ingredients as inputs and returns recipe recommendations.
## Challenges we ran into
With all of us being completely new to cloud computing it took us around 4 hours to just get our environments set up and start coding. Once we had our environments set up, we were able to take advantage of the help here and worked our way through.
When it came to reading the receipt, it was difficult to isolate only the desired items. For the custom API, the most painstaking task was managing the HTTP requests. Because we were new to Azure, it took us some time to get comfortable with using it.
To tackle these tasks, we decided to all split up and tackle them one-on-one. Alex worked with scanning the receipt, Sarvan built the custom API, Richard integrated the voice recognition, and Maxwell did most of the app development on Android studio.
## Accomplishments that we're proud of
We're super stoked that we offer 3 completely different grocery input methods: Camera, Speech, and Manual Input. We believe that the UI we created is very engaging and presents the data in a helpful way. Furthermore, we think that the app's ability to provide recipe recommendations really puts us over the edge and shows how we took on a wide variety of tasks in a small amount of time.
## What we learned
For most of us this is the first application that we built - we learned a lot about how to create a UI and how to consider mobile functionality. Furthermore, this was also our first experience with cloud computing and APIs. Creating our Android application introduced us to the impact these technologies can have, and how simple it really is for someone to build a fairly complex application.
## What's next for greenEats
We originally intended this to be an all-purpose grocery-management app, so we wanted to have a feature that could allow the user to easily order groceries online through the app, potentially based off of food that would expire soon.
We also wanted to implement a barcode scanner, using the Barcode Scanner API offered by Google Cloud, thus providing another option to allow for a more user-friendly experience. In addition, we wanted to transition to Firebase Realtime Database to refine the user experience.
These tasks were considered outside of our scope because of time constraints, so we decided to focus our efforts on the fundamental parts of our app. | ## Inspiration
Meet Josh. It is his first hackathon. During his first hack, he notices that there is a lot of litter around his work area. From cans to water bottles to everything else in between, there was a lot of garbage. He realizes that hackathons are a great place to create something that can impact and change people, the world, or even themselves. So why not go to the source and help hackers **be** that change?
## What it does
Our solution is to create a mobile application that leverages the benefits of AI and Machine Learning (using Google Vision AI from Google Cloud) to enable the user to scan items and let the user know which bin (i.e. Blue, Green or Black) to sort the item to, as well as providing more information on the item.
However, this is not merely a simple item scanner. We also gamified the design to create a more encouraging experience for hackers to be more environmentally friendly during a hackathon. We saw that by incentivizing hackers with awards such as "Earthcoins" whenever they pick up and recycle their litter, they can redeem these "coins" (or credits) for things like limited edition stickers, food (Red Bull anyone?) or more swag. Ultimately, our goal is to create a collaborative and clean space for hackers to more effectively interact with each other, creating relationships and helping each other out, while also maintaining a better environment.
## How we built it
We first prototyped the solution in Adobe XD. Then, we attempted to implement the idea with Android Studio and Kotlin for the main app, as well as Google Vision AI for the Machine Learning models.
## Challenges we ran into
We ran into a lot of challenges. We wanted to train our own AI by feeding it images of various things such as coffee cups, coffee lids, and things that are recyclable and compostable. Although, we realized that training the AI would be a lengthy process. Another challenge was to efficiently gathering images to feed into the AI, we used Python Selenium to automate this but it required a lot of coding. In order to increase the success rate of the AI identifying certain things, we would have to take our own pictures to train it. With this in mind, we quickly shifted to Google Cloud's Vision AI API, though we got a rough version working but was not able to code everything in time to make a prototype.
One more challenge was the coding aspect of the project. Our coder had problems converting Java code into Kotlin, in addition to integrating the Google Cloud Vision API into the app via Android Studio.
We had further challenges with the idea that we had. How do we incentivize those who don't care much about the environment to use this app? What is the motivation to do so? We had to answer these questions and eventually used the idea of hackers wanting to create change.
## Accomplishments that we're proud of
We're proud of each other for attempting to build such an idea from scratch, especially since this was their first hackathon for two of our team members. Trying to build an app using AI and training it ourselves is a big idea to tackle, considering our limited exposure to machine learning and unfamiliarity with new languages. We would say that our accomplishment of creating an actual product, although it may have been incomplete, was a significant achievement within this hack.
## What we learned
We gained a lot of first-hand insight into how machine learning is complex and takes a while to implement. We learned that building an app with external APIs such as Google Vision AI can be difficult to do compared to simply creating a standalone app. We also learned how to automate web browser tasks with Python Selenium so that we could be much more efficient with training our AI.
The most important thing that we learned was from our mentors was regarding the "meta" of a hackathon. We learnt that we have to always seriously consider our audience, the scope of the problem, and the feasibility of the solution. The usability, motivation, and the design are all major factors that we realized are game changers as one certain thing can completely overturn our idea. We gathered a lot of insight from our mentors from their past experiences and w are inspired to use what we learned through DeltaHacks 6 in other hackathons.
## What's next for BottleBot
The aim for BottleBot's future is to fully integrate the Google Cloud Vision API into the app, as well as to finish and polish the app in Android Studio. | winning |
## About the Project
We are a bunch of amateur players who love playing chess, but over time we noticed that our improvement has become stagnant. Like many college students, we neither have the time nor the financial means to invest in professional coaching to take our game to the next level. This frustration sparked the idea behind **Pawn Up**—a project built to help players like us break through the plateau and improve their chess skills in their own time, without expensive coaches or overwhelming resources.
### What Inspired Us
As passionate chess players, we struggled with finding affordable and effective ways to improve. Chess can be an expensive hobby if you want to seek professional help or guidance. The available tools often lacked the depth we needed or came with hefty price tags. We wanted something that would provide personalized feedback, targeted training, and insights that anyone could access—regardless of their financial situation.
### How We Built It
We started by integrating **Lichess authentication** to fetch a user's game history, allowing them to directly analyze their own performance. With **Groq** and **Llama3.1**, we leveraged AI to categorize mistakes, generate feedback, and suggest relevant puzzles to help users train and improve. We also leveraged **ChromaDB** for vector search, along with **Gemini Pro** and **Gemini embeddings**.
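As a rough, minimal sketch of how the first two pieces of that stack might fit together — not our exact production code — the Python below fetches a user's last ten games from the Lichess API and asks a Llama 3.1 model on Groq to categorize the mistakes in one of them. The environment variables, the placeholder username, and the specific Groq model id are assumptions made for the example.

```python
# Sketch only: pull a user's last 10 Lichess games, then ask a Llama 3.1
# model on Groq to categorize the mistakes in one of them.
# Assumes LICHESS_TOKEN / GROQ_API_KEY env vars and the model id shown.
import os
import json
import requests
from groq import Groq


def fetch_recent_games(username: str, count: int = 10) -> list[dict]:
    """Return the user's most recent games as parsed ND-JSON records."""
    resp = requests.get(
        f"https://lichess.org/api/games/user/{username}",
        params={"max": count, "moves": "true"},
        headers={
            "Accept": "application/x-ndjson",
            "Authorization": f"Bearer {os.environ['LICHESS_TOKEN']}",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return [json.loads(line) for line in resp.text.strip().splitlines()]


def categorize_mistakes(game: dict) -> str:
    """Ask the LLM to label the player's mistakes with a training category."""
    client = Groq(api_key=os.environ["GROQ_API_KEY"])
    prompt = (
        "Here is a chess game in SAN moves:\n"
        f"{game.get('moves', '')}\n"
        "List the main mistakes and assign each one a training category "
        "(opening, tactics, endgame, time management)."
    )
    completion = client.chat.completions.create(
        model="llama-3.1-8b-instant",  # assumed Groq model id for the sketch
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content


if __name__ == "__main__":
    games = fetch_recent_games("some_lichess_user")  # placeholder handle
    print(categorize_mistakes(games[0]))
```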
Our project features four key components:
* **Analyze**: Fetches the user's last 10 games, provides analysis on each move, and visualizes a heatmap showing the performance of legal moves for each piece. Users can also interact with the game for deeper analysis.
* **Train**: Using AI, the system analyzes the user's past games and suggests categorized puzzles that target areas of improvement.
* **Search**: We created a vector database storing thousands of grandmaster games. Users can search for specific games and replay them with detailed analysis, just like with their own games.
* **Upload**: Users can upload their own chess games and perform the same analyses and training as with the **Search** feature.
### What We Learned
Throughout the development of **Pawn Up**, we gained a deeper understanding of AI-powered analysis and how to work with complex game datasets. We learned how to integrate chess engines, handle large amounts of data, and create user-friendly interfaces. Additionally, we explored how LLMs (large language models) can provide meaningful feedback and how vector databases can be used to store and retrieve massive datasets efficiently.
### Challenges We Faced
One of the main challenges we encountered was making the AI feedback meaningful for players across various skill levels. It was crucial that the system didn’t just provide generic advice but rather tailored suggestions that were both practical and actionable. Handling large amounts of chess data efficiently, without compromising on speed and usability, also posed a challenge. Building the vector database to store and search through grandmaster games was a particularly challenging but rewarding experience.
Despite these hurdles, we’re proud of what we’ve built. **Pawn Up** is the solution we wish we had when we first started hitting that plateau in our chess journeys, and we hope it can help others as well. | ## Inspiration
We took inspiration from our experience of how education can be hard. Studies conducted by EdX show that classes that teach quantitative subjects like Mathematics and Physics tend to receive lower ratings from students in terms of engagement and educational capacity than their qualitative counterparts. Of all advanced placement tests, AP Physics 1 receives on average the lowest scores year after year, according to College Board statistics. The fact is, across the board, many qualitative subjects are just more difficult to teach, a fact that is compounded by the isolation that came with remote working, as a result of the COVID-19 pandemic. So, we would like to find a way to promote learning in a fun way.
In keeping with the theme of Ctrl + Alt + Create, we took inspiration from another educational game from the history of computing. In 1991, Microsoft released a programming language and environment called QBASIC to teach first time programmers how to code. One of the demo programs they released with this development environment was a game called Gorillas, an artillery game where two players can guess the velocity and angle in order to try to hit their opponents. We decided to re-imagine this iconic little program from the 90s into a modern networked webgame, designed to teach students kinematics and projectile motion.
## What it does
The goal of our project was to create an educational entertainment game that helps students better engage with quantitative subjects. We wanted to give instructors a tool for both in-classroom and remote education, and to make learning more accessible for students attending remotely. Specifically, we focused on introductory high school physics, one of the most challenging subjects to tackle. Similar to Kahoot, teachers can set up a classroom or lobby for students to join from their devices, either as individuals or as a team. Once a competition begins, students use virtual tape measures to find distances in their surroundings, determining how far away their opponent is and the size of the obstacles they need to overcome. Based on these parameters, they choose an angle and calculate the initial velocity needed to fire their projectiles. Although there is no timer, students are incentivized to work quickly so they can fire off their projectiles before their opponents do. Students also have a limited number of shots, encouraging them to double-check their work and use each attempt wisely.
## How we built it
We built this web app using HTML, CSS, and Javascript. Our team split up into a Graphics Team and Logics Team. The Logics Team implemented the Kinematics and the game components of this modern recreation of QBASIC Gorillas. The Graphics Team created designs and programmed animations to represent the game logic as well as rendering the final imagery. The two teams came together to make sure everything worked well together.
## Challenges we ran into
We ran into many challenges which include time constraints and our lack of knowledge about certain concepts. We later realized we should have spent more time on planning and designing the game before splitting into teams because it caused problems in miscommunication between the teams about certain elements of the game. Due to time constraints, we did not have time to implement a multiplayer version of the game.
## Accomplishments that we're proud of
The game logically works in single player game. We are proud that we were able to logically implement the entire game, as well as having all the necessary graphics to show its functionality.
## What we learned
We learned the intricacies of game design and game development. Most of us have usually worked with more information-based websites and software technologies. We learned how to make a webapp game from scratch. We also improved our HTML/CSS/Javascript knowledge and our concepts of MVC.
## What's next for Gorillamatics
First we would like to add networking to this game to better meet the goals of increasing connectivity in the classroom as well as sparking a love for Physics in a fun way. We would also like to have better graphics. For the long term, we are planning on adding different obstacles to make different kinematics problems. | ## Inspiration
The inspiration behind Memory Melody stemmed from a shared desire to evoke nostalgic feelings and create a sense of connection through familiar tunes and visual memories. We've all experienced moments of trying to recall a forgotten show, the sensation of déjà vu, or the longing to revisit simpler times. These shared sentiments served as the driving force behind the creation of Memory Melody.
## What it does
Memory Melody is a unique project that combines a nostalgia playlist and image generator. It allows users to relive the past by generating personalized playlists filled with iconic songs from different eras and creating nostalgic images reminiscent of a bygone time.
## How we built it
The project was built using a combination of front-end and back-end technologies. The playlist feature leverages music databases and user preferences to curate playlists tailored to individual tastes and memories. The image generator utilizes advanced algorithms to transform modern photos into nostalgic visuals, applying vintage filters and overlays.
## Challenges we ran into
Building Memory Melody presented its share of challenges. Integrating various APIs for music data, ensuring seamless user interactions, and optimizing the image generation process were among the technical hurdles we faced. Additionally, maintaining a cohesive user experience that captured the essence of nostalgia required thoughtful design considerations.
## Accomplishments that we're proud of
We're proud to have created a platform that successfully brings together music and visuals to deliver a unique nostalgic experience. The seamless integration of playlist curation and image generation reflects our commitment to providing users with a comprehensive journey down memory lane.
## What we learned
Throughout the development of Memory Melody, we learned valuable lessons about API integrations, image processing, and user experience design. Collaborating on a project that aims to tap into emotions and memories reinforced the significance of thoughtful development and design decisions.
## What's next for Memory Melody
Looking ahead, we plan to expand Memory Melody by incorporating user feedback, adding more customization options for playlists and images, and exploring partnerships with content creators. We envision Memory Melody evolving into a platform that continues to bring joy and nostalgia to users worldwide. | partial |
## Inspiration
We love to travel and have found that we typically have to use multiple sources in the process of creating an itinerary. With Path Planner, the user can get their whole tripped planned for them in one place.
## What it does
Build you an itinerary for any destination you desire
## How we built it
Using react and nextjs we developed the webpage and used chatGpt API to pull an itinerary with out given prompts
## Challenges we ran into
Displaying a Map to help choose a destination
## Accomplishments that we're proud of
Engineering an efficient prompt that allows us to get detailed itinerary information and display it in a user friendly fashion.
## What we learned
How to use leaflet with react and next.js and utilizing leaflet to input an interactive map that can be visualized on our web pages.
How to engineer precise prompts using open AI's playground.
## What's next for Path Planner
Integrating prices for hotels, cars, and flights as well as having a login page so that you can store your different itineraries and preferences. This would require creating a backend as well. | ## Inspiration
There are 1.1 billion people without Official Identity (ID). Without this proof of identity, they can't get access to basic financial and medical services, and often face many human rights offences, due the lack of accountability.
The concept of a Digital Identity is extremely powerful.
In Estonia, for example, everyone has a digital identity, a solution was developed in tight cooperation between the public and private sector organizations.
Digital identities are also the foundation of our future, enabling:
* P2P Lending
* Fractional Home Ownership
* Selling Energy Back to the Grid
* Fan Sharing Revenue
* Monetizing data
* bringing the unbanked, banked.
## What it does
Our project starts by getting the user to take a photo of themselves. Through the use of Node JS and AWS Rekognize, we do facial recognition in order to allow the user to log in or create their own digital identity. Through the use of both S3 and Firebase, that information is passed to both our dash board and our blockchain network!
It is stored on the Ethereum blockchain, enabling one source of truth that corrupt governments nor hackers can edit.
From there, users can get access to a bank account.
## How we built it
Front End: | HTML | CSS | JS
APIs: AWS Rekognize | AWS S3 | Firebase
Back End: Node JS | mvn
Crypto: Ethereum
## Challenges we ran into
Connecting the front end to the back end!!!! We had many different databases and components. As well theres a lot of accessing issues for APIs which makes it incredibly hard to do things on the client side.
## Accomplishments that we're proud of
Building an application that can better the lives people!!
## What we learned
Blockchain, facial verification using AWS, databases
## What's next for CredID
Expand on our idea. | ## Inspiration
Ever sit through a long and excruciating video like a lecture or documentary? Is 2x speed too slow for youtube? TL;DW
## What it does
Just put in the link to the YouTube video you are watching, then wait as our Revlo and NLTK powered backend does natural language processing to give you the GIFs from GIPHY that best reflect the video!
## How I built it
The webapp takes in a link to a youtube video. We download the youtube video with pytube and convert the video into audio mp3 with ffmpeg. We upload the audio to Revspeech API to transcribe the video. Then, we used NLTK (natural language toolkit) for python in order to process the text. We first perform "part of speech" tagging and frequency detection of different words in order to identify key words in the video. In addition, we we identify key words from the title of the video. We pool these key words together in order to search for gifs on GIPHY. We then return these results on the React/Redux frontend of our app.
## Challenges I ran into
We experimented with different NLP algorithms to extract key words to search for gifs. One of which was RAKE keyword extraction. However, the algorithm relied on identifying uncommonly occurring words in the text, which did not line up well in finding relevant gifs.
tf-idf also did not work as well for our task because we had one document from the transcript rather than a library.
## Accomplishments that I'm proud of
We are proud of accomplishing the goal we set out to do. We were able to independently create different parts of the backend and frontend (NLP, flask server, and react/redux) and unify them together in the project.
## What I learned
We learned a lot about natural language processing and the applications it has with video. From the Rev API, we learned about how to handle large file transfer through multipart form data and to interface with API jobs.
## What's next for TLDW
Summarizing into 7 gifs (just kidding). We've discussed some of the limitations and bottlenecks of our app with the Rev team, who have told us about a faster API or a streaming API. This would be very useful to reduce wait times because our use case does not need to prioritize accuracy so much. We're also looking into a ranking system for sourced GIFs to provide funnier, more specific GIFs. | winning |
Plenty of people hope to travel and explore the world, and it's no surprise that traveling often ends up on bucket lists. However, flights are expensive, and trying to figure out when and to where airline tickets will be cheapest is exhausting.
Our project, Dream-Flight, hopes to make this process easier. By creating a visualization of flight prices and data, we hope to make planning those dream trips simpler. Dream-Flight allows the user to enter their departure location, as well as easily adjustable departure dates, travel duration, and budget. With just a few simple steps, users will see a mapped visualization of airports all over the world that offer flights that fit their travel criteria, marked by circles whose color reflects price point and size reflects destination popularity.
The flight visualization provides a crystal clear view of price points for flights to different locations at different times with just a quick glance. Whether it's a Spring Break vacation with friends, a trip to visit family, or an exploration abroad, finding a dream travel destination becomes easier with Dream-Flight!
Visit <https://dream-flights.herokuapp.com/main.html> to see Dream-Flight in action!
To check out our repo, please visit our GitHub: <https://github.com/PaliC/Dream-Flight> | ## Inspiration
Planning vacations can be hard. Traveling is a very fun experience but often comes with a lot of stress of curating the perfect itinerary with all the best sights to see, foods to eat, and shows to watch. You don't want to miss anything special, but you also want to make sure the trip is still up your alley in terms of your own interests - a balance that can be hard to find.
## What it does
explr.ai simplifies itinerary planning with just a few swipes. After selecting your destination, the duration of your visit, and a rough budget, explr.ai presents you with a curated list of up to 30 restaurants, attractions, and activities that could become part of your trip. With an easy-to-use swiping interface, you choose what sounds interesting or not to you, and after a minimum of 8 swipes, let explr.ai's power convert your opinions into a full itinerary of activities for your entire visit.
## How we built it
We built this app using React Typescript for the frontend and Convex for the backend. The app takes in user input from the homepage regarding the location, price point, and time frame. We pass the location and price range into the Google API to retrieve the highest-rated attractions and restaurants in the area. Those options are presented to the user on the frontend with React and CSS animations that allow you to swipe each card in a Tinder-style manner. Taking consideration of the user's swipes and initial preferences, we query the Google API once again to get additional similar locations that the user may like and pass this data into an LLM (using Together.ai's Llama2 model) to generate a custom itinerary for the user. For each location outputted, we string together images from the Google API to create a slideshow of what your trip would look like and an animated timeline with descriptions of the location.
## Challenges we ran into
Front-end and design require a LOT of skill. It took us quite a while to come up with our project, and we originally were planning on a mobile app, but it's also quite difficult to learn completely new languages such as swift along with new technologies all in a couple of days. Once we started on explr.ai's backend, we were also having trouble passing in the appropriate information to the LLM to get back proper data that we could inject back into our web app.
## Accomplishments that we're proud of
We're proud at the overall functionality and our ability to get something working by the end of the hacking period :') More specifically, we're proud of some of our frontend, including the card swiping and timeline animations as well as the ability to parse data from various APIs and put it together with lots of user input.
## What we learned
We learned a ton about full-stack development overall, whether that be the importance of Figma and UX design work, or how to best split up a project when every part is moving at the same time. We also learned how to use Convex and Together.ai productively!
## What's next for explr.ai
We would love to see explr.ai become smarter and support more features. explr.ai, in the future, could get information from hotels, attractions, and restaurants to be able to check availability and book reservations straight from the web app. Once you're on your trip, you should also be able to check in to various locations and provide feedback on each component. explr.ai could have a social media component of sharing your itineraries, plans, and feedback with friends and help each other better plan trips. | # Get the Flight Out helps you GTFO ASAP
## Inspiration
Constantly stuck in meetings, classes, exams, work, with nowhere to go, we started to think. What if we could just press a button, and in a few hours, go somewhere awesome? It doesn't matter where, as long as its not here. We'd need a plane ticket, a ride to the airport, and someplace to stay. So can I book a ticket?
Every online booking site asks for where to go, but we just want to go. What if we could just set a modest budget, and take advantage of last minute flight and hotel discounts, and have all the details taken care of for us?
## What it does
With a push of a button or the flick of an Apple Watch, we'll find you a hotel at a great location, tickets out of your preferred airport, and an Uber to the airport, and email you the details for reference. | partial |
## **opiCall**
## *the line between O.D. and O.K. is one opiCall away*
---
## What it does
Private AMBER alerts for either 911 or a naloxone carrying network
## How we built it
We used Twilio & Dasha AI to send texts and calls, and Firebase & Swift for the iOS app's database and UI itself.
## Challenges we ran into
We had lots of difficulties finding research on the topic, and conducting our own research due to the taboos and Reddit post removals we faced.
## What's next for opiCall
In depth research on First Nations' and opioids to guide our product further. | ## Inspiration
Imagine: A major earthquake hits. Thousands call 911 simultaneously. In the call center, a handful of operators face an impossible task. Every line is ringing. Every second counts. There aren't enough people to answer every call.
This isn't just hypothetical. It's a real risk in today's emergency services. A startling **82% of emergency call centers are understaffed**, pushed to their limits by non-stop demands. During crises, when seconds mean lives, staffing shortages threaten our ability to mitigate emergencies.
## What it does
DispatchAI reimagines emergency response with an empathetic AI-powered system. It leverages advanced technologies to enhance the 911 call experience, providing intelligent, emotion-aware assistance to both callers and dispatchers.
Emergency calls are aggregated onto a single platform, and filtered based on severity. Critical details such as location, time of emergency, and caller's emotions are collected from the live call. These details are leveraged to recommend actions, such as dispatching an ambulance to a scene.
Our **human-in-the-loop-system** enforces control of human operators is always put at the forefront. Dispatchers make the final say on all recommended actions, ensuring that no AI system stands alone.
## How we built it
We developed a comprehensive systems architecture design to visualize the communication flow across different softwares.

We developed DispatchAI using a comprehensive tech stack:
### Frontend:
* Next.js with React for a responsive and dynamic user interface
* TailwindCSS and Shadcn for efficient, customizable styling
* Framer Motion for smooth animations
* Leaflet for interactive maps
### Backend:
* Python for server-side logic
* Twilio for handling calls
* Hume and Hume's EVI for emotion detection and understanding
* Retell for implementing a voice agent
* Google Maps geocoding API and Street View for location services
* Custom-finetuned Mistral model using our proprietary 911 call dataset
* Intel Dev Cloud for model fine-tuning and improved inference
## Challenges we ran into
* Curated a diverse 911 call dataset
* Integrating multiple APIs and services seamlessly
* Fine-tuning the Mistral model to understand and respond appropriately to emergency situations
* Balancing empathy and efficiency in AI responses
## Accomplishments that we're proud of
* Successfully fine-tuned Mistral model for emergency response scenarios
* Developed a custom 911 call dataset for training
* Integrated emotion detection to provide more empathetic responses
## Intel Dev Cloud Hackathon Submission
### Use of Intel Hardware
We fully utilized the Intel Tiber Developer Cloud for our project development and demonstration:
* Leveraged IDC Jupyter Notebooks throughout the development process
* Conducted a live demonstration to the judges directly on the Intel Developer Cloud platform
### Intel AI Tools/Libraries
We extensively integrated Intel's AI tools, particularly IPEX, to optimize our project:
* Utilized Intel® Extension for PyTorch (IPEX) for model optimization
* Achieved a remarkable reduction in inference time from 2 minutes 53 seconds to less than 10 seconds
* This represents a 80% decrease in processing time, showcasing the power of Intel's AI tools
### Innovation
Our project breaks new ground in emergency response technology:
* Developed the first empathetic, AI-powered dispatcher agent
* Designed to support first responders during resource-constrained situations
* Introduces a novel approach to handling emergency calls with AI assistance
### Technical Complexity
* Implemented a fine-tuned Mistral LLM for specialized emergency response with Intel Dev Cloud
* Created a complex backend system integrating Twilio, Hume, Retell, and OpenAI
* Developed real-time call processing capabilities
* Built an interactive operator dashboard for data summarization and oversight
### Design and User Experience
Our design focuses on operational efficiency and user-friendliness:
* Crafted a clean, intuitive UI tailored for experienced operators
* Prioritized comprehensive data visibility for quick decision-making
* Enabled immediate response capabilities for critical situations
* Interactive Operator Map
### Impact
DispatchAI addresses a critical need in emergency services:
* Targets the 82% of understaffed call centers
* Aims to reduce wait times in critical situations (e.g., Oakland's 1+ minute 911 wait times)
* Potential to save lives by ensuring every emergency call is answered promptly
### Bonus Points
* Open-sourced our fine-tuned LLM on HuggingFace with a complete model card
(<https://huggingface.co/spikecodes/ai-911-operator>)
+ And published the training dataset: <https://huggingface.co/datasets/spikecodes/911-call-transcripts>
* Submitted to the Powered By Intel LLM leaderboard (<https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard>)
* Promoted the project on Twitter (X) using #HackwithIntel
(<https://x.com/spikecodes/status/1804826856354725941>)
## What we learned
* How to integrate multiple technologies to create a cohesive, functional system
* The potential of AI to augment and improve critical public services
## What's next for Dispatch AI
* Expand the training dataset with more diverse emergency scenarios
* Collaborate with local emergency services for real-world testing and feedback
* Explore future integration | ## Inspiration
I was walking in downtown Toronto on a bright summer day. I look around me at the view, and then the people, but something was wrong... the people seemed to be missing. The city seemed... Empty.
Everyone was looking down on their phone and living their own personal bubble.
I found it so ironic that technology was made to bring humans together, yet at the same time it pushes us so far apart.
Hence, my partner Matthew and I stepped forward to bring a solution to the table. We wanted to use the power of technology to bring people closer together than ever before.
## What it does
Imagine playing Manhunt, but on a city wide level and with people you don't know! You can start a game or join one from a list of local games in the area. Every 30 seconds, the location of every person in the game is displayed on map for a brief second. In this time your goal is either to hunt people down, or be hunted...
While playing, it is encouraged for you to get out there and talk to random people to figure out who your target may be. This encourages the social aspect of the game which is to meet new people. Once you have caught the person you were chasing, TAKE A SELFIE with them and SHARE IT on your Huntr Gallery. The more people you catch, the larger your gallery.
Make sure to also get some small talk into there as well!
## How we built it
The mobile application itself was built in React Native with the server and REST API implemented in Ruby on Rails.
## Challenges we ran into
React Native as a whole was a challenge due to the sheer number of errors we encountered and head-bashing we endured.
## Accomplishments that we're proud of
The overall idea itself is really creative way to bring people and communities together in the world. We are proud of moving humanity one step closer to each other together.
## What we learned
React native is a pain to work with.
## What's next for Human Huntr
* Complete the game
* Multiple game modes including
+ Huntr the Flag
+ Hunt and Seek
+ The Huntr Games
* Monetization models:
+ Corporate sponsored games with prizes, in return, players play in specific area (near a shopping mall for example)
+ HuntrSports (Annual massive scale games occurring in huge cities across the world, people rank in leaderboards, pay a small fee to play) | winning |
## Inspiration
It's too easy today to be focused on only the bad news and events of the world. In addition, with the ongoing COVID-19 pandemic, it is unfortunately common to feel increasingly isolated from others. That's why we wanted to create Memo(rable)—a website that allows people to feel connected to others by spreading positivity through encouraging messages!
## What it does
Memo(rable) is a website for people to send and receive encouraging messages. Users can send a positive message, as well as receive an encouraging message written by another user. Users must log in to use Memo(rable), and messages can be sent in with the user's name included in order to make the messages seem more personal and alive. However, there is also an option for the user to remain anonymous if they wish to do so. Once a message is sent in, it gets added to a database in realtime, and another user can receive a randomly chosen encouraging message from the database. There is also a page where users can view, edit, or delete the messages that they have sent into the database.
## How we built it
Memo(rable) was built with Google's Cloud Shell Editor. In order to authenticate a user and save all of the messages inputted, Firebase's realtime database was used. The pages were built with HTML, CSS, and JavaScript.
## Challenges we ran into
It was challenging to figure out how to use Cloud Shell Editor in conjunction with Firebase's realtime database. In addition, the application of CSS was difficult to navigate and implement.
## Accomplishments that we're proud of
We are proud that we managed to create a fully functional product within the given time span. Since a large portion of our team was inexperienced with Cloud Shell Editor and Firebase, learning how to use these tools was a huge accomplishment to us. We are also proud that we managed to implement all of our basic functionalities, and we also had enough time to implement more functionalities than we had originally planned for, such as the ability to view the history of the messages that the user has sent, as well as the ability to edit and delete those messages.
## What we learned
We learned more about how to use the Cloud Shell Editor and Firebase's realtime database to store data such as inputted messages and user login information.
## What's next for Memo(rable)
Next steps for Memo(rable) would be the ability to make Memo(rable) available to the public without needing to manually authenticate individual accounts. We would also like to implement a feature that senses and flags negative language that goes against our code of conduct, in order to ensure that Memo(rable) can be as positive and safe as possible. | ## Inspiration
When I was 14 years old, I, Carlos, faced one of the most terrifying moments of my life -- my friend attempting suicide. Fortunately, nothing bad happened. But it is those moments when you contemplate the frightening nature of suicide.
After dealing with this situation, and through the years, I heard a lot of stories from people feeling lonely. So, our team developed memo.
## What it does
memo is an AI mental health advisor that helps to prevent suicide and makes sure that you do not ever feel alone. memo uses natural language processing and users communicate with memo using their voices. If users are at risk of committing suicide, memo calls the user's contacts.
## How we built it
Memo is built using the Open AI API. We used this API to generate human like responses, converting speech to text and test to speech. We used the Django framework to manage all the backend. We built a minimalist modern interface using only HTML, CSS and JavaScript. We also used Twilio API to manage calls when risky situations are detected.
## Challenges we ran into
During the develop we faced a lot of challenges comprising converting speech to text, login to memo from the homepage, and having memo make phone calls. But the most challenging of all was connecting backend and frontend. As unexperienced developers, we didn't know even where to start. Some of us had previous experience with Django, but only the basic, so connecting the OpenAI generated responses to the interface and receiving the recording from the frontend to the backend was a pretty difficult challenge.
## Accomplishments that we're proud of
memo is a website compatible with mobile devices, regardless of screen size or operating software. memo is also free of charge and can talk to people using voice in several languages. This ensures that memo reaches a wide range of audiences. We are really proud to be building this, and we really think that this project can make a huge different. For us the worst thing that can happen to a depressed person is to be alone, so we provide something to talk to and take care of you.
## What we learned
Most importantly, we learnt to manage temporary files to make calls and generate accurate responses. This project also brought us a lot of new knowledge, such managing APIs, making better webpages, and building projects from scratch.
## What's next for memo
Memo is meant to be something helpful. We want to focus on improving both the backend and frontend of our project, especially focusing on improving the speed of the responses and the intuitivity of the page. We also want to partner up with psychologists and psychiatrists so users can get professional medical advice from them. | ## Inspiration 🌟
Nostalgia comes through small items that trigger our sweet old memories. It reminds us of simpler times when a sticky note was all we needed to remember something important and have fun. Those satisfying moments of peeling the last layer of the memo pad, the embarrassing history of putting online passwords around the computer, and the hilarious actions of putting a thousand window XP sticky notes on the home screen are tiny but significant memories. In today's digital age, we wanted to bring back that simplicity and tangible feeling of jotting down notes on a memo sticker, but with a twist. ScribbleSync is our homage to the past, reimagined with today's technology to enhance and organize our daily lives.
## What it does 📝
ScribbleSync takes the traditional office memo sticker into the digital era. It's an online interface where you can effortlessly scribble notes, ideas, or events. These digital sticky notes are then intelligently synced with your Google Calendar with Large Language Models and Computer Vision, turning your random notes into scheduled commitments and reminders, ensuring that the essence of the physical memo lives on in a more productive and organized digital format.
## How we built it 🛠️
We built ScribbleSync using a combination of modern web technologies and APIs. The front-end interface was designed with HTML/CSS/JS for simplistic beauty. For the backend, we used Flask mainly and integrated Google Calendar API to sync the notes with the user’s calendar. We use state-of-the-art models from Google Vision, Transformer models from Hugginface for image analysis and fine-tuned Cohere models with our custom dataset in a semi-supervised manner to achieve textual classification of tasks and time relation.
## Challenges we ran into 😓
One of our biggest challenges was mainly incorporating multiple ML models and making them communicate with the front end and back end. Meanwhile, all of us are very new to hackathons so we are still adapting to the high-intensity coding and eventful environment.
## Accomplishments that we're proud of 🏆
We're incredibly proud of developing a fully functional prototype within the limited timeframe. We managed to create an intuitive, cute UI and the real-time sync feature works and communicates flawlessly. Overcoming the technical challenges and seeing our idea come to life has been immensely rewarding.
## What we learned 📚
Throughout this journey, we've learned a great deal about API integration, real-time data handling, and creating user-centric designs. We also gained valuable insights into teamwork and problem-solving under pressure. Individually, we tried tech stacks that were unfamiliar to most of us such as Cohere and Google APIs, it is a long but fruitful process and we are now confident to explore other APIs provided by different companies.
## What's next for ScribbleSync 🚀
Our next step is to add practical and convenient functions such as allowing the sticky notes to set up drafts for email, schedules in Microsoft Teams and create Zoom links. We could also add features such as sticking to the home screen to enjoy those fun features from sticky notes in the good old days. | losing |
## Inspiration
As a startup founder, it is often difficult to raise money, but the amount of equity that is given up can be alarming for people who are unsure if they want the gasoline of traditional venture capital. With VentureBits, startup founders take a royalty deal and dictate exactly the amount of money they are comfortable raising. Also, everyone can take risks on startups as there are virtually no starting minimums to invest.
## What it does
VentureBits allows consumers to browse a plethora of early stage startups that are looking for funding. In exchange for giving them money anonymously, the investors will gain access to a royalty deal proportional to the amount of money they've put into a company's fund. Investors can support their favorite founders every month with a subscription, or they can stop giving money to less promising companies at any time. VentureBits also allows startup founders who feel competent to raise just enough money to sustain them and work full-time as well as their teams without losing a lot of long term value via an equity deal.
## How we built it
We drew out the schematics on the whiteboards after coming up with the idea at YHack. We thought about our own experiences as founders and used that to guide the UX design.
## Challenges we ran into
We ran into challenges with finance APIs as we were not familiar with them. A lot of finance APIs require approval to use in any official capacity outside of pure testing.
## Accomplishments that we're proud of
We're proud that we were able to create flows for our app and even get a lot of our app implemented in react native. We also began to work on structuring the data for all of the companies on the network in firebase.
## What we learned
We learned that finance backends and logic to manage small payments and crypto payments can take a lot of time and a lot of fees. It is a hot space to be in, but ultimately one that requires a lot of research and careful study.
## What's next for VentureBits
We plan to see where the project takes us if we run it by some people in the community who may be our target demographic. | ## Inspiration
In a world where finance is extremely important, everyone needs access to **banking services**. Citizens within **third world countries** are no exception, but they lack the banking technology infrastructure that many of us in first world countries take for granted. Mobile Applications and Web Portals don't work 100% for these people, so we decided to make software that requires nothing more than a **cellular connection to send SMS messages** in order to operate. This resulted in our hack, **UBank**.
## What it does
**UBank** allows users to operate their bank accounts entirely through **text messaging**. Users can deposit money, transfer funds between accounts, transfer accounts to other users, and even purchases shares of stock via SMS. In addition to this text messaging capability, UBank also provides a web portal so that when our users gain access to a steady internet connection or PC, they can view their financial information on a more comprehensive level.
## How I built it
We set up a backend HTTP server in **Node.js** to receive and fulfill requests. **Twilio** with ngrok was used to send and receive the text messages through a webhook on the backend Node.js server and applicant data was stored in firebase. The frontend was primarily built with **HTML, CSS, and Javascript** and HTTP requests were sent to the Node.js backend to receive applicant information and display it on the browser. We utilized Mozilla's speech to text library to incorporate speech commands and chart.js to display client data with intuitive graphs.
## Challenges I ran into
* Some team members were new to Node.js, and therefore working with some of the server coding was a little complicated. However, we were able to leverage the experience of other group members which allowed all of us to learn and figure out everything in the end.
* Using Twilio was a challenge because no team members had previous experience with the technology. We had difficulties making it communicate with our backend Node.js server, but after a few hours of hard work we eventually figured it out.
## Accomplishments that I'm proud of
We are proud of making a **functioning**, **dynamic**, finished product. It feels great to design software that is adaptable and begging for the next steps of development. We're also super proud that we made an attempt at tackling a problem that is having severe negative effects on people all around the world, and we hope that someday our product can make it to those people.
## What I learned
This was our first using **Twillio** so we learned a lot about utilizing that software. Front-end team members also got to learn and practice their **HTML/CSS/JS** skills which were a great experience.
## What's next for UBank
* The next step for UBank probably would be implementing an authentication/anti-fraud system. Being a banking service, it's imperative that our customers' transactions are secure at all times, and we would be unable to launch without such a feature.
* We hope to continue the development of UBank and gain some beta users so that we can test our product and incorporate customer feedback in order to improve our software before making an attempt at launching the service. | ## Inspiration
Amid the fast-paced rhythm of university life at Waterloo, one universal experience ties us all together: the geese. Whether you've encountered them on your way to class, been woken up by honking at 7 am, or spent your days trying to bypass flocks of geese during nesting season, the geese have established themselves as a central fixture of the Waterloo campus. How can we turn the staple bird of the university into a asset? Inspired by the quintessential role the geese play in campus life, we built an app to integrate our feather friends into our academic lives. Our app, Goose on the Loose allows you to take pictures of geese around the campus and turn them into your study buddies! Instead of being intimidated by the fowl fowl, we can now all be friends!
## What it does
Goose on the Loose allows the user to "capture" geese across the Waterloo campus and beyond by snapping a photo using their phone camera. If there is a goose in the image, it is uniquely converted into a sprite added to the player's collection. Each goose has its own student profile and midterm grade. The more geese in a player's collection, the higher each goose's final grade becomes, as they are all study buddies who help one another. The home page also contains a map where the player can see their own location, as well as locations of nearby goose sightings.
## How we built it
This project is made using Next.js with Typescript and TailwindCSS. The frontend was designed using Typescript React components and styled with TailwindCSS. MongoDB Atlas was used to store various data across our app, such as goose data and map data. We used the @React Google Maps library to integrate the Google maps display into our app. The player's location data is retrieved from the browser. Cohere was used to help generate names and quotations assigned to each goose. OpenAI was used for goose identification as well as converting the physical geese into sprites. All in all, we used a variety of different technologies to power our app, many of which we were beginners to.
## Challenges we ran into
We were very unfamiliar with Cohere and found ourselves struggling to use some of its generative AI technologies at first. After playing around with it for a bit, we were able to get it to do what we wanted, and this saved us a lot of head pain.
Another major challenge we underwent was getting the camera window to display properly on a smartphone. While it worked completely fine on computer, only a fraction of the window would be able to display on the phone and this really harmed the user experience in our app. After hours of struggle, debugging, and thinking, we were able to fix this problem and now our camera window is very functional and polished.
One severely unexpected challenge we went through was one of our computers' files corrupting. This caused us HOURS of headache and we spent a lot of effort in trying to identify and rectify this problem. What made this problem worse was that we were at first using Microsoft VS Code Live Share with that computer happening to be the host. This was a major setback in our initial development timeline and we were absolutely relieved to figure out and finally solve this problem.
A last minute issue that we discovered had to do with our Cohere API. Since the prompt did not always generate a response within the required bounds, looped it until it landed in the requirements. We fixed this by setting a max limit on the amount of tokens that could be used per response.
One final issue that we ran into was the Google Maps API. For some reason, we kept running into a problem where the map would force its centre to be where the user was located, effectively prohibiting the user from being able to view other areas of the map.
## Accomplishments that we're proud of
During this hacking period, we built long lasting relationships and an even more amazing project. There were many things throughout this event that were completely new to us: various APIs, frameworks, libraries, experiences; and most importantly: the sleep deprivation. We are extremely proud to have been able to construct, for the very first time, a mobile friendly website developed using Next.js, Typescript, and Tailwind. These were all entirely new to many of our team and we have learned a lot about full stack development throughout this weekend. We are also proud of our beautiful user interface. We were able to design extremely funny, punny, and visually appealing UIs, despite this being most of our's first time working with such things. Most importantly of all, we are proud of our perseverance; we never gave up throughout the entire hacking period, despite all of the challenges we faced, especially the stomach aches from staying up for two nights straight. This whole weekend has been an eye-opening experience, and has been one that will always live in our hearts and will remind us of why we should be proud of ourselves whenever we are working hard.
## What we learned
1. We learned how to use many new technologies that we never laid our eyes upon.
2. We learned of a new study spot in E7 that is open to any students of UWaterloo.
3. We learned how to problem solve and deal with problems that affected the workflow; namely those that caused our program to be unable to run properly.
4. We learned that the W store is open on weekend.
5. We learned one another's stories!
## What's next for GooseOnTheLoose
In the future, we hope to implement more visually captivating transitional animations which will really enhance the UX of our app. Furthermore, we would like to add more features surrounding the geese, such as having a "playground" where the geese can interact with one another in a funny and entertaining way. | winning |
## Inspiration
Walk from home due to the pandemic more or less decreases our productivity. Hence, we need some boosters.
## What it does
Players can rasie cat in the game by finishing their checklists everyday.
## How we built it
Via sacrificing sleep time and using react.js.
## Challenges we ran into
Lack of time for us to learn from scratch and do the implementation.
## Accomplishments that we're proud of
We finally made something.
## What we learned
More technique in javascript development and REACT.
## What's next for Checklist Cat
1. Add more animation
2. Enable recording the players' progress
3. Combine it with a scheduler | ## Inspiration
Reflecting on 2020, we were challenged with a lot of new experiences, such as online school. Hearing a lot of stories from our friends, as well as our own experiences, doing everything from home can be very distracting. Looking at a computer screen for such a long period of time can be difficult for many as well, and ultimately it's hard to maintain a consistent level of motivation. We wanted to create an application that helped to increase productivity through incentives.
## What it does
Our project is a functional to-do list application that also serves as a 5v5 multiplayer game. Players create a todo list of their own, and each completed task grants "todo points" that they can allocate towards their attributes (physical attack, physical defense, special attack, special defense, speed). However, tasks that are not completed serve as a punishment by reducing todo points.
Once everyone is ready, the team of 5 will be matched up against another team of 5 with a preview of everyone's stats. Clicking "Start Game" will run the stats through our algorithm that will determine a winner based on whichever team does more damage as a whole. While the game is extremely simple, it is effective in that players aren't distracted by the game itself because they would only need to spend a few minutes on the application. Furthermore, a team-based situation also provides incentive as you don't want to be the "slacker".
## How we built it
We used the Django framework, as it is our second time using it and we wanted to gain some additional practice. Therefore, the languages we used were Python for the backend, HTML and CSS for the frontend, as well as some SCSS.
## Challenges we ran into
As we all worked on different parts of the app, it was a challenge linking everything together. We also wanted to add many things to the game, such as additional in-game rewards, but unfortunately didn't have enough time to implement those.
## Accomplishments that we're proud of
As it is only our second hackathon, we're proud that we could create something fully functioning that connects many different parts together. We spent a good amount of time on the UI as well, so we're pretty proud of that. Finally, creating a game is something that was all outside of our comfort zone, so while our game is extremely simple, we're glad to see that it works.
## What we learned
We learned that game design is hard. It's hard to create an algorithm that is truly balanced (there's probably a way to figure out in our game which stat is by far the best to invest in), and we had doubts about how our application would do if we actually released it, if people would be inclined to play it or not.
## What's next for Battle To-Do
Firstly, we would look to create the registration functionality, so that player data can be generated. After that, we would look at improving the overall styling of the application. Finally, we would revisit game design - looking at how to improve the algorithm to make it more balanced, adding in-game rewards for more incentive for players to play, and looking at ways to add complexity. For example, we would look at implementing a feature where tasks that are not completed within a certain time frame leads to a reduction of todo points. | ## Inspiration
Our game stems from the current global pandemic we are grappling with and the importance of getting vaccinated. As many of our loved ones are getting sick, we believe it is important to stress the effectiveness of vaccines and staying protected from Covid in a fun and engaging game.
## What it does
An avatar runs through a school terrain while trying to avoid obstacles and falling Covid viruses. The player wins the game by collecting vaccines and accumulating points, successfully dodging Covid, and delivering the vaccines to the hospital.
Try out our game by following the link to github!
## How we built it
After brainstorming our game, we split the game components into 4 parts for each team member to work on. Emily created the educational terrain using various assets, Matt created the character and its movements, Veronica created the falling Covid virus spikes, and Ivy created the vaccines and point counter. After each of the components were made, we brought it all together, added music, and our game was completed.
## Challenges we ran into
As all our team members had never used Unity before, there was a big learning curve and we faced some difficulties while navigating the new platform.
As every team member worked on a different scene on our Unity project, we faced some tricky merge conflicts at the end when we were bringing our project together.
## Accomplishments that we're proud of
We're proud of creating a fun and educational game that teaches the importance of getting vaccinated and avoiding Covid.
## What we learned
For this project, it was all our first time using the Unity platform to create a game. We learned a lot about programming in C# and the game development process. Additionally, we learned a lot about git management through debugging and resolving merge conflicts.
## What's next for CovidRun
We want to especially educate the youth on the importance of vaccination, so we plan on introducing the game into k-12 schools and releasing the game on steam. We would like to add more levels and potentially have an infinite level that is procedurally generated. | partial |
## Inspiration
Many people want to stay in shape and hit the gym to work toward the physique they want. However, most don't, because creating a unique workout plan is a hassle: it is time consuming, and generic plans aren't tailored to a specific body type, which leads to poor outcomes. What if you could build a plan that focuses on ***you***? Specifically, your body type, your schedule, the workouts you want to do and your dietary restrictions? Meet Gain+, where we create the fastest way to make big gains!
## What it does
Gain+ creates a custom workout and meal plan based on what you want to look like in the future (e.g. in 12 weeks). You interact with an AI personal trainer to discuss your goal. First, you upload two pictures: one of what you look like now and another of the physique you hope to approach by the end of the plan. Then, you answer any questions your coach has before it generates a full workout and meal plan. The workout plan is based on the number of days you want to go to the gym, while the meal plan covers every day. You can also add your own workouts and meals before finalizing your plan.
## How we built it
For our website, we built the frontend in **React and Tailwind CSS**, while **Firebase** provides our backend and database for storing chats and users. The body-type classifier is a custom model created from a [Kaggle Dataset](https://www.kaggle.com/datasets/trainingdatapro/human-segmentation-dataset) and trained on **Roboflow**; it classifies images by gender, the three main body types (ectomorph, mesomorph and endomorph) and their various subtypes. The best class from that model is then sent to our chatbot, which was trained and deployed with **Databricks Mosaic AI** and is based on **LLaMA 3.1**.
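A minimal sketch of that classify-then-prompt flow is shown below. The model id, endpoint path, response shape and prompt wording are placeholders for illustration, not the project's actual values.

```ts
// Illustrative only: model id, endpoint details and field names are assumptions.
const ROBOFLOW_URL = "https://classify.roboflow.com/body-type-model/1"; // hypothetical model/version

async function classifyBodyType(imageBase64: string, apiKey: string): Promise<string> {
  // Roboflow's hosted inference accepts a base64-encoded image in the request body.
  const res = await fetch(`${ROBOFLOW_URL}?api_key=${apiKey}`, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: imageBase64,
  });
  const data = await res.json();
  // Assumed response shape: { predictions: [{ class: string, confidence: number }, ...] }
  const best = [...data.predictions].sort((a, b) => b.confidence - a.confidence)[0];
  return best.class; // e.g. "male-mesomorph"
}

// The best classes for the "now" and "goal" photos are folded into the prompt
// sent to the LLaMA 3.1 coach served through Databricks.
function buildCoachPrompt(current: string, goal: string, daysPerWeek: number): string {
  return [
    `The user's current body type is ${current} and their target physique is ${goal}.`,
    `They can train ${daysPerWeek} days per week.`,
    "Ask any clarifying questions, then produce a weekly workout plan and a daily meal plan.",
  ].join(" ");
}
```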
## Challenges we ran into
One of the main challenges was integrating the frontend, backend, and AI/ML components. This was a fairly large project built on a lot of new technologies we had little to no experience with. For example, a huge CORS issue plagued our project in the final hours of hacking; we worked through it with some help from the internet and from our mentors, Paul and Sammy.
## Accomplishments that we're proud of
This was Kersh and Mike's first time working with Databricks and Ayan's first time using Firebase at a more professional scale. Shipping a finished project with these technologies despite having little to no prior experience was a big accomplishment for all of us.
## What we learned
We learned a lot throughout this hackathon: working with external APIs for LLMs and Databricks, gaining hands-on experience with prompt engineering, and adapting to the unexpected roadblocks we hit along the way.
## What's next for Gain+
The next steps are to improve the UI and UX and to implement new features, including a dedicated mode, behind a separate toggle, for people training toward bodybuilding or powerlifting meets.
We message each other every day but don't know the sheer distances that these messages have to travel. As our world becomes more interconnected, we wanted a way to appreciate the journeys all of our communications go through around the globe.
We also wanted to democratize the ability to fact-check news by going directly to the source: the people of the country in question. This would allow us to tackle the problem of fake news while also jumpstarting constructive conversations between serendipitous pen pals worldwide.
## What it does
Glonex is a visualization of the globe and the messages going around it. You can search the globe for a city or area you are interested in contacting, and send a message there. Your message joins the flight of paper airplanes orbiting the Earth until it touches down in its target destination where other users can pick up your letter, read it, and then send one back.
We tackled our news objective by adding the ability to see news in other areas of the world at a click, and then ask questions to the inhabitants right afterward.
You can also donate to our mission using the Checkbook API to keep the site running.
## How we built it
We used Svelte, a performant JavaScript framework that works as a compiler and updates the DOM directly instead of diffing a Virtual DOM (like React/Vue/Angular), to *increase performance* and drastically *decrease JS bundle size*. This mattered because we knew we would be using the Esri ArcGIS API to visualize the globe, which is already demanding and would become quite slow if the JavaScript framework itself took up too much memory. We then got the Esri ArcGIS SceneView for the globe working, using a custom basemap from NASA that shows the cities of the world at night to create a pleasing aesthetic.
We wrote code to calculate a geodesic around the Earth between the user's current location and where they click, and then interpolate the elevation along it to create an arc over the world.
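A condensed sketch of that arc construction, assuming the ArcGIS API for JavaScript (`@arcgis/core`); the densification step and the sine-shaped elevation profile are our illustration of the idea, with peak height and segment length chosen arbitrarily.

```ts
import Polyline from "@arcgis/core/geometry/Polyline";
import Graphic from "@arcgis/core/Graphic";
import * as geometryEngine from "@arcgis/core/geometry/geometryEngine";

// Build an arc between two [lon, lat] points: densify the great-circle line,
// then lift each vertex with a sine profile so the line bows above the globe.
function buildArc(from: [number, number], to: [number, number], peakMeters = 1_500_000): Graphic {
  const flat = new Polyline({ paths: [[from, to]], spatialReference: { wkid: 4326 } });
  // Follow the great circle instead of a straight chord between the endpoints.
  const geodesic = geometryEngine.geodesicDensify(flat, 100, "kilometers") as Polyline;

  const verts = geodesic.paths[0];
  const lifted = verts.map(([x, y], i) => {
    const t = i / (verts.length - 1);                  // 0 at the sender, 1 at the destination
    return [x, y, Math.sin(t * Math.PI) * peakMeters]; // highest mid-journey
  });

  const arc = new Polyline({ paths: [lifted], hasZ: true, spatialReference: { wkid: 4326 } });
  return new Graphic({
    geometry: arc,
    symbol: { type: "line-3d", symbolLayers: [{ type: "line", size: 2 }] } as any,
  });
}
```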
Next came the Firebase Firestore integration, which lets you send a message to any geopoint on click. Each message becomes a paper airplane whose flight is defined by its creation timestamp and the timestamp at which it should arrive at its destination. Client-side, we interpolate between the start and end locations to compute each airplane's position and, on every timestep, move it toward its destination by changing the geometry of its graphic in the GraphicsLayer.
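A rough sketch of that per-frame update, with assumed Firestore field names; the altitude term simply keeps the plane near the arched path.

```ts
import Point from "@arcgis/core/geometry/Point";
import Graphic from "@arcgis/core/Graphic";

interface PlaneDoc {
  fromLon: number; fromLat: number;  // assumed Firestore field names
  toLon: number;   toLat: number;
  sentAt: number;  arriveAt: number; // epoch milliseconds
}

// Called for every airplane graphic on each animation frame.
function updatePlane(graphic: Graphic, doc: PlaneDoc, now = Date.now()): void {
  // Fraction of the journey completed, clamped so landed planes stay put.
  const t = Math.min(1, Math.max(0, (now - doc.sentAt) / (doc.arriveAt - doc.sentAt)));
  graphic.geometry = new Point({
    longitude: doc.fromLon + (doc.toLon - doc.fromLon) * t,
    latitude: doc.fromLat + (doc.toLat - doc.fromLat) * t,
    z: Math.sin(t * Math.PI) * 800_000, // assumed cruising-altitude profile
    spatialReference: { wkid: 4326 },
  });
}
```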
To get the news for a specific location, we created an algorithm that scrapes Google News with a search query related to that location. We developed this using Node.js and Express.js and hosted it as a separate web service. The front end calls our news API whenever the user clicks on a location on the globe; the API then finds the news articles relevant to that location and serves them back to the front end.
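A stripped-down sketch of that service; the Google News selectors are assumptions and tend to change, so treat them as illustrative.

```ts
import express from "express";
import puppeteer from "puppeteer";

const app = express();

// GET /news?q=Waterloo  ->  [{ title, link }, ...]
app.get("/news", async (req, res) => {
  const query = String(req.query.q ?? "");
  const browser = await puppeteer.launch({ headless: true });
  try {
    const page = await browser.newPage();
    await page.goto(`https://news.google.com/search?q=${encodeURIComponent(query)}`, {
      waitUntil: "networkidle2",
    });
    // Selector is an assumption: Google News markup changes frequently.
    const articles = await page.$$eval("article a", (links) =>
      links
        .map((a) => ({ title: a.textContent ?? "", link: (a as HTMLAnchorElement).href }))
        .filter((item) => item.title.trim().length > 0)
        .slice(0, 10)
    );
    res.json(articles);
  } finally {
    await browser.close();
  }
});

app.listen(3001);
```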
## Challenges we ran into
The hardest part of the frontend was creating the arcs that span the Earth whenever you click a new position. It took a lot of math to realize that we could take a segment of a geodesic (great circle) of the Earth and construct a 3D polyline whose vertex elevations are interpolated by their distance to the target destination, forming the arc shape.
Another challenge was the creative use of the Esri ArcGIS API to animate the paper airplanes. Graphics in a GraphicsLayer aren't really meant to move, but we needed them to for this project. We did it efficiently by calculating each airplane's heading at creation time (when the message is sent to Firestore), and then, on each frame, adding that heading to its geometry (position), multiplied by a speed derived from the difference between its start and arrival times. With that code we were able to get the airplanes orbiting the Earth.
Another challenge we faced was running the news search. Most existing news APIs we found only report news by country. To get a more detailed view of local news by city, we ended up scraping Google News search for our articles. Our implementation uses Puppeteer (which launches Chromium), so it could not run in the browser alongside our other front-end code. We got around this by creating a separate web service hosted elsewhere, so that our front end could call the API without having to worry about browser compatibility.
## Accomplishments that we're proud of
* You can see the news from anywhere in the world just by clicking a spot on the globe, then send messages and chat with people in that part of the world.
* The globe frontend runs smoothly, with different brightness levels across the world and the message airplanes flying all around the globe.
* We figured out the math to create the arcs that span the Earth whenever you click a new position, built the moving paper airplanes, and completed many other front-end features.
## What we learned
* Svelte and how to implement all of the APIs that we used (Esri ArcGIS API and Checkbook API) using Svelte.
* Integrate all of our back-ends and databases using Firebase
* Using Math to figure out the front-end part of this web-app.
* Being able to web scrap the news and make our front-ends could call the news API to search the news all over the globe
## What's next for Glonex
We definitely would continue working on this project since we believe that this is a very useful thing if a web application like this exists in this world. Not only that, we could develop more into the mobile app version so the user can use it easier. | ## Inspiration
Living a healthy and balanced life style comes with many challenges. There were three primary challenges we sought to resolve with this hack.
* the knowledge barrier | *“I want to work out, but I don’t exactly know what to do”*
* the schedule barrier | *“I don’t know when to workout or for how long. I don't even know how long the workout I created is going to take.”*
* the motivation barrier | *“I don’t feel like working out because I’m tired.”*
Furthermore, sometimes you feel awful and don’t wish to work out. Sometimes you work out anyways and feel better. Sometimes you work out anyways and feel worse and suffer the next day. How can we optimize this sort of flow to get people consistently feeling good and wanting to workout?
## What it does
That's where Smart Fit comes in. This AI based web application takes input from the user such as time availability, focus, and mood to intelligently generate workouts and coach the user to health & happiness. The AI applies **sentimental analysis** using **AWS API**. Using keyword analysis and inputs from the user this AI predicts the perfectly desired workout, which fits into their availability. Once the AI generates the workout for the user, the user can either (1) schedule the workout for later or (2) workout now. **Twillio** is used to send the workout to the users phone and schedules the workouts. The application uses **facial emotional detection** through AWS to analyze the users' facial expression and provides the user with real-time feedback while they're exercising.
## How we built it
The website and front-end was built using HTML5, and styled using CSS, Adobe Photoshop, and Figma. Javascript (both vanilla and jQuery) was used to connect most HTML elements to our backend.
The backend was built as a Python Flask application. While responsible for serving up static assets, the backend was also in charge of crucial backend processes such as the AI engine utilized to intelligently generate workouts and give real-time feedback as well as the workout scheduler. We utilized technology such as AWS AI Services (Comprehend and Rekognition) and the Twilio API.
## Challenges we ran into
We found that the most difficult portion of our project were the less technical aspects: defining the exact problem we wanted to solve, deciding on features of our app, and narrowing scope enough to produce a minimum viable product. We resolved this be communicating extensively; in fact, we argued numerous times over the best design. Because of discussions like this, we were able to create a better product.
## Accomplishments that we're proud of
* engineering a workout scheduler and real-time feedback engine. It was amazing to be able to make an AI application that uses real-time data to give real-time feedback. This was a fun challenge to solve because of all of the processes communicating concurrently.
* becoming an extremely effective team and great friends, despite not knowing each other beforehand and having diverse backgrounds (we're a team of a chemical engineer, an 11th grader in high school, and a senior in college).
## What we learned
We learned many new technical skills like how to integrate APIs into a complex application, how to structure a multi-purpose server (web, AI engine, workout scheduler), and how to develop a full-stack application. We also learned how to effectively collaborate as a group and how to rapidly iterate and prototype.
## What's next for Smart Fit
The development of a mobile responsive app for more convenient/accessible use. We created a mockup of the user interface; check it out [here](https://www.youtube.com/watch?v=asirXH3Hxw4&feature=youtu.be)! Using google calendar API to allow for direct scheduling to your Google account and the use of google reminders. Bootstrap will be used in the future to allow for a better visual and user experience of the web application. Finally, deploying on a cloud platform like GCP and linking the app to a domain | partial |
## Inspiration–“💀” “that’s crazy” “lmao” just say you’re not funny.
The group chat is firing up. Messages are flying at Mach 3. Before you know it, a meme war has ensued, and you have no good response in your arsenal. Lucky for you, ChatMe-meT does.
## Explanation–As Easy as 1-2-Meme
ChatMe-meT is a Google Chrome extension that works hard at curating meme responses to messages so you don’t have to. The process is distilled down to three easy steps:
**1. Activate the extension in the window of your chat.** From here, ChatMe-meT reads the last 10 messages to gain context for the conversation.
**2. Select your desired meme genre.** Choose from *Facebook Mom*, *Dank*, *Surrealist*, *Wholesome*, and *Redditor* to get the exact style of meme you’re looking for. ChatMe-meT then uses these preferences to compile a gallery of real Internet memes, ripe for the picking.
**3. Add a desired meme to your clipboard.** Click on any image to send to your friends, relatives, and colleagues, and prove your unfaltering sense of humour.
## How We Built It
ChatMe-meT is built off of React, TS, and Vite. Using this tooling we developed a Chromium-based extension that can be easily added to most browsers and can read the current status of the chat you are in. It uses application-specific DOM query selectors to find the sender and message data that will be added as context for the meme finder algorithm™. When you need a fresh new meme for your chat, a request is made to OpenAI using GPT-4 to deliver the best memeable keywords it can find based on the context of the chat. Using the resulting keywords, a search is performed across the internet to find what you need. Cool memes are then found.
## Accomplishments
The process in which we scrape the DOM ensures we get all the data we need and none we don’t need. Many chat applications, often include many features like reactions, images and emojis and we need to make sure that we can interpret those but also not include unnecessary images or reactions in our prompt. By building out the robust scraping tool, we can build out the application to support as many chat messaging platforms as we want.
With the meme generation genres, we wanted to make sure the image results were unique. We tuned the model to ensure that every genre/personality would be different and funny in its own right.
With the extension-based approach we took instead of, for example, a bot, we built an application that isn’t just limited to public multiuser channels but also private channels as well where you will often need a funny meme as well. It is also highly expandable and can interact with multiple websites as well.
## Challenges we ran into
As it turns out, searching for memes is actually quite difficult. Often the funniest memes are not neatly categorized or labeled in a way that is search engine friendly, so we had to do a lot of tuning to our search prompts to avoid *all* of our memes looking like facebook mom memes.
## What's next for ChatMe-meT
While ChatMe-meT is a novelty idea, it sheds light on an unexplored area of integrated AI assistance. Many software tools ranging from email clients to code editors are rapidly integrating AI enhancements that serve as the user’s sidekick, but this has seldom applied to messaging platforms. A tool like ChatMe-meT that utilizes techniques found in AI autocomplete, but applies new media forms could remarkably enhance the way we communicate on a daily basis. | ## Inspiration
Going to the bathroom in the public is already not the most pleasant experience for everyone, but having to overcome inclusivity barriers such as looking for a unisex bathroom or one with wheelchair access makes the experience even more difficult. We wanted to create an application that makes going to the bathroom a safer experience for everyone.
## What it does
Finds the nearest washrooms to a user based on their location.
Washrooms can be filtered for different types such as Gender Neutral, Wheelchair Access, & has Diaper Changing Stations. Also rate washrooms and view different categories that ratings are based on (TP quality, cleanliness, etc.).
## Challenges we ran into
Aggregating data from 2 datasets to support various queries (based on user needs),
Deploying our app onto Microsoft Azure,
Connecting the various parts of our application
## Accomplishments that we're proud of
We had lots of fun as a team!
Being able to put together an app with a mission we believe in :)
## What we learned
Smarter ways to deploy and also learned how to work with
Azure!!! :D
## What's next for sPOTTYfy
In depth stats and analytics for our washroom datasets!
More filters, more queries!! | ## Inspiration
AI voices are stale and impersonal. Chrome extensions like "Free Text To Speech Online" use default voices to read text messages on the web out loud. While these default voices excel in cadence and clarity, they miss the nuance and emotion inherent in human speech. This emotional connection is important for a user, as it helps them feel engaged in online communication. Using personalized speech also helps users with special needs who rely on text-to-speech, as this feature assists them in identifying who is talking when vocalizing the messages.
## What it does
TeleSpeech is a Chrome Extension that converts Telegram messages into custom AI-generated speech, mimicking the distinct voice of each sender. Using the chrome extension and the web app, you can upload anyone's voice and use it to read messages out loud in a Telegram group chat.
## How we built it
We used a Chrome Extension (HTML/CSS, Vanilla JS) to read message data and run the text-to-speech, and a Next.js web app to manage the voices used for text-to-speech.
To use TeleSpeech, a user will first upload their voice on our Next.js web app (<https://telespeakto.us>), which will then use the Eleven-Labs Text-to-Speech API to send the AI-generated voice back to the Chrome extension. All user credentials and voice data are securely stored in a Firebase database.
On the Chrome extension, when a user has the Telegram Web App open, the extension's service worker will collect all the messages in a chat. When the Chrome extension is open and a user logs in, a "Play Sound" button appears. When pressed, the Chrome extension sends the web app all the message text, and the web app returns an audio file with an AI-generated voice saying the text data.
## Challenges we ran into
We struggled the most with communicating between the Chrome extension and the web app. Using Vanilla JS with the extension's strict CSP policies made it hard to transfer data between the 2. We also struggled with learning how to use the Eleven-Labs API because we'd never used it before. Finally, two of the members of our team didn't know typescript as well had a decently steep learning curve as we headed into the projects.
## Accomplishments that we're proud of
When we were first able to get one teammate's voice to come out of the speakers reading a message was so incredible. We all thought we could do this project before that happened, but after that, it felt so much more real and attainable. Another is that we built a fully functioning project despite it being our first time at a Hackathon.
## What we learned
Two of the members in the group did not know a lot of JavaScript or typescript going in. The short time was not enough to completely prepare them. But, over the last 36 hours, they were able to figure it out to a higher degree than thought. The other two members learned a lot about how to use Chrome extensions, such as how to use service workers and how to have it communicate with a web app. Besides coding, the four of us also learned a lot about accessibility on screens.
## What's next for TeleSpeech
The next big thing for TeleSpeech is for it to work for multiple platforms, not just Telegram. We want to expand it to WhatsApp, Instagram, and Facebook. It would also be nice if we could use it for news articles, where it would read news articles in the author's voice, or have the articles' quotes be read by the person's voice. | losing |
## Inspiration
Scrolling through a Facebook feed, you find countless articles written by big-name papers and third party sources that feed readers fake or biased news. The goal of this project was to counteract that by getting rid of the big names all together. The Root is a platform where users can get real-time news articles from local news sources only. If something happens in a region, readers can stay informed by reading the papers of that region; no third party commentary, no big-name news. And all while giving struggling local newspapers a larger platform.
## What it does
The Root is an interactive map with markers for various cities across America. When you click on a city's marker, you get access to that city's local news feed. You can read about how New Yorkers feel about their President's latest announcement and then hop over to the other side of the country to read the perspective of an Arizonian.
## How we built it
We used Flask as our main webapp framework and coded in python. To generate the images of the maps, pointers, and popups, we used an API called Mapbox, and we used JavaScript, HTML, and CSS for that. Our JavaScript file places all of the pointers on certain regions and retrieves information from news rss feeds to display and update local news. We used an rss widget creator to generate the popups. Our CSS file adds some of the display features, and our HTML file links the JavaScript file to our main python file.
## Challenges we ran into
We didn't have much experience before coming to this hackathon so every step was a challenge for us. The biggest challenge was probably starting our project and trying to head in the right direction--we had plenty of ideas in mind, but didn't know how we could implement any of it. We knew enough to know what we were looking for, but not enough to actually implement it. Talking to other hackers and mentors, researching, and testing out all of our possible solutions was time consuming but rewarding.
## Accomplishments that we're proud of
As we previously mentioned, none of us had been to hackathon or built a website/app before, so everything we learned this weekend was completely new to us. We are most proud of how much we were able to learn in a short amount of time and being able to collaborate on a project that resulted in something real and useful.
## What we learned
In terms of technical skills, we learned a lot. None of us came in with knowledge of html, css, or javascript, but we managed to use all three, we learned how to use an api, we learned how to create a web application, and we learned how to put all those pieces together (which is really daunting if you've never done it before!). However, our most valuable lesson wasn't at all a technical skill -- it was the realization that we were able to learn a lot in a short period of time. It's easy to think that you're not qualified for this field, especially given the qualifications of those around you, but we learned this weekend that we know more than we think we do, and we're capable of learning on the job.
## What's next for The Root
Because of the nature of the hackathon, we didn't get to make the program as dynamic as we had hoped. As of right now, there are only feeds for a few cities on the east coast and one on the west, but we'd like to expand this so that local news from across the country is presented on the platform, and hopefully we can do this globally with the help of a few translation API's we found. Ideally, we want to be able to filter the news sources so that we get only the most important headlines. We'd also like to add a Trending Now feature that would allow you to see the new articles that are gaining traction in the news currently. We hope this feature could provide more direction to users. We'd like to make some modifications to improve user experience, and all in all we'd like this to be a resource that can be attached to a Facebook page so that people can get access to accurate news sources. | ## Inspiration
As a group of university students from across North America, COVID-19 has put into perspective the uncertainty and instability that comes with online education. To ease this transition, we were inspired to create Notate — an unparalleled speech-to-text transcription platform backed by the power of Google Cloud’s Machine Learning algorithms. Although our team has come from different walks of life, we easily related to each others’ values for accessibility, equality, and education.
## What it does
Notate is a multi-user web conferencing app which allows for students to create and access various study rooms to virtually interact with others worldwide and revolutionize the way notes are taken. It has the capacity to host up to 50 unique channels each with 100+ attendees so students can get help and advice from a multitude of sources. With the use of ML techniques, it allows for real time text-to-speech transcribing so lectures and conversations are stored and categorized in different ways. Our smart hub system makes studying more effective and efficient as we are able to sort and decipher conversations and extract relevant data which other students can use to learn all in real time.
## How we built it
For the front end, we found an open source gatsby dashboard template to embed our content and features into quickly and efficiently. We used the daily.co video APIs to embed real-time video conferencing into our application, allowing users to actually create and join rooms. For the real time speech-to-text note taker, we made use of Google’s Cloud Speech-to-Text Api, and decided to use Express and Node.js to be able to access the API from our gatsby and react front end. To improve the UI and UX, we used bootstrap throughout our app.
## Challenges we ran into
In the span of 36 hours, perfecting the functionality of a multi-faceted application was a challenge in and of itself. Our obstacles ranged from API complexities to unexpected bugs from various features. Integrating a video streaming API which suited our needs and leveraging Machine Learning techniques to transcribe speech was a new feature which challenged us all.
## Accomplishments that we're proud of
As a team we had successfully integrated all the features we planned out with good functionality while allowing room to scale for the future. Having thought extensively regarding our goal and purpose, it was clear we had to immerse ourselves with new technologies in order to be successful. Creating a video streaming platform was new to all of us and it taught us a lot about API’s and integrating them into modern frontend technologies. At the end we were able to deliver an application which we believe would be an essential tool for all students experiencing remote learning due to COVID-19.
## What we learned
Having access to mentors who were just a click away opened up many doors for us. We were effectively able to learn to integrate a variety of APIs (Google Cloud Speech-to-Text and Daily Co.) into Notate. Furthermore, we were exposed to a myriad of new programming languages, such as Bootstrap, Express, and Gatsby. As university students, we shared the frustration of a lack of web development tools provided in computer science courses. Hack the 6ix helped further our knowledge in front-end programming. We also learned a lot from one another because we all brought different skill sets to the team. Resources like informative workshops, other hackers, and online tutorials truly gave us long-lasting and impactful skills beyond this hackathon.
## What's next for Notate?
As society enters the digital era, it is evident Notate has the potential to grow and expand worldwide for various demographics. The market for education-based video conferencing applications has grown immensely over the past few months, and will likely continue to do so. It is crucial that students can adapt to this sudden change in routine and Notate will be able to mitigate this change and help students continue to prosper.
We’d like to add more functionality and integration into Notate. In particular, we’d like to embed the Google Vision API to detect and extract text from user’s physical notes and add them to the user’s database of notes. This would improve one of our primary goals of being the hub for student notes.
We also see Notate expanding across platforms, and becoming a mobile app. As well, as the need for Notate grows we plan to create a Notate browser extension - if the Notate extension is turned on, a user can have their Zoom call transcribed in real time and added to their hub of notes on Notate. | ## Inspiration
The prevalence of fake news has been on the rise. It has led to the public's inability to receive accurate information and has placed a heightened amount of distrust on the media. With it being easier than ever to propagate and spread information, the line between what is fact and fiction has become blurred in the public sphere. Concerned by this situation, we built a mobile application to detect fake news on its websites and alert people when information is found to be false or unreliable, thereby hopefully bringing about a more informed electorate.
## What it does
enlightN is a mobile browser with built-in functionality to detect fake news and alert users when the information they are reading - on Facebook or Twitter - is either sourced from a website known for disseminating fake news or known to be false itself. The browser highlights which information has been found to be false and provides the user sources to learn more about that particular article.
## How we built it
**Front-end** is built using Swift and Xcode. The app uses Alamofire for HTTP networking, and WebKit for the browser functionality. Alamofire is the only external dependency used by the front end; other than that it's all Apple's SDK's. The webpage HTML is parsed and sent to the backend, and the response is parsed on the front end.
**Back-end** is built using Python, Google App Engine, Microsoft Cognitive Services, HTML, JavaScript, CSS, BeautifulSoup, Hoaxy API, and Snopes Archives. After receiving the whole HTML text from front-end, we scrape texts from Facebook and Twitter posts with the use of the BeautifulSoup module in Python. Using the keywords of the texts by Microsoft Key Phrase Extraction API (which uses Microsoft Office's Natural Language Processing toolkit) as an anchor, we extract relevant information (tags for latent fake news) from both Snopes.com's Database and the results getting back from the hoaxy API and send this information back to the front-end.
**Database** contains about 950 websites that are known for unreliable (e.g. fake/conspiracy/satire) news sources and about 15 well-known trustworthy news source websites.
## Challenges we ran into
One challenge we ran into was with implementing the real-time text search in order to cross-reference article headlines and Tweets with fact-checking websites. Our initial idea was to utilize Google’s ClaimReview feature on their public search, but Google does not have an API for their public search feature and after talking to some of the Google representatives, automating this with a script would not have been feasible. We then decided to implement this feature by utilizing Snopes. Snopes does not have an API to access their article information and loads their webpage dynamically, but we were able to isolate the Snopes’ API call that they use to provide their website with results from an article query. The difficult part of recreating this API call was figuring out the proper way to encode the POST payload and request header information before the HTTP function call.
## Accomplishments that we're proud of
We were able to successfully detect false information from any site after especially handling facebook and twitter. The app works and makes people aware of disinformation in real-time!
## What we learned
We applied APIs that are completely new for us - Snopes’ API, hoaxy API, and Key Phrase Extraction API - in our project within the past 36 hours.
## What's next for enlightN
Building a fully-functional browser and an app which detects false information on any 3rd party app. We also plan to publicize our API as it matures. | losing |
## Inspiration
After observing different hardware options, the dust sensor was especially outstanding in its versatility and struck us as exotic. Dust-particulates in our breaths are an ever present threat that is too often overlooked and the importance of raising awareness for this issue became apparent. But retaining interest in an elusive topic would require an innovative form of expression, which left us stumped. After much deliberation, we realized that many of us had a subconscious recognition for pets, and their demanding needs. Applying this concept, Pollute-A-Pet reaches a difficult topic with care and concern.
## What it does
Pollute-A-Pet tracks the particulates in a person's breaths and records them in the behavior of adorable online pets. With a variety of pets, your concern may grow seeing the suffering that polluted air causes them, no matter your taste in companions.
## How we built it
Beginning in two groups, a portion of us focused on connecting the dust sensor using Arduino and using python to connect Arduino using Bluetooth to Firebase, and then reading and updating Firebase from our website using javascript. Our other group first created gifs of our companions in Blender and Adobe before creating the website with HTML and data-controlled behaviors, using javascript, that dictated the pets’ actions.
## Challenges we ran into
The Dust-Sensor was a novel experience for us, and the specifications for it were being researched before any work began. Firebase communication also became stubborn throughout development, as javascript was counterintuitive to object-oriented languages most of us were used to. Not only was animating more tedious than expected, transparent gifs are also incredibly difficult to make through Blender. In the final moments, our team also ran into problems uploading our videos, narrowly avoiding disaster.
## Accomplishments that we're proud of
All the animations of the virtual pets we made were hand-drawn over the course of the competition. This was also our first time working with the feather esp32 v2, and we are proud of overcoming the initial difficulties we had with the hardware.
## What we learned
While we had previous experience with Arduino, we had not previously known how to use a feather esp32 v2. We also used skills we had only learned in beginner courses with detailed instructions, so while we may not have “learned” these things during the hackathon, this was the first time we had to do these things in a practical setting.
## What's next for Dustables
When it comes to convincing people to use a product such as this, it must be designed to be both visually appealing and not physically cumbersome. This cannot be said for our prototype for the hardware element of our project, which focused completely on functionality. Making this more user-friendly would be a top priority for team Dustables. We also have improvements to functionality that we could make, such as using Wi-Fi instead of Bluetooth for the sensors, which would allow the user greater freedom in using the device. Finally, more pets and different types of sensors would allow for more comprehensive readings and an enhanced user experience. | ## Inspiration
As the demand for developers increases worldwide, providing high quality learning resources is critical for beginners in coding. However, the majority of programming resources are written in English, which may introduce a potential language barrier for individuals that do not have English as a first language. Incode was designed to make programming resources more inclusive and accessible to the community of developers around the globe, whether they are beginners or seasoned professionals.
## How it works
Powered by OpenAI's GPT-3, Incode utilizes language translation in combination with code generation to provide programming material in just a few seconds. You can provide a command in any spoken language and our application will generate code in response while providing helpful comments and variable names in the same language that you prompted with!
### Example
We want to generate C code to solve the Dining Philosophers problem. The below generates C code in response to the following prompt in French:
Écrire un programme pour résoudre le problème des philosophes qui dînent.
*Response*
```
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <semaphore.h>
#define N 5
sem_t fourchettes[N];
void *philosophe(void *num)
{
int i = *(int *)num;
int gauche = (i + N - 1) % N;
int droite = (i + 1) % N;
while (1)
{
sem_wait(&fourchettes[gauche]);
sem_wait(&fourchettes[droite]);
printf("Le philosophe %d mange\n", i);
sleep(1);
sem_post(&fourchettes[gauche]);
sem_post(&fourchettes[droite]);
}
}
int main()
{
pthread_t tid[N];
int i;
for (i = 0; i < N; i++)
sem_init(&fourchettes[i], 0, 1);
for (i = 0; i < N; i++)
pthread_create(&tid[i], NULL, philosophe, &i);
for (i = 0; i < N; i++)
pthread_join(tid[i], NULL);
return 0;
}
```
## Tech Stack
### Frontend
The Frontend was developed using React and Typescript.
### Backend
The Backend was developed using Flask and Python. In addition, the models that we used for language translation and code generation are from [OpenAI's GPT-3](https://beta.openai.com/docs/models/gpt-3). Finally, we deployed using Microsoft Azure. | ## Inspiration
Jeremy, one of our group members, always buys new house plants with excitement and confidence that he will take care of them this time.. He unfortunately disregards his plant every time, though, and lets it die within three weeks. We decided to give our plant a persona, and give him frequent reminders whenever the soil does not have enough moisture, and also through personalized conversations whenever Jeremy walks by.
## What it does
Using four Arduino sensors, including soil moisture, temperature, humidity, and light, users can see an up-to-date overview of how their plant is doing. This is shown on the display and bar graph with an animal of choice's emotions! Using the webcam which is built-in into the the device, your pet will have in-depth conversations with you using ChatGPT and image recognition.
For example, if you were holding a water bottle and the soil moisture levels were low, your sassy cat plant might ask if the water is for them since they haven't been watered in so long!
## How we built it
The project is comprised of Python and C++. The 4 sensors and 2 displays on the front are connected through an Arduino and monitor the stats of the plant and also send them to our Python code. The Python code utilizes chatGPT API, openCV, text-to-speech, speech-to-text, as well as data from the sensors to have a conversation with the user based on their mood.
## Challenges we ran into
Our project consisted of two very distinct parts. The software was challenging as it was difficult to tame an AI like chatGPT and get it to behave like we wanted. Figuring out the exact prompt to give it was a meticulous process. Additionally, the hardware posed a challenge as we were working with new IO parts. Another challenge was combining these two distinct but complex components to send and receive data in a smooth manner.
## Accomplishments that we're proud of
We're very proud of how sleek the final product looks as well as how smoothly the hardware and software connect. Most of all we're proud of how the plant really feels alive and responds to its environment.
## What we learned
Making this project, we definitely learned a lot about sending and receiving messages from chatGPT API, TTS, STT, configuring different Arduino IO methods, and communicating between the Arduino and Python code using serial.
## What's next for Botanical Bestie
We have many plans for the future of Botanical Bestie. We'd like to make the product more diverse and include different language options to be applicable to international markets. We'd also like to collab with big brands to include their characters as AI plant personalities (Batman plant? Spongebob plant?). On the hardware side, we'd obviously want to put speakers and microphones on the plant/plant pot itself, since we used the laptop speaker and phone microphone for this hackathon. We also have plans for the plant pot to detect what kind of plant is in it, and change its personality accordingly. | partial |
## Inspiration
Plan to participate in driving course
## What it does
Recognize the traffic sign, (currently only with stop sign and school zone sign)
## How I built it
Unity with AR vuforia
## Challenges I ran into
Unity Location Service is not accurate
Looks good on the computer and looks bad on the phone
It is difficult to use a new language to make API call
## Accomplishments that I'm proud of
Understand how api is working
## What I learned
Machine learning is better than AR
## What's next for TrafficSignRecognizer
Post location update for sign in the database when it is new with API post
Get sign when passing location when location is stored in the database with API get
Add more sign
Use Tensorflow Object Detection instead of AR | # 🎶 SongSmith | New remixes at your *fingertips*! 🎶
SongSmith is your one-stop-shop to **create new remixes from your favourite songs!** 🎼
**State-of-the-art AI Machine Learning Neural Networks** are used to generate remixes with similar styles🎨 to your best-loved songs🎶.
**📚Discover! 👨🎤Inspire! 🎧Listen! SongSmith!**
# Inspiration 💭⚡
Ever listen to your favourite artists🧑🎤 and songs⏭️ and think **"Damn, I wish there was similar music to this?"** We have this exact feeling, which is why we've developed SongSmith for the Music loving people!🧑🤝🧑
# How we built it 🏗️
* SongSmith⚒️ was built using the latest and greatest technology!! 💻
* Our **music generative neural network** was developed using a state-of-the-art architecture🏢 called **Attention Mechanism Networks**.
## Tech Stack 🔨
* AI Model: **Tensorflow**, Keras, Google Colab
* BackEnd: **Express.js, Flask** to run our **microservices**, and inference servers.
* FrontEnd: Developed using **React.js**, and Bootstrap; to allow for a quick **MVP development cycle🔃**.
* Storage: MongoDB, **Firebase**, Firestore
* Moral Support: Coffee☕, Bubble Tea🥤 , Pizza🍕 and **Passion💜 for AI** <3
# Challenges we ran into 🧱🤔
* Converting the output from the neural network to a playable format in the browser
* Allowing for CORS interaction between the frontend and our microservices
# What we learned🏫
* Bleeding💉 Edge Generative Neural Network for Music🎙️Production
* Different Python🐍 and Node.js Music Production Related Libraries📚
# What's next for SongSmith ➡️
* Offering wider genres of music to attract a wider range of **music-loving users**!🧑🎤
* Refining our Neural Network to generate more **high-quality REMIXES**!
\*See Github ReadMe for Video Link | # 🎯 The Project Story
### 🔍 **About Vanguard**
In today's fast-paced digital landscape, **cybersecurity** is not just important—it's essential! As threats multiply and evolve, security teams need tools that are **agile**, **compact**, and **powerful**. Enter **Vanguard**, our groundbreaking Raspberry Pi-powered vulnerability scanner and WiFi hacker.
Whether you’re defending **air-gapped networks** or working on **autonomous systems**, Vanguard adapts seamlessly, delivering real-time insights into network vulnerabilities. It's more than a tool; it's a **cybersecurity swiss army knife** for both **blue** and **purple teams**! 🛡️🔐
---
### **Air Gapped Network Deployability (CSE Challenge)**
* Databases
Having a dedicated database of vulnerabilities in the cloud for vulnerability scanning could pose a problem for deployments within air-gapped networks. Luckily, Vanguard can be deployed without the need for an external vulnerability database. A local database is stored on disk and contains precisely the information needed to identify vulnerable services. If necessary, Vanguard can be connected to a station with controlled access and data flow to reach the internet; this station could be used to periodically update Vanguard’s databases.
* Data Flow
Data flow is crucial in an embedded cybersecurity project. The simplest approach would be to send all data to a dedicated cloud server for remote storage and processing. However, Vanguard is designed to operate in air-gapped networks, meaning it must manage its own data flow for processing collected information. Different data sources are scraped by a Prometheus server, which then feeds into a Grafana server. This setup allows data to be organized and visualized, enabling users to be notified if a vulnerable service is detected on their network. Additionally, more modular services can be integrated with Vanguard, and the data flow will be compatible and supported.
* Remote Control
It is important for Vanguard to be able to receive tasks. Our solution provides various methods for controlling Vanguard's operations. Vanguard can be pre-packaged with scripts that run periodically to collect and process data. Similar to the Assemblyline product, Vanguard can use cron jobs to create a sequence of scripts that parse or gather data. If Vanguard goes down, it will reboot and all its services will restart automatically. Services can also be ran as containers. Within an air-gapped network, Vanguard can still be controlled and managed effectively.
* Network Discovery
Vanguard will scan the internal air-gapped network and keep track of active IP addresses. This information is then fed into Grafana, where it serves as a valuable indicator for networks that should have only a limited number of devices online.
---
### **Air Gapped Network Scanning (Example)**
Context: Raspberri Pi is connected to a hotspot network to mimic an air gapped network. Docker containers are run to simulate devices being on the air gapped network. This example will show how Vanguard identifies a vulnerable device on the air gapped network.
* Step 1: Docker Container
A vulnerable docker is running on 10.0.0.9

* Step 2: Automated Scanning on Vanguard picks up new IP
Vanguard will automatically scan our network and store the information if its contains important information.
Here are the cron scripts:

In the /var/log Vanguard Logged a new IP:

Vanguard's port scanner found open ports on our vulnerable device:

* Step 3: Prometheus scrapes results and Grafana displays
IP Activity history show how many time an IP was seen:

Vulnerability logs are displayed on our Grafana dashboard and we can see that our ports were scanned as running a vulnerable serivce. (2 red blocks on the right) (Only Port 21 and 22

* Conclusion
All this data flow was able to detect a new device and vulnerable services without the need of cloud or internet services. Vanguard's automated script's ran and detected the anomaly!
### 💡 **Inspiration**
Our team was fascinated by the idea of blending **IoT** with **cybersecurity** to create something truly **disruptive**. Inspired by the open-source community and projects like dxa4481’s WPA2 handshake crack, we saw an opportunity to build something that could change the way we handle network vulnerabilities.
We didn’t just want a simple network scanner—we wanted **Vanguard** to be **versatile**, **portable**, and **powerful** enough to handle even the most **secure environments**, like air-gapped industrial networks or autonomous vehicles 🚗💻.
---
### 🏆 **Accomplishments**
* **Nmap** automates network scans, finding open ports and vulnerable services 🕵️♂️.
* A **SQLite database** of CVEs cross-references scan results, identifying vulnerabilities in real time 🔓📊.
* **Grafana** dashboards monitor the Raspberry Pi, providing metrics on **CPU usage**, **network traffic**, and much more 📈.
* Wifi Cracking Module captures WPA2 handshakes and cracks them using open-source techniques, automating the process 🔑📶.
* Usage of different services that will run automatically and return data.
And everything comes together seamlessly in the vangaurd dashboard. Additionally, we integrated **Convex** as our backend data store to keep things **fast**, **reliable**, and easy to adapt for air-gapped networks (swap Convex for MongoDB with a breeze 🌬️ we really wanted to do take part in the convex challenge).
---
### 🔧 **Challenges We Faced**
Building **Vanguard** wasn’t without its obstacles. Here's what we had to overcome:
* 💻 **Air-gapped testing**: Ensuring Nmap runs flawlessly without external network access was tricky. We fine-tuned cron jobs to make the scanning smooth and reliable.
* 🚦 **Data efficiency**: Working with a Raspberry Pi means limited resources. Optimizing how we process and store data was key.
* 🛠️ **Seamless WiFi hacking**: Integrating WPA2 half-handshake cracking without impacting Pi performance required some creative problem-solving.
---
### 🏗️ **How We Built It**
* **Hardware**: Raspberry Pi 🥧 with an external WiFi adapter 🔌.
* **Backend**: We used **Convex** for data storage, with the option to switch to **MongoDB** for air-gapped use 🗃️.
* **Scanning & Exploiting**: Nmap runs on a schedule to scan, and CVEs are stored in **SQLite** for mapping vulnerabilities 🔗.
* **Frontend**: Built with **React** and **Next.js 14**, the user interface is sleek and efficient 🎨.
* **Monitoring**: Metrics and performance insights are visualized through **Grafana**, keeping everything transparent and easy to manage 📊.
A big thanks to <https://github.com/dxa4481> for the open source code for WPA2 Handshake PoC's
---
### 🚀 **What’s Next for Vanguard?**
We're just getting started! Here’s what’s in store for Vanguard:
* 🤖 **AI-driven vulnerability prediction**: Imagine knowing where a breach might happen **before** it occurs. We'll use machine learning to predict vulnerabilities based on historical data.
* ⚙️ **Modular add-ons**: Integrate tools like **Metasploit** or **Snort** for more specialized attacks, making Vanguard a **customizable powerhouse**.
* 🧳 **Enhanced portability**: We're optimizing Raspberry Pi hardware to push Vanguard’s limits even further, and exploring even more **compact** versions to make it the ultimate on-the-go tool!
---
Vanguard isn’t just a project; it’s the **future** of portable, proactive **cybersecurity**. 🌐🔐
**Stay secure, stay ahead!** | losing |
## Inspiration
We decided to create this project out of a personal want; A want to enhance relaxation at home. With Online Schooling and Quarantine, we have spent an increasing amount of time in our rooms constantly studying. To avoid the stresses of school, what’s better than some chill vibes? Introducing, Mood Lights!
## What it does
Our project, Mood Lights, is a smart Light-controlled Mood Ambient Device that is designed to reduce stress by using light patterns. It has different modes including weather mode, relaxing animations, notifications, etc. It allows you to control it using a remote control or by using Google Home's Voice Commands.
## How we built it
Our project uses a/an:
* LED strip
* Arduino Uno
* Raspberry Pi 3
* Google Home
* Remote
We built this lamp by connecting a strip of individually addressable LEDS to an Arduino Uno. The Arduino controls each individual led using logic and a library called FAST Led. Since the Arduino has no on-board WiFi and doesn’t have much processing power, we hooked up a Raspberry Pi to it in order to send in the required data to the Arduino. The Raspberry Pi is responsible for fetching data from OpenWeatherMaps and doing the heavy computations. It receives instructions from the Google Home. We connected the Google Home to the lamp via a website named IFTTT. This website sends info to my Adafruit feed, which I then use my Python program to fetch data from.
## Challenges we ran into
We ran into many problems ranging from logistics to troubles debugging the hardware. One of
the challenges we faced on the logistical front, was the wrong number of ping-pong balls showing up. We managed to use Amazon Prime to get the parts on time, but it was still a headache.
Another big challenge for us was to get the Raspberry Pi to connect to the Arduino. They have different voltages, and the Arduino wasn't reading the raspberry Pi's output. After a few hours of trial and error, we found that tying the ground from the Arduino and Raspberry Pi together seemed to fix the problem.
## Accomplishments that we're proud of
We managed to make our project:
* Integrate Custom Voice Commands with Google Home
* Display some cool animations
* Show Calendar Notifications
* Integrated Remote Control
* Display certain animations based on the Weather
## What we learned
We learned about the challenges involved when combining software and hardware; Things will not work as you expect it to first try. We also learned about how we can better coordinate our team by learning from the mistakes we made in this project.
## What's next for Mood Lights
We will continue to expand the functionality of Mood Lights, potentially adding features like:
* Audio Visualization
* More Animation
* More extensive Google Home Integration | ## Inspiration
We wanted to provide an easy, interactive, and ultimately fun way to learn American Sign Language (ASL). We had the opportunity to work with the Leap Motion hardware which allowed us to track intricate real-time data surrounding hand movements. Using this data, we thought we would be able to decipher complex ASL gestures.
## What it does
Using the Leap Motion's motion tracking technology, it prompts to user to replicate various ASL gestures. With real-time feedback, it tells the user how accurate their gesture was compared to the actual hand motion. Using this feedback, users can immediately adjust their technique and ultimately better perfect their ASL!

## How I built it
Web app using Javascript, HTML, CSS. We had to train our data using various machine learning repositories to ensure accurate recognitions, as well as other plugins which allowed us to visualize the hand movements in real time.
## Challenges I ran into
Training the data was difficult as gestures are complex forms of data, composed of many different data points in the hand's joints and bones but also in the progression of hand "frames". As a result, we had to take in a lot of data to ensure a thorough data-set that matched these data features to an actual classification of the correct ASL label (or phrase)
## Accomplishments that I'm proud of
User Interface. Training the data. Working on a project that could actually potentially impact others!
## What I learned
Hard work and dedication. Computer vision. Machine Learning.
## What's next for Leap Motion ASL
More words? Game mode? Better training? More phrases? More complex combos of gestures?
 | ## Inspiration
Our team wanted to make a smart power bar device to tackle the challenge of phantom power consumption. Phantom power is the power consumed by devices when they are plugged in and idle, accounting for approximately 10% of a home’s power consumption. [1] The best solution for this so far has been for users to unplug their devices after use. However, this method is extremely inconvenient for the consumer as there can be innumerable household devices that require being unplugged, such as charging devices for phones, laptops, vacuums, as well as TV’s, monitors, and kitchen appliances. [2] We wanted to make a device that optimized convenience for the user while increasing electrical savings and reducing energy consumption.
## What It Does
The device monitors power consumption and based on continual readings automatically shuts off power to idle devices. In addition to reducing phantom power consumption, the smart power bar monitors real-time energy consumption and provides graphical analytics to the user through MongoDB. The user is sent weekly power consumption update-emails, and notifications whenever the power is shut off to the smart power bar. It also has built-in safety features, to automatically cut power when devices draw a dangerous amount of current, or a manual emergency shut off button should the user determine their power consumption is too high.
## How We Built It
We developed a device using an alternating current sensor wired in series with the hot terminal of a power cable. The sensor converts AC current readings into 5V logic that can be read by an Arduino to measure both effective current and voltage. In addition, a relay is also wired in series with the hot terminal, which can be controlled by the Arduino’s 5V logic. This allows for both the automatic and manual control of the circuit, to automatically control power consumption based on predefined thresholds, or to turn on or off the circuit if the user believes the power consumption to be too high. In addition to the product’s controls, the Arduino microcontroller is connected to the Qualcomm 410C DragonBoard, where we used Python to push data sensor data to MongoDB, which updates trends in real-time for the user to see. In addition, we also send the user email updates through Python with the time-stamps based on when the power bar is shut off. This adds an extended layer of user engagement and notification to ensure they are aware of the system’s status at critical events.
## Challenges We Ran Into
One of our major struggles was with operating and connecting the DragonBoard, such as setting up connection and recognition of the monitor to be able to program and install packages on the DragonBoard. In addition, connecting to the shell was difficult, as well as any interfacing in general with peripherals was difficult and not necessarily straightforward, though we did find solutions to all of our problems.
We struggled with establishing a two-way connection between the Arduino and the DragonBoard, due to the Arduino microntrontroller shield that was supplied with the kit. Due to unknown hardware or communication problems between the Arduino shield and DragonBoard, the DragonBoard would continually shut off, making troubleshooting and integration between the hardware and software impossible.
Another challenge was tuning and compensating for error in the AC sensor module, as due to lack of access to a multimeter or an oscilloscope for most of our build, it was difficult to pinpoint exactly what the characteristic of the AC current sinusoids we were measuring. For context, we measured the current draw of 2-prong devices such as our phone and laptop chargers. Therefore, a further complication to accurately measure the AC current draws of our devices would have been to cut open our charging cables, which was out of the question considering they are our important personal devices.
## Accomplishments That We Are Proud Of
We are particularly proud of our ability to have found and successfully used sensors to quantify power consumption in our electrical devices. Coming into the competition as a team of mostly strangers, we cycled through different ideas ahead of the Makeathon that we would like to pursue, and 1 of them happened to be how to reduce wasteful power consumption in consumer homes. Finally meeting on the day of, we realized we wanted to pursue the idea, but unfortunately had none of the necessary equipment, such as AC current sensors, available. With some resourcefulness and quick-calling to stores in Toronto, we were luckily able to find components at the local electronics stores, such as Creatron and the Home Hardware, to find the components we needed to make the project we wanted.
In a short period of time, we were able to leverage the use of MongoDB to create an HMI for the user, and also read values from the microcontroller into the database and trend the values.
In addition, we were proud of our research into understanding the operation of the AC current sensor modules and then applying the theory behind AC to DC current and voltage conversion to approximate sensor readings to calculate apparent power generation. In theory the physics are very straightforward, however in practice, troubleshooting and accounting for noise and error in the sensor readings can be confusing!
## What's Next for SmartBar
We would build a more precise and accurate analytics system with an extended and extensible user interface for practical everyday use. This could include real-time cost projections for user billing cycles and power use on top of raw consumption data. As well, this also includes developing our system with more accurate and higher resolution sensors to ensure our readings are as accurate as possible. This would include extended research and development using more sophisticated testing equipment such as power supplies and oscilloscopes to accurately measure and record AC current draw. Not to mention, developing a standardized suite of sensors to offer consumers, to account for different types of appliances that require different size sensors, ranging from washing machines and dryers, to ovens and kettles and other smaller electronic or kitchen devices. Furthermore, we would use additional testing to characterize maximum and minimum thresholds for different types of devices, or more simply stated recording when the devices were actually being useful as opposed to idle, to prompt the user with recommendations for when their devices could be automatically shut off to save power. That would make the device truly customizable for different consumer needs, for different devices.
## Sources
[1] <https://www.hydroone.com/saving-money-and-energy/residential/tips-and-tools/phantom-power>
[2] <http://www.hydroquebec.com/residential/energy-wise/electronics/phantom-power.html> | partial |
### Overview
Resililink is a node-based mesh network leveraging LoRa technology to facilitate communication in disaster-prone regions where traditional infrastructure, such as cell towers and internet services, is unavailable. The system is designed to operate in low-power environments and cover long distances, ensuring that essential communication can still occur when it is most needed. A key feature of this network is the integration of a "super" node equipped with satellite connectivity (via Skylo), which serves as the bridge between local nodes and a centralized server. The server processes the data and sends SMS notifications through Twilio to the intended recipients. Importantly, the system provides acknowledgment back to the originating node, confirming successful delivery of the message. This solution is aimed at enabling individuals to notify loved ones or emergency responders during critical times, such as natural disasters, when conventional communication channels are down.
### Project Inspiration
The inspiration for Resililink came from personal experiences of communication outages during hurricanes. In each instance, we found ourselves cut off from vital resources like the internet, making it impossible to check on family members and friends or to receive updates on the situation. These moments of helplessness highlighted the urgent need for a resilient communication network that could function even when the usual infrastructure fails.
### System Capabilities
Resililink is designed to be resilient, easy to deploy, and scalable, with several key features:
* **Ease of Deployment**: The network is fast to set up, making it particularly useful in emergency situations.
* **Dual Connectivity**: It allows communication both across the internet and in peer-to-peer fashion over long ranges, ensuring continuous data flow even in remote areas.
* **Cost-Efficiency**: The nodes are inexpensive to produce, as each consists of a single LoRa radio and an ESP32 microcontroller, keeping hardware costs to a minimum.
### Development Approach
The development of Resililink involved creating a custom communication protocol based on Protocol Buffers (protobufs) to efficiently manage data exchange. The core hardware components include LoRa radios, which provide long-range communication, and Skylo satellite connectivity, enabling nodes to transmit data to the internet using the MQTT protocol.
On the backend, a server hosted on Microsoft Azure handles the incoming MQTT messages, decrypts them, and forwards the relevant information to appropriate APIs, such as Twilio, for further processing and notification delivery. This seamless integration of satellite technology and cloud infrastructure ensures the reliability and scalability of the system.
### Key Challenges
Several challenges arose during the development process. One of the most significant issues was the lack of clear documentation for the AT commands on the Mutura evaluation board, which made it difficult to implement some of the core functionalities. Additionally, given the low-level nature of the project, debugging was particularly challenging, requiring in-depth tracing of system operations to identify and resolve issues. Another constraint was the limited packet size of 256 bytes, necessitating careful optimization to ensure efficient use of every byte of data transmitted.
### Achievements
Despite these challenges, we successfully developed a fully functional network, complete with a working demonstration. The system proved capable of delivering messages over long distances with low power consumption, validating the concept and laying the groundwork for future enhancements.
### Lessons Learned
Through this project, we gained a deeper understanding of computer networking, particularly in the context of low-power, long-range communication technologies like LoRa. The experience also provided valuable insights into the complexities of integrating satellite communication with terrestrial mesh networks.
### Future Plans for Resililink
Looking ahead, we plan to explore ways to scale the network, focusing on enhancing its reliability and expanding its reach to serve larger geographic areas. We are also interested in further refining the underlying protocol and exploring new applications for Resililink beyond disaster recovery scenarios, such as in rural connectivity or industrial IoT use cases. | **In times of disaster, there is an outpouring of desire to help from the public. We built a platform which connects people who want to help with people in need.**
## Inspiration
Natural disasters are an increasingly pertinent global issue which our team is quite concerned with. So when we encountered the IBM challenge relating to this topic, we took interest and further contemplated how we could create a phone application that would directly help with disaster relief.
## What it does
**Stronger Together** connects people in need of disaster relief with local community members willing to volunteer their time and/or resources. Such resources include but are not limited to shelter, water, medicine, clothing, and hygiene products. People in need may input their information and what they need, and volunteers may then use the app to find people in need of what they can provide. For example, someone whose home is affected by flooding due to Hurricane Florence in North Carolina can input their name, email, and phone number in a request to find shelter so that this need is discoverable by any volunteers able to offer shelter. Such a volunteer may then contact the person in need through call, text, or email to work out the logistics of getting to the volunteer’s home to receive shelter.
## How we built it
We used Android Studio to build the Android app. We deployed an Azure server to handle our backend(Python). We used Google Maps API on our app. We are currently working on using Twilio for communication and IBM watson API to prioritize help requests in a community.
## Challenges we ran into
Integrating the Google Maps API into our app proved to be a great challenge for us. We also realized that our original idea of including a blood donation as one of the resources would require some correspondence with an organization such as the Red Cross in order to ensure the donation would be legal. Thus, we decided to add a blood donation to our future aspirations for this project due to the time constraint of the hackathon.
## Accomplishments that we're proud of
We are happy with our design and with the simplicity of our app. We learned a great deal about writing the server side of an app and designing an Android app using Java (and Google Map’s API” during the past 24 hours. We had huge aspirations and eventually we created an app that can potentially save people’s lives.
## What we learned
We learned how to integrate Google Maps API into our app. We learn how to deploy a server with Microsoft Azure. We also learned how to use Figma to prototype designs.
## What's next for Stronger Together
We have high hopes for the future of this app. The goal is to add an AI based notification system which alerts people who live in a predicted disaster area. We aim to decrease the impact of the disaster by alerting volunteers and locals in advance. We also may include some more resources such as blood donations. | ## Inspiration
We all realized that in situations where it is necessary to track the safety of individuals in a group trek by knowing if they are not hurt, have a healthy body temperature, and are not very far from the group (desired location). So, we decided to make a wristband for the purpose of a large, group trek that can help the group locate themselves and make sure everyone is safe.
## What it does
Our simple system allows you to create a group of individuals whose safety you want to track over time. The system allows everyone to monitor the group's status through a mobile application and the users can interact with the system through a wristband that has an accelerometer and a touch sensor.
## How we built it
For processing the wristband sensor data and connecting to the database, we used an ESP8266, which can also be connected to each other in a mesh topology for scaling purposes in a remote setting with a scarce network. Behind the scenes, we used Google Firebase to serve as a cloud platform to support the mobile application and for the sensors to send data to. If there is an anomaly (fall or temperature rise to dangerous levels) detected for any of the group members, an alert will be automatically be sent to everyone else. If there our accelerometer detects that a person has fallen an alert will be sent right away to the phone app, and the user has the option to press the touch sensor to call emergency services.
## Challenges we ran into
We first started with developing our project on the Telus IoT Starter Kit, but we, unfortunately, received a mismatched package with a burnt board. After hours of debugging, we decided to switch from that dev board to ESP8266 and from Microsoft Azure to Google Firebase to accelerate development. While working with the ESP8266, all our sensors were not working together because of power constraints, and we were having difficulties reading latitude and longitude from the database because we had to account for the non-string data type and this made making markers appear across the map difficult.
## Accomplishments that we're proud of
It is the first hardware hackathon for all of us and it was a fresh experience to work on a project in 24 hours and especially catching up with our original idea after a long delay debugging at the start. We had little experience with any of the cloud-platforms to start with, but now we learned how to setup Microsoft Azure IoT devices, and have a fully functioning Google Firebase that works with the ESP8266 and with our Android app.
## What we learned
One valuable lesson that we learned was that we should pull the plug on an idea, or fork to trying out a different method of achieving the goal if it does not work after a significant amount of time. Being too stuck on a single idea can result in losing a lot of precious time.
## What's next for FallNot
We hope to take FallNot to the next level by filling in all the places in which we did not have enough time to fully explore due to the nature of a hackathon. In the future, we could integrate and miniaturize the system to fit nicely in a wristband, add multi-platform support application, and reduce cost per unit by utilizing mesh networks instead of GPRS, to enable connectivity in remote areas. We can also develop a communication protocol, much like a walkie-talkie that can prove to be very useful in a trek setting. | winning |
## Inspiration
According to the National Council on Aging, every 11 seconds, an older adult is treated in the emergency room for falling, and every 19 minutes, an older adult dies from a fall. A third of Americans over the age of 65 accidentally fall each year, and this is estimated to cost more than $67.7 billion by 2020. We wanted to make an IoT solution for the elderly, taking advantage of the Google Cloud Platform to be able to detect these falls more easily, and more quickly bring an emergency response to an elderly person who has suffered a fall.
## What it does
Fallen is a security system that continuously analyzes its environment through a delayed video stream for people who may have fallen. Every 2 second frame is sent over to our Node.js server, which uploads it to Google Cloud Storage, and is then sent through Google Cloud Vision, which returns a set of features, that we filter. These features are passed to our own machine learning classifier to determine if the frame depicts a fall or not. If there is a fall, the system alerts all emergency contacts and sends an audio clip requesting help.
## How we built it
We have a Node.js server that monitors the image feed coming in, connecting to Google Cloud Storage and Google Cloud Vision, and a Flask server that provides the machine learning classifier. We used the Android Things development kit to build a cheap monitoring system that will take a continuous stream of images, which is sent to the Node.js server, uploaded to Google Cloud Storage, and passed in Google Cloud Vision to retrieve a set of features which we filter based off of how relevant it is to distinguishing between a fall and not, namely LABEL\_DETECTION and SAFE\_SEARCH\_DETECTION. These features are normalized and passed to the Flask server to classify whether it is a fall or not, and this response is sent back to the Node server. We used the Twilio API so that if a fall was detected, Twilio gives the emergency contact a call with an audio clip requesting for help.
## Challenges we ran into
The Android Things camera cable was unreliable and unstable, and not able to stably provide a stream of images. | ## Inspiration
Quarantine drove many of us to streaming platforms such as Netflix and Hulu. We wanted to bring a unique spin to recommending new movies to watch, reminiscent of the bookstores that wrap their books so you can't see their covers. Using our project provides you with just a limited summary of the movie, meaning you need to decide if it's worth watching on the premise alone.
## What it does
Thus, we wanted to enhance that experience by creating a website that generates film recommendations with a Tinder-like swiping selection. At its core, our project is a website that generates film recommendations with a Tinder-like swiping selection. You aren't able to see any information beyond the film's premise, but you can have faith that the recommendations will be related in the genre to films you've previously expressed interest in.
## How we built it
We used Code Sandbox to collaborate with React for front-end development. For back-end development we used Python and Flask. From a user’s perspective, a deck of cards appear as the main focus of the website displaying the title and summary of each film. The user can choose to accept or reject each film recommendation by swiping right or left respectively.
If the user accepts the movie, but there are no logged genres that they like yet, the program takes a random genre that the accepted movie is part of and sets it as preferred. Once that genre is set, all future recommendations MUST contain at least that genre in its list of genres.
If the user starts accepting more movies and already has that first genre logged, there is a 50% chance that a random genre from the liked movie will be added that that genre list, which will end up requiring all future recommendations to contain any genres in that list). Once that second genre preference is filled, it moves on to the third genre that is liked by the user, which has a 30% random probability of being logged as one of the preferred genres, but also a 40% chance that the selected movie does not require the third genre, due to potentially narrowing the scope of recommendations too much. All preferred genres also have a built in "counter" where if the user rejects a movie, the respective counter for the last genre logged would start ticking down by 1. For example, if there are two genres logged and the user rejects a movie, the counter for the second logged genre would tick down by one. Counters for the first, second, and third genres, which start at 10, 5, and 3 respectively, are based on their chances of being added to the list in the first place. Once a counter for one of the genres reaches 0, that genre is removed from its slot and its counter reset to its max, allowing the user to log a new genre in that slot, so that late genres are easier to fine tune than earlier ones, in order to compensate for the fact that the recommendations offered are strictly based on the user’s logged genres
## Challenges we ran into
We struggled most with connecting the back end to the front end and allowing information and requests to flow freely between both of them. It was also difficult to create the recommendation engine. We originally wanted the engine to use an AI, but this quickly proved to be out of our technical depth. As a result, we pivoted to a non-AI engine for recommendations, focusing on how to structure it so it could recommend with nearly as well as an AI could.
## Accomplishments that we're proud of
We're definitely proud of the way the recommendation engine turned out, as it is able to recommend movies very well despite not using an AI of any sort. Our front end also looks incredibly polished, and makes you feel like the real experience.
## What we learned
We learned that sending information between the front and back end (specifically with React and Python/Flask in our case) is very difficult, and required the majority of the time to figure out. Figuring out how all of the different methods and syntaxes correlated and worked together required many guides, mentors, and tutorials.
## What's next for Movie Generator
Possible improvements would be to use AI to generate descriptions for each movie. We had to hand-write out summaries for all of our sample movies which is terribly inefficient. Leveraging an NLP model to automatically create one-liner summaries may be an interesting development to pursue. Developing an AI recommendation engine would also be another step forward. We could explore content filtering or collaborative filtering approaches given more time. Other possible improvements would be adding movie pictures to each card, as well as flipping the back of each card to provide further information about the movie, such as its cast, rating, and nearby cinemas and dates in which the movie will be shown (provided they are recent releases). Adding an option in which the user could write reviews on each movie is also an idea. | ## Inspiration
Greenhouses require increased disease control and need to closely monitor their plants to ensure they're healthy. In particular, the project aims to capitalize on the recent cannabis interest.
## What it Does
It's a sensor system composed of cameras, temperature and humidity sensors layered with smart analytics that allows the user to tell when plants in his/her greenhouse are diseased.
## How We built it
We used the Telus IoT Dev Kit to build the sensor platform along with Twillio to send emergency texts (pending installation of the IoT edge runtime as of 8 am today).
Then we used azure to do transfer learning on vggnet to identify diseased plants and identify them to the user. The model is deployed to be used with IoT edge. Moreover, there is a web app that can be used to show that the
## Challenges We Ran Into
The data-sets for greenhouse plants are in fairly short supply so we had to use an existing network to help with saliency detection. Moreover, the low light conditions in the dataset were in direct contrast (pun intended) to the PlantVillage dataset used to train for diseased plants. As a result, we had to implement a few image preprocessing methods, including something that's been used for plant health detection in the past: eulerian magnification.
## Accomplishments that We're Proud of
Training a pytorch model at a hackathon and sending sensor data from the STM Nucleo board to Azure IoT Hub and Twilio SMS.
## What We Learned
When your model doesn't do what you want it to, hyperparameter tuning shouldn't always be the go to option. There might be (in this case, was) some intrinsic aspect of the model that needed to be looked over.
## What's next for Intelligent Agriculture Analytics with IoT Edge | losing |
## Inspiration
We live in increasingly polarizing times. A poll published by CNN in 2017 showed that around two thirds of Democrats and one half of Republicans have either a few, or no friends in the opposite party. Doozy seeks to improve political discourse by allowing for conversation surrounding differences while showcasing what makes us similar.
## What it does
Doozy is a social networking platform that considers someone's personal interests, as well as a few viewpoints on current issues in today's political sphere. Doozy then matches individuals on the site to chat that have shared interests, but some kind of disagreement in terms of political viewpoint. This way, Doozy users are united by their similarities, while still encouraging room for healthy, and productive, disagreement.
## How we built it
The front end of Doozy was developed using Angular web framework. Database management was done in Python and mySQL. Algorithm and data analytics were done in Python and Standard Library (Google Sheets API Communications).
## Challenges we ran into
Combining all the different frameworks/languages/softwares that we used.
## Accomplishments that we're proud of
Unifying three very different pieces of software.
## What we learned
How to create a full web app, including front end, back end, and data analysis.
## What's next for Doozy
Clean up | ## What it does
Uses machine learning sentiment analysis algorithms to determine the positive or negative characteristics of a comment or tweet from social media. This was use in large numbers to generate a meaningful average score for the popularity of any arbitrary search query.
## How we built it
Python was a core part of our framework, as it was used to intelligently scrap multiple social media sites and was used to calculate the sentiment score of comments that had keywords in them. Flask was also used to serve the data to a easily accessible and usable web application.
## Challenges we ran into
The main challenge we faced was that many APIs were changed or had outdated documentation, requiring us to read through their source code and come up with more creative solutions. We also initially tried to learn react.js, even though none of us had ever done front-end web development before, which turned out to be a daunting task in such a short amount of time.
## Accomplishments that we're proud of
We're very proud of the connections we made and creating an application on time!
## What's next for GlobalPublicOpinion
We hope to integrate more social media platforms, and run a statistical analysis to prevent potential bias. | ## Inspiration
We wanted to allow financial investors and people of political backgrounds to save valuable time reading financial and political articles by showing them what truly matters in the article, while highlighting the author's personal sentimental/political biases.
We also wanted to promote objectivity and news literacy in the general public by making them aware of syntax and vocabulary manipulation. We hope that others are inspired to be more critical of wording and truly see the real news behind the sentiment -- especially considering today's current events.
## What it does
Using Indico's machine learning textual analysis API, we created a Google Chrome extension and web application that allows users to **analyze financial/news articles for political bias, sentiment, positivity, and significant keywords.** Based on a short glance on our visualized data, users can immediately gauge if the article is worth further reading in their valuable time based on their own views.
The Google Chrome extension allows users to analyze their articles in real-time, with a single button press, popping up a minimalistic window with visualized data. The web application allows users to more thoroughly analyze their articles, adding highlights to keywords in the article on top of the previous functions so users can get to reading the most important parts.
Though there is a possibilitiy of opening this to the general public, we see tremendous opportunity in the financial and political sector in optimizing time and wording.
## How we built it
We used Indico's machine learning textual analysis API, React, NodeJS, JavaScript, MongoDB, HTML5, and CSS3 to create the Google Chrome Extension, web application, back-end server, and database.
## Challenges we ran into
Surprisingly, one of the more challenging parts was implementing a performant Chrome extension. Design patterns we knew had to be put aside to follow a specific one, to which we gradually aligned with. It was overall a good experience using Google's APIs.
## Accomplishments that we're proud of
We are especially proud of being able to launch a minimalist Google Chrome Extension in tandem with a web application, allowing users to either analyze news articles at their leisure, or in a more professional degree. We reached more than several of our stretch goals, and couldn't have done it without the amazing team dynamic we had.
## What we learned
Trusting your teammates to tackle goals they never did before, understanding compromise, and putting the team ahead of personal views was what made this Hackathon one of the most memorable for everyone. Emotional intelligence played just as an important a role as technical intelligence, and we learned all the better how rewarding and exciting it can be when everyone's rowing in the same direction.
## What's next for Need 2 Know
We would like to consider what we have now as a proof of concept. There is so much growing potential, and we hope to further work together in making a more professional product capable of automatically parsing entire sites, detecting new articles in real-time, working with big data to visualize news sites differences/biases, topic-centric analysis, and more. Working on this product has been a real eye-opener, and we're excited for the future. | partial |
## NOTE:
Received 2nd Place prize as well for Top Data Visualization from Qualtrics Inc. and two honourable mentions for best use of history in a hack and user interface.
## What it does
Finalytics is an all in one dashboard for financial portfolio visualization, analytics, and optimization. The platform makes it easy for anybody to see quantifiable data and how it impacts their investment decisions.
## How we built it
We built up Finalytics with a Python / Flask backend and React in the frontend. We were able to set up the visualizations by querying from multiple finance related APIs such as Yahoo Finance, NY Times, and Alladin API by BlackRock. The data and calculations given here were then put up with display on the dashboard for easy access and use in future insights.
## Challenges we ran into
This was our first time using React. As such, we found it difficult to get comfortable with the framework. Also a multitude of issues dealing with Flask and our server. However, we made big progress by talking to the Hacker Gurus and Sponsors who helped us solve our issues.
## Accomplishments that we're proud of
We built something cool that was finance-related and allowed users to view historical financial data in a new light (e.g. expected return on investment).
## What we learned
When you graph a financial portfolio - it looks very cool. Also very fun to see how investing in a certain set of stocks in the year 2004 and the risk and return factor that was at play from the numbers provided by our API calls.
## What's next for Finalytics
* Develop a more intuitive interface for the information we have integrated into our dashboard.
* Optimize the dashboard so that it may load faster.
* Integrate more of the Alladin API for financial calculations. | ## Inspiration
Data analytics can be **extremely** time-consuming. We strove to create a tool utilizing modern AI technology to generate analysis such as trend recognition on user-uploaded datasets.The inspiration behind our product stemmed from the growing complexity and volume of data in today's digital age. As businesses and organizations grapple with increasingly massive datasets, the need for efficient, accurate, and rapid data analysis became evident. We even saw this within one of our sponsor's work, CapitalOne, in which they have volumes of financial transaction data, which is very difficult to manually, or even programmatically parse.
We recognized the frustration many professionals faced when dealing with cumbersome manual data analysis processes. By combining **advanced machine learning algorithms** with **user-friendly design**, we aimed to empower users from various domains to effortlessly extract valuable insights from their data.
## What it does
On our website, a user can upload their data, generally in the form of a .csv file, which will then be sent to our backend processes. These backend processes utilize Docker and MLBot to train a LLM which performs the proper data analyses.
## How we built it
Front-end was very simple. We created the platform using Next.js and React.js and hosted on Vercel.
The back-end was created using Python, in which we employed use of technologies such as Docker and MLBot to perform data analyses as well as return charts, which were then processed on the front-end using ApexCharts.js.
## Challenges we ran into
* It was some of our first times working in live time with multiple people on the same project. This advanced our understand of how Git's features worked.
* There was difficulty getting the Docker server to be publicly available to our front-end, since we had our server locally hosted on the back-end.
* Even once it was publicly available, it was difficult to figure out how to actually connect it to the front-end.
## Accomplishments that we're proud of
* We were able to create a full-fledged, functional product within the allotted time we were given.
* We utilized our knowledge of how APIs worked to incorporate multiple of them into our project.
* We worked positively as a team even though we had not met each other before.
## What we learned
* Learning how to incorporate multiple APIs into one product with Next.
* Learned a new tech-stack
* Learned how to work simultaneously on the same product with multiple people.
## What's next for DataDaddy
### Short Term
* Add a more diverse applicability to different types of datasets and statistical analyses.
* Add more compatibility with SQL/NoSQL commands from Natural Language.
* Attend more hackathons :)
### Long Term
* Minimize the amount of work workers need to do for their data analyses, almost creating a pipeline from data to results.
* Have the product be able to interpret what type of data it has (e.g. financial, physical, etc.) to perform the most appropriate analyses. | ## Inspiration
We wanted to bring financial literacy into being a part of your everyday life while also bringing in futuristic applications such as augmented reality to really motivate people into learning about finance and business every day. We were looking at a fintech solution that didn't look towards enabling financial information to only bankers or the investment community but also to the young and curious who can learn in an interesting way based on the products they use everyday.
## What it does
Our mobile app looks at company logos, identifies the company and grabs the financial information, recent company news and company financial statements of the company and displays the data in an Augmented Reality dashboard. Furthermore, we allow speech recognition to better help those unfamiliar with financial jargon to better save and invest.
## How we built it
Built using wikitude SDK that can handle Augmented Reality for mobile applications and uses a mix of Financial data apis with Highcharts and other charting/data visualization libraries for the dashboard.
## Challenges we ran into
Augmented reality is very hard, especially when combined with image recognition. There were many workarounds and long delays spent debugging near-unknown issues. Moreover, we were building using Android, something that none of us had prior experience with which made it harder.
## Accomplishments that we're proud of
Circumventing challenges as a professional team and maintaining a strong team atmosphere no matter what to bring something that we believe is truly cool and fun-to-use.
## What we learned
Lots of things about Augmented Reality, graphics and Android mobile app development.
## What's next for ARnance
Potential to build in more charts, financials and better speech/chatbot abilities into out application. There is also a direction to be more interactive with using hands to play around with our dashboard once we figure that part out. | partial |
## Inspiration
Our good friend's uncle was involved in a nearly-fatal injury. This led to him becoming deaf-blind at a very young age, without many ways to communicate with others. To help people like our friend's uncle, we decided to create HapticSpeak, a communication tool that transcends traditional barriers. As we have witnessed the challenges faced by deaf-blind individuals first hand, we were determined to bring help to these people.
## What it does
Our project HapticSpeak can take a users voice, and then converts the voice to text. The text is then converted to morse code. At this point, the morse code is sent to an arduino using the bluetooth module, where the arduino will decode the morse code into it's haptic feedback equivalents, allowing for the deafblind indivudals to understand what the user's said.
## How we built it
## Challenges we ran into
## Accomplishments that we're proud of
## What we learned
## What's next for HapticSpeak | ## Inspiration
As roommates, we found that keeping track of our weekly chore schedule and house expenses was a tedious process, more tedious than we initially figured.
Though we created a Google Doc to share among us to keep the weekly rotation in line with everyone, manually updating this became hectic and cumbersome--some of us rotated the chores clockwise, others in a zig-zag.
Collecting debts for small purchases for the house split between four other roommates was another pain point we wanted to address. We decided if we were to build technology to automate it, it must be accessible by all of us as we do not share a phone OS in common (half of us are on iPhone, the other half on Android).
## What it does
**Chores:**
Abode automatically assigns a weekly chore rotation and keeps track of expenses within a house. Only one person needs to be a part of the app for it to work--the others simply receive a text message detailing their chores for the week and reply “done” when they are finished.
If they do not finish by close to the deadline, they’ll receive another text reminding them to do their chores.
**Expenses:**
Expenses can be added and each amount owed is automatically calculated and transactions are automatically expensed to each roommates credit card using the Stripe API.
## How we built it
We started by defining user stories and simple user flow diagrams. We then designed the database where we were able to structure our user models. Mock designs were created for the iOS application and was implemented in two separate components (dashboard and the onboarding process). The front and back-end were completed separately where endpoints were defined clearly to allow for a seamless integration process thanks to Standard Library.
## Challenges we ran into
One of the significant challenges that the team faced was when the back-end database experienced technical difficulties at the tail end of the hackathon. This slowed down our ability to integrate our iOS app with our API. However, the team fought back while facing adversity and came out on top.
## Accomplishments that we're proud of
**Back-end:**
Using Standard Library we developed a comprehensive back-end for our iOS app consisting of 13 end-points, along with being able to interface via text messages using Twilio for users that do not necessarily want to download the app.
**Design:**
The team is particularly proud of the design that the application is based on. We decided to choose a relatively simplistic and modern approach through the use of a simple washed out colour palette. The team was inspired by material designs that are commonly found in many modern applications. It was imperative that the designs for each screen were consistent to ensure a seamless user experience and as a result a mock-up of design components was created prior to beginning to the project.
**Use case:**
Not only that, but our app has a real use case for us, and we look forward to iterating on our project for our own use and a potential future release.
## What we learned
This was the first time any of us had gone into a hackathon with no initial idea. There was a lot of startup-cost when fleshing out our design, and as a result a lot of back and forth between our front and back-end members. This showed us the value of good team communication as well as how valuable documentation is -- before going straight into the code.
## What's next for Abode
Abode was set out to be a solution to the gripes that we encountered on a daily basis.
Currently, we only support the core functionality - it will require some refactoring and abstractions so that we can make it extensible. We also only did manual testing of our API, so some automated test suites and unit tests are on the horizon. | ## Inspiration
Helping people who are visually and/or hearing impaired to have better and safer interactions.
## What it does
The sensor beeps when the user comes too close to an object or too close to a hot beverage/food.
The sign language recognition system translates sign language from a hearing impaired individual to english for a caregiver.
The glasses capture pictures of surroundings and convert them into speech for a visually imapired user.
## How we built it
We used Microsoft Azure's vision API,Open CV,Scikit Learn, Numpy, Django + REST Framework, to build the technology.
## Challenges we ran into
Making sure the computer recognizes the different signs.
## Accomplishments that we're proud of
Making a glove with a sensor that helps user navigate their path, recognizing sign language, and converting images of surroundings to speech.
## What we learned
Different technologies such as Azure, OpenCV
## What's next for Spectrum Vision
Hoping to gain more funding to increase the scale of the project. | partial |
## Inspiration
Imagine: A major earthquake hits. Thousands call 911 simultaneously. In the call center, a handful of operators face an impossible task. Every line is ringing. Every second counts. There aren't enough people to answer every call.
This isn't just hypothetical. It's a real risk in today's emergency services. A startling **82% of emergency call centers are understaffed**, pushed to their limits by non-stop demands. During crises, when seconds mean lives, staffing shortages threaten our ability to mitigate emergencies.
## What it does
DispatchAI reimagines emergency response with an empathetic AI-powered system. It leverages advanced technologies to enhance the 911 call experience, providing intelligent, emotion-aware assistance to both callers and dispatchers.
Emergency calls are aggregated onto a single platform, and filtered based on severity. Critical details such as location, time of emergency, and caller's emotions are collected from the live call. These details are leveraged to recommend actions, such as dispatching an ambulance to a scene.
Our **human-in-the-loop-system** enforces control of human operators is always put at the forefront. Dispatchers make the final say on all recommended actions, ensuring that no AI system stands alone.
## How we built it
We developed a comprehensive systems architecture design to visualize the communication flow across different softwares.

We developed DispatchAI using a comprehensive tech stack:
### Frontend:
* Next.js with React for a responsive and dynamic user interface
* TailwindCSS and Shadcn for efficient, customizable styling
* Framer Motion for smooth animations
* Leaflet for interactive maps
### Backend:
* Python for server-side logic
* Twilio for handling calls
* Hume and Hume's EVI for emotion detection and understanding
* Retell for implementing a voice agent
* Google Maps geocoding API and Street View for location services
* Custom-finetuned Mistral model using our proprietary 911 call dataset
* Intel Dev Cloud for model fine-tuning and improved inference
## Challenges we ran into
* Curated a diverse 911 call dataset
* Integrating multiple APIs and services seamlessly
* Fine-tuning the Mistral model to understand and respond appropriately to emergency situations
* Balancing empathy and efficiency in AI responses
## Accomplishments that we're proud of
* Successfully fine-tuned Mistral model for emergency response scenarios
* Developed a custom 911 call dataset for training
* Integrated emotion detection to provide more empathetic responses
## Intel Dev Cloud Hackathon Submission
### Use of Intel Hardware
We fully utilized the Intel Tiber Developer Cloud for our project development and demonstration:
* Leveraged IDC Jupyter Notebooks throughout the development process
* Conducted a live demonstration to the judges directly on the Intel Developer Cloud platform
### Intel AI Tools/Libraries
We extensively integrated Intel's AI tools, particularly IPEX, to optimize our project:
* Utilized Intel® Extension for PyTorch (IPEX) for model optimization
* Achieved a remarkable reduction in inference time from 2 minutes 53 seconds to less than 10 seconds
* This represents a 80% decrease in processing time, showcasing the power of Intel's AI tools
### Innovation
Our project breaks new ground in emergency response technology:
* Developed the first empathetic, AI-powered dispatcher agent
* Designed to support first responders during resource-constrained situations
* Introduces a novel approach to handling emergency calls with AI assistance
### Technical Complexity
* Implemented a fine-tuned Mistral LLM for specialized emergency response with Intel Dev Cloud
* Created a complex backend system integrating Twilio, Hume, Retell, and OpenAI
* Developed real-time call processing capabilities
* Built an interactive operator dashboard for data summarization and oversight
### Design and User Experience
Our design focuses on operational efficiency and user-friendliness:
* Crafted a clean, intuitive UI tailored for experienced operators
* Prioritized comprehensive data visibility for quick decision-making
* Enabled immediate response capabilities for critical situations
* Interactive Operator Map
### Impact
DispatchAI addresses a critical need in emergency services:
* Targets the 82% of understaffed call centers
* Aims to reduce wait times in critical situations (e.g., Oakland's 1+ minute 911 wait times)
* Potential to save lives by ensuring every emergency call is answered promptly
### Bonus Points
* Open-sourced our fine-tuned LLM on HuggingFace with a complete model card
(<https://huggingface.co/spikecodes/ai-911-operator>)
+ And published the training dataset: <https://huggingface.co/datasets/spikecodes/911-call-transcripts>
* Submitted to the Powered By Intel LLM leaderboard (<https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard>)
* Promoted the project on Twitter (X) using #HackwithIntel
(<https://x.com/spikecodes/status/1804826856354725941>)
## What we learned
* How to integrate multiple technologies to create a cohesive, functional system
* The potential of AI to augment and improve critical public services
## What's next for Dispatch AI
* Expand the training dataset with more diverse emergency scenarios
* Collaborate with local emergency services for real-world testing and feedback
* Explore future integration | ## Inspiration
Prolonged covid restrictions have caused immense damage to the economy and local markets alike. Shifts in this economic landscape have led to many individuals seeking alternate sources of income to account for the losses imparted by lack of work or general opportunity. One major sector that has seen a boom, despite local market downturns, is investment in the stock market. While stock market trends at first glance, seem to be logical, and fluid, they're in fact the opposite. Beat earning expectation? New products on the market? *It doesn't matter!*, because at the end of the day, a stock's value is inflated by speculation and **hype**. Many see the allure of rapidly increasing ticker charts, booming social media trends, and hear talk of town saying how someone made millions in a matter of a day *cough* **GameStop** *cough* , but more often then not, individual investors lose money when market trends spiral. It is *nearly* impossible to time the market. Our team sees the challenges and wanted to create a platform which can account for social media trends which may be indicative of early market changes so that small time investors can make smart decisions ahead of the curve.
## What it does
McTavish St. Bets is a platform that aims to help small time investors gain insight on when to buy, sell, or hold a particular stock on the DOW 30 index. The platform uses the recent history of stock data along with tweets in the same time period in order to estimate the future value of the stock. We assume there is a correlation between tweet sentiment towards a company, and it's future evaluation.
## How we built it
The platform was build using a client-server architcture and is hosted on a remote computer made available to the team. The front-end was developed using react.js and bootstrap for quick and efficient styling, while the backend was written in python with flask. The dataset was constructed by the team using a mix of tweets and article headers. The public Twitter API was used to scrape tweets according to popularity and were ranked against one another using an engagement scoring function. Tweets were processed using a natural language processing module with BERT embeddings which was trained for sentiment analysis. Time series prediction was accomplished through the use of a neural stochastic differential equation which incorporated text information as well. In order to incorporate this text data, the latent representations were combined based on the aforementioned scoring function. This representation is then fed directly to the network for each timepoint in the series estimation in an attempt to guide model predictions.
## Challenges we ran into
Obtaining data to train the neural SDE proved difficult. The free Twitter API only provides high engagement tweets for the last seven days. Obtaining older tweets requires an enterprise account costing thousands of dollars per month. Unfortunately, we didn’t feel that we had the data to train an end-to-end model to learn a single representation for each day’s tweets. Instead, we use a weighted average tweet representation, weighing each tweet by its importance computed as a function of its retweets and likes. This lack of data extends to the validation side too, with us only able to validate our model’s buy/sell/hold prediction on this Friday's stock price.
Finally, without more historical data, we can only model the characteristics of the market this week, which has been fairly uncharacteristic of normal market conditions. Adding additional data for the trajectory modeling would have been invaluable.
## Accomplishments that we're proud of
* We used several API to put together a dataset, trained a model, and deployed it within a web application.
* We put together several animations introduced in the latest CSS revision.
* We commissioned McGill-themed banner in keeping with the /r/wallstreetbets culture. Credit to Jillian Cardinell for the help!
* Some jank nlp
## What we learned
Learned to use several new APIs, including Twitter and Web Scrapers.
## What's next for McTavish St. Bets
Obtaining much more historical data by building up a dataset over several months (using Twitters 7-day API). We would have also liked to scale the framework to be reinforcement based which is data hungry. | ## Inspiration
Guardian Angel was born from the need for reliable emergency assistance in an unpredictable world. Our experiences with the elderly, such as our grandparents, who may fall when we’re not around, and the challenges we may face in vulnerable situations motivated us to create a tool that automatically reaches out for help when it’s needed most. We aimed to empower individuals to feel safe and secure, knowing that assistance is just a call away, even in their most vulnerable moments.
## What it does
Core to Guardian Angel, our life-saving Emergency Reporter AI speech app, is an LLM and text-to-speech pipeline that provides real-time, situation-critical responses to 911 dispatchers. The app automatically detects distress signals—such as falls or other emergencies—and contacts dispatch services on behalf of the user, relaying essential information like patient biometric data, medical history, current state, and location. By integrating these features, Guardian Angel enhances efficiency and improves success in time-sensitive situations where rapid, accurate responses are crucial.
## How we built it
We developed Guardian Angel using React Native with Expo, leveraging Python and TypeScript for enhanced code quality. The backend is powered by FastAPI, allowing for efficient data handling. We integrated AI technologies, including Google Gemini for voice transcription and Deepgram for audio processing, which enhances our app’s ability to communicate effectively with dispatch services.
## Challenges we ran into
Our team faced several challenges during development, including difficulties with database integration and frontend design. Many team members were new to React Native, leading to styling and compatibility issues. Additionally, figuring out how to implement functions in the API for text-to-speech and speech-to-text during phone calls required significant troubleshooting.
## Accomplishments that we're proud of
We are proud of several milestones achieved during this project. First, we successfully integrated a unique aesthetic into our UI by incorporating hand-drawn elements, which sets our app apart and creates a friendly, approachable user experience. Additionally, we reached a significant milestone in audio processing by effectively transcribing audio input using the Gemini model, allowing us to capture user commands accurately, and converting the transcribed text back to voice with Deepgram for seamless communication with dispatch. We’re also excited to share that our members have only built websites, making the experience of crafting an app and witnessing the fruits of our labor even more rewarding. It’s been exciting to acquire and apply new tools throughout this project, diving into various aspects of transforming our idea into a scalable application—from designing and learning UI/UX to implementing the React Native framework, emulating iOS and Android devices for testing compatibility, and establishing communication between the frontend and backend/database.
## What we learned
Through this hackathon, our team learned the importance of effective collaboration, utilizing a “divide and conquer” approach while keeping each other updated on our progress. We gained hands-on experience in mobile app development, transitioning from our previous focus on web development, and explored new tools and technologies essential for creating a scalable application.
## What's next for Guardian Angel
Looking ahead, we plan to enhance Guardian Angel by integrating features such as smartwatch compatibility for monitoring vital signs like heart rate and improving fall detection accuracy. We aim to refine our GPS location services for better tracking and continue optimizing our AI speech models for enhanced performance. Additionally, we’re exploring the potential for spatial awareness and microphone access to record surroundings during emergencies, further improving our response capabilities. | winning |
## Inspiration
We wanted to make a simple product that sharpens blurry images without a lot of code! This could be used as a preprocessing step for image recognition or a variety of other image processing tasks. It can also be used as a standalone product to enhance old images.
## What it does
Our product takes blurry images and makes them more readable. It also improves IBM Watson's visual recognition functionality. See our powerpoint for more information!
## How we built it
We used python3 and the IBM Watson library.
## Challenges we ran into
Processing images takes a lot of time!
## Accomplishments that we're proud of
Our algorithm improves Watson's capabilities by 10% or more!
## What we learned
Sometimes, simple is better :)
## What's next for Pixelator
We could incorporate our product into an optical character recognition system, or try to incorporate our system as a preprocessing step in a pipeline involving e.g. convolutional neural nets to get even greater accuracy with the cost of higher latency. | ## Inspiration
I like looking at things. I do not enjoy bad quality videos . I do not enjoy waiting. My CPU is a lazy fool. He just lays there like a drunkard on new years eve. My poor router has a heart attack every other day so I can stream the latest Kylie Jenner video blog post, or has the kids these days call it, a 'vlog' post.
CPU isn't being effectively leveraged to improve video quality. Deep learning methods are in their own world, concerned more with accuracy than applications. We decided to develop a machine learning application to enhance resolution while developing our models in such a way that they can effective run without 10,000 GPUs.
## What it does
We reduce your streaming bill. We let you stream Kylie's vlog in high definition. We connect first world medical resources to developing nations. We make convert an unrecognizeable figure in a cop's body cam to a human being. We improve video resolution.
## How I built it
Wow. So lots of stuff.
Web scraping youtube videos for datasets of 144, 240, 360, 480 pixels. Error catching, thread timeouts, yada, yada. Data is the most import part of machine learning, and no one cares in the slightest. So I'll move on.
## ML stuff now. Where the challenges begin
We tried research papers. Super Resolution Generative Adversarial Model [link](https://arxiv.org/abs/1609.04802). SRGAN with an attention layer [link](https://arxiv.org/pdf/1812.04821.pdf). These were so bad. The models were to large to hold in our laptop, much less in real time. The model's weights alone consisted of over 16GB. And yeah, they get pretty good accuracy. That's the result of training a million residual layers (actually *only* 80 layers) for months on GPU clusters. We did not have the time or resources to build anything similar to these papers. We did not follow onward with this path.
We instead looked to our own experience. Our team had previously analyzed the connection between image recognition and natural language processing and their shared relationship to high dimensional spaces [see here](https://arxiv.org/abs/1809.05286). We took these learnings and built a model that minimized the root mean squared error as it upscaled from 240 to 480 px.
However, we quickly hit a wall, as this pixel based loss consistently left the upscaled output with blurry edges. In order to address these edges, we used our model as the Generator in a Generative Adversarial Network. However, our generator was too powerful, and the discriminator was lost.
We decided then to leverage the work of the researchers before us in order to build this application for the people. We loaded a pretrained VGG network and leveraged its image embeddings as preprocessing for our discriminator. Leveraging this pretrained model, we were able to effectively iron out the blurry edges while still minimizing mean squared error.
Now model built. We then worked at 4 AM to build an application that can convert videos into high resolution.
## Accomplishments that I'm proud of
Building it good.
## What I learned
Balanced approaches and leveraging past learning
## What's next for Crystallize
Real time stream-enhance app. | ## Inspiration
We wanted to build a technical app that is actually useful. Scott Forestall's talk at the opening ceremony really spoke to each of us, and we decided then to create something that would not only show off our technological skill but also actually be useful. Going to the doctor is inconvenient and not usually immediate, and a lot of times it ends up being a false alarm. We wanted to remove this inefficiency to make everyone's lives easier and make healthy living more convenient. We did a lot of research on health-related data sets and found a lot of data on different skin diseases. This made it very easy for us to chose to build a model using this data that would allow users to self diagnose skin problems.
## What it does
Our ML model has been trained on hundreds of samples of diseased skin to be able to identify among a wide variety of malignant and benign skin diseases. We have a mobile app that lets you take a picture of a patch of skin that concerns you and runs it through our model and tells you what our model classified your picture as. Finally, the picture also gets sent to a doctor with our model results and allows the doctor to override that decision. This new classification is then rerun through our model to reinforce the correct outputs and penalize wrong outputs, ie. adding a reinforcement learning component to our model as well.
## How we built it
We built the ML model in IBM Watson from public skin disease data from ISIC(International Skin Imaging Collaboration). We have a platform independent mobile app built in React Native using Expo that interacts with our ML Model through IBM Watson's API. Additionally, we store all of our data in Google Firebase's cloud where doctors will have access to them to correct the model's output if needed.
## Challenges we ran into
Watson had a lot of limitations in terms of data loading and training, so it had to be done in extremely small batches, and it prevented us from utilizing all the data we had available. Additionally, all of us were new to React Native, so there was a steep learning curve in implementing our mobile app.
## Accomplishments that we're proud of
Each of us learned a new skill at this hackathon, which is the most important thing for us to take away from any event like this. Additionally, we came in wanting to implement an ML model, and we implemented one that is far more complex than we initially expected by using Watson.
## What we learned
Web frameworks are extremely complex with very similar frameworks being unable to talk to each other. Additionally, while REST APIs are extremely convenient and platform independent, they can be much harder to use than platform-specific SDKs.
## What's next for AEye
Our product is really a proof of concept right now. If possible, we would like to polish both the mobile and web interfaces and come up with a complete product for the general user. Additionally, as more users adopt our platform, our model will get more and more accurate through our reinforcement learning framework.
See a follow-up interview about the project/hackathon here! <https://blog.codingitforward.com/aeye-an-ai-model-to-detect-skin-diseases-252747c09679> | winning |
## Inspiration
We wanted to build something fun that would also challenge our ability to learn new material. We decided to take the game of Minesweeper, which has been around forever, and make it more accessible, more exciting, and more challenging.
## What it does
Much of the gameplay is the same: you right-click to flag a mine and left-click to reveal a square, and if you step on a mine, you lose. The crucial difference is that now you get to play with friends at the same time. Playable in most web browsers, the game lets you compete against your friends to see who can sweep mines the fastest.
## How we built it
A lot of trial and error getting the front end to stay in sync with our C++ Crow back end.
## Challenges we ran into
Documentation for the Crow server framework, or rather the lack of it. We repeatedly ran into situations where we needed the server to do something but couldn't find any help from the documentation, StackOverflow, or the like. For example, we wanted the server to broadcast a signal to multiple clients when a certain condition was met. Unable to find any help, we decided to have the clients poll the server instead.
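For illustration, the polling workaround boils down to a loop like the one below; the endpoint and payload shape are assumptions rather than the project's actual code:

```python
# Hedged sketch of client-side polling in place of a server broadcast.
import time
import requests

def handle_update(state):
    print("board changed:", state)  # e.g. redraw the shared board

def poll_for_updates(server_url, interval_s=0.5):
    last_seen = 0
    while True:
        resp = requests.get(f"{server_url}/state", params={"since": last_seen})
        state = resp.json()
        if state["version"] > last_seen:   # something changed on the server
            last_seen = state["version"]
            handle_update(state)
        time.sleep(interval_s)             # poll instead of waiting for a push
```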
## Accomplishments that we're proud of
Getting the game to work is something we're proud of. Communicating between the server and client while accounting for synchronization was much harder than we expected.
## What we learned
Debugging in C++ is really hard.
## What's next for MinesweeperBR
Abilities! Players will be able to "cast effects" onto opponents to slow them down. hahaha.... | ## Minesweeper Battle: Inspiration
Minesweeper is a classic, nostalgic game that has stood the test of time. But while it may be fun, it can be boring alone. In the single-player game, the grid is fixed at the very beginning, and finding all the mines is a slow, systematic process of deduction. What if we could modify this time-honored game to be not only multiplayer, but also dynamic and competitive, combining logical deduction with speed, accuracy, and a bit of luck? Let us introduce Minesweeper Battle, an online version of Minesweeper where players compete against each other in real time. We preserve all the features of the original game while adding new ones to make our online version more competitive.
## Minesweeper Battle: How to Play
Our game preserves all the features of classic Minesweeper: the first click opens a large revealed patch, correctly flagging all landmines wins the game, and revealing a mine ends it with a loss.
However, what’s special about our game is of course its online competitive multiplayer features: every user is playing against at least one other player in real time. Additionally, when a player correctly flags a mine, the player will send a “disruption” to a random opponent. That is, a random-sized patch on the opponent’s grid will be reset; opened blocks will be masked again, flags will be taken down, and placements of landmines and the adjacency numbers will be updated too.
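As a rough, language-agnostic illustration of the disruption mechanic (the grid representation and patch size here are assumptions, not the actual Convex implementation), resetting a random patch and recomputing adjacency counts could look like:

```python
# Hedged sketch of a "disruption": hide a random patch and recompute adjacency numbers.
import random

def apply_disruption(grid, max_patch=4):
    """grid: dict (row, col) -> 'hidden' | 'flag' | int (adjacency count)."""
    rows = max(r for r, _ in grid) + 1
    cols = max(c for _, c in grid) + 1
    h, w = random.randint(2, max_patch), random.randint(2, max_patch)
    r0, c0 = random.randrange(rows), random.randrange(cols)
    for r in range(r0, min(r0 + h, rows)):
        for c in range(c0, min(c0 + w, cols)):
            grid[(r, c)] = "hidden"   # re-mask opened cells and take down flags
    return grid

def adjacency(cell, mines):
    """Number of mines adjacent to a cell, recomputed after mines are moved."""
    r, c = cell
    return sum((r + dr, c + dc) in mines
               for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0))
```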
On the main game page, the user sees their own game grid as well as their opponents' grids. When an opponent finishes their game, their grid changes color depending on whether it was a win or a loss. When a player finishes their game, a leaderboard is shown with the rankings of all the users in the game.
To identify the users in a game, each player is prompted to enter a username on first entry into the application page; they’ll have the option to either create a game or join an existing game, and they can immediately start playing. If a player loses connection to the game, their game state is always persisted on the server, so they can join back into the game at any time.
## How we built it
The application was built using React (in Typescript) and Convex, with Convex doing a lot of the heavy lifting with synchronization and real-time updates. We created all of the UI and game design from scratch, first building a playable single-player Minesweeper game, before adding a database and real-time synchronization utilizing Convex.
Throughout gameplay, mutation requests are periodically sent to the server to keep the server-side game state up to date, so that every player can see their opponents' game states in real time. This synchronized game state is used to display other information such as rankings, whether opponents have won or lost, etc.
## Challenges
One of the first challenges we faced in this project was with the game design; we wanted a game that was interesting to implement, but also interesting to play. The interactions between players needed to be well thought out, and we decided upon sending “disruptions” to opponent grids when a user correctly flags a mine, inspired by Tetris multiplayer, where users can send “garbage lines” to their opponents upon a line clear.
Further, as a part of the design of the game, we had lengthy discussions about what aspects of the game logic should be implemented in the frontend, and which aspects should be implemented in the backend. Implementing the game logic in the backend would provide more fine-grained and guaranteed synchronization, but implementing the game logic in the frontend would reduce latency and allow for more responsive gameplay. In the end, due to the WiFi connectivity issues in the venue, the latency benefits outweighed any synchronization benefits, and we opted to implement the bulk of the game logic in the frontend, with the backend simply managing the persistence of game and grid states.
The process of learning Convex and the workflows involved was also challenging, but paid off quite well in the end—it’s a new way of approaching full stack development, and we found Convex to be extremely useful in simplifying and speeding up the development process after we learned about all of its features, especially with the real-time synchronization of game states.
Another challenge we faced was the presence of race conditions in the queries and mutations; combined with the React lifecycle, we had several race condition bugs where the order of query resolution caused certain parts of the game to be out of date, causing weird behavior. After painful debugging and careful thought, we eventually managed to resolve all of the race conditions we faced during testing.
## What we learned
Everyone on the team was new to Convex, so we learned how to use queries, mutations, tables, and many other Convex features to create a robust backend for our game. To maximize the learning experience of the hackathon, we assigned members of the team to parts of the project that they were NOT familiar with. For example, one member of the team had no experience with React or Typescript, so naturally we assigned him to the frontend, and he was able to learn a lot about web development.
Aside from tools, we generally learned a lot about what it is like to create real-time applications and synchronize the experience between multiple players. In an effort to reduce latency, we moved most of the game logic to the frontend, and so we needed to make sure that despite potential differences in the game state between players, they can still have a smooth experience and stay updated on the status of other players. Many bugs that we encountered involved race conditions and we had to write the code to be resilient to such potential problems.
Finally, we learned a lot about game design, and much of the code we wrote in the later stages of the project involved adding components to make the game more fun to play and the competition between players more balanced. For example, we realized that players could abuse the disruption system by spamming the flag on a known mine to send multiple disruptions for a single mine. We therefore had to handle this edge case, and others like it, where players could try to outsmart the game.
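A minimal way to express that flag-spam edge case (the data structure is an assumption, not the project's actual code) is to remember which mines have already earned a disruption:

```python
# Hedged sketch: only the first correct flag on a given mine earns a disruption.
credited_mines = set()

def send_disruption_to_random_opponent():
    print("disruption sent")  # stand-in for the real server call

def on_flag_placed(cell, mines):
    """Return True only when a newly flagged mine should trigger a disruption."""
    if cell in mines and cell not in credited_mines:
        credited_mines.add(cell)
        send_disruption_to_random_opponent()
        return True
    return False  # wrong flag, or a mine that has already been credited
```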
## Accomplishments that we're proud of
We are proud of pushing out a fully functioning product (from design to implementation to hosting) in a limited amount of time. We are also proud that, despite each team member being assigned a part of the project they were not familiar with, all four of us were able to learn quickly and contribute a notable amount to the project.
## Futures for Minesweeper Battle
There are a lot of ways to extend this project; at the moment, we have a working prototype of the main gameplay loop and the new features required for multiplayer Minesweeper to function. In the future, the game could be enhanced with more thought into game balancing, improving the game design to increase the amount of fun and engagement in the user experience.
Additional features like lobbies and settings for Minesweeper difficulties would be interesting to add, and even incorporating AI to make bot opponents (even adaptive bots corresponding to the player’s current experience level).
The UI could also use some improvements, with more specialized graphic design and styling to make the site look sleeker; more GUI elements could be added to provide more pertinent information to users during the game play as well (ex. number of mines remaining, who you’re sending disruptions to, etc.)
The application could also be improved with the addition of security, authentication, and authorization; currently, we identify users only through a single username string, with no extra checks. This means that any user can impersonate any other user, and users cannot have the same usernames. Adding additional authentication mechanisms and actual user accounts could make the game feel a lot more official and polished. | ## Inspiration
The world is constantly chasing after smartphones with bigger screens and smaller bezels. But why wait for costly display technology, and why get rid of old phones that work just fine? We wanted to build an app to create the effect of the big screen using the power of multiple small screens.
## What it does
InfiniScreen quickly and seamlessly links multiple smartphones to play videos across all of their screens. Breathe life into old phones by turning them into a portable TV. Make an eye-popping art piece. Display a digital sign in a way that is impossible to ignore. Or gather some friends and strangers and laugh at memes together. Creative possibilities abound.
## How we built it
Forget Bluetooth, InfiniScreen seamlessly pairs nearby phones using ultrasonic communication! Once paired, devices communicate with a Heroku-powered server written in node.js, express.js, and socket.io for control and synchronization. After the device arrangement is specified and a YouTube video is chosen on the hosting phone, the server assigns each device a region of the video to play. Left/right sound channels are mapped based on each phone's location to provide true stereo sound support. Socket-emitted messages keep the devices in sync and provide play/pause functionality.
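The server itself is written in node.js, but as a language-agnostic sketch (the grid layout and crop math below are assumptions), assigning each phone a crop region and a stereo channel might look like:

```python
# Hedged sketch: split a video frame into a rows x cols grid and assign crops + audio channels.
def assign_regions(rows, cols, frame_w, frame_h):
    cell_w, cell_h = frame_w / cols, frame_h / rows
    assignments = []
    for r in range(rows):
        for c in range(cols):
            assignments.append({
                "device": r * cols + c,
                "crop": (c * cell_w, r * cell_h, cell_w, cell_h),  # x, y, width, height
                # phones on the left half play the left channel, right half the right channel
                "audio_channel": "left" if c < cols / 2 else "right",
            })
    return assignments

print(assign_regions(2, 3, 1920, 1080))
```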
## Challenges we ran into
We spent a lot of time trying to implement all functionality using the Bluetooth-based Nearby Connections API for Android, but ended up finding that pairing was slow and unreliable. The ultrasonic+socket.io based architecture we ended up using created a much more seamless experience but required a large rewrite. We also encountered many implementation challenges while creating the custom grid arrangement feature, and trying to figure out certain nuances of Android (file permissions, UI threads) cost us precious hours of sleep.
## Accomplishments that we're proud of
It works! It felt great to take on a rather ambitious project and complete it without sacrificing any major functionality. The effect is pretty cool, too—we originally thought the phones might fall out of sync too easily, but this didn't turn out to be the case. The larger combined screen area also emphasizes our stereo sound feature, creating a surprisingly captivating experience.
## What we learned
Bluetooth is a traitor. Mad respect for UI designers.
## What's next for InfiniScreen
Support for different device orientations, and improved support for unusual aspect ratios. Larger selection of video sources (Dailymotion, Vimeo, random MP4 urls, etc.). Seeking/skip controls instead of just play/pause. | losing |
## Inspiration
The transition to online schooling has made it difficult for students to work with friends, and the decrease in communication among students has left them feeling lost in this challenging time. To address this lack of emotional support, we decided to create a web app that encourages and aids the planning of group work.
## What it does
Timesync is a web app that connects you with different people so you don't have to be alone in your work. Once you sign up with your social media information, you can add a daily to-do list. Using that to-do list, Timesync searches its database for other users doing activities similar to yours and provides you with a list of potential work-buddies. The site shows other users' names and Instagram handles so that a group work call can be set up.
## How we built it
Our team divided into two groups for the project: backend and frontend.
Using Flask and SQLAlchemy, the backend team built two models for storing task and user data. We began by creating a user authentication and validation system with encrypted passwords. We also configured routes for the different pages and the necessary actions, such as updating and deleting, and wired them to the appropriate HTML templates so the backend and frontend of the web app could receive and store data properly. Once proper data input was established, we coded an algorithm that compares tasks among users to find matches.
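A condensed sketch of that backend shape is shown below; the field names and the word-overlap matching rule are guesses for illustration, not the real schema or algorithm:

```python
# Hedged sketch of the two models and the matching idea.
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from werkzeug.security import generate_password_hash

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///timesync.db"
db = SQLAlchemy(app)

class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80), nullable=False)
    password_hash = db.Column(db.String(200), nullable=False)

class Task(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    user_id = db.Column(db.Integer, db.ForeignKey("user.id"))
    description = db.Column(db.String(200), nullable=False)

def create_user(name, raw_password):
    user = User(name=name, password_hash=generate_password_hash(raw_password))
    db.session.add(user)
    db.session.commit()
    return user

def find_matches(me_id):
    """Match the current user with others whose task descriptions share words."""
    my_words = {w.lower() for t in Task.query.filter_by(user_id=me_id)
                for w in t.description.split()}
    others = Task.query.filter(Task.user_id != me_id).all()
    return {t.user_id for t in others
            if my_words & {w.lower() for w in t.description.split()}}
```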
The frontend team designed the overall visuals of Timesync using HTML and CSS, and created the forms that let the backend collect data. Multiple pages were created to guide users, including the landing, to-do list, match-finder, signup, and login pages. JavaScript and some Flask were used for user interactions, such as adding new, deletable items to the page when a form is submitted.
## Challenges we ran into
We began our development journey by struggling to understand the structure of GitHub. We were fortunate to get assistance from our lovely mentors, who taught us the general standards for using branches and merges, and the "push", "pull", and "fetch" commands for efficient collaboration.
Although most of us were familiar with Python, we were all newcomers to web development, from Flask and SQL to CSS, so forming the general structure was challenging. Each of us had to learn different material that then had to be stitched together, and every individual part had to work correctly, which meant long hours of debugging.
## Accomplishments that we're proud of
We learned and utilized GitHub, Flask, SQL, and many more new tools and libraries to build a working web app.
## What we learned
We learned how to develop a web app with a user client from the ground up.
## What's next for TimeSync
We would like to improve our frontend aesthetics and support a greater variety of input types.
We want to make people go further, together! Our application aims to create a friendly competitive environment for an entire organization. Each time an employee completes a task, points are given based on how much time it took to complete it. But, most importantly, if you cooperated with another department/team to check the task, you both get more points!
## What it does
Our app is a TODO list visible to your entire organization! Your manager assigns certain tasks to their employees, who are split by department. Honestly, we did not have time to implement most of our features, but hey, we can check off tasks live with Socket.io.
## How we built it
Our web application was built from the ground up using the Flask framework connected to a Google Cloud MySQL database. Everything updates in real time thanks to WebSockets!
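A minimal sketch of the real-time piece, assuming Flask-SocketIO on the server (the event names and payload are invented for illustration):

```python
# Hedged sketch: broadcast a task check to every connected client.
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

@socketio.on("check_task")
def handle_check(data):
    # data might look like {"task_id": 3, "checked_by": "alice"}
    emit("task_checked", data, broadcast=True)  # push the update to the whole organization

if __name__ == "__main__":
    socketio.run(app)
```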
## Challenges we ran into
Even though we got our project idea quite early, setting up the environment took a lot of time! Constant tweaking in order to future-proof the project was our downfall, as we had underestimated the time we would spend on annoying technical problems.
## Accomplishments that we're proud of
Getting it to work; none of us had prior experience with the tools we used. Our project started coming together at 4 P.M. on Saturday. Even though the demo is not as functional as we would like, a little more time in the oven will inevitably make our app great!
## What's next for LiveCheck
A proper authentication system, stats for the entire organization (with graphs), a reward system for high-scoring teams and individuals, and a graphical overhaul. | ## Inspiration
As students, we have found that there are very few high-quality materials on investing for those who are interested but don't have enough resources. Furthermore, we have found that investing and saving money can be a stressful experience. We hope to change this for those who want to save better with the help of our app, and hopefully make it fun in the process!
## What it does
Our app first asks a new client to fill out a brief questionnaire about themselves. Then, using their banking history, it generates 3 "demons", aka bad spending habits, to kill. After the client chooses a habit to work on, the app brings them to a dashboard where they can monitor their weekly progress on that task. Once the week is over, the app declares whether the client successfully beat the mission; if they did, they get rewarded with points which they can exchange for RBC Loyalty points!
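The write-up doesn't spell out how the three "demons" are derived, so purely as an illustration, one simple approach is to rank spending categories from the banking history and surface the top three:

```python
# Hedged sketch: pick the three biggest spending categories as candidate "demons".
from collections import defaultdict

def find_demons(transactions, top_n=3):
    """transactions: list of {"category": str, "amount": float} from the banking history."""
    totals = defaultdict(float)
    for t in transactions:
        totals[t["category"]] += t["amount"]
    return sorted(totals, key=totals.get, reverse=True)[:top_n]

print(find_demons([
    {"category": "coffee", "amount": 8.5},
    {"category": "rideshare", "amount": 32.0},
    {"category": "coffee", "amount": 6.0},
    {"category": "subscriptions", "amount": 45.0},
    {"category": "groceries", "amount": 60.0},
]))
```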
## How we built it
We built the frontend using React + Tailwind, using Routes to display our different pages. We used Cohere for our AI services, both for generating personalized weekly goals and for creating a more in-depth report. We used Firebase for authentication and its cloud database to keep track of users. For user and transaction data, as well as creating and managing loyalty points, we used the RBC API.
## Challenges we ran into
Piecing the APIs together was probably our most difficult challenge. Besides learning the different APIs in general, integrating the different technologies got quite tricky when we were trying to do multiple things at the same time!
Besides API integration, working without any sleep was definitely the hardest part!
## Accomplishments that we're proud of
Definitely our biggest accomplishment was working so well together as a team. Despite only meeting each other the day before, we got along extremely well and were able to come up with some great ideas and execute under a lot of pressure (and sleep deprivation!) The biggest reward from this hackathon are the new friends we've found in each other :)
## What we learned
I think each of us learned very different things: this was Homey and Alex's first hackathon, where they learned how to work under a tight time constraint (and did extremely well!). Paige learned tons about React, frontend development, and working in a team. Vassily learned a lot about his own strengths and weaknesses (surprisingly reliable at git, apparently, although he might have too much of a sweet tooth).
## What's next for Savvy Saver
Demos! After that, we'll just have to see :) | losing |
Our journey as a diverse group of students with backgrounds from Morocco, Ghana, Afghanistan, and China brought us to HackHarvard with a shared desire to make an impact on the world of healthcare. Our experiences and those of our parents, who come from various parts of the world, made us acutely aware of the gaps in healthcare systems globally. Whether it’s a lack of access to emergency care or inadequate monitoring systems, these experiences deeply influenced our decision to work on something meaningful.
We were inspired by the Patient Safety 101 workshop, which highlighted the importance of addressing patient harm and the urgent need for innovative solutions to prevent medical errors. Hearing that nearly 70% of Americans have never even heard of patient safety made it clear that this is an area where technology can—and should—make a difference. The Pittsburgh Regional Health Initiative’s commitment to patient safety technology and its emphasis on reducing preventable harm pushed us further to take up this challenge and do our part.
Thus, we decided to create AIDERS—a real-time patient monitoring system utilizing a vision model to detect emergencies like falls or choking incidents in hospitals. By promptly alerting healthcare professionals and designated contacts, we aim to enhance patient safety and bring peace of mind to patients and families alike. Through countless hours of work on the technical stack—leveraging YOLOv8 for detection, combining front-end technologies like Go, Handlebars, and JavaScript, and building our backend in Python—we built a system that not only recognizes critical incidents but also ensures rapid response.
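A stripped-down sketch of such a detection loop, assuming the Ultralytics YOLOv8 Python API; the weight file, class names, confidence threshold, and alert hook are placeholders rather than the actual AIDERS code:

```python
# Hedged sketch: run custom YOLOv8 weights on a camera feed and alert on an emergency class.
import cv2
from ultralytics import YOLO

model = YOLO("aiders_fall.pt")           # placeholder path to custom-trained weights
EMERGENCY_CLASSES = {"fall", "choking"}  # assumed class names

def alert_staff(label, conf):
    print(f"ALERT: {label} detected (confidence {conf:.2f})")  # stand-in for the real notification

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)
    for box in results[0].boxes:
        label = model.names[int(box.cls)]
        if label in EMERGENCY_CLASSES and float(box.conf) > 0.6:
            alert_staff(label, float(box.conf))
cap.release()
```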
As a team, we have learned that creating a tech-driven solution for healthcare is not just about coding or models—it’s about empathy, safety, and giving back to the people who need these services the most. Our work on AIDERS is a step toward contributing to a safer healthcare environment and engaging our tech-savvy generation in building solutions that truly matter.
We hope to expand AIDERS to support additional emergency scenarios and integrate it into more healthcare facilities. We will continue refining the model for improved accuracy and broader usability. | ## Inspiration
The inspiration for our hackathon idea stemmed from an experience one of our team members had during a recent hospital visit. They noticed the number of staff required at every entrance to ensure that patients and visitors had their masks on properly, as well as to ask COVID-19 screening questions and record their time of entry into the hospital. They thought about the potential problems and implications this might have, such as health care workers facing a higher chance of getting sick due to more frequent exposure to other individuals, as well as the resources required to complete this task.
Another thing we discussed was the scalability of this procedure and how it could apply to schools and businesses. Hiring an employee to perform these tasks may be financially unfeasible for small businesses and schools, but the social benefit these services would provide would definitely help toward the containment of COVID-19.
Our team decided to see if we could use a combination of Machine Learning, AI, Robotics, and Web development in order to automate this process and create a solution that would be financially feasible and reduce the workload on already hard-working individuals who work every day to keep us safe.
## What it does
Our stand-alone solution consists of three main elements: the hardware, the mobile app, and the software that connects everything together.
**Camera + Card Reader**
The hardware is meant to be placed at an entry point of a business or school. It automatically detects the presence of a person through an ultrasonic sensor, then adjusts the camera to center the view for a better image and takes a screenshot. The screenshot is sent in an API request to the Microsoft Azure Computer Vision Prediction API, which returns a confidence value for each tag (Mask / No Mask). Once the person is confirmed to be wearing a mask, they are prompted to scan their RFID tag. The hardware looks up the owner of the RFID ID and adds a check-in or check-out time to their profile in a cloud database (Firestore).
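As a hedged sketch of that flow, with the prediction URL, key, tag names, and Firestore layout all standing in as placeholders rather than the project's real configuration:

```python
# Hedged sketch: send a screenshot to the cloud classifier, then log an RFID scan to Firestore.
import datetime
import requests
from google.cloud import firestore  # requires Google credentials to run

PREDICTION_URL = "https://example.cognitiveservices.azure.com/classify"  # placeholder
PREDICTION_KEY = "YOUR_PREDICTION_KEY"                                    # placeholder
db = firestore.Client()

def is_wearing_mask(image_path, threshold=0.8):
    with open(image_path, "rb") as f:
        resp = requests.post(PREDICTION_URL, data=f.read(),
                             headers={"Prediction-Key": PREDICTION_KEY,
                                      "Content-Type": "application/octet-stream"})
    resp.raise_for_status()
    scores = {p["tagName"]: p["probability"] for p in resp.json()["predictions"]}  # assumed shape
    return scores.get("Mask", 0.0) >= threshold

def log_scan(rfid_id, direction="in"):
    db.collection("scans").add({"rfid": rfid_id, "direction": direction,
                                "time": datetime.datetime.utcnow()})
```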
**Mobile Application**
The mobile application is intended for the administrator/business owner who would like to manage the hardware settings and view analytics. _(We did not have enough time to complete that, unfortunately.)_ Additionally, the mobile app can perform basic contact tracing through an API request to a custom-made Autocode API, which checks the database and identifies recent potential instances of exposure between employees based on their check-in and check-out times. It then determines which employees were affected and automatically sends them an email with the dates of the potential exposure.
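At its core, that contact-tracing check is an interval-overlap test over check-in and check-out times; a small illustration (the record shape is assumed):

```python
# Hedged sketch: flag pairs of employees whose check-in/check-out intervals overlap.
def overlapping_visits(visits):
    """visits: list of {"employee": str, "check_in": datetime, "check_out": datetime}."""
    exposures = []
    for i, a in enumerate(visits):
        for b in visits[i + 1:]:
            if a["check_in"] < b["check_out"] and b["check_in"] < a["check_out"]:
                exposures.append((a["employee"], b["employee"]))
    return exposures
```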
**The software**
Throughout our application, there were many smaller pieces of software used to run our overall prototype. From the Python scripts on our Raspberry Pi that communicate with the database, to the custom API made on Autocode, there were many small pieces we had to put together for this prototype to work.
## How we built it
This was the first hackathon for all of our team members, so we had to think creatively about how to turn our idea into a reality. Because of this, we used well-documented, beginner-friendly services to create a "stack" we could manage with our limited expertise. Our team's background is mainly in robotics and hardware, so we definitely wanted to incorporate a hardware element into our project; however, we also wanted to take full advantage of this amazing opportunity at Hack The 6ix and apply the knowledge we learned in the workshops.
**The Hardware**
To build our hardware, we used a Raspberry Pi and various sensors we had on hand: an RFID reader, an ultrasonic sensor, a servo motor, and a web camera to perform the tasks mentioned in the section above. Additionally, we had access to a 3D printer and printed some basic parts to mount our electronics and assemble the device. **(Although our team has a stronger mechanical background, we spent most of our time programming haha)**
**Mobile Application**
To program our mobile app, we used Flutter, a framework developed by Google that makes it easy to rapidly prototype a mobile application supported on both Android and iOS. Because Flutter is based on the Dart language, it was easy to follow tutorials and documentation, and some members had previous experience with Flutter. We also decided to go with Firestore as our database, since there was a lot of documentation and support for using the two together.
**Software**
To put everything together, we had to use a variety of skills and get creative with how we connected our backend, given our limited experience in programming and computer science. To run the mask detector, we first used Python scripts on a Raspberry Pi to center the camera on the subject and perform very basic face detection to decide whether to take a screenshot to send to the cloud for processing. We did not want to stream the entire camera feed to the cloud, as that could be costly due to a high rate of API requests and impractical due to hardware limitations. Instead, we used lower-end local face detection to decide when a screenshot should be taken, and from there we sent it in an API request to the Microsoft Azure Computer Vision Prediction API, where we had trained a model on two classifiers (Mask and No Mask). We were very impressed with how easy it was to set up the Azure Prediction API, and it gave our team reliable, accurate, and fast mask detection.
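The lower-end local check that decides when to take a screenshot could be done with a stock OpenCV Haar cascade; a sketch of that gating step (the saved file name is arbitrary, and the upload itself happens elsewhere):

```python
# Hedged sketch: only grab a frame for upload when a face is actually in view.
import cv2

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def maybe_capture(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        cv2.imwrite("screenshot.jpg", frame)  # this saved frame is what gets sent to the cloud
        return True
    return False
```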
Since we did not have much experience with back-end development in Flutter, we decided to use a very powerful tool called Autocode, which we learned about during a workshop on Saturday. Thanks to its ease of use, we created a back-end API that our mobile app could call with a simple HTTP request; through it, our Autocode program could interact with our Firebase database to perform basic calculations and the basic contact tracing we wanted in our project. The Autocode project can be found here!
[link](https://autocode.com/src/samsonwhua81421/unmasked-api/)
## Challenges we ran into
Most of the challenges we ran into were due to our limited experience in back-end development, which left us with a lot of gaps in the functionality of our project. However, the mentors were very friendly and helpful, and they helped us connect the different parts of our project. Our creativity also aided us in tying our pieces together.
Another challenge we ran into was our hardware. Because of quarantine, many of us were at home and did not have access to lab equipment that could have been very helpful in diagnosing most of our hardware problems (multimeters, oscilloscopes, soldering irons). However, we were able to solve these problems, albeit using very precious hackathon time to do so.
## What we learned
- Hackathons are very fun, we definitely want to do more!
- Sleep is very important. :)
- Microsoft Azure Services are super easy to use
- Autocode is very useful and cool
## What's next for Unmasked
The next steps for Unmasked would be to further develop the contact tracing feature of the app, as knowing who was in the same building at the same time does not provide enough information to determine who may actually be at risk. One potential solution would be to have employees scan their IDs by location as well, enabling us to determine whether any individuals were actually near those with the virus. | ## Care Me
**Overworked nurses are linked with a 40 percent higher risk of death in patients**
Our solution automates menial tasks like serving food and water, so medical professionals can focus on the important interactions that truly require a human. It uses a robotic delivery system that acts autonomously on voice-recognized requests. One robot is added to each hospital wing, with a microphone available for patient use.
Our product is efficient, freeing up valuable life-saving time for medical professionals and patients alike and reducing wait times for everyone. It prioritizes safety, directly addressing the burnout and dangerous levels of stress and fatigue that medical professionals face. Hospitals and medical facilities will see a huge boost in productivity because of the decreased stress and the additional freed-up time.
Our product integrates multiple hardware components seamlessly through different methods of connectivity. A Raspberry Pi drives the Google Natural Language Processing libraries to analyze the user's request at the press of a button. Using radio communications, the robot is quickly notified of the request, retrieves the item, and delivers it to the user. | losing |