# Alone Together
## Inspiration
We were inspired by all the people who go about their days thinking that no one can actually relate to what they are experiencing. The Covid-19 pandemic has taken a mental toll on many of us and has kept us feeling isolated. We wanted to make an easy-to-use web app that keeps people connected and allows users to share their experiences with other users who can relate to them.
## What it does
Alone Together connects two matching people based on mental health issues they have in common. When you create an account, you are prompted with a list of the general mental health categories that most people fall under. Once your account is created, you are sent to the home screen and entered into a pool of individuals looking for someone to talk to. When Alone Together has found someone with matching mental health issues, you are connected to that person and forwarded to a chat room. In this chat room there is video chat and text chat. There is also an icebreaker question box that you can shuffle through to find a question to ask the person you are talking to.
## How we built it
Alone Together is built with a React frontend, a Golang backend (using Gorilla for WebSockets), WebRTC for video and text chat, and Google Firebase for authentication and the database. The video chat is built from scratch using WebRTC, with signaling handled by the Golang backend.
## Challenges we ran into
This is our first remote Hackathon and it is also the first ever Hackathon for one of our teammates (Alex Stathis)! Working as a team virtually was definitely a challenge that we were ready to face. We had to communicate a lot more than we normally would to make sure that we stayed consistent with our work and that there was no overlap.
As for the technical challenges, we decided to use WebRTC for our video chat feature. The documentation for WebRTC was not the easiest to understand, since it is still relatively new and obscure, which also means it is very hard to find resources on it. Despite all this, we were able to implement the video chat feature! It works; we just ran out of time to host it on a cloud server with SSL, so for now the video is only sent over localhost (no encryption). Google App Engine also doesn't allow WebSockets in standard mode and doesn't allow `go.mod` in `flex` mode, which was inconvenient, and we didn't have time to rewrite parts of our web app.
## Accomplishments that we're proud of
We are very proud of bringing our idea to life and working as a team to make this happen! WebRTC was not easy to implement, but hard work pays off.
## What we learned
We learned that whether we work virtually together or physically together we can create anything we want as long as we stay curious and collaborative!
## What's next for Alone Together
In the future, we would like to allow our users to add other users as friends. This means that, in addition to meeting new people with the same mental health issues, they could build stronger connections with people they have already talked to.
We would also like to give users the option to add moderation with AI. This would offer a more "supervised" experience, meaning that if our AI detects any dangerous change in behavior, we would provide the user with tools to help them or (with the user's authorization) give the user's phone number to the appropriate authorities so they can reach out.
# Unlocc
## Inspiration
Last year we had to go through the hassle of retrieving a physical key from a locked box in a hidden location in order to enter our AirBnB. After seeing the August locks, we thought there must be a more convenient alternative.
We thought of other situations where you would want to grant access to your locks. In many cases where you would want to only grant temporary access, such as AirBnB, escape rooms or visitors or contractors at a business, you would want the end user to sign an agreement before being granted access, so naturally we looked into the DocuSign API.
## What it does
The app has two pieces: a way for property owners to grant temporary access to their clients, and a way for the clients to access the locks.
The property owner fills out a simple form with the phone number of their client (as a way to identify them), the address of the property, the end date of their stay, and the details needed to access the August lock. Our server then generates a custom DocuSign Click form and waits for the client.
When the client accesses the server, they first have to agree to the DocuSign form, which is mostly our agreement but includes details about the time and location of the access granted, plus a section for the property owner to add their own details. Once they have agreed to the form, they are able to use our website to lock and unlock the August lock they have been granted access to via the internet, until the period of access specified by the property owner ends.
## How we built it
We set up a Flask server, and made an outline of what the website would be. Then we worked on figuring out the API calls we would need to make in local python scripts. We developed the DocuSign and August pieces separately. Once the pieces were ready, we began integrating them into the Flask server. Then we worked on debugging and polishing our product.
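To make the flow concrete, here is a minimal sketch of the kind of Flask route we converged on. The in-memory grant store and the `docusign_has_signed` / `august_set_lock_state` helpers are hypothetical stand-ins for our real data layer and API integrations, not the actual code:

```python
# Sketch only: `grants`, `docusign_has_signed`, and `august_set_lock_state`
# are hypothetical stand-ins for our real storage and API wrappers.
from datetime import datetime, timedelta
from flask import Flask, request, abort, jsonify

app = Flask(__name__)

# phone number -> access grant created from the property owner's form
grants = {
    "+15551234567": {"lock_id": "front-door", "expires": datetime.now() + timedelta(days=2)},
}

def docusign_has_signed(phone):
    """Stand-in for checking the client's DocuSign Click agreement status."""
    return True

def august_set_lock_state(lock_id, locked):
    """Stand-in for the August API call that locks or unlocks the door."""
    print(f"{lock_id} -> {'locked' if locked else 'unlocked'}")

@app.route("/lock/<action>", methods=["POST"])
def control_lock(action):
    phone = request.form.get("phone")
    grant = grants.get(phone)
    if grant is None or datetime.now() > grant["expires"]:
        abort(403)  # no grant, or the client's stay has ended
    if not docusign_has_signed(phone):
        abort(403)  # the agreement must be signed before any access
    august_set_lock_state(grant["lock_id"], locked=(action == "lock"))
    return jsonify({"status": "ok", "action": action})
```

Keeping the lock credentials and API calls entirely on the server, as in this sketch, is what lets us avoid ever exposing lock details to the renter.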
## Challenges we ran into
Some of the API calls were complex and it was difficult figuring out which pieces of data were needed and how to format them in order to use the APIs properly. The hardest API piece to implement was programmatically generating DocuSign documents. Also, debugging was difficult once we were working on the Flask server, but once we figured out how to use Flask debug mode, it became a lot easier.
## Accomplishments that we're proud of
We successfully implemented all the main pieces of our idea, including ensuring users signed via DocuSign, controlling the August lock, rejecting users after their access expires, and including both the property owner and client sides of the project.
We are also proud of the potential security of our system. The renter is given absolute minimal access. They are never given direct access to the lock info, removing potential security vulnerabilities. They login to our website, and both verification that they have permission to use the lock and the API calls to control the lock occur on our server.
## What we learned
We learned a lot about web development, including how to use cookies, forms, and URL arguments. We also gained a lot of experience in implementing third-party APIs.
## What's next for Unlocc
The next steps would be expanding the rudimentary account system with a more polished one, having a lawyer help us draft the legalese in the DocuSign documents, and contacting potential users such as AirBnB property owners or escape room companies.
# Safe Space
## Inspiration
In this time of the pandemic, most of us are stuck in our own homes. Not everyone can cope with this "new normal" state of not being able to go outside much. We have realized that the emotional factor in life is essential. In the Philippines, mental health issues are not taken seriously. In this project, we created a web app where people can vent, seek advice from trusted volunteers, and have a meaningful talk. With that in line, we came up with Safe Space, which also means a place or environment where a person or category of people can feel confident that they will not be exposed to discrimination, criticism, harassment, or any other emotional or physical harm.
## What it does
This project is a user-friendly web application for everyone who wants to talk about life without criticism and for those who seek a place where everyone can vent and have a meaningful talk. There are three services in the web app: talk, mood booster, and inspire. These three categories consist of essential functions for helping a person cope with their struggles in life. The talk service pairs users up with a volunteer to talk to and get advice from. The mood booster shows memes that you can relate to and have fun with. Lastly, the inspire section lets users post inspirational messages.
## How we built it
We collaborated using Repl.it to build the web app interface with HTML5, CSS, and JavaScript, while the talk service is built with React.
## Challenges we ran into
A lot of challenges came up in creating this project. The first was time zones: it was difficult to adjust and make our schedules compatible. The second was internet speed and connectivity, because there were many times when our internet service providers were having issues. The last was brainstorming what we would like to build that can help others while still being fun to build.
## Accomplishments that we're proud of
Our group consists of beginner programmers, and being in this hackathon itself is an accomplishment for us to have created an idea and built it within a short amount of time.
## What we learned
We learned a lot in this hackathon, explored different technologies out there, and had a great time learning with the workshops, not just about programming but also about various things from the mini-events in Treehacks.
## What's next for Safe Space
A community that helps improve mental health awareness and advocates for mental health.
# Tinderprint
Tinderprint is a dating app that matches you with people based
on true physical and spiritual compatibility. For millennia,
handreaders have been using fingerprints to characterize people's
personalities and to predict major life events. Tinderprint uses
a neural net to analyze the unique qualities of your fingerprint.
We use these qualities to match you with people with compatible
personalities, helping you get to your next major life event <3.
Tinderprint uses a convolutional neural network trained on a public data set to identify the prominent features of your fingerprint. According to thousands of years of fingerprint reading, the presence of loops reveals your intelligence and demeanor, the presence of whorls indicates your will and conviction, and the presence of arches exposes your pragmatism and stubbornness. We use a similarity metric across these features and more to help you find partners you'll be compatible with.
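As a rough illustration of the matching step (not our exact implementation), the sketch below scores two users from the loop/whorl/arch proportions the network extracts; the numbers and feature names are made up for the example:

```python
import numpy as np

# Hypothetical CNN output: proportion of loops, whorls, and arches
# detected in each user's fingerprint.
alice = {"loops": 0.6, "whorls": 0.3, "arches": 0.1}
bob = {"loops": 0.5, "whorls": 0.4, "arches": 0.1}

FEATURES = ["loops", "whorls", "arches"]

def compatibility(a, b):
    """Cosine similarity between two fingerprint feature vectors (0..1)."""
    va = np.array([a[f] for f in FEATURES])
    vb = np.array([b[f] for f in FEATURES])
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(f"compatibility: {compatibility(alice, bob):.3f}")
```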
## Running the app
To run an instance of tinderprint, download the latest versions of npm
and mongo. Run mongo in the background (on port 27017), install the
app's dependencies, and run the app. You can then access the app at
localhost:3000.
```
mongod &
npm install
npm start
```
# MatchMadeIn.Tech
# Inspiration
We’ve all been in a situation where collaborators’ strengths all lie in different areas, and finding “the perfect” team to work with is more difficult than expected. We wanted to make something that could help us find people with similar strengths without painstakingly scouring dozens of github accounts.
# What it does
MatchMadeIn.Tech is a platform where users are automatically matched with other github users who share similar commit frequencies, language familiarity, and more!
# How we built it
We used a modern stack that includes React for the front end and Python Flask for the back end. Our model is a K-Means clustering model, which we implemented using scikit-learn, storing the trained model with PickleDB. We leveraged GitHub's API to pull user contribution data and language preference data for over 3,000 users, optimizing our querying using GraphQL.
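A simplified sketch of the clustering step is below, assuming each GitHub user has already been reduced to a numeric feature vector; the columns, sample values, and `n_clusters` are illustrative, and plain pickle stands in for our persistence layer:

```python
import pickle
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Illustrative feature matrix: [commits/week, %Python, %JavaScript, %Go]
users = np.array([
    [12, 0.7, 0.2, 0.1],
    [30, 0.1, 0.8, 0.1],
    [5, 0.5, 0.4, 0.1],
    [28, 0.0, 0.9, 0.1],
])

scaler = StandardScaler().fit(users)
model = KMeans(n_clusters=2, n_init=10, random_state=42).fit(scaler.transform(users))

# Persist both pieces so the Flask backend can score new users later.
with open("matcher.pkl", "wb") as f:
    pickle.dump({"scaler": scaler, "kmeans": model}, f)

# Matching a new user = predicting their cluster, then suggesting cluster-mates.
new_user = scaler.transform([[25, 0.2, 0.7, 0.1]])
print("cluster:", model.predict(new_user)[0])
```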
# Challenges we ran into
A big issue we faced was how to query the GitHub API to get a representative sample of the users on the platform. Because there are over 100 million registered users on GitHub, many of which are bots or accounts with no contribution history, we needed a way to filter these accounts out.
Another obstacle we ran into was implementing the K-Means Cluster model. This was our first time using any machine learning algorithms other than Chat-GPT, so it was a very big learning curve. With multiple people working on the querying of data and training the model, our documentation regarding storing the data in code needed to be perfect, especially because the model required all the data to be in the same format.
# Accomplishments that we're proud of
Getting the backend to actually work! We decided to step out of our comfort zone and train our own statistical inference model. There were definitely times we felt discouraged, but we’re all proud of each other for pushing through and bringing this idea to life!
# What we learned
We learned that there's a real appetite for a more meaningful, niche-focused dating app in the tech community. We also learned that while the tech is essential, user privacy and experience are just as crucial for the success of a platform like this.
# What's next for MatchMadeIn.Tech
We’d love to add more metrics to determine user compatibility such as coding style, similar organizations, and similar feature use (such as the project board!). | ## About the Project
### TLDR:
Caught a fish? Take a snap. Our AI-powered app identifies the catch, keeps track of stats, and puts that fish in your 3d, virtual, interactive aquarium! Simply click on any fish in your aquarium, and all its details — its rarity, location, and more — appear, bringing your fishing memories back to life. Also, depending on the fish you catch, reel in achievements, such as your first fish caught (ever!), or your first 20 incher. The cherry on top? All users’ catches are displayed on an interactive map (built with Leaflet), where you can discover new fishing spots, or plan to get your next big catch :)
### Inspiration
Our journey began with a simple observation: while fishing creates lasting memories, capturing those moments often falls short. We realized that a picture might be worth a thousand words, but a well-told fish tale is priceless. This spark ignited our mission to blend the age-old art of fishing with cutting-edge AI technology.
### What We Learned
Diving into this project was like casting into uncharted waters – exhilarating and full of surprises. We expanded our skills in:
* Integrating AI models (Google's Gemini LLM) for image recognition and creative text generation
* Crafting seamless user experiences in React
* Building robust backend systems with Node.js and Express
* Managing data with MongoDB Atlas
* Creating immersive 3D environments using Three.js
But beyond the technical skills, we learned the art of transforming a simple idea into a full-fledged application that brings joy and preserves memories.
### How We Built It
Our development process was as meticulously planned as a fishing expedition:
1. We started by mapping out the user journey, from snapping a photo to exploring their virtual aquarium.
2. The frontend was crafted in React, ensuring a responsive and intuitive interface.
3. We leveraged Three.js to create an engaging 3D aquarium, bringing caught fish to life in a virtual environment.
4. Our Node.js and Express backend became the sturdy boat, handling requests and managing data flow.
5. MongoDB Atlas served as our net, capturing and storing each precious catch securely.
6. The Gemini AI was our expert fishing guide, identifying species and spinning yarns about each catch.
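As a rough sketch of step 6, here is roughly what the identification call could look like using Google's Python SDK; our real backend is Node/Express, and the model name and prompt wording here are assumptions:

```python
# Illustrative only: the production call lives in our Node backend, and the
# model name ("gemini-1.5-flash") and prompt are assumptions for this sketch.
import os
import PIL.Image
import google.generativeai as genai

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

photo = PIL.Image.open("catch.jpg")
prompt = (
    "Identify the fish species in this photo. "
    "Reply with JSON containing 'species', 'rarity', and a one-sentence 'story'."
)

response = model.generate_content([prompt, photo])
print(response.text)  # parsed by the server and stored in MongoDB with the catch
```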
### Challenges We Faced
Like any fishing trip, we encountered our fair share of challenges:
* **Integrating Gemini AI**: Ensuring accurate fish identification and generating coherent, engaging stories required fine-tuning and creative problem-solving.
* **3D Rendering**: Creating a performant and visually appealing aquarium in Three.js pushed our graphics programming skills to the limit.
* **Data Management**: Structuring our database to efficiently store and retrieve diverse catch data presented unique challenges.
* **User Experience**: Balancing feature-rich functionality with an intuitive, streamlined interface was a constant tug-of-war.
Despite these challenges, or perhaps because of them, our team grew stronger and more resourceful. Each obstacle overcome was like landing a prized catch, making the final product all the more rewarding.
As we cast our project out into the world, we're excited to see how it will evolve and grow, much like the tales of fishing adventures it's designed to capture.
# Doc.Care
Doc.Care is our revolutionary platform for medical professionals and patients: a centralized data hub for accessing medical history. Our platform provides easy access to medical history data, allowing doctors and other medical professionals worldwide to access and reference patient records in real time with a unique search feature. Patients can now have their medical history at their fingertips with our conversational chat function, powered by OpenAI's GPT-3. With this feature, patients can easily ask specific questions about their medical history, such as "when was their last flu shot?", and receive an answer immediately. Say goodbye to the hassle of reading through cryptic records, and finally get quick answers to any questions patients have.
Leveraging Figma's web interface, we successfully crafted an initial design for our platform. The incorporation of key functionalities, including the organizational search tool and chat box, was executed utilizing a combination of React, CSS, and OpenAI's Chat GPT.
Our implementation of OpenAI's technology enabled us to develop a sophisticated model capable of generating a simulated medical record history for patients. Using this model, we can effectively recall and relay requested patient information in a conversational format.
As first-time participants in the hacking community, we take great pride in our Treehacks 2023 project. We are excited to continue developing this project further in the near future.
# MedSpeak
## Inspiration
Healthcare providers spend nearly half of their time with their patients entering data into outdated, user-unfriendly software. This is why we have built a medical assistant that helps doctors efficiently filter through patient information and extract critical information for diagnosis, allowing more of their attention to focus on meeting each patient's individual needs.
The current Electronic Health Record (EHR) system is unnecessarily difficult to navigate. For instance, doctors must go through multiple steps (clicking, scrolling, typing) to access the necessary diagnostic information. This is **time-consuming and mentally taxing**. It negatively impacts the doctor-patient relationship because the doctor's attention is perpetually on the monitor rather than on the patient. In a nation with a somewhat disturbing track record in medical outcomes, we felt that something had to change.
Tackling this giant of a problem one step at a time, we built a medical assistant that helps doctors efficiently filter through patient information and extract critical information for diagnoses, allowing them to focus on personalizing each patient's care. This new interface hopes to **alleviate a physician's information overload** and **maximize a patient's mental and physical well-being**.
## What it does
Our web application consists of two main functionalities: a **conversation visualizer** and a **ranked prompt list**.
The conversation visualizer takes in a transcript of a recording of the interaction between the patient and physician. The speech bubbles indicate which speaker each message corresponds to. Behind this interface, the words are processed to determine what topics are currently being discussed.
The ranked prompt list pulls the most relevant past information for the patient to the forefront of the list, making it easy for the physician to ask better clarifying questions or make adjustments to their mental model, all without having to click and scroll through tens or hundreds of records.
## How we built it
Our end goal is to help doctors efficiently filter and prioritize patient data, so each aspect of our process (ML, backend, frontend) attempts to address that in some way.
We designed a **deep-learning-based recommendation system** for features within the patient’s Electronic Health Records (EHR). It decides what information should be displayed based on the patient’s description of their medical needs and symptoms. We leveraged the **OpenAI Embedding API** to embed string token representations of these key features into a high dimensional vector space and extract semantic similarity between each. Then, we employed the *k*-nearest neighbor algorithm to compute and display the top *k* relevant features. This allowed us to cluster related keywords together, such as "COVID" with "shortness of breath". The appearance of one word/phrase in the cluster will bring EHR data containing other related words/phrases to the top of the list.
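A condensed sketch of that recommendation step is shown below; `embed()` is a stand-in for the OpenAI Embedding API call (with real embeddings, semantically related features rank highest), and the EHR feature strings are invented for the example:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def embed(texts):
    """Stand-in for the OpenAI Embedding API: one vector per input string."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 1536))  # ada-002-sized vectors

# Token representations of this patient's EHR features (illustrative).
ehr_features = ["COVID vaccination 2021", "asthma, uses inhaler",
                "penicillin allergy", "knee surgery 2018"]
index = NearestNeighbors(n_neighbors=2, metric="cosine").fit(embed(ehr_features))

# The live transcript snippet drives the ranking of past records.
query_vec = embed(["patient reports shortness of breath"])
_, idx = index.kneighbors(query_vec)
print([ehr_features[i] for i in idx[0]])  # top-k features surfaced to the physician
```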
We implemented the ML backend using **Flask in Python**. The main structure and logic were done in **Node.js** within the **Svelte** framework. We designed the UI and front-end layout in Svelte to create something easy to navigate and use in time-sensitive situations. We designed the left panel to be the conversation visualizer, along with an option to record sound (see "What's next for MedSpeak"). The right panel holds the prompt list, which updates in real time as more information is fed in.
## Challenges we ran into
One challenge we encountered was understanding the medical workflow and procuring simulated medical data to work with. As none of us had much background in the medical field, it took us some time to find the right data. This also led to difficulties in settling on a final project idea, since we were not sure what kind of information we had access to. However, speaking with Professor Tang and other non-hackers to flesh out our idea was incredibly insightful and helped lead us onto the right track.
## Accomplishments that we're proud of
We are proud of generating a novel application of existing technology in a way that benefits the sector most in need of an upgrade.
Our solution has great potential in the daily medical workplace. It is able to **integrate past and ongoing patient information** to enhance and expedite the interaction for both parties. The implementation of our solution would result in a considerable reduction in the number of physical steps and the level of attention required to record patient data.
Our product's effects are twofold. It decreases the **mental and physical attention** needed for doctors to retrieve medical information. It allows doctors to spend **quality time communicating** with patients, fostering relationships built on trust and mutual understanding.
## What we learned
Over the course of this hackathon, each of us on the team became more familiar with technologies like machine learning and general full-stack development. Coming in individually with our separate skill sets, we needed to share our respective knowledge with the others in order to stay on the same page throughout. Thus we each picked up some important tidbits of the others’ expertise, enabling us to become better developers and engineers.
To build on that, we learned the importance of keeping everyone up to speed about the general direction of the project. Since we did not confirm our group until very late on the first day, we were delayed in settling on an idea and executing our tasks. More communication throughout the early stages could potentially have saved us time and confusion, allowing us to achieve more of our reach goals.
## Product Risk
Accuracy and ethics should always be a cornerstone of consideration when it comes to human health and well-being. Our product is no exception.
The medical metric recommendations may not function effectively when dealing with the latest medical metrics or conditions, as the pre-trained model is employed. This can potentially be mitigated by connecting our platform with the most up-to-date medical websites or journals. Even so, the model would require retraining every so often.
There is a possibility that doctors may become overly reliant on AI-generated prompts. While designing our solution, we purposefully stayed clear of having the prompt list return information that could be misinterpreted as diagnoses. It is incredibly dangerous to have a machine make official diagnoses, so there would have to be regulations in place to prevent abuse of the technology.
The voice transcription may not be (and likely is not) 100% accurate, which may lead to some inaccuracies in the vital signs or vital result recommendations. However, with enough training, we can hopefully make those occurrences a rarity. Even when they happen, the recording can ensure that we have a reference when verifying data.
It is imperative that physicians who use this product obtain the proper consent from their patients. Since our current product involves the transcription of a patient's words and our end goal involves an audio recording feature, sensitive information could be at risk. We should consult with legal professionals before making the product available.
## What's next for MedSpeak
Algorithmically, we aim to fine-tune the current embedding model on **clinical and biological datasets**, allowing the model to extract even more well-informed correlations based on a broader context pool.
We also hope to extend this project to incorporate **real-time speech-to-text processing** into the visualizer. The recording would also act as a safety net in case the patient or physician wishes to revisit that conversation.
A further extension would be the option to **autofill patient information** as the conversation goes on, as well as a **chatbot function** to quickly make changes to the record. The NLP aspect allows physicians to use abbreviations or more casual language, which saves mental resources in the long run.
Another feature could be an integration of hardware, by having **sensors that detect vital signs** transmit the data directly to the app. This would save time and energy for the nurse or doctor, enabling them to spend more time with their patient.
# SMS Assist
## Inspiration
In large corporations such as RBC, the help desk receives hundreds of phone calls every hour, each lasting about 8 minutes on average and costing the company $15 per hour. We thought this was both a massive waste of time and resources, not to mention quite ineffective and inefficient. We wanted to create a product that improved the efficiency of a help desk to optimize productivity. We designed a product that can wrap a custom business model and a help service together in an accessible SMS link. This is a novel innovation that is heavily needed in today's businesses.
## What it does
SMS Assist offers the ability for a business to **automate their help desk** using SMS messages. This allows requests to be answered both online and offline, an innovative accessibility perk that many companies need. Our system has no limit on concurrent users, unlike a live help desk. It provides assistance for exactly what you need, and this is ensured by our IBM Watson model, which trains off client data and uses Machine Learning/NLU to interpret client responses to an extremely high degree of accuracy.
**Assist** also has the ability to receive orders from customers if the business so chooses. The order details and client information are all stored by the Node server, so that employees can view orders in real time.
Finally, **Assist** utilizes text Sentiment Analysis to analyse each client's tone in their texts. It then sends a report to the console so that the company can receive feedback from customers automatically, and improve their systems.
## How we built it
We used Node.js, Twilio, and IBM Watson to create SMS Assist.
**IBM Watson** was used to create the actual chatbots, and we trained it on user data in order to recognize the user's intent in their SMS messages. Through several data sets, we utilized Watson's machine learning and Natural Language & Sentiment analysis to make communication with Assist hyper efficient.
**Twilio** was used for the front end- connecting an SMS client with the server. Using our Twilio number, messages can be sent and received from any number globally!
**Node.js** was used to create the server on which SMS Assist runs. Twilio first receives data from a user and sends it to the server. The server feeds it into our Watson chatbot, which then interprets the data and generates a formulated response. Finally, the response is relayed back through the server and into Twilio, where the user receives the response via SMS.
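For illustration, here is the same round trip sketched in Python rather than our actual Node.js server; `watson_reply()` is a hypothetical stand-in for the Watson Assistant call:

```python
# Python illustration of the SMS round trip; the production server is Node.js.
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

def watson_reply(session_id, text):
    """Hypothetical stand-in for sending the message to IBM Watson Assistant."""
    return "Thanks! Your order has been recorded."

@app.route("/sms", methods=["POST"])
def incoming_sms():
    body = request.form.get("Body", "")    # text the customer sent
    sender = request.form.get("From", "")  # customer's number, used as a session key
    reply = watson_reply(session_id=sender, text=body)

    twiml = MessagingResponse()
    twiml.message(reply)                   # Twilio relays this back to the user over SMS
    return str(twiml)
```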
## Challenges we ran into
There were many bugs involving the Node.js server. Since we didn't have much initial experience with Node or the IBM API, we encountered many problems, such as the SessionID not being saved and the messages not being sent via Twilio. Through hours of hard work, we persevered and solved these problems, resulting in a perfected final product.
## Accomplishments that we're proud of
We are proud that we were able to learn the new API's in such a short time period. All of us were completely new to IBM Watson and Twilio, so we had to read lots of documentation to figure things out. Overall, we learned a new useful skill and put it to good use with this project. This idea has the potential to change the workflow of any business for the better.
## What we learned
We learned how to use the IBM Watson API and Twilio to connect SMS messages to a server. We also discovered that working with said API's is quite complex, as many ID's and Auth factors need to be perfectly matched for everything to execute.
## What's next for SMS Assist
With some more development and customization for actual businesses, SMS Assist has the capability to help thousands of companies with their automated order systems and help desk features. More features can also be added.
# mocha mentor
## Inspiration
Our app idea brewed from a common shared stressor of networking challenges. Recognizing the lack of available mentorship and struggle to form connections effortlessly, we envisioned a platform that seamlessly paired mentors and students to foster meaningful connections.
## What it does
mocha mentor is a web application that seamlessly pairs students and mentors based on their LinkedIn profiles. It analyzes user LinkedIn profiles, utilizes our dynamic backend structure and Machine Learning algorithm for accurate matching, and then as a result pairs a mentor and student together.
## How we built it
mocha mentor leverages a robust tech stack to enhance the mentor-student connection. MongoDB stores and manages profiles, while an Express.js server runs on the backend. This server also executes Python scripts, which employ pandas for data manipulation and scikit-learn for our ML cosine-similarity-based matching algorithm, and reaches into the LinkedIn API for profile extraction. Our frontend was entirely built with React.js.
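A stripped-down sketch of the matching script the Express server shells out to is below; the profile features and values are invented for the example:

```python
import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative numeric features derived from LinkedIn profiles.
mentors = pd.DataFrame(
    [[8, 1, 1, 0], [3, 0, 1, 1]],
    index=["mentor_a", "mentor_b"],
    columns=["years_exp", "knows_ml", "knows_web", "knows_mobile"],
)
students = pd.DataFrame(
    [[1, 1, 0, 0], [0, 0, 1, 1]],
    index=["student_x", "student_y"],
    columns=mentors.columns,
)

# Rows = students, columns = mentors; each student gets their most similar mentor.
scores = cosine_similarity(students, mentors)
for student, best in zip(students.index, scores.argmax(axis=1)):
    print(student, "->", mentors.index[best])
```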
## Challenges we ran into
The hackathon's constrained timeframe led us to prioritize essential features. Additionally, other challenges we ran into were handling asynchronous events, errors integrating the backend and frontend, working with limited documentation, and running Python scripts efficiently in JavaScript.
## Accomplishments that we're proud of
We are proud of developing a complex technical project that had a diverse tech stack. Our backend was well designed and saved a lot of time when integrating with the frontend. With this year's theme of "Unlocking the Future with AI", we wanted to go beyond using a GPT backend, therefore, we utilized machine learning to develop our matching algorithm that gave accurate matches.
## What we learned
* The importance of good teamwork!
* How to integrate Python scripts in our Express server
* More about AI/ML and Cosine similarities
## What's next for mocha mentor
* Conduct outreach and incorporate community feedback
* Further develop UI
* Expand by adding additional features
* Improve efficiency in algorithms
# FoodChain
## Inspiration
Our inspiration stems from a fundamental realization about the critical role food plays in our daily lives. We've observed a disparity, especially in the United States, where the quality and origins of food are often overshadowed, leading to concerns about the overall health impact on consumers.
Several team members had the opportunity to travel to regions where food is not just sustenance but a deeply valued aspect of life. In these places, the connection between what we eat, our bodies, and the environment is highly emphasized. This experience ignited a passion within us to address the disconnect in food systems, prompting the creation of a solution that brings transparency, traceability, and healthier practices to the forefront of the food industry. Our goal is to empower individuals to make informed choices about their food, fostering a healthier society and a more sustainable relationship with the environment.
## What it does
There are two major issues that this app tries to address. The first is aimed at those involved in the supply chain, like the producers, inspectors, processors, distributors, and retailers. The second is aimed at the end user. For those involved in making the food, each step the shipment takes through the supply chain is tracked, starting with the producer. For the consumer at the very end, it becomes a journey showing where the food came from, including its location, description, and quantity. Throughout its supply chain journey, each food shipment carries a label that the producer puts on first. This is further stored on the blockchain for guaranteed immutability. As the shipment moves from place to place, each entity (producer, processor, distributor, etc.) is allowed to add its own updated comment with its own verifiable signature and decentralized identifier (DID). We did this through a unique identifier via a QR code. This creates tracking information on that one shipment, which eventually reaches the end consumer, who can see the entire history by tracing a map of where the shipment has been.
## How we built it
In order to build this app, we used both blockchain and web2 to alleviate some of the load across different servers. We wrote a Solidity smart contract and used Hedera to guarantee the immutability of each shipment record, and each identifier is issued its own verifiable certificate tied to its location. We then used a Node/Express server that connected the blockchain with our SQLite database through the Prisma ORM. We used Firebase to authenticate the whole app and provide unique roles and identifiers. On the front end, we decided to build a React Native app to support both Android and iOS, and we used additional libraries to help us integrate QR codes and Google Maps. Wrapping all this together, we have a fully functional end-to-end user experience.
## Challenges we ran into
A major challenge that we ran into was that Hedera doesn't have any built-in support for constructing arrays of objects through our solidity contract. This was a major limitation as we had to find various other ways to ensure that our product guaranteed full transparency.
## Accomplishments that we're proud of
These are some of the accomplishments that we can achieve through our app
* Accurate and tamper-resistant food data
* Efficiently prevent, contain, or rectify contamination outbreaks while reducing the loss of revenue
* Creates more transparency and trust in the authenticity of Verifiable Credential data
* Verifiable Credentials help eliminate and prevent fraud
## What we learned
We learned a lot about the complexity of food chain supply. We understand that this issue may take a lot of helping hand to build out, but it's really possible to make the world a better place. To the producers, distributors, and those helping out with the food, it helps them prevent outbreaks by keeping track of certain information as the food shipments transfer from one place to another. They will be able to efficiently track and monitor their food supply chain system, ensuring trust between parties. The consumer wants to know where their food comes from, and this tool will be perfect for them to understand where they are getting their next meal to stay strong and fit.
## What's next for FoodChain
The next step is to continue to build out all the different moving parts of this app. There are a lot of different directions one could take this app given the complexity of the supply chain: we can continue to narrow down to a certain industry, or we can make this inclusive using the help of web2 + web3. We look forward to utilizing this at companies that want to prove that their food ingredients and products are the best.

This project was developed with the RBC challenge of developing the Help Desk of the future in mind.
## What inspired us
We were inspired by our motivation to improve the world of work.
## Background
If we want technical support, we usually contact companies by phone, which is slow and painful for both users and technical support agents, especially when questions are obvious. Our solution is an online chat that responds to people immediately using our own bank of answers. This is a portable and a scalable solution.
## Try it!
<http://www.rbcH.tech>
## What we learned
Using NLP, dealing with devilish CORS, implementing docker successfully, struggling with Kubernetes.
## How we built it
* Node.js for our servers (one server for our webapp, one for BotFront)
* React for our front-end
* Rasa-based Botfront, which is the REST API we are calling for each user interaction
* We wrote our own Botfront database during the last day and night
## Our philosophy for delivery
Hack, hack, hack until it works. Léonard was our webapp and backend expert, François built the DevOps side of things, Han solidified the front end, and Antoine wrote and tested the data for NLP.
## Challenges we faced
Learning brand new technologies is sometimes difficult! Kubernetes (and CORS) brought us some pain... and new skills and confidence.
## Our code
<https://github.com/ntnco/mchacks/>
## Our training data
<https://github.com/lool01/mchack-training-data>
# EduWave
## Inspiration
Everyone learns in a different way. Whether it be from watching that YouTube tutorial series or scouring the textbook, each person responds to and processes knowledge very differently. We hoped to identify students’ learning styles and tailor educational material to the learner for two main reasons: one, so that students can learn more efficiently, and two, so that educators may understand a student’s style and use it to motivate or teach a concept to a student more effectively.
## What it does
EduWave takes live feedback from the Muse Headband while a person is undergoing a learning process using visual, auditory, or haptic educational materials, and it recognizes when the brain is more responsive to a certain method of learning than others. Using this data, we then create a learning profile of the user.
With this learning profile, EduWave tailors educational material to the user by taking any topic that the user wants to learn and finding resources that apply to the type of learner they are. For instance, if the user is a CS major learning different types of elementary sorts and wants to learn specifically how insertion sort works, and if EduWave determines that the user is a visual learner, EduWave will output resources and lesson plans that teach insertion sort with visual aids (e.g. with diagrams and animations).
## How we built it
We used the Muse API and Muse Direct to obtain data from the user while they were solving the initial assessment tests, and used data analysis in Python to check which learning method the brain was most responsive to. We added an extra layer to this with the xLabs Gaze API, which tracked eye movements and contributed to the analysis. We then sent this data back as a percentage-based learning profile. Finally, we parsed a lesson plan on a given topic and output its elements based on the percentage split of learning type.
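A toy sketch of the profiling idea is below, using made-up engagement numbers in place of the live Muse and gaze streams; the exact metrics and weighting we used were more involved:

```python
# Synthetic example of turning per-modality engagement into a learning profile.
# Real input came from Muse band-power data plus xLabs gaze tracking.

# Mean beta-band power measured while the user worked through visual,
# auditory, and haptic assessment tasks (made-up numbers).
engagement = {"visual": 0.62, "auditory": 0.41, "haptic": 0.47}

total = sum(engagement.values())
profile = {style: round(power / total, 2) for style, power in engagement.items()}
print(profile)  # e.g. {'visual': 0.41, 'auditory': 0.27, 'haptic': 0.31}

# The lesson planner mixes content types according to these percentages.
print("dominant style:", max(profile, key=profile.get))
```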
## Challenges we ran into
The Muse Headband was somewhat difficult to use, and we had to go through a lot of testing and make sure that the data we were using was accurate. We also ran into some roadblocks proving the correlation between the data and specific learning types. Besides this, we also had to do deep research on what brain waves are most engaged during learning and why, and then subsequently determine a learning profile. Another significant challenge was the creation of lesson plans as we not only had to keep in mind the type of learner but also manage the content itself so that it could be presented in a specific way.
## Accomplishments that we're proud of
We are most proud of learning how to use the Muse data and creating a custom API that was able to show the data for analysis.
## What we learned
How to use Muse API, Standard Library, Muse Direct, how brainwaves work, how people learn and synthesizing unrelated data.
## What's next for EduWave
Our vision for EduWave is to improve it over time. By determining one's most preferred way of learning, we hope to devise custom lesson plans for the user for any topics that they wish to learn – that is, we want a person to be able to have resources for whatever they want to learn made exclusively for them. In addition, we hope to use EduWave to benefit educators, as they can use the data to better understand their students' learning styles.
# iPerish
## Inspiration
As college students more accustomed to having meals prepared by someone else than doing so ourselves, we are not the best at keeping track of ingredients’ expiration dates. As a consequence, money is wasted and food waste is produced, thereby discounting the financially advantageous aspect of cooking and increasing the amount of food that is wasted. With this problem in mind, we built an iOS app that easily allows anyone to record and track expiration dates for groceries.
## What it does
The app, iPerish, allows users to either take a photo of a receipt or load a pre-saved picture of the receipt from their photo library. The app uses Tesseract OCR to identify and parse through the text scanned from the receipt, generating an estimated expiration date for each food item listed. It then sorts the items by their expiration dates and displays the items with their corresponding expiration dates in a tabular view, such that the user can easily keep track of food that needs to be consumed soon. Once the user has consumed or disposed of the food, they could then remove the corresponding item from the list. Furthermore, as the expiration date for an item approaches, the text is highlighted in red.
## How we built it
We used Swift, Xcode, and the Tesseract OCR API. To generate expiration dates for grocery items, we made a local database with standard expiration dates for common grocery goods.
## Challenges we ran into
We found out that one of our initial ideas had already been implemented by one of CalHacks' sponsors. After discovering this, we had to scrap the idea and restart our ideation stage.
Choosing the right API for OCR on an iOS app also required time. We tried many available APIs, including the Microsoft Cognitive Services and Google Computer Vision APIs, but they do not have iOS support (the former has a third-party SDK that unfortunately does not work, at least for OCR). We eventually decided to use Tesseract for our app.
Our team met at Cubstart; this hackathon *is* our first hackathon ever! So, while we had some challenges setting things up initially, this made the process all the more rewarding!
## Accomplishments that we're proud of
We successfully managed to learn the Tesseract OCR API and made a final, beautiful product - iPerish. Our app has a very intuitive, user-friendly UI and an elegant app icon and launch screen. We have a functional MVP, and we are proud that our idea has been successfully implemented. On top of that, we have a promising market in no small part due to the ubiquitous functionality of our app.
## What we learned
During the hackathon, we learned both hard and soft skills. We learned how to incorporate the Tesseract API and make an iOS mobile app. We also learned team building skills such as cooperating, communicating, and dividing labor to efficiently use each and every team member's assets and skill sets.
## What's next for iPerish
Machine learning can optimize iPerish greatly. For instance, it can be used to expand our current database of common expiration dates by extrapolating expiration dates for similar products (e.g. milk-based items). Machine learning can also serve to increase the accuracy of the estimates by learning the nuances in shelf life of similarly-worded products. Additionally, ML can help users identify their most frequently bought products using data from scanned receipts. The app could recommend future grocery items to users, streamlining their grocery list planning experience.
Aside from machine learning, another useful update would be a notification feature that alerts users about items that will expire soon, so that they can consume the items in question before the expiration date.
# HackParty
## Inspiration
We were inspired to create HackParty after extensive first-hand experience with the cumbersome nature of assembling a hackathon team on other social media platforms like Facebook, which were not expressly designed for that purpose.
## What it does
HackParty provides users with the ability to keep track of the hackathons they plan on attending as well as ways of getting in contact with other attendees ahead of time via email. HackParty allows users to advertise their skills and passions, as well as to quickly and efficiently view the same information about their fellow hackers.
## How we built it
HackParty is a web app built in JavaScript, HTML5, and Python, with user information stored using Google's firebase database service.
## Challenges we ran into
Integrating the remote database correctly into the Python/JavaScript app we produced and making sure that the components written in different languages interacted properly with one another.
## Accomplishments that we're proud of
Successfully integrating the remote database, finishing one of the largest software projects any of us have ever undertaken.
## What we learned
How to interact with Google's firebase software remotely with JavaScript and Python, how to use the Flask library effectively in web development, and a whole lot about front-end web development.
## What's next for HackParty
There are a large number of features we'd like to add in the future. We envision a HackParty that not only provides users with the means of contacting each other via email but directly integrates an interuser messaging service to make scheduling even easier. We'd also like to integrate machine learning algorithms that would allow HackParty to competently recommend teammates based on hackathon attendance and complementary skills.
We will also be putting this on [www.hackparty.net](http://www.hackparty.net) or hackparty.tech soon!
# Password Manager
## Inspiration
I wanted to make something that let me explore everything you need to do at a hackathon.
## What it does
Currently, the web app encrypts passwords and stores them in a database hosted by CockroachDB via the "sign up" form. The web app also allows you to retrieve and decrypt your password with the "fetch" form.
## How we built it
I used Python to build the server-side components and Flask to connect the server to the web app. I stored the user data using the CockroachDB API. I used HTML, Jinja2, and Bootstrap to make the front end look pretty.
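A minimal sketch of the storage path I used is below; the Fernet key handling, connection string, and table name are illustrative simplifications rather than the exact code:

```python
# Sketch: symmetric encryption with Fernet, rows stored in CockroachDB via
# psycopg2 (CockroachDB speaks the PostgreSQL wire protocol). The connection
# string, key handling, and table name are illustrative.
import psycopg2
from cryptography.fernet import Fernet

KEY = Fernet.generate_key()  # in the real app, the key lives in server config
fernet = Fernet(KEY)

conn = psycopg2.connect("postgresql://root@localhost:26257/defaultdb?sslmode=disable")
conn.autocommit = True
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS vault (username STRING PRIMARY KEY, secret BYTES)")

def store(username, password):
    cur.execute("UPSERT INTO vault (username, secret) VALUES (%s, %s)",
                (username, fernet.encrypt(password.encode())))

def fetch(username):
    cur.execute("SELECT secret FROM vault WHERE username = %s", (username,))
    row = cur.fetchone()
    return fernet.decrypt(bytes(row[0])).decode() if row else None

store("alice", "hunter2")
print(fetch("alice"))  # -> hunter2
```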
## Challenges we ran into
Originally, I was going to use the @sign API and take my project further, but the @platform uses Dart. I do not use Dart and did not plan on learning it within the submission period. I then had to scale my project down to something more achievable, which is what I have now.
## Accomplishments that we're proud of
I made something when I had little idea of what I was doing.
## What we learned
I learned a lot of the basic elements of creating a web app (front-end + back-end) and using databases (cockroachdb).
## What's next for Password Manager
Fully fleshing out the entire web app.
# Lucy
## Inspiration
We are tinkerers and builders who love getting our hands on new technologies. When we discovered that the Spot Robot Dog from Boston Dynamics was available to build our project upon, we devised different ideas about the real-world benefits of robot dogs. From a conversational companion to a navigational assistant, we bounced off different ideas and ultimately decided to use the Spot robot to detect explosives in the surrounding environment as we realized the immense amount of time and resources that are put into training real dogs to perform these dangerous yet important tasks.
## What it does
Lucy uses the capabilities of Spot Robot Dog to help identify potentially threatening elements in a surrounding through computer vision and advanced wave sensing capabilities. A user can command the dog to inspect a certain geographic area and the dog autonomously walks around the entire area and flags objects that could be a potential threat. It captures both raw and thermal images of the given object in multiple frames, which are then stored on a vector database and can be searched through semantic search.
This project is a simplified approach inspired by the research "Atomic Magnetometer Multisensor Array for rf Interference Mitigation and Unshielded Detection of Nuclear Quadrupole Resonance" (<https://link.aps.org/accepted/10.1103/PhysRevApplied.6.064014>).
## How we built it
We've combined the capabilities of OpenCV with a thermal sensing camera to allow the Spot robot to identify and flag potentially threatening elements in a given surrounding. To simulate these elements in the surroundings, we built a simple Arduino application that emits light waves in irregular patterns. The robot dog operates independently through speech instructions, which are powered by Deepgram's speech-to-text and a Llama-3-8b model hosted on the Groq platform. Furthermore, we've leveraged ChromaDB's vector database to index images so that people can easily search through them; frames are captured in the range of 20-40fps.
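A small sketch of the frame-indexing side is below, assuming each flagged capture already has a short text description attached; the collection name, captions, and metadata fields are illustrative:

```python
# Sketch of storing flagged frames in ChromaDB and finding them by semantic search.
import chromadb

client = chromadb.Client()  # in-memory here; the real system persists its data
frames = client.get_or_create_collection("flagged_frames")

frames.add(
    ids=["frame_0412", "frame_0413"],
    documents=[
        "unattended backpack near loading dock, elevated thermal signature",
        "metal box emitting irregular pulsed light, north corridor",
    ],
    metadatas=[{"camera": "thermal"}, {"camera": "rgb"}],
)

hits = frames.query(query_texts=["suspicious package"], n_results=1)
print(hits["documents"][0])  # the closest matching capture(s)
```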
## Challenges we ran into
The biggest challenge we encountered was executing and testing our code on Spot due to the unreliable internet connection. We also faced configuration issues, as some parts of modules were not supported and used an older version, leading to multiple errors during testing. Additionally, the limited space made it difficult to effectively run and test the code.
## Accomplishments that we're proud of
We are proud that we took on the challenge of working with something that we had never worked with before and even after many hiccups and obstacles we were able to convert our idea in our brains into a physical reality.
## What we learned
We learned how to integrate and deploy our program onto Spot. We also learned how to work around the limitations of the technology and of our experience working with it.
## What's next for Lucy
We want to integrate LiDAR into our approach, providing more accurate results than cameras. We plan to experiment with waveforms beyond light, helping improve the reliability of the results.
# PhishNet
## Inspiration
The inspiration for PhishNet stemmed from the growing frequency of email phishing scams we were receiving. As we rely on our emails for important information about our careers, academics, and so on, my team and I often encountered suspicious emails that raised doubts about their legitimacy. We realized the importance of having a tool that could analyze the trustworthiness of emails and help users make informed decisions about whether to engage with them.
## What it does
PhishNet allows users to upload their exported emails to the website, which then scans the file contents and gives the user a rating or percentage of how trustworthy or legitimate the email senders are, how suspicious they or the links they contain are, and whether any similar scam has been reported.
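To make the scoring idea concrete, here is a toy Python sketch that parses one exported `.eml` file and applies a couple of simple heuristics; the keyword list, weights, and thresholds are assumptions for illustration, not our actual model:

```python
# Toy scoring pass over an exported .eml file; keywords and weights are
# illustrative only.
import re
from email import policy
from email.parser import BytesParser

SUSPICIOUS_PHRASES = {"urgent", "verify your account", "password expired", "wire transfer"}

def score_email(path):
    with open(path, "rb") as fp:
        msg = BytesParser(policy=policy.default).parse(fp)

    body_part = msg.get_body(preferencelist=("plain", "html"))
    body = body_part.get_content().lower() if body_part else ""

    penalty = 0
    penalty += 30 * sum(phrase in body for phrase in SUSPICIOUS_PHRASES)
    penalty += 20 * len(re.findall(r"http://", body))  # plain-HTTP links
    if "reply-to" in msg and msg["Reply-To"] != msg["From"]:
        penalty += 25  # reply path doesn't match the sender
    return max(0, 100 - penalty)  # 100 = looks trustworthy, 0 = highly suspicious

print(score_email("example.eml"))
```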
## How we built it
### Technologies Used
* React for frontend development
* Python for backend
* GoDaddy for domain name
* Auth0 for account sign in/sign up authentication (Unfinished)
* Procreate for mockups/planning
* Canva for logo and branding
### Development Process
1. **Frontend Design**: We started by sketching out the user interface and designing the user experience flow through Procreate and Canva. React was instrumental in creating a sleek and responsive frontend design.
2. **Backend Development**: Using Python, we built the backend infrastructure for handling file uploads, parsing email data, and communicating with the machine learning models.
3. **Unfinished: Sign-In Authentication**: Although we were unable to finish its full functionality, we used Auth0 for our sign-in and sign-up options in order to give users the security they need; even if they are only uploading emails, there is no denying the website needs to protect each user's privacy.
## Challenges we ran into
1. **Data Preprocessing**: Cleaning and preprocessing email data to extract relevant features for analysis posed a significant challenge. We had to handle various data formats and ensure consistency in feature extraction.
2. **File Uploading/Input**: We had to try several different libraries, open-source projects, and other alternatives that would help us not only provide clean, efficient file-upload functionality, but also validate user input and respond accordingly.
3. **Finishing Everything**: We took a lot of time to finalise our thoughts and pick the theme we wanted to explore for this year's hackathon. However, that also led to us unknowingly underestimating how much time we were using. I think that taught us to be more aware and conscious of our time.
## Accomplishments that we're proud of
* Creating a user-friendly interface for easy interaction.
* Handling complex data preprocessing tasks efficiently.
* Working as only a two-person team, with each of us taking on new roles.
## What we learned
Throughout the development process of PhishNet, we gained valuable insights into email security, phishing tactics, and data analysis. We honed our skills in frontend and backend development, as well as machine learning integration. Additionally, we learned about the importance of user feedback and iterative development in creating a robust application.
## What's next for PhishNet | partial |
# FitPet
## Inspiration
We were inspired by the health tracking functionality that FitBit gives their users and the nostalgia that the Tamagotchi virtual pets give their users. We wanted to mix the nostalgia with something new to create a fun way to motivate users to keep up with their daily steps.
## What it does
Visit and take care of one of ten virtual pets as they help you reach your daily goals. Each day you will be visited by a new pet; take care of them by feeding and playing and making sure their hunger and happiness meters are satisfied. To feed, achieve a certain amount of steps to get food for your pet and increase the hunger meter. Play with your pet by selecting the play button, which will increase the happiness meter. Be careful not to let your happiness or hunger meter get too low, or your pet will get sick! Remember, pets get tired too and need to sleep, so make sure you keep up your steps!
## How we built it
We created the pixel art for all of the FitPets using Pixilart and Piskel. We created the functionality using FitBit Studio (SVG, CSS3, JavaScript).
## Challenges we ran into
We came across various limitations with our selected platform. Fitbit Studio does not allow applications to run while the app is not open. To resolve this issue we kept track of the date and time since the app was last opened by the user. Using this information, we tracked how much time had passed since the user had last opened the app, and performed calculations based on this information. In addition, Fitbit Studio offers a limited range of designs for buttons. Due to this, we had to alter our initial designs to work with the limitations.
## Accomplishments that we're proud of
We learned how to use Fitbit Studio and created the app entirely within the hackathon's hours without any prior experience. In addition, our artist created amazing art for our virtual pets that we could not be more proud of.
## What we learned
We learned how to coordinate with a team and successfully divide tasks, and coordinate ideas with a team of developers and non-developers. We gained experience with methods used to thoroughly test code as well as merging code between different developers. In addition, we learned our way around Fitbit Studio, which helped us adapt to working in a brand new, fast-paced environment.
## What's next for FitPet
Features we would like to add in the future would be the ability to keep the food earned from your steps the previous day when the pet is changed at midnight. We would also like to add a currency system and the ability to select what type of food to purchase for your pet. Instead of earning food from your steps, you would earn currency. We would like for pets to be around for longer than a day, as well as adding more pets to add variety to the application. Ideally in the future, the happiness meter would be filled up by reaching your daily goals in order to motivate the users to reach their goals. We would also like some animals to be more rare than others, giving the user a sense of accomplishment when they are visited by a rare pet. | ## Inspiration
Our core idea revolves around the concept of providing private layers for Large Language Models (LLMs). We believe that privacy is essential in today's data-driven world, and centralized solutions are not sufficient. Our inspiration stems from envisioning a future where anyone can deploy their own Anonymization node, share it in a smart contract, and give users the freedom to choose among them.
## What it does
Our project demonstrates the power of decentralized Anonymization nodes for LLMs. We have deployed different layers using OpenAI and Cohere, one focusing on privacy and the other not. Through our front-end interface, we showcase how user experiences can vary based on their choice of Anonymization module.
In the future, we envision these nodes to be highly customizable, allowing each Anonymization node to incorporate Natural Language Processing (NLP) modules for extracting sensitive inputs from prompts, making the process even more secure and user-friendly.
## How we built it
Our project is built on a decentralized architecture. Here's a high-level overview of how it works:
1. **User Interaction**: Users input their queries into an LLM-enabled device.
2. **Anonymization Node**: The query is sent to a custom node (chosen based on its reputation), where identifiers and sensitive information are anonymized (a sketch of this step follows the list).
3. **LLM Processing**: The anonymized query is forwarded to the LLM provider for processing.
4. **Data Enrichment (In Future)**: The LLM provider sends the response back to the custom node. The node then injects the sensitive information back into the response.
5. **User Experience**: The enriched response is sent back to the user's device, ensuring privacy and a seamless user experience.
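To make the flow above concrete, here is a minimal sketch of what a single anonymization node could look like in Python. The regex rules, placeholder tokens, and the `call_llm` helper are illustrative assumptions rather than our production code; a deployed node would also register itself in the smart contract and forward requests to the OpenAI or Cohere APIs.

```python
import re
import uuid

# Illustrative PII patterns; a real node would use a proper NLP/NER module.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}


def anonymize(prompt: str):
    """Replace sensitive spans with placeholder tokens and remember the mapping."""
    mapping = {}
    for label, pattern in PII_PATTERNS.items():
        for match in pattern.findall(prompt):
            token = f"<{label}_{uuid.uuid4().hex[:6]}>"
            mapping[token] = match
            prompt = prompt.replace(match, token)
    return prompt, mapping


def deanonymize(text: str, mapping: dict) -> str:
    """Re-inject the original sensitive values into the LLM response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text


def handle_query(prompt: str, call_llm) -> str:
    """Node flow: anonymize, forward to the LLM provider, then enrich the response."""
    safe_prompt, mapping = anonymize(prompt)
    response = call_llm(safe_prompt)  # thin wrapper around an OpenAI or Cohere call
    return deanonymize(response, mapping)
```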
## Challenges we ran into
While building our decentralized Anonymization system, we faced various technical challenges, including:
1. Figuring out a way to use a deployed smart contract as a registry for available nodes.
2. Connecting all three components (backend, frontend, private layer) in a manner that does not hurt user experience.
## Accomplishments that we're proud of
* Successfully deploying decentralized Anonymization nodes.
* Demonstrating how user experiences can be enhanced with privacy-focused solutions.
* Designing a system that can adapt and evolve with future NLP modules.
## What we learned
Throughout this project, we gained valuable insights into decentralized systems, smart contracts, and the importance of user privacy. We also learned how to work with the APIs provided by two LLM giants (Cohere and OpenAI).
## What's next for ChainCloak
The future of ChainCloak looks promising. We plan to:
* Expand the range of Anonymization modules and LLM providers.
* Enhance the security and customization options for Anonymization nodes.
* Collaborate with the community to build a robust ecosystem of privacy-focused solutions.
* Continue exploring new technologies and innovations in the field of decentralized AI and privacy.
We are excited about the potential impact of ChainCloak in ensuring privacy in the era of AI-powered language models. | ## Inspiration
Our biggest inspiration came from our grandparents, who often felt lonely and struggled to find help. Specifically, one of us has a grandpa with dementia. He lives alone and finds it hard to receive help since most of his relatives live far away and he has reduced motor skills. Knowing this, we were determined to create a product -- and a friend -- that would be able to help the elderly with their health while also being fun to be around! Ted makes this dream a reality, transforming lives and promoting better welfare.
## What it does
Ted is able to...
* be a little cutie pie
* chat with the speaker, with reactive movements based on the conversation (waves at you when greeting, bobs idly)
* read heart rate, determine health levels, and provide help accordingly
* drive towards a person in need using the RC car, utilizing object detection and speech recognition
* dance to Michael Jackson
## How we built it
* popsicle sticks, cardboard, tons of hot glue, etc.
* sacrifice of my fingers
* Play.HT and Claude 3.5 Sonnet
* YOLOv8
* AssemblyAI
* Selenium
* Arduino, servos, and many sound sensors to determine the direction of the speaker
## Challenges we ran into
One of the challenges we ran into during development was making sure every part was secure. With limited materials, we found that parts would often shift or move out of place after a few test runs, which was frustrating to keep fixing. However, instead of trying the same techniques again, we persevered by trying new attachment methods, which eventually led to a successful solution!
Having two speech-to-text models open at the same time caused some issues (and I still haven't fixed that yet...). Creating reactive movements was difficult too, but we achieved it through the use of keywords and a long list of preset moves.
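As a rough illustration of how the keyword-driven movements work, the sketch below scans the transcribed speech for trigger words and sends the matching preset move to the Arduino over serial. The port name, baud rate, and command strings are assumptions made for this example; the real preset list is much longer.

```python
import serial  # pyserial

# Keyword -> preset move command understood by the Arduino sketch (illustrative).
PRESET_MOVES = {
    "hello": "WAVE",
    "hi": "WAVE",
    "dance": "DANCE_MJ",
    "help": "DRIVE_TO_SPEAKER",
}


def react_to_transcript(transcript: str, port: str = "/dev/ttyUSB0") -> None:
    """Send a preset move whenever a known keyword shows up in the transcript."""
    with serial.Serial(port, 9600, timeout=1) as arduino:
        for word in transcript.lower().split():
            move = PRESET_MOVES.get(word.strip(".,!?"))
            if move:
                arduino.write((move + "\n").encode())
```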
## Accomplishments that we're proud of
* Fluid head and arm movements of Ted
* Very pretty design on the car, poster board
* Very snappy response times with realistic voice
## What we learned
* power of friendship
* don't be afraid to try new things!
## What's next for Ted
* integrating more features to enhance Ted's ability to aid peoples' needs --> ex. ability to measure blood pressure | losing |
## Inspiration
Imagine a world where learning is as easy as having a conversation with a friend. Picture a tool that unlocks the treasure trove of educational content on YouTube, making it accessible to everyone, regardless of their background or expertise. This is exactly what our hackathon project brings to life.
* Today's massive open online courses are great resources for bridging the gap in educational inequality.
* Frustration and loss of motivation with the lengthy and tedious search for that 60-second content.
* Provide support to our students to unlock their potential.
## What it does
Think of our platform as your very own favorite personal tutor. Whenever a question arises during your video journey, don't hesitate to hit pause and ask away. Our chatbot is here to assist you, offering answers in plain, easy-to-understand language. Moreover, it can point you to external resources and suggest specific parts of the video for a quick review, along with relevant sections of the accompanying text. So, explore your curiosity with confidence – we've got your back!
* Analyze the entire video content 🤖 Learn with organized structure and high accuracy
* Generate concise, easy-to-follow conversations⏱️Say goodbye to wasted hours watching long videos
* Generate interactive quizzes and personalized questions 📚 Engaging and thought-provoking
* Summarize key takeaways, explanations, and discussions tailored to you 💡 Provides tailored support
* Accessible to anyone with an internet connection 🌐 Accessible and Convenient
## How we built it
Vite + React.js as the front-end and Flask as the back-end, using Cohere's command-nightly model and similarity ranking.
## Challenges we ran into
* **Increased application efficiency by 98%:** We reduced the number of API calls, lowering load time from 8.5 minutes to under 10 seconds. The challenge we ran into was not taking into account the time taken by every API call. Originally, our backend made over 500 calls to Cohere's API to embed text every time a transcript section was initiated, and repeated them when a new prompt was made -- each API call took about one second, adding up to roughly 8.5 minutes in total. By reducing the number of API calls and using more efficient practices we brought the load time to under 10 seconds (one possible approach is sketched after this list).
* **Handling single prompts of over 5000 words:** Scraping longer YouTube transcripts efficiently was complex. We solved it by integrating YouTube APIs and third-party dependencies, enhancing speed and reliability. Uploading multi-prompt conversations with large initial prompts to MongoDB was also challenging; we optimized the data transfer to maintain a smooth user experience.
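Here is one possible shape of that optimization: embed all transcript chunks in a single batched request and cache the vectors so later prompts reuse them instead of calling the API again. The function names follow Cohere's Python SDK, but treat the details as an illustrative sketch rather than our exact backend code.

```python
import cohere

co = cohere.Client("YOUR_API_KEY")
_embedding_cache: dict[str, list[float]] = {}


def embed_chunks(chunks: list[str]) -> list[list[float]]:
    """Embed only the chunks we haven't seen before, in one batched API call."""
    missing = [c for c in chunks if c not in _embedding_cache]
    if missing:
        response = co.embed(texts=missing)  # one call instead of one per chunk
        for text, vector in zip(missing, response.embeddings):
            _embedding_cache[text] = vector
    return [_embedding_cache[c] for c in chunks]
```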
## Accomplishments that we're proud of
Created a practical full-stack application that I will use on my own time.
## What we learned
* **Front end:** State management with React, third-party dependencies, UI design.
* **Integration:** Scalable and efficient API calls.
* **Back end:** MongoDB, Langchain, Flask server, error handling, optimizing time complexity and using Cohere AI.
## What's next for ChicSplain
We envision ChicSplain to be more than just an AI-powered YouTube chatbot, we envision it to be a mentor, teacher, and guardian that will be no different in functionality and interaction from real-life educators and guidance but for anyone, anytime and anywhere. | ## Inspiration
All of us have gone through the painstaking and difficult process of onboarding as interns and making sense of huge repositories with many layers of folders and files. We hoped to shorten this or remove it completely through the use of Code Flow.
## What it does
Code Flow exists to speed up onboarding and make code easy to understand for non-technical people such as Project Managers and Business Analysts. Once the user has uploaded the repo, it has 2 main features. First, it can visualize the entire repo by showing how different folders and files are connected and providing a brief summary of each file and folder. It can also visualize a file by showing how the different functions are connected for a more technical user. The second feature is a specialized chatbot that allows you to ask questions about the entire project as a whole or even specific files. For example, "Which file do I need to change to implement this new feature?"
## How we built it
We used React to build the front end. Any folders uploaded by the user through the UI are stored using MongoDB. The backend is built using Python-Flask. If the user chooses a visualization, we first summarize what every file and folder does and display that in a graph data structure using the library pyvis. We analyze whether files are connected in the graph based on an algorithm that checks features such as the functions imported, etc. For the file-level visualization, we analyze the file's code using an AST and figure out which functions are interacting with each other. Finally for the chatbot, when the user asks a question we first use Cohere's embeddings to check the similarity of the question with the description we generated for the files. After narrowing down the correct file, we use its code to answer the question using Cohere generate.
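A minimal sketch of the question-to-file matching step looks like the snippet below: embed the question together with each generated file description, rank by cosine similarity, and pass the winning file's code to the generation call. The prompt wording and SDK usage here are simplified illustrations, not our exact implementation.

```python
import numpy as np
import cohere

co = cohere.Client("YOUR_API_KEY")


def pick_best_file(question: str, descriptions: dict[str, str]) -> str:
    """Return the path whose generated description is most similar to the question."""
    paths = list(descriptions)
    texts = [question] + [descriptions[p] for p in paths]
    vectors = np.array(co.embed(texts=texts).embeddings)
    q, docs = vectors[0], vectors[1:]
    scores = docs @ q / (np.linalg.norm(docs, axis=1) * np.linalg.norm(q))
    return paths[int(np.argmax(scores))]


def answer_question(question: str, descriptions: dict[str, str], code_of) -> str:
    """Answer using the code of the best-matching file; code_of(path) returns its source."""
    best = pick_best_file(question, descriptions)
    prompt = f"Code:\n{code_of(best)}\n\nQuestion: {question}\nAnswer:"
    return co.generate(prompt=prompt).generations[0].text
```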
## Challenges we ran into
We struggled a lot with narrowing down which file to use to answer the user's questions. We initially thought to simply use Cohere generate to reply with the correct file but knew that it isn't specialized for that purpose. We decided to use embeddings and then had to figure out how to use those numbers to actually get a valid result. We also struggled with getting all of our tech stacks to work as we used React, MongoDB and Flask. Making the API calls seamless proved to be very difficult.
## Accomplishments that we're proud of
This was our first time using Cohere's embeddings feature and accurately analyzing the result to match the best file. We are also proud of being able to combine various different stacks and have a working application.
## What we learned
We learned a lot about NLP, how embeddings work, and what they can be used for. In addition, we learned how to problem solve and step out of our comfort zones to test new technologies.
## What's next for Code Flow
We plan on adding annotations for key sections of the code, possibly using a new UI so that the user can quickly understand important parts without wasting time. | ## Inspiration
In August, one of our team members was hit by a drunk driver. She survived with a few cuts and bruises, but unfortunately, there are many victims who are not as lucky. The emotional and physical trauma she and other drunk-driving victims experienced motivated us to try and create a solution in the problem space.
Our team initially started brainstorming ideas to help victims of car accidents contact first-response teams faster, but then we thought: what if we could find an innovative way to reduce the number of victims? How could we help victims by preventing them from becoming victims in the first place, while ensuring the safety of the drivers themselves?
Despite current preventative methods, alcohol-related accidents still persist. According to the National Highway Traffic Safety Administration, in the United States, there is a death caused by motor vehicle crashes involving an alcohol-impaired driver every 50 minutes. The most common causes are rooted in failing to arrange for a designated driver, and drivers overestimating their sobriety. In order to combat these issues, we developed a hardware and software tool that can be integrated into motor vehicles.
We took inspiration from the theme “Hack for a Night Out”. While this theme usually means making the night out more fun, we thought that another aspect of nights out that could be improved is getting everyone home safe. It's no fun at all if people end up getting tickets, injured, or worse after a fun night out, and we're hoping that our app will make getting home a safer, more secure journey.
## What it does
This tool saves lives.
It passively senses the alcohol levels in a vehicle using a gas sensor that can be embedded into a car's steering wheel or seat. Using this data, it discerns whether or not the driver is fit to drive and notifies them. If they should not be driving, the app immediately connects the driver to alternative ways of getting home, such as Lyft, emergency contacts, and professional driving services, and sends out the driver's location.
The sensor reading is compared against two thresholds: one below which no alcohol is present and one above which alcohol is clearly present. If there is no alcohol present, the car functions normally. If alcohol is present, the car immediately notifies the driver and provides the options listed above. For readings that fall between these two thresholds, our application uses car metrics and user data to determine whether the driver should pull over. In terms of user data, if the driver is under 21 (based on configurations in the car such as teen mode), the app indicates that the driver should pull over. If the user is over 21, the app notifies the driver if reckless driving is detected, based on car speed, the presence of a seatbelt, and the brake pedal position.
## How we built it
Hardware Materials:
* Arduino Uno
* Wires
* Grove alcohol sensor
* HC-05 bluetooth module
* USB 2.0 b-a
* Hand sanitizer (ethyl alcohol)
Software Materials:
* Android Studio
* Arduino IDE
* General Motors Info3 API
* Lyft API
* Firebase
## Challenges we ran into
Some of the biggest challenges we ran into involved Android Studio. Testing the app on an emulator limited what we could verify, with emulator incompatibilities causing a lot of issues. Basic gaps such as the lack of Bluetooth support also hindered our work and prevented testing of some of the core functionality. In order to test erratic driving behavior on a road, we wanted to track a driver's ‘Yaw Rate’ and ‘Wheel Angle’; however, these parameters were not available to emulate on the Mock Vehicle simulator app.
We also had issues picking up Android Studio for members of the team new to Android, as the software, while powerful, is not the easiest for beginners to learn. This led to a lot of time being spent just spinning up and getting familiar with the platform. Finally, we had several issues on the hardware side, with the Arduino platform being very finicky and often crashing due to various incompatible sensors, and sometimes seemingly on its own.
## Accomplishments that we're proud of
We managed to get the core technical functionality of our project working, including the alcohol air sensor and the ability to pull low-level information about the movement of the car to make an algorithmic decision about how the driver was driving. We were also able to wirelessly link the data from the Arduino platform to the Android application.
## What we learned
* Learn to adapt quickly and don’t get stuck for too long
* Always have a backup plan
## What's next for Drink+Dryve
* Minimize hardware to create a compact design for the alcohol sensor, built to be placed inconspicuously on the steering wheel
* Testing on an actual car to simulate real driving circumstances (under controlled conditions): getting parameter data like ‘Yaw Rate’ and ‘Wheel Angle’, testing screen prompts on the car display (the emulator did not have this feature, so we mimicked it on our phones), and connecting directly to the Bluetooth feature of the car (a separate APK would need to be side-loaded onto the car, or some Wi-Fi connection created, because the car does not allow non-phone Bluetooth devices to be detected)
* Other features: add direct payment using a service such as Plaid, add facial authentication, and use DocuSign to share incidents of erratic or drunk driving with a driver's insurance company for review
* Our key priority is making sure the driver is no longer in a position to hurt other drivers and is no longer a danger to themselves. We want to integrate more mixed-mobility options, such as designated-driver services like Dryver, giving users more ways to get home beyond ride-share services, and we would want to include a service such as Plaid to allow driver payment information to be transmitted securely.
We would also like to examine a driver’s behavior over a longer period of time, and collect relevant data to develop a machine learning model that would be able to indicate if the driver is drunk driving more accurately. Prior studies have shown that logistic regression, SVM, decision trees can be utilized to report drunk driving with 80% accuracy. | partial |
## Inspiration
There is one thing we all definitely miss during this time of social distancing, which is not being able to hang out with our friends and/or family, at least not in a huge group. Those birthday surprises, dinner parties, graduation celebrations, and such occasions commemorating various special days of our lives. One of the important things about these events is the gifts/surprises for our loved ones.
No matter how small the number of people involved is, there will always be one long discussion -- what should we gift him/her/them? Unless it's a wedding and there is a registry, the most common and simple way to go about this is to meet and talk about it or, nowadays, make a group chat; and needless to say, we all know what that talk turns into as everyone tries to stick to their own choice of gift. This is where our app comes in.
## What it does
**Wrapify** is an app that is focused on the fact that gift-giving is not as easy as it sounds, especially if it’s from multiple people. This app has a feature where the user can explore through ecommerce platforms such as Amazon or input a link to any other website to look for their perfect gift. It lets everyone in your group have a voice by letting them add and vote for their favourite gift(s). This is how it works:
* You create a room and add your friends/family members
* Each member can add a number of gifts to the list
* Each one of you can vote for the gifts you find the best
* The gift with the highest number of votes is ultimately the one that most of you loved!
* Seeing as how easy it became to choose a gift out of all the options, one of you creates a new room for a new event
## How we built it
We built a hybrid app (iOS as well as Android) with the help of the Ionic Framework, which wraps our front-end code into native iOS and Android applications! For the front-end, we used **React.js** with the help of small npm libraries, and SCSS, as it helps keep the code cleaner and more maintainable. We used **Firebase** as our backend system, as it provides us with Firestore and its Authentication system to store users safely and securely.
We also built a custom API, as querying Firebase directly is not very secure. The API is built in Node.js using the Express library. We designed the app in **Figma** before developing it to get a clear understanding of what we were building.
## Challenges we ran into
The first issue we ran into was trying to find an Amazon or other online-store API that could return product data for user queries. Amazon doesn't readily allow this, and we would need lots of permissions to get into their API, so we had to use other free store APIs available online.
This was our first time integrating Firebase in React and we had problems with understanding how would we structure our API and database. It took us a while to get through understanding the backend.
Lastly, developing a good UI/UX was another challenge we faced. But by designing some user flows and sketching out low fidelity mockups we were able to come up with good high fidelity mockups.
## Accomplishments that we're proud of
We were really happy with the whole app, as we built a really interactive UI/UX long with a working API and database. We believe that users can navigate the UI very easily and the data shown is relevant to the users. One of the crucial aspects is that our platform is a Hybrid Platform and will attract lots of users.
## What we learned
We learned that APIs are not always simple to use, but given the right effort we can overcome that challenge. We also learned about creating hybrid applications (iOS and Android) and running them locally. This shows how we can develop a cross-platform app for a lot of users and not restrict ourselves to one specific platform.
## What's next for Wrapify
We initially planned for the app to have Amazon store in the App where we can search products without having to navigate to Amazon app or other apps. Next steps would be integrating online stores and giving the user more options to shop from. | # Project Story: AI Email Summarization Tool
As avid email users, we always found ourselves spending countless hours sorting through our inboxes. It was a tedious and time-consuming task that left us feeling overwhelmed and stressed. That's when we had the idea to build an AI email summarization tool that could make our lives easier.
We researched the latest advancements in natural language processing and machine learning algorithms, and decided to leverage the capabilities of OpenAI to build our prototype. We collected a large dataset of emails with the Gmail API and trained our OpenAI model to identify key points and summarize them in a digestible format.
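Assuming the message bodies have already been pulled down via the Gmail API, the core of the daily digest step can be sketched as below; the model name and prompt are placeholders rather than the exact ones we shipped.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_inbox(email_bodies: list[str]) -> str:
    """Condense a batch of email bodies into a short daily digest."""
    joined = "\n\n---\n\n".join(email_bodies)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Summarize these emails as a short daily digest of key points."},
            {"role": "user", "content": joined},
        ],
    )
    return response.choices[0].message.content
```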
It wasn't an easy journey. We had never designed a chrome extension before and didn't have the Javascript knowledge necessary to work with the different APIs. But after figuring out each of our problems, we were able to design a functioning product that could analyze and summarize emails into a daily digest. We tested it on our own inboxes and were amazed at how much time and stress it saved us.
By using OpenAI's technology, we were able to harness the power of cutting-edge AI to simplify complex tasks and make our lives more efficient. Today, our AI email summarization tool, built with the help of OpenAI, is available to everyone, helping people across the world manage their inbox more efficiently and productively. | ## Inspiration
When the first experimental COVID-19 vaccine became available in China, hundreds of people started queuing outside hospitals, waiting to get that vaccine. Imagine this on a planetary scale, when everybody around the world has to be vaccinated. There's a big chance that while queuing they can spread the virus to people around them, or get infected themselves, because they cannot perform social distancing at all. We sure don't want that to happen.
The other big issue is that there are lots of conspiracy theories, rumors, stigma, and other forms of disinformation simultaneously spreading across our social media about COVID-19 and its vaccine. This misinformation creates frustration, with many users asking: which information is right? Which is wrong?
## What it does
Immunize is a mobile app that can save your life and save your time. The goal is to make mass-vaccination distribution more effective, faster, and less crowded. With this app, you can book your vaccine appointment based on your own preferences: the user can easily choose the hospital by nearest location and schedule an appointment based on their availability.
In addition, our research found that most COVID-19 vaccines require 2 doses given about 3 weeks apart to achieve high effectiveness, and there's a big probability that people will forget to return for the follow-up shot. We can minimize that probability: the app automatically schedules the patient for the 2nd vaccination so there is less likelihood of user error, and the reminder system (our notification feature) notifies them on their phone when they have an appointment that day.
## How we built it
We built the prototype using Flutter as our mobile client. We integrated Radar.io for hospital search. For facial recognition we used GCP, and for SMS reminders we used Twilio. The mobile client connects to Firebase, using Firebase Authentication for auth, Firebase Storage for avatars, and Firestore for user metadata. The second backend host used DataStax.
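The SMS reminder itself is only a few lines with Twilio. The sketch below shows it in Python purely for illustration (our reminder job actually runs against the appointment data stored in Firestore, and the phone numbers here are placeholders).

```python
from twilio.rest import Client

client = Client("TWILIO_ACCOUNT_SID", "TWILIO_AUTH_TOKEN")


def send_appointment_reminder(patient_phone: str, hospital: str, time_slot: str) -> None:
    """Text the patient on the day of their vaccination appointment."""
    client.messages.create(
        to=patient_phone,      # e.g. "+15551234567" (placeholder)
        from_="+15557654321",  # Twilio number (placeholder)
        body=f"Reminder: your COVID-19 vaccine appointment at {hospital} is today at {time_slot}.",
    )
```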
## Challenges we ran into
Working with an international team was very challenging, with team members 12+ hours apart. All of us were learning something new, whether it was Flutter, facial recognition, or experimenting with new APIs. The Flutter APIs were very experimental: the camera API had to be rolled back two major versions (both released within less than 2 months) to find a viable working version compatible with online tutorials.
## Accomplishments that we're proud of
The features:
1. **QR Code Feature** for storing all personal data + health condition, so user don't need to wait for a long queue of administrative things.
2. **Digital Registration Form** checking whether the user qualifies for a COVID-19 vaccine and which vaccine suits them best.
3. **Facial Recognition** because people who are not eligible for vaccination may fraudulently attempt to get limited supplies of vaccine, we implemented facial recognition to confirm that the user who booked the appointment is the same one who showed up.
4. **Scheduling Feature** based on date, vaccine availability, and the nearby hospital.
5. **Appointment History** to track all data of patients, this data can be used for better efficiency of mass-vaccination in the future.
6. **Immunize Passport** for vaccine & get access to public spaces. This will create domino effect for people to get vaccine as soon as possible so that they can get access.
7. **Notification** to remind patients whenever they have an appointment and to share important news, via SMS and push notifications
8. **Vaccine Articles** - to ensure the user can get the accurate information from a verified source.
9. **Emergency Button** - In case there are side effects after vaccination.
10. **Closest Hospitals/Pharmacies** - based on a user's location, users can get details about the closest hospitals through Radar.io Search API.
## What we learned
We researched and learned a lot about COVID-19 vaccines. Some coronavirus vaccines may work better in certain populations than others: one vaccine may work better in the elderly than in younger populations, while another may work better in children than in the elderly. Research suggests the coronavirus vaccine will likely require 2 shots to be effective, taken 21 days apart for Pfizer's vaccine and 28 days apart for Moderna's remedy.
## What's next for Immunize
Final step is to propose this solution to our government. We really hope this app could be implemented in real life and be a solution for people to get COVID-19 vaccine effectively, efficiently, and safely. Polish up our mobile app and build out an informational web app and a mobile app for hospital staff to scan QR codes and verify patient faces (currently they have to use the same app as the client) | losing |
## Inspiration
We see technology progressing rapidly in cool fields like virtual reality, social media and artificial intelligence but often neglect those who really need tech to make a difference in their lives.
SignFree aims to bridge the gap between the hearing impaired and the general public by making it easier for everyone to communicate.
## What it does
SignFree is a smart glove that is able to detect movements and gestures to translate sign language into speech or text.
## How we built it
SignFree was built using a glove with embedded sensors to track finger patterns. The project relies on an Arduino board with a small logic circuit to detect which fingers are activated for each sign. This information is relayed to a database and picked up by a script that converts it into human speech.
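A stripped-down version of that script could look like the sketch below: read the finger-activation pattern recorded from the glove, look up the matching phrase, and speak it. The phrase table and the use of `pyttsx3` for text-to-speech are illustrative stand-ins for our actual database and speech API.

```python
import pyttsx3

# Finger-activation pattern (thumb..pinky) -> phrase; illustrative subset only.
SIGN_TABLE = {
    (1, 1, 0, 0, 0): "hello",
    (0, 1, 1, 0, 0): "thank you",
    (1, 0, 0, 0, 1): "yes",
}


def speak_sign(finger_pattern: tuple[int, ...]) -> None:
    """Convert a detected finger pattern into spoken audio."""
    phrase = SIGN_TABLE.get(finger_pattern)
    if phrase is None:
        return  # unknown pattern, ignore
    engine = pyttsx3.init()
    engine.say(phrase)
    engine.runAndWait()
```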
## Challenges we ran into
Coming up with the logic behind sensing different finger patterns was difficult and took some planning
The speech API used on the web server was tricky to implement as well
## Accomplishments that we are proud of
We feel our hack has real-world potential, and this is something we aimed to accomplish at this hackathon.
## What we learned
Basic phrases in sign language. We used a bunch of new APIs to get things working.
## What's next for SignFree
More hackathons. More hardware. More fun | ## Inspiration
Over 70 million people around the world use sign language as their native form of communication: 70 million voices that are not fully recognized in today's society. This disparity inspired our team to develop a program that allows real-time translation of sign language onto a text display. By breaking down language barriers, it lets those who do not know sign language communicate with a new community and makes interactions more inclusive.
## What it does
It translates sign language into text in real time.
## How we built it
We set up the environment by installing the required packages (OpenCV, MediaPipe, scikit-learn) and connected a webcam.
-Data preparation: We collected data for our ML model by capturing a few sign language letters through the webcam, saving whole image frames into separate categories so the letters could be classified.
-Data processing: We use MediaPipe's computer vision inference to capture the hand gestures and localize the landmarks of your fingers (see the sketch after this list).
-Train/test model: We trained our model to detect matches between the training images and the hand landmarks captured in real time.
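The sketch below condenses the landmark-extraction and training steps; the file paths and the choice of classifier are assumptions for illustration, while the MediaPipe and scikit-learn calls reflect the libraries we installed.

```python
import cv2
import mediapipe as mp
from sklearn.ensemble import RandomForestClassifier

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)


def landmarks_from_image(path: str):
    """Return the 21 (x, y) hand landmarks from one image, flattened, or None."""
    image = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
    result = hands.process(image)
    if not result.multi_hand_landmarks:
        return None
    points = result.multi_hand_landmarks[0].landmark
    return [coord for p in points for coord in (p.x, p.y)]


def train(samples: list[tuple[str, str]]) -> RandomForestClassifier:
    """samples is a list of (image_path, letter_label) pairs; classifier is illustrative."""
    features, labels = [], []
    for path, letter in samples:
        vector = landmarks_from_image(path)
        if vector is not None:
            features.append(vector)
            labels.append(letter)
    return RandomForestClassifier().fit(features, labels)
```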
## Challenges we ran into
The first challenge we ran into was our team struggling to come up with a topic to develop. Then we ran into the issue of integrating our sign language detection code with the hardware, because our laptop lacked the ability to effectively handle the processing load of our code.
## Accomplishments that we're proud of
The accomplishment that we are most proud of is that we were able to implement hardware in our project as well as Machine Learning with a focus on computer vision.
## What we learned
At the beginning of our project, our team was inexperienced with developing machine learning coding. However, through our extensive research on machine learning, we were able to expand our knowledge in under 36 hrs to develop a fully working program. | Currently, about 600,000 people in the United States have some form of hearing impairment. Through personal experiences, we understand the guidance necessary to communicate with a person through ASL. Our software eliminates this and promotes a more connected community - one with a lower barrier entry for sign language users.
Our web-based project detects signs using the live feed from the camera and features like autocorrect and autocomplete reduce the communication time so that the focus is more on communication rather than the modes. Furthermore, the Learn feature enables users to explore and improve their sign language skills in a fun and engaging way. Because of limited time and computing power, we chose to train an ML model on ASL, one of the most popular sign languages - but the extrapolation to other sign languages is easily achievable.
With an extrapolated model, this could be a huge step towards bridging the chasm between the worlds of sign and spoken languages. | winning |
## Inspiration
Each of us living in a relatively suburban area, we are often quite confused when walking through larger cities. We can each relate to the frustration of not being able to find what seems to be even the simplest of things: a restroom nearby or a parking space we have been driving around endlessly to find. Unfortunately, we can also relate to the fear of danger present in many of these same cities. IntelliCity was designed to accommodate each one of these situations by providing users with a flexible, real-time app that reacts to the city around them.
## What it does
IntelliCity works by leveraging the power of crowdsourcing. Whenever users spot an object, event or place that fits into one of several categories, they can report it through a single button in our app. This is then relayed through our servers and other users on our app can view this report along with any associated images or descriptions, conveniently placed as a marker on a map.
## How we built it

IntelliCity was built using a variety of different frameworks and tools. Our front-end was designed using Flutter and the Google Maps API, which provided us with an efficient way to get geolocation data and place markers. Our backend was built using Flask and Google Cloud.
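On the backend, the heart of the reporting flow is a small endpoint that accepts a report with its coordinates and serves it back to other clients for the map markers. The sketch below is a simplified Flask version with an in-memory list standing in for our Google Cloud storage layer.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
reports = []  # stand-in for the real datastore


@app.post("/reports")
def add_report():
    """Store a crowdsourced report: category, coordinates, optional description."""
    data = request.get_json()
    report = {
        "category": data["category"],
        "lat": data["lat"],
        "lng": data["lng"],
        "description": data.get("description", ""),
    }
    reports.append(report)
    return jsonify(report), 201


@app.get("/reports")
def list_reports():
    """Return all reports so clients can drop markers on the map."""
    return jsonify(reports)
```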
## Challenges we ran into
Although we are quite happy with our final result, there were definitely a few hurdles we faced along the way. One of the most significant of these was properly optimizing our app for mobile devices, for which we were using Flutter, a relatively new framework for many of us. A significant challenge related to this was placing custom, location-dependent markers for individual reports. Another challenge we faced was transmitting the real-time data throughout our setup and having it finally appear on individual user accounts. Finally, a last challenge we faced was actually sending text messages to users when potential risks were identified in their area.
## Accomplishments that we're proud of
We are proud of getting a functional app for both mobile and web.
## What we learned
We learned a significant amount throughout this hackathon, about everything from using specific frameworks and APIs such as Flutter, Google Maps, Flask and Twilio to communication and problem-solving skills.
## What's next for IntelliCity
In the future, we would like to add support for detailed analysis of specific cities. | In the public imagination, the year 1956 brings to mind a number of things – foremost the Hungarian Revolution, and its subsequent bloody suppression. Those of a certain vintage would recall the Suez Crisis, or the debut album of Elvis Presley.
But those in the know would associate 1956 with the Dartmouth workshop, often considered the seminal event in artificial intelligence. In the intervening decades the field of AI bore witness to several cycles of hype and bust, as it broadened and matured.
The field is once again in a frenzy, and public perception of AI is divided. Evangelists, believing it a tool of Promethean promise, herald the coming of what they call the AI revolution. Others, wary of the limits of today’s computational powers and the over-promise of previous hypes, warn of a market correction of sorts. Because of its complexity and apparent inaccessibility, the average layperson views it with both awe and suspicion. Still others are unaware of its developments at all.
However, there is one major difference between the present flowering of AI and the previous decades. It is here in our everyday lives, and here to stay. Yet most people are not aware of this.
We aim to make AI more accessible by creating a user-friendly experience that gives easy and fun example use-cases, and provides users with a memento after completion.
We initially started off rather ambitiously, and wanted to create a cinematic experience that would incorporate computer vision, and natural language processing. However, we quickly discovered that this would prove difficult to implement within the 36-hour time limit, especially given that this is the first hackathon that our team members have participated in, and that some of us had limited exposure to the tools and frameworks that we used to deploy our project.
Nevertheless, we are proud of the prototype that we built and we hope to expand upon it after the conclusion of TreeHacks.
We used AWS to host our website and produce our conversational agents, Gradio to host our OpenAI GPT-3 demo, and HTML, CSS, Javascript to build the front-end and back-end of our website. | ## Inspiration
We wanted to use Livepeer's features to build a unique streaming experience for gaming content for both streamers and viewers.
Inspired by Twitch, we wanted to create a platform that increases exposure for small and upcoming creators and establish a more unified social ecosystem for viewers, allowing them both to connect and interact on a deeper level.
## What is does
kizuna has aspirations to implement the following features:
* Livestream and upload videos
* View videos (both on a big screen and in a small mini-player for multitasking)
* Interact with friends (on stream, in a private chat, or in public chat)
* View activities of friends
* Highlights smaller, local, and upcoming streamers
## How we built it
Our web application was built using React, utilizing React Router to navigate between pages and Livepeer's API to allow users to upload content and host livestreams. For background context, Livepeer describes itself as a decentralized video infrastructure network.
The UI design was made entirely in Figma and was inspired by Twitch. However, as a result of a user research survey, changes to the chat and sidebar were made in order to facilitate a healthier user experience. New design features include a "Friends" page, introducing a social aspect that allows for users of the platform, both streamers and viewers, to interact with each other and build a more meaningful connection.
## Challenges we ran into
We had barriers with the API key provided by Livepeer.studio. This put a halt to the development side of our project. However, we still managed to get our livestreams working and our videos uploading! Implementing the design portion from Figma to the application acted as a barrier as well. We hope to tweak the application in the future to be as accurate to the UX/UI as possible. Otherwise, working with Livepeer's API was a blast, and we cannot wait to continue to develop this project!
You can discover more about Livepeer's API [here](https://livepeer.org/).
## Accomplishments that we're proud of
Our group is proud of our persistence through all the challenges that confronted us throughout the hackathon. From learning a whole new programming language, to staying awake no matter how tired, we are all proud of each other's dedication to creating a great project.
## What we learned
Although we knew of each other before the hackathon, we all agreed that having teammates that you can collaborate with is a fundamental part to developing a project.
The developers (Josh and Kennedy) learned lots about implementing APIs and working with designers for the first time. For Josh, this was his first time applying what he had practiced in small React projects. This was Kennedy's first hackathon, where she learned how to implement CSS.
The UX/UI designers (Dorothy and Brian) learned more about designing web applications as opposed to the mobile applications they are used to. Through this challenge, they were also able to learn more about Figma's design tools and functions.
## What's next for kizuna
Our team maintains our intention to continue to fully develop this application to its full potential. Although all of us our still learning, we would like to accomplish the next steps in our application:
* Completing the full UX/UI design on the development side, utilizing a CSS framework like Tailwind
* Implementing Lens Protocol to create a unified social community in our application
* Redesign some small aspects of each page
* Implementing filters to categorize streamers, see who is streaming, and categorize genres of stream. | partial |
## Inspiration
Over 50 million people suffer from epilepsy worldwide, making it one of the most common neurological diseases. With the rise in digital adoption, people who suffer from epilepsy are at risk from online videos that may trigger seizure responses. This can have adverse health effects on the lives of epileptic individuals and inhibits their interaction with technology. Often, there are warnings on videos that may trigger seizures, however, not much effort is made to resolve the problem. Our goal is clear, to provide a solution that proactively solves the problem and increases accessibility for this target group.
## What it does
We built a chrome extension that interacts with YouTube videos and provides users with a warning in advance of seizure-inducing content and applies a filter to allow people to continue watching the video.
## How we built it
We first started off by building our Flask server, which has an open endpoint that receives the YouTube video URL. The video is then downloaded to our server using PyTube and passed to OpenCV, which determines the luminance values across the video. This dataset is then pre-processed and passed to the Azure Anomaly Detection model, which finds anomalies in the luminance data. These anomalies represent portions of the YouTube video with large swings in light intensity that could therefore trigger seizures for photosensitive epileptic users. The anomaly dataset is then processed to determine the timestamps in the video where these events occur, and the timestamps are returned to a Google Chrome extension, which overlays a dark filter over the video during those ranges, thus preventing a potential seizure.
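The luminance extraction and the timestamp grouping are the heart of the pipeline; a simplified sketch is below. The anomaly detection itself happens in Azure, so `anomaly_flags` here is just the per-frame boolean result that service returns, and the grouping shows how those flags become the start/end timestamps sent to the extension.

```python
import cv2


def luminance_series(video_path: str):
    """Return the mean luminance of every frame plus the video's frame rate."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    values = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        values.append(float(gray.mean()))
    cap.release()
    return values, fps


def anomaly_timestamps(anomaly_flags: list, fps: float):
    """Group consecutive anomalous frames into (start_sec, end_sec) ranges."""
    ranges, start = [], None
    for i, flagged in enumerate(anomaly_flags):
        if flagged and start is None:
            start = i
        elif not flagged and start is not None:
            ranges.append((start / fps, i / fps))
            start = None
    if start is not None:
        ranges.append((start / fps, len(anomaly_flags) / fps))
    return ranges
```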
## Challenges we ran into
The first major challenge we faced was attempting to use Google Cloud Services as our anomaly detector for our luminance dataset, as this service required us to package the data into a model and then create an anomaly detection model. As we were inexperienced in this area, we were unable to complete this objective; this prevented us from using Google Cloud Services and cost us precious development time. Another challenge we faced was with the frame rate of our downloaded videos. In our first iteration of the MVP, we were processing the YouTube videos at 8 frames per second while creating the video timestamps on the anomaly data at 30 frames per second. This caused the extension to overlay the seizure warning filter at incorrect times in the video. Once we noticed that we were downloading the videos at the wrong frame rate, we were able to swiftly fix the issue.
## What's next for Epilepsy Safe Viewer
The first thing that we wanted to do as a team was some form of user testing on the product to determine the effectiveness of the product while also determining potential improvement areas. This would allow us to reiterate through our design process and make the product even better at solving our targeted user group’s problem. We also wanted to try applying our extension to users that face startle epilepsy by determining anomalies in the audio decibel levels of Youtube videos and normalizing the audio based on this. Lastly, we wanted to look into potential opportunities of expanding this product to other video platforms such as Netflix and TikTok, thus increasing the inclusivity and accessibility of this user group with technology. | ## Inspiration
<https://www.youtube.com/watch?v=lxuOxQzDN3Y>
Robbie's story stuck out to me as a reminder of how technology can push past seemingly endless limitations. He was diagnosed with muscular dystrophy, which prevented him from having full control of his arms and legs. He was gifted a Google Home that turned his house into a voice-controlled machine. We wanted to take this a step further and make computers more accessible for people such as Robbie.
## What it does
We use a Google Cloud based API that helps us detect words and phrases captured from the microphone input. We then convert those phrases into commands for the computer to execute. Since the Python script runs in the terminal, it can be used across the computer and all its applications.
## How I built it
The first (and hardest) step was figuring out how to leverage Google's API to our advantage. We knew it was able to detect words from an audio file, but there was more to this project than that. We started piecing together libraries to get access to the microphone, file system, keyboard and mouse events, cursor x,y coordinates, and much more. We built a library of roughly 30 functions that could be used to control almost anything on the computer.
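The command layer boils down to matching recognized phrases against a dispatch table of those functions. The trimmed-down sketch below uses `pyautogui` as an illustrative stand-in for our keyboard/mouse libraries and shows only a small subset of the roughly 30 commands.

```python
import pyautogui


def open_spotlight():
    pyautogui.hotkey("command", "space")  # macOS; use "win" on Windows


def scroll_down():
    pyautogui.scroll(-500)


def click_here():
    pyautogui.click()  # click at the current cursor position


COMMANDS = {
    "open search": open_spotlight,
    "scroll down": scroll_down,
    "click": click_here,
}


def dispatch(transcript: str) -> None:
    """Run the first command whose trigger phrase appears in the recognized text."""
    text = transcript.lower()
    for phrase, action in COMMANDS.items():
        if phrase in text:
            action()
            return
```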
## Challenges I ran into
Configuring the many libraries took a lot of time, especially with compatibility issues between macOS and Windows and between Python 2 and 3. Many of our challenges were solved either by thinking of a better solution or by asking people on forums like StackOverflow. For example, we wanted to change the volume of the computer using the fn+arrow key shortcut, but Python is not allowed to access that key.
## Accomplishments that I'm proud of
We are proud of the fact that we had built an alpha version of an application we intend to keep developing, because we believe in its real-world applications. From a technical perspective, I was also proud of the fact that we were able to successfully use a Google Cloud API.
## What I learned
We learned a lot about how the machine interacts with different events in the computer and the time dependencies involved. We also learned how easy Google's APIs are to use, which encourages us to use more of them and to encourage others to do so, too. Finally, we learned about the different nuances of speech detection, like how to tell the API to pick the word "one" over "won" in certain contexts, how to change a "one" to a "1", and how to reduce ambient noise.
## What's next for Speech Computer Control
At the moment we are manually running this script through the command line but ideally we would want a more user friendly experience (GUI). Additionally, we had developed a chrome extension that numbers off each link on a page after a Google or Youtube search query, so that we would be able to say something like "jump to link 4". We were unable to get the web-to-python code just right, but we plan on implementing it in the near future. | ## Inspiration
There are approximately **10 million** Americans who suffer from visual impairment, and over **5 million Americans** suffer from Alzheimer's dementia. This weekend our team decided to help those who were not as fortunate. We wanted to utilize technology to create a positive impact on their quality of life.
## What it does
We utilized a smartphone camera to analyze the surroundings and warn visually impaired people about obstacles in their way. Additionally, we took it a step further and used the **Azure Face API** to detect the faces of people the user interacted with, storing their names and facial attributes so they can be recalled later. An Alzheimer's patient can use the face recognition feature to be reminded of who a person is and when they last saw them.
## How we built it
We built our app around **Azure's APIs**: we created a **Custom Vision** network that identified different objects and learned from the data that we collected from the hacking space. The UI of the iOS app was created to be simple and useful for the visually impaired, so that they could operate it without having to look at it.
## Challenges we ran into
Through the process of coding and developing our idea we ran into several technical difficulties. Our first challenge was to design a simple UI that visually impaired people could use effectively without getting confused. The next challenge was grabbing the visual feed from the camera and running the frames through the Azure services fast enough to get a quick response. Another challenging task was creating and training our own neural network with relevant data.
## Accomplishments that we're proud of
We are proud of several accomplishments in our app. First, we are especially proud of setting up a clean UI with two gestures, plus voice control with speech recognition for the visually impaired. Additionally, we are proud of having set up our own neural network that was capable of identifying faces and objects.
## What we learned
We learned how to implement the **Azure Custom Vision and Azure Face APIs** in **iOS**, and we learned how to use a live camera feed to grab frames and analyze them. Additionally, not all of us had worked with a neural network before, making it interesting for those of us who hadn't to learn about neural networks.
## What's next for BlindSpot
In the future, we want to make the app hands-free for the visually impaired, by developing it for headsets like the Microsoft HoloLens, Google Glass, or any other wearable camera device. | winning |
## Inspiration
Our inspiration came from seeing how overwhelming managing finances can be, especially for students and young professionals. Many struggle to track spending, stick to budgets, and plan for the future, often due to a lack of accessible tools or financial literacy.
So, we decided to build a solution that isn't just another financial app, but a tool that empowers individuals, especially students, to take control of their finances with simplicity, clarity, and efficiency. We believe that managing finances should not be a luxury or a skill learned through trial and error, but something that is accessible and intuitive for everyone.
## What it does
Sera simplifies financial management by providing users with an intuitive dashboard where they can track their recent transactions, bills, budgets, and overall balances - all in one place. What truly sets it apart is the personalized, AI-powered guidance that goes beyond simple tracking. Users receive actionable recommendations like "manage your budget" or "plan for retirement" based on their financial activity.
With features like scanning receipts via QR code and automatic budget updates, we ensure users never miss a detail. The AI chatbot, SeraAI, offers tailored financial advice and can even handle tasks like adding transactions or adjusting budgets - making complex financial decisions easy and stress-free. With a focus on accessibility, Sera makes financial literacy approachable and actionable for everyone.
## How we built it
We used Next.js with TailwindCSS for a responsive, dynamic UI, leveraging server-side rendering for performance. The backend is powered by Express and Node.js, with MongoDB Atlas for scalable, secure data storage.
For advanced functionality, we integrated Roboflow for OCR, enabling users to scan receipts via QR codes and automatically update their transactions. Cerebras handles AI processing, powering SeraAI, our chatbot that offers personalized financial advice and automates various tasks on our platform. In addition, we used Tune to provide users with customized financial insights, ensuring a proactive and intuitive financial management experience.
## Challenges we ran into
Integrating OCR with our app posed several challenges, especially when using Cerebras for real-time processing. Achieving high accuracy was tricky due to the varying layouts and qualities of receipts, which often led to misrecognized data.
Preprocessing images was essential; we had to adjust brightness and contrast to help the OCR perform better, which took considerable experimentation. Handling edge cases, like crumpled or poorly printed receipts, also required robust error-checking mechanisms to ensure accuracy.
While Cerebras provided the speed we needed for real-time data extraction, we had to ensure seamless integration with our user interface. Overall, combining OCR with Cerebras added complexity but ultimately enhanced our app’s functionality and user experience.
## Accomplishments that we're proud of
We’re especially proud of developing our QR OCR system, which showcases our resilience and capabilities despite challenges. Integrating OCR for real-time receipt scanning was tough, as we faced issues with accuracy and image preprocessing.
By leveraging Cerebras for fast processing, we overcame initial speed limitations while ensuring a responsive user experience. This accomplishment is a testament to our problem-solving skills and teamwork, demonstrating our ability to turn obstacles into opportunities. Ultimately, it enhances our app’s functionality and empowers users to manage their finances effectively.
## What we learned
We learned that financial education isn't enough; people need ongoing support to make lasting changes. It's not just about telling users how to budget; it's about providing the tools, guidance, and nudges to help them stick to their goals. We also learned the value of making technology feel human and approachable, particularly when dealing with sensitive topics like money.
## What's next for Sera
The next steps for Sera include expanding its capabilities to integrate with more financial platforms and further personalizing the user experience to provide everyone with guidance and support that fits their needs. Ultimately, we want Sera to be a trusted financial companion for everyone, from those just starting their financial journey to experienced users looking for better insights. | ## Inspiration
Everyone on our team comes from a family of newcomers and just as it is difficult to come into a new country, we had to adapt very quickly to the Canadian system. Our team took this challenge as an opportunity to create something that our communities could deeply benefit from when they arrive in Canada. A product that adapts to them, instead of the other way around. With some insight from our parents, we were inspired to create this product that would help newcomers to Canada, Indigenous peoples, and modest income families. Wealthguide will be a helping hand for many people and for the future.
## What it does
A finance program portal that provides interactive and accessible financial literacy content to customers in marginalized communities, improving their financial intelligence and discipline and, overall, the Canadian economy 🪙. Along with these daily tips, users have access to brief video explanations of each daily tip, with the ability to view them in multiple languages and with subtitles. There are short, quick, easy plans to inform users with limited knowledge of the Canadian financial system or of existing programs for marginalized communities. Marginalized groups can earn benefits from the program by completing plans and attempting short quiz assessments. Users can earn reward points ✨ that can be converted to ca$h credits for more support with their financial needs!
## How we built it
The front end was built using React Native, an open-source UI software framework in combination with Expo to run the app on our mobile devices and present our demo. The programs were written in JavaScript to create the UI/UX interface/dynamics and CSS3 to style and customize the aesthetics. Figma, Canva and Notion were tools used in the ideation stages to create graphics, record brainstorms and document content.
## Challenges we ran into
Designing and developing a product that can simplify the large topics under financial literacy, tools and benefits for users and customers while making it easy to digest and understand such information | We ran into the challenge of installing npm packages and libraries on our operating systems. However, with a lot of research and dedication, we as a team resolved the ‘Execution Policy” error that prevented expo from being installed on Windows OS | Trying to use the Modal function to enable pop-ups on the screen. There were YouTube videos of them online but they were very difficult to follow especially for a beginner | Small and merge errors prevented the app from running properly which delayed our demo completion.
## Accomplishments that we're proud of
**Kemi** 😆 I am proud to have successfully implemented new UI/UX elements such as expandable and collapsible content and vertical and horizontal scrolling. **Tireni** 😎 One accomplishment I’m proud of is that despite being new to React Native, I was able to learn enough about it to make one of the pages on our app. **Ayesha** 😁 I used Figma to design some graphics of the product bringing the aesthetic to life!
## What we learned
**Kemi** 😆 I learned the importance of financial literacy and responsibility and that FinTech is a powerful tool that can help improve financial struggles people may face, especially those in marginalized communities. **Tireni** 😎 I learned how to resolve the ‘Execution Policy” error that prevented expo from being installed on VS Code. **Ayesha** 😁 I learned how to use tools in Figma and applied it in the development of the UI/UX interface.
## What's next for Wealthguide
Newsletter Subscription 📰: Up to date information on current and today’s finance news. Opportunity for Wealthsimple product promotion as well as partnering with Wealthsimple companies, sponsors and organizations. Wealthsimple Channels & Tutorials 🎥: Knowledge is key. Learn more and have access to guided tutorials on how to properly file taxes, obtain a credit card with benefits, open up savings account, apply for mortgages, learn how to budget and more. Finance Calendar 📆: Get updates on programs, benefits, loans and new stocks including when they open during the year and the application deadlines. E.g OSAP Applications. | ## Inspiration
We wanted to make an app that helped people to be more environmentally conscious. After we thought about it, we realised that most people are not because they are too lazy to worry about recycling, turning off unused lights, or turning off a faucet when it's not in use. We figured that if people saw how much money they lose by being lazy, they might start to change their habits. We took this idea and added a full visualisation aspect of the app to make a complete budgeting app.
## What it does
Our app allows users to log in, and it then retrieves user data to visually represent the most interesting data from that user's financial history, as well as their utilities spending.
## How we built it
We used HTML, CSS, and JavaScript as our front-end, and then used Arduino to get light sensor data, and Nessie to retrieve user financial data.
## Challenges we ran into
To seamlessly integrate our multiple technologies, and to format our graphs in a way that is both informational and visually attractive.
## Accomplishments that we're proud of
We are proud that we have a finished product that does exactly what we wanted it to do and are proud to demo.
## What we learned
We learned about making graphs using JavaScript, as well as using Bootstrap in websites to create pleasing and mobile-friendly interfaces. We also learned about integrating hardware and software into one app.
## What's next for Budge
We want to continue to add more graphs and tables to provide more information about your bank account data, and use AI to make our app give personal recommendations catered to an individual's personal spending. | partial |
## Inspiration
Wanted to do the funny track :))
## What it does
LLMs roast each other
## How we built it
Claude, GPT, and LLAMA
## Challenges we ran into
Pivoting last minute
## Accomplishments that we're proud of
## What we learned
Not to pivot too last minute.
## What's next for BurnGPT
More user involvement! | **What inspired us**
Despite the prevalence of LLMs increasing, their power still hasn't been leveraged to improve the experience of students during class. In particular, LLMs are often discouraged by professors, often because they often give inaccurate or too much information. To remedy this issue, we created an LLM that has access to all the information for a course, including the course information, lecture notes, and problem sets. Furthermore, in order for this to be useful for actual courses, we made sure for the LLM to not answer specific questions about the problem set. Instead, the LLM guides the student and provides relevant information for the student to complete the coursework without providing students with the direct answer. This essentially serves as a TA for students to help them navigate their problem sets.
**What we learned**
Through this project, we delved into the complexities of integrating AI with software solutions, uncovering the essential role of user interface design and the nuanced craft of prompt engineering. We learned that crafting effective prompts is crucial, requiring a deep understanding of the AI’s capabilities and the project's specific needs. This process taught us the importance of precision and creativity in prompt engineering, where success depends on translating educational objectives into prompts that generate meaningful AI responses.
Our exploration also introduced us to the concept of retrieval-augmented generation (RAG), which combines the power of information retrieval with generative models to enhance the AI's ability to produce relevant and contextually accurate outputs. While we explored the potentials of using the OpenAI and Together APIs to enrich our project, we ultimately did not incorporate them into our final implementation. This exploration, however, broadened our understanding of the diverse AI tools available and their potential applications. It underscored the importance of selecting the right tools for specific project needs, balancing between the cutting-edge capabilities of such APIs and the project's goals. This experience highlighted the dynamic nature of AI project development, where learning about and testing various tools forms a foundational part of the journey, even if some are not used in the end.
**How we built our project**
Building our project required a strategic approach to assembling a comprehensive dataset from the Stanford CS 106B course, which included the syllabus, problem sets, and lectures. This effort ensured our AI chatbot was equipped with a detailed understanding of the course's structure and content, setting the stage for it to function as an advanced educational assistant. Beyond the compilation of course materials, a significant portion of our work focused on refining an existing chatbot user interface (UI) to better serve the specific needs of students engaging with the course. This task was far from straightforward; it demanded not only a deep dive into the chatbot's underlying logic but also innovative thinking to reimagine how it interacts with users. The modifications we made to the chatbot were extensive and targeted at enhancing the user experience by adjusting the output behavior of the language learning model (LLM).
A pivotal change involved programming the chatbot to moderate the explicitness of its hints in response to queries about problem sets. This adjustment required intricate tuning of the LLM's output to strike a balance between guiding students and stimulating independent problem-solving skills. Furthermore, integrating direct course content into the chatbot’s responses necessitated a thorough understanding of the LLM's mechanisms to ensure that the chatbot could accurately reference and utilize the course materials in its interactions. This aspect of the project was particularly challenging, as it involved manipulating the chatbot to filter and prioritize information from the course data effectively. Overall, the effort to modify the chatbot's output capabilities underscored the complexity of working with advanced AI tools, highlighting the technical skill and creativity required to adapt these systems to meet specific educational objectives.
**Challenges we faced**
Some challenges we faced included scoping our project to ensure that it is feasible given the constraints we had for this hackathon including time. We learned React.js and PLpgSQL for our project since we had only used JavaScript previously. Other challenges we faced were installing Docker, Supabase CLI, and ensuring all dependencies are properly managed. Moreover, we also had to configure Supabase and create the database schema. There were also deployment configuration issues as we had to integrate our front-end application with our back-end to ensure that they are communicating properly. | ## Inspiration
Algorithm interviews... suck. They're more a test of sanity (and your willingness to "grind") than a true performance indicator. That being said, large language models (LLMs) like Cohere and ChatGPT are rather *good* at doing LeetCode, so why not make them do the hard work...?
Introduce: CheetCode. Our hack takes the problem you're currently screensharing, feeds it to an LLM target of your choosing, and gets the solution. But obviously, we can't just *paste* in the generated code. Instead, we wrote a non-malicious (we promise!) keylogger to override your key presses with the next character of the LLM's given solution. Mash your keyboard and solve hards with ease.
The interview doesn't end there though. An email notification will appear on your computer after with the subject "Urgent... call asap." Who is it? It's not mom! It's CheetCode, with a detailed explanation including both the time and space complexity of your code. Ask your interviewer to 'take this quick' and then breeze through the follow-ups.
## How we built it
The hack is the combination of three major components: a Chrome extension, Node (actually... Bun) service, and Python script.
* The **extension** scrapes LeetCode for the question and function header, and forwards the context to the Node (Bun) service
* Then, the **Node service** prompts an LLM (e.g., Cohere, gpt-3.5-turbo, gpt-4) and then forwards the response to a keylogger written in Python
* Finally, the **Python keylogger** enables the user to toggle cheats on (or off...), and replaces the user's input with the LLM output, seamlessly
(Why the complex stack? Well... the extension makes it easy to interface with the DOM, the LLM prompting is best written in TypeScript to leverage the [TypeChat](https://microsoft.github.io/TypeChat/) library from Microsoft, and Python had the best tooling for creating a fast keylogger.)
(P.S. hey Cohere... I added support for your LLM to Microsoft's project [here](https://github.com/michaelfromyeg/typechat). gimme job plz.)
## Challenges we ran into
* HTML `Collection` data types are not fun to work with
* There were no actively maintained cross-platform keyloggers for Node, so we needed another service
* LLM prompting is surprisingly hard... they were not as smart as we were hoping (especially in creating 'reliable' and consistent outputs)
## Accomplishments that we're proud of
* We can now solve any Leetcode hard in 10 seconds
* What else could you possibly want in life?! | losing |
## Inspiration
Every project aims to solve a problem, and to solve people's concerns. When we walk into a restaurant, we are so concerned that not many photos are printed on the menu. However, we are always eager to find out what a food looks like. Surprisingly, including a nice-looking picture alongside a food item increases sells by 30% according to Rapp. So it's a big inconvenience for customers if they don't understand the name of a food. This is what we are aiming for! This is where we get into the field! We want to create a better impression on every customer and create a better customer-friendly restaurant society. We want every person to immediately know what they like to eat and the first impression of a specific food in a restaurant.
## How we built it
We mainly used ARKit, MVC and various APIs to build this iOS app. We first start with entering an AR session, and then we crop the image programmatically to feed it to OCR from Microsoft Azure Cognitive Service. It recognized the text from the image, though not perfectly. We then feed the recognized text to a Spell Check from Azure to further improve the quality of the text. Next, we used Azure Image Search service to look up the dish image from Bing, and then we used Alamofire and SwiftyJSON for getting the image. We created a virtual card using SceneKit and place it above the menu in ARView. We used Firebase as backend database and for authentication. We built some interactions between the virtual card and users so that users could see more information about the ordered dishes.
## Challenges we ran into
We ran into various unexpected challenges when developing Augmented Reality and using APIs. First, there are very few documentations about how to use Microsoft APIs on iOS apps. We learned how to use the third-party library for building HTTP request and parsing JSON files. Second, we had a really hard time understanding how Augmented Reality works in general, and how to place virtual card within SceneKit. Last, we were challenged to develop the same project as a team! It was the first time each of us was pushed to use Git and Github and we learned so much from branches and version controls.
## Accomplishments that we're proud of
Only learning swift and ios development for one month, we create our very first wonderful AR app. This is a big challenge for us and we still choose a difficult and high-tech field, which should be most proud of. In addition, we implement lots of API and create a lot of "objects" in AR, and they both work perfectly. We also encountered few bugs during development, but we all try to fix them. We're proud of combining some of the most advanced technologies in software such as AR, cognitive services and computer vision.
## What we learned
During the whole development time, we clearly learned how to create our own AR model, what is the structure of the ARScene, and also how to combine different API to achieve our main goal. First of all, we enhance our ability to coding in swift, especially for AR. Creating the objects in AR world teaches us the tree structure in AR, and the relationships among parent nodes and its children nodes. What's more, we get to learn swift deeper, specifically its MVC model. Last but not least, bugs teach us how to solve a problem in a team and how to minimize the probability of buggy code for next time. Most importantly, this hackathon poses the strength of teamwork.
## What's next for DishPlay
We desire to build more interactions with ARKit, including displaying a collection of dishes on 3D shelf, or cool animations that people can see how those favorite dishes were made. We also want to build a large-scale database for entering comments, ratings or any other related information about dishes! We are happy that Yelp and OpenTable bring us more closed to the restaurants. We are excited about our project because it will bring us more closed to our favorite food! | **DO YOU** hate standing at the front of a line at a restaurant and not knowing what to choose? **DO YOU** want to know how restaurants are dealing with COVID-19? **DO YOU** have fat fingers and hate typing on your phone's keyboard? Then Sizzle is the perfect app for you!
## Inspiration
We wanted to create a fast way of getting important information for restaurants (COVID-19 restrictions, hours of operation, etc...). Although there are existing methods of getting the information, it isn't always kept in one place. Especially in the midst of a global epidemic, it is important to know how you can keep yourself safe. That's why we designed our app so that the COVID-19 accommodations are visible straight away. (Sort of like Shazam or Google Assistant but with a camera and restaurants instead)
## What it does
To use Sizzle, simply point it at any restaurant sign. An ML computer vision model then applies text-recognition to recognize the text. This text is then input into a Google Scraper Function, which returns information about the restaurant, including the COVID-19 accommodations.
## How it's built
We built Sizzle in Java, using the Jsoup library. The ML Computer vision model was built using Firebase. The app itself was built in Android Studio and also coded in Java. We used Figma to draft working designs for the app.
## Challenges
Our team members are from 3 different timezones, so it was challenging finding a time where we could all work together. Moreover, for many of us, this was our first time working extensively with Android Studio, so it was challenging to figure out some of the errors and syntax. Finally, the Jsoup library kept malfunctioning, so we had to find a way to implement it properly (despite how frustrating it became).
## Accomplishments
Our biggest accomplishment would probably be completing our project in the end. Despite not including all the features we initially wanted to, we were able to implement most of our ideas. We encountered a lot of roadblocks throughout our project (such as using the Jsoup library), but were able to overcome them which was also a big accomplishment for us.
## What I learned
Each of us took away something different from this experience. Some of us used Android Studio and coded in Java for the first time. Some of us went deeper into Machine Learning and experimented with something new. For others, it was their first time using the Jsoup library or even their first time attending a hackathon. We learned a lot about organization, teamwork, and coordination. We also learned more about Android Studio, Java, and Machine Learning.
## What's next?
Probably adding more information to the app such as the hours of operation, address, phone number, etc... | ## Inspiration
We started **TimeLy** because we noticed a lot of useful information was hidden in the feedback and grades of courses offered in **Stony Brook University**. This information was not being used to its full potential. We believed that if this data could be understood easily, it would help students pick the **right courses**, and teachers could make their classes even better. So, we decided to create a tool that could make sense of all this data quickly and easily.
## What it does
TimeLy is a chatbot which performs a multi-faceted analysis of course feedback and grading data to extract actionable insights that benefit students, educators, and academic administrators. It also implements GPT-3 to give users human like experience.
**For Students:**
Course Insights: Students receive personalized recommendations, aiding them in selecting courses that align with their academic goals and learning preferences.
Difficulty Levels: TimeLy categorizes courses into varying levels of difficulty, offering students a clear perspective to make informed decisions.
**For Educators:**
Feedback Analysis: It systematically analyzes student feedback, transforming it into clear, actionable insights for course improvement.
Sentiment Scores: Educators gain insights into the emotional tone of feedback, allowing them to address specific areas of concern or enhancement.
**For Administrators:**
Data Overview: Academic administrators access a consolidated view of courses’ performance, sentiments, and difficulty levels, enabling strategic decision-making.
Real-Time Queries: The platform supports real-time queries, offering instant insights to optimize academic offerings and student experiences.
With the help of advanced algorithms, TimeLy processes and analyzes educational data, translating it into normalized scores that offer a comparative view of courses. The sentiment analysis feature delves into the emotional context of feedback, presenting a balanced view of positive and negative sentiments.
With the integration of machine learning and AI, the platform becomes interactive. Users can ask questions and receive real-time answers, thanks to the integration of OpenAI GPT-3.5 Turbo. Flask's web interface ensures the platform is accessible and user-friendly, making complex data understandable and usable for decision-making.
## How we built it
The initial phase involved the extraction of data from Excel sheets. We wrote a Python script leveraging the Pandas library, an open-source data analysis and manipulation tool, to process and organize vast datasets efficiently.
Our code is designed to automatically check for pre-processed data stored in a **Parquet file** (to make the processing more faster), a columnar storage file format that is highly optimized for use with data processing frameworks. If the processed data is unavailable, our script initiates the extraction, transformation, and loading **(ETL)** process on the raw data from the Excel file.
For **sentiment analysis**, we employed a specialized sentiment analysis pipeline with the help of huggingface. It’s capable of processing large volumes of textual feedback to derive sentiment scores, categorizing them into positive, negative, or neutral sentiments. We addressed the challenge of handling extensive text data by implementing a truncation mechanism, ensuring optimal performance without compromising the quality of insights.
To transition TimeLy into an interactive, user-friendly platform, we utilized **Flask**, a micro web framework in Python. Flask enabled us to build a web-based interface that is both intuitive and responsive with the help of **HTML**, **CSS** and **JavaScript**. Users can input their queries in **natural language**, and the system, also integrated with the **OpenAI GPT-3.5 Turbo model**, provides real-time, intelligent, and contextual responses aside from the course schedule part.
We also incorporated **Spacy**, a leading library in **NLP (Natural Language Processing)**, to parse and categorize user inputs effectively, ensuring each query yields the most accurate and relevant results. The integration of these advanced technologies transformed TimeLy into a robust, interactive, and highly intuitive educational data analysis platform.
## Challenges we ran into
We did run into some challenges. One big challenge was getting and handling a lot of text data. We had to figure out a way to read and understand this data without taking too much time. Also, the time played a crucial rule to limit the features that we wanted to implement in our project. Another challenge was making the tool user-friendly. We wanted to make sure anyone could use it without needing to know a lot about data or programming. Balancing between making the tool powerful and keeping it easy to use was tough, but we learned a lot from it.
## Accomplishments that we're proud of
We are particularly proud of how TimeLy has came out from a concept into a functional, interactive tool that stands at the confluence of education and technology. Even though it lacks a lot of things, we are proud of what we built.
**Interactivity:** The seamless integration of OpenAI GPT-3.5 Turbo, enabling real-time user interactions and intelligent responses, is an achievement that elevates the user experience.
**Sentiment Analysis:** Implementing a robust sentiment analysis feature that provides nuanced insights into the emotional context of student feedback is another accomplishment.
**User Experience:** We successfully created an intuitive user interface using Flask, ensuring that complex data is accessible and understandable to all users, irrespective of their technical expertise.
## What we learned
**Technical Skills:** We improved our skills in Python, data analysis, and machine learning. Working with libraries like Pandas, Spacy, and integrating OpenAI was a rich learning experience.
**User Engagement:** We learned the pivotal role of user experience, driving us to make TimeLy as intuitive and user-friendly as possible while retaining its technical robustness.
**Data Insights:** The project deepened our understanding of the power of data and how processed, analyzed data can be a goldmine of insights for students, educators, and institutions.
## What's next for TimeLy: A Course Recommender Tool
**Feature Expansion:** We plan to enhance TimeLy by adding more features, such as personalized course recommendations based on individual student’s academic history, learning preferences, and career aspirations.
**Data Sources:** We aim to integrate additional data sources from different colleges to provide a more comprehensive view and richer insights into courses, instructors, and institutions.
**AI Integration:** We are exploring opportunities to further harness AI, enhancing the tool’s predictive analytics capabilities to forecast trends and offer future-focused insights.
**User Community:** Building a community where users can share their experiences, provide feedback, and contribute to the continuous improvement of TimeLy. | partial |
## About
Learning a foreign language can pose challenges, particularly without opportunities for conversational practice. Enter SpyLingo! Enhance your language proficiency by engaging in missions designed to extract specific information from targets. You select a conversation topic, and the spy agency devises a set of objectives for you to query the target about, thereby completing the mission. Users can choose their native language and the language they aim to learn. The website and all interfaces seamlessly translate into their native tongue, while missions are presented in the foreign language.
## Features
* Choose a conversation topic provided by the spy agency and it will generate a designated target and a set of objectives to discuss.
* Engage the target in dialogue in the foreign language on any subject! As you achieve objectives, they'll be automatically marked off your mission list.
* Witness dynamically generated images of the target, reflecting the topics they discuss, after each response.
* Enhance listening skills with automatically generated audio of the target's response.
* Translate the entire message into your native language for comprehension checks.
* Instantly translate any selected word within the conversation context, providing additional examples of its usage in the foreign language, which can be bookmarked for future review.
* Access hints for formulating questions about the objectives list to guide interactions with the target.
* Your messages are automatically checked for grammar and spelling, with explanations in your native language for correcting foreign language errors.
## How we built it
With the time constraint of the hackathon, this project was built entirely on the frontend of a web application. The TogetherAI API was used for all text and image generation and the ElevenLabs API was used for audio generation. The OpenAI API was used for detecting spelling and grammar mistakes.
## Challenges we ran into
The largest challenge of this project was building something that can work seamlessly in **812 different native-foreign language combinations.** There was a lot of time spent on polishing the user experience to work with different sized text, word parsing, different punctuation characters, etc.
Even more challenging was the prompt engineering required to ensure the AI would speak in the language it is supposed to. The chat models frequently would revert to English if the prompt was in English, even if the prompt specified the model should respond in a different language. As a result, there are **over 800** prompts used, as each one has to be translated into every language supported during build time.
There was also a lot of challenges in reducing the latency of the API responses to make for a pleasant user experience. After many rounds of performance optimizations, the app now effectively generates the text, audio, and images in perceived real time.
## Accomplishments that I'm proud of
The biggest challenges also yielded the biggest accomplishments in my eyes. Building a chatbot that can be interacted with in any language and operates in real time by myself in the time limit was certainly no small task.
I'm also exceptionally proud of the fact that I honestly think it's fun to play. I've had many projects that get dumped on a dusty shelf once completed, but the fact that I fully intend to keep using this after the hackathon to improve my language skills makes me very happy.
## What we learned
I had never used these APIs before beginning this hackathon, so there was quite a bit of documentation that I had to read to understand for how to correctly stream the text & audio generation.
## What's next for SpyLingo
There are still more features that I'd like to add, like different types of missions for the user. I also think the image prompting can use some more work since I'm not as familiar with image generation.
I would like to productionize this project and setup a proper backend & database for it. Maybe I'll set up a stripe integration and make it available for the public too! | ## Inspiration
Our biggest inspiration for this project was to find a way to express or encapsulate almost “typical” human emotions visually through immersive art. There are so many different ways to express emotions, but our main focus was to work on a high-powered hardware project combining hardware (Arduino) and software (computer vision). We wanted to use high power image processing for this project, so we utilized the machine learning capabilities of the Arduino Portenta H7 as our camera input. While we had something that can see the world literally, we felt the need to re-express the world through a new artistic vision — in this project we chose flashing lights and color. Additionally, we became friends 3 years ago through our love of combining computing and creativity, it was only appropriate to get back to our roots of connection!
## What it does
This project utilizes the camera on board the Arduino Portenta H7 to recognize emotion on an individual's face, then process that emotion to display it as a color on an artistic light matrix display controlled separately by an Arduino Uno.
## How we built it
First, we found a dataset of thousands of images of faces tagged under 6 emotions: happy, sad, angry, surprised, neutral, and fearful. This dataset was loaded into edge impulse, which allowed us to create a transfer learning based image classifier model to classify an image as belonging to one of these categories. This model was then optimized for the Arduino Portenta H7 + Vision shield and loaded onto the board. A script was then written which would give the most likely class for each of the emotions listed and print the id of that class to the serial console.
After this script was written, we connected the Portenta to a computer which was also connected to an Arduino Uno running a neopixel light matrix. On this computer, we read in serial messages from the Portenta and re-sent them to the Uno, transferring the vision data to the Uno.
Finally, we programmed the Arduino to interpolate between colors based on the emotion data received, stabilizing to a different color for each emotion based on color psychology and the connection between certain colors and their emotional connections.
## Challenges we ran into
Our goal was to decrease the training time for our neural networks and datasets provided. The time it took to download the datasets was lengthy, and there was an uneven distribution of images per emotion category. For example, there were only 3,000 images for "sad," whereas "happy" had over 7,000 images. We were concerned about achieving higher accuracy in training our data, as we were only achieving a maximum of 40% accuracy. We searched online for the highest emotional accuracy and found 50%, so we left the model at 40% since, in the time frame given, it didn't seem like we would be able to optimize the model very much, especially with it running on a microcontroller.
Figuring out how to go from the Portenta H7 -> Uno -> Serial -> LED/Neopixel Matrix was also difficult to navigate at first, but eventually it was figured out through trial and error. A major issue we ran into was the Portenta not connecting to the serial port, which was solved by re-flashing the firmware and uploading the code file to the board differently. Another minor issue we had was that we were constrained by time when we used the 3D printer to make our Arduino cover, and we were unable to use power tools to clean it up, which resulted in it not meeting our aesthetic expectations.
## Accomplishments that we're proud of
We are proud of figuring out how to incorporate art and computing into a project — a long term goal of ours is being able to combine software and hardware together to make an art piece that is personal to the both of us. Additionally, we are super proud that we were able to successfully use the Portenta H7 for the first time, as it is a new board to both of us.
## What we learned
The biggest thing we learned was probably how to use the Arduino Portenta. Before this hackathon, neither of us had ever seen a Portenta. Now, we have learned how to create a machine learning model for the Portenta in edge impulse, connect to OpenMV, interface from the Portenta to a computer, and from the computer to the LED grid. We have also learned a lot about neopixel interfacing with Arduino, facial recognition/detection algorithms, and serial communication.
## What's next for Wavelength
We are hoping to expand and classify more emotions in the future — six human emotions is not enough to encapsulate how humans express themselves to one another. Combining different design patterns with more colors onto the LED Pixel Matrix is on the radar as well. Additionally, it is a hope to train our dataset more accurately so it beats more than 40% accuracy, although most emotion detection models have relatively low accuracy (at most 55-60%). | ## Inspiration
Being a student of the University of Waterloo, every other semester I have to attend interviews for Co-op positions. Although it gets easier to talk to people, the more often you do it, I still feel slightly nervous during such face-to-face interactions. During this nervousness, the fluency of my conversion isn't always the best. I tend to use unnecessary filler words ("um, umm" etc.) and repeat the same adjectives over and over again. In order to improve my speech through practice against a program, I decided to create this application.
## What it does
InterPrep uses the IBM Watson "Speech-To-Text" API to convert spoken word into text. After doing this, it analyzes the words that are used by the user and highlights certain words that can be avoided, and maybe even improved to create a stronger presentation of ideas. By practicing speaking with InterPrep, one can keep track of their mistakes and improve themselves in time for "speaking events" such as interviews, speeches and/or presentations.
## How I built it
In order to build InterPrep, I used the Stdlib platform to host the site and create the backend service. The IBM Watson API was used to convert spoken word into text. The mediaRecorder API was used to receive and parse spoken text into an audio file which later gets transcribed by the Watson API.
The languages and tools used to build InterPrep are HTML5, CSS3, JavaScript and Node.JS.
## Challenges I ran into
"Speech-To-Text" API's, like the one offered by IBM tend to remove words of profanity, and words that don't exist in the English language. Therefore the word "um" wasn't sensed by the API at first. However, for my application, I needed to sense frequently used filler words such as "um", so that the user can be notified and can improve their overall speech delivery. Therefore, in order to implement this word, I had to create a custom language library within the Watson API platform and then connect it via Node.js on top of the Stdlib platform. This proved to be a very challenging task as I faced many errors and had to seek help from mentors before I could figure it out. However, once fixed, the project went by smoothly.
## Accomplishments that I'm proud of
I am very proud of the entire application itself. Before coming to Qhacks, I only knew how to do Front-End Web Development. I didn't have any knowledge of back-end development or with using API's. Therefore, by creating an application that contains all of the things stated above, I am really proud of the project as a whole. In terms of smaller individual accomplishments, I am very proud of creating my own custom language library and also for using multiple API's in one application successfully.
## What I learned
I learned a lot of things during this hackathon. I learned back-end programming, how to use API's and also how to develop a coherent web application from scratch.
## What's next for InterPrep
I would like to add more features for InterPrep as well as improve the UI/UX in the coming weeks after returning back home. There is a lot that can be done with additional technologies such as Machine Learning and Artificial Intelligence that I wish to further incorporate into my project! | losing |
## Inspiration
Being disorganized can put a strain on your productivity and mental health. Since all of us have dealt with this before, we wanted to create an application that would increase our productivity while being user-friendly and quick. Even when we know what we have to do throughout the day, it is tough to organize those tasks around an already busy schedule.
## What it does
The user inputs up to 10 daily goals into our application, along with the priority of accomplishing each one. Our algorithm then sorts them by what we researched to be the best flow for accomplishing personalized goals, and displays a sorted list showing which goals you should tackle first for the highest productivity. Finally, you can choose to upload the plan to your Google Calendar, which lays the events out so they do not overlap with each other or with the existing events in your day.
## How we built it
Implemented in Python, using the tkinter library for the front end and the Google Calendar API for the back end.
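Here's a rough sketch of what the back end does (not our exact code; the field names, time zone, and tie-breaking rule are placeholders), sorting goals by priority and pushing one as a Google Calendar event with the `googleapiclient` library:

```python
# A minimal sketch: sort goals by priority, then push one as a Calendar event.
from googleapiclient.discovery import build

def sort_goals(goals):
    # goals: list of dicts like {"name": "Study for midterm", "priority": 1, "hours": 2}
    # Lower priority number = more important; ties go to the shorter task.
    return sorted(goals, key=lambda g: (g["priority"], g["hours"]))

def add_goal_to_calendar(creds, goal, start_iso, end_iso):
    service = build("calendar", "v3", credentials=creds)
    event = {
        "summary": goal["name"],
        "start": {"dateTime": start_iso, "timeZone": "America/Toronto"},
        "end": {"dateTime": end_iso, "timeZone": "America/Toronto"},
    }
    # Insert into the user's primary calendar.
    return service.events().insert(calendarId="primary", body=event).execute()
```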
## Challenges we ran into
* The Google Calendar API doesn't provide a simple view of which time blocks are already taken
* Designing the sorting algorithm around the user's inputted goals and priorities
* Passing user input from the front end to the back end
## Accomplishments that we're proud of
Successfully overcame all of our challenges and finished everything we had planned out Saturday morning
## What we learned
* We learned Python's tkinter library, how to work with Google's APIs from Python, and how to implement sorting algorithms
* How to stay organized as a team and get each of our individual jobs done
* How to divide up the work evenly
## What's next for lockITdown
* Support all time zones for the user (currently only America/Toronto)
* Let the user input the starting time of their day
* Award points as users follow their schedule, and keep improving our code based on that
* Provide a place for user feedback
* Publish the application on the Play Store / iOS App Store (implementing it in Java)
We all know that you shouldn't simply throw away used batteries or broken lightbulbs in the bin.
But we also know that we might often be too lazy to go all the way to a recycling centre for a couple of batteries. The solution? Lazy disposal!
## What it does
Submit the items you want to get rid of in a couple of seconds! Leave the box containing these items right in front of your house, or in other place of your choice.
Alternatively, be the environmental hero! Check out the list of boxes that are waiting to be collected, collect the ones closer to you and bring them to the closest recycling centre.
You can see the status of a box in real time - if it's already "booked" by another volunteer, or if it's available for you to book it.
## How we built it
We mainly used the Convex platform, with Formik and Yup for creating, validating, and submitting forms.
## Challenges we ran into
Incorporating the geolocation API was a bigger challenge than we expected.
## Accomplishments that we're proud of
We are proud that our team of two back-end developers finally learned some front-end.
## What we learned
We learned how to use Convex, and more about TypeScript and React and their libraries.
## What's next for Lazy Disposal
We plan to
* extend our database of recycling centres
* provide an estimate of the amount of money a recycling centre can offer based on the box content
* gamify the process! | **In times of disaster, there is an outpouring of desire to help from the public. We built a platform which connects people who want to help with people in need.**
## Inspiration
Natural disasters are an increasingly pertinent global issue which our team is quite concerned with. So when we encountered the IBM challenge relating to this topic, we took interest and further contemplated how we could create a phone application that would directly help with disaster relief.
## What it does
**Stronger Together** connects people in need of disaster relief with local community members willing to volunteer their time and/or resources. Such resources include but are not limited to shelter, water, medicine, clothing, and hygiene products. People in need may input their information and what they need, and volunteers may then use the app to find people in need of what they can provide. For example, someone whose home is affected by flooding due to Hurricane Florence in North Carolina can input their name, email, and phone number in a request to find shelter so that this need is discoverable by any volunteers able to offer shelter. Such a volunteer may then contact the person in need through call, text, or email to work out the logistics of getting to the volunteer’s home to receive shelter.
## How we built it
We used Android Studio to build the Android app and deployed an Azure server to handle our backend (Python). We used the Google Maps API in the app, and we are currently working on integrating Twilio for communication and the IBM Watson API to prioritize help requests in a community.
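To make the matching idea concrete, here is a hypothetical sketch of the kind of logic the Python backend performs; the data shapes and the distance-based ordering are illustrative assumptions, not our actual schema or code:

```python
# Sketch: pair a request for a resource (e.g. "shelter") with volunteers who
# offer it, closest volunteers first. Data shapes here are illustrative.
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    # Haversine distance between two coordinates, in kilometres.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def match_volunteers(request, volunteers):
    # request: {"need": "shelter", "lat": ..., "lon": ...}
    # volunteers: [{"name": ..., "offers": ["shelter", "water"], "lat": ..., "lon": ...}, ...]
    able = [v for v in volunteers if request["need"] in v["offers"]]
    return sorted(
        able,
        key=lambda v: distance_km(request["lat"], request["lon"], v["lat"], v["lon"]),
    )
```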
## Challenges we ran into
Integrating the Google Maps API into our app proved to be a great challenge for us. We also realized that our original idea of including blood donation as one of the resources would require some correspondence with an organization such as the Red Cross in order to ensure the donation would be legal. Thus, due to the time constraint of the hackathon, we decided to move blood donation to our future aspirations for this project.
## Accomplishments that we're proud of
We are happy with our design and with the simplicity of our app. We learned a great deal about writing the server side of an app and designing an Android app using Java (and the Google Maps API) during the past 24 hours. We had huge aspirations, and eventually we created an app that can potentially save people's lives.
## What we learned
We learned how to integrate the Google Maps API into our app and how to deploy a server with Microsoft Azure. We also learned how to use Figma to prototype designs.
## What's next for Stronger Together
We have high hopes for the future of this app. The goal is to add an AI-based notification system which alerts people who live in a predicted disaster area. We aim to decrease the impact of a disaster by alerting volunteers and locals in advance. We may also include more resources, such as blood donations.
## Inspiration
Digitized conversations have given the hearing impaired and other persons with disabilities the ability to better communicate with others despite the barriers in place as a result of their disabilities. Through our app, we hope to build towards a solution where we can extend the effects of technological development to aid those with hearing disabilities to communicate with others in real life.
## What it does
Co:herent is (currently) a web app which allows the hearing impaired to streamline their conversations with others by providing sentence or phrase suggestions based on the context of a conversation. We use Co:here's NLP text generation API to achieve this; to produce more accurate results, we give the API context from the conversation and use prompt engineering to better tune the model. The other (non-hearing-impaired) person can communicate with the web app naturally through speech-to-text input, and text-to-speech functionality is in place to better facilitate the flow of the conversation.
## How we built it
We built the entire app using Next.js with the Co:here API, React Speech Recognition API, and React Speech Kit API.
## Challenges we ran into
* Coming up with an idea
* Learning Next.js as we went, since this was the first time any of us had used it
* Calling APIs is difficult without a dedicated backend when using a server-side rendered framework such as Next.js
* Coordinating and designating tasks in order to be efficient and minimize code conflicts
* .env and SSR compatibility issues
## Accomplishments that we're proud of
Creating a fully functional app without cutting corners or deviating from the original plan despite various minor setbacks.
## What we learned
We were able to learn a lot about Next.js as well as the various APIs through our first time using them.
## What's next for Co:herent
* Better tuning the NLP model to generate better/more personal responses as well as storing and maintaining more information on the user through user profiles and database integrations
* Better tuning of the TTS, as well as giving users the choice to select from a menu of possible voices
* Possibility of alternative forms of input for those who may be physically impaired (such as in cases of cerebral palsy)
* Mobile support
* Better UI | ## Inspiration
1 in 2 Canadians will personally experience a mental health issue by age 40, with minority communities at a greater risk. As the mental health epidemic surges and support services reach capacity, we sought to build something that connects trained volunteer companions with people in distress in several convenient ways.
## What it does
Vulnerable individuals are able to call or text any available trained volunteers during a crisis. If needed, they are also able to schedule an in-person meet-up for additional assistance. A 24/7 chatbot is also available to assist through appropriate conversation. You are able to do this anonymously, anywhere, on any device to increase accessibility and comfort.
## How I built it
Using Figma, we designed the front end and exported the frames into React, using Acovode for back-end development.
## Challenges I ran into
Setting up Firebase to connect to the front-end React app.
## Accomplishments that I'm proud of
Proud of the final look of the app/site with its clean, minimalistic design.
## What I learned
The need for accessible mental health support is essential but still unmet, even with all the recent efforts. We also learned to use Figma and Firebase, and tried out many open-source platforms for building apps.
## What's next for HearMeOut
We hope to increase the chatbot's support and teach it to diagnose mental disorders using publicly accessible data. We also hope to develop a modeled approach with specific guidelines and rules in a variety of languages.
According to the American Psychological Association, one in three college freshmen worldwide suffer from mental health disorders. As freshmen, this issue is close to our hearts as we witness some of our peers struggle with adjusting to college life. We hope to help people understand how their daily activities influence their emotional welfare, and provide them with a safe space to express themselves.
## What it does
daybook is a secret weapon for students and others to be a bit happier every day. It's a **web-based journal** that gives people a space to reflect, automatically generating insights about mood trends and what things make people happiest. Our goal is to teach people how to best understand themselves and their happiness.
We don't force our users to go through awkward data entry or to be their own psychiatrist. Instead, daybook lets users write a few sentences for just a minute a day. Behind the scenes, daybook does all the work - using Google's Natural Language API to **automatically find the mood for each day and what events, people, and places in our users' life make them happiest**. daybook then gives the power back to our users with a **happiness summary** of mood, sleep, and a list of things that make them happiest, letting our users discover happy things they might not have even thought about.
## How we built it
daybook was built with Google Cloud's Natural Language API to automatically rate activities on a scale of how good the user feels when carrying them out, while extracting and categorizing them.
We used a variety of technologies for our hack, ranging from:
Frontend: Vue.js, CSS3, HTML5 hosted on Firebase (also using Firebase Auth)
Backend: Flask (Python, SQLite3) app running on Google Compute Engine servers
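The core of that backend is the sentiment and entity step. As a rough sketch (assuming the `google-cloud-language` client library; the return shape is illustrative, not our exact code), each journal entry can be scored like this:

```python
# Sketch of scoring a journal entry: overall sentiment becomes the day's mood,
# and extracted entities are the people/places/activities mentioned.
from google.cloud import language_v1

def analyze_entry(text):
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment
    entities = client.analyze_entities(request={"document": document}).entities
    # sentiment.score runs roughly from -1 (negative) to +1 (positive).
    return {
        "mood": sentiment.score,
        "things": [e.name for e in entities],
    }
```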
## Challenges we ran into
* Coming up with a meaningful and viable idea
* Determining which platform to use to store a database
* How to get the entities we wanted from Google's Natural Language API
* Setting up all of our servers - we host our landing page, main site, API, and CDN in different places
## Accomplishments that we're proud of
* Templates in Vue
* Using Google's Natural Language Processing libraries
* Developing a fully functional web application that is responsive, even for mobile
* Rolling our own CSS framework - minimalism was key
* Extensive use of Google Cloud - Firebase, Compute Engine, NLP
* Git best practices - all feature changes were made on separate branches and pull requested
* It's the first hackathon for 3 of us!
* DARK MODE WORKS
* Of course, our cool domain name: daybook.space, my.daybook.space
## What we learned
A whole lot of stuff. Three of us came in completely new to hackathons, so daybook was an opportunity to learn:
Creating databases and managing data using Google's Cloud Firestore; sentiment analysis using Google's Natural Language API; writing an API using Flask; and handling GET and POST requests to facilitate communication between our web application and database.
## What's next for daybook
* iOS/Android App with notifications!
* Better authentication options
* More detailed analysis
* Getting our first users - ourselves | winning |
## Inspiration
This is a project that was given to me by an organization, and my colleagues inspired me to take it on.
## What it does
It reminds you of what you have to do in the future and also lets you set the time when each task is to be done.
## How we built it
I built it as a command-line utility in Python.
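A minimal sketch of the idea (the file name, date format, and command names are illustrative, not my exact code): tasks are stored in a file, and listing them flags anything whose time has arrived.

```python
# Sketch of a file-backed command-line to-do list with due-time reminders.
import argparse
import json
from datetime import datetime

TASKS_FILE = "tasks.json"

def load_tasks():
    try:
        with open(TASKS_FILE) as f:
            return json.load(f)
    except FileNotFoundError:
        return []

def save_tasks(tasks):
    with open(TASKS_FILE, "w") as f:
        json.dump(tasks, f, indent=2)

def main():
    parser = argparse.ArgumentParser(description="Simple to-do reminder")
    sub = parser.add_subparsers(dest="cmd", required=True)
    add = sub.add_parser("add")
    add.add_argument("name")
    add.add_argument("when", help="e.g. '2024-05-01 14:30'")
    sub.add_parser("list")
    args = parser.parse_args()

    tasks = load_tasks()
    if args.cmd == "add":
        tasks.append({"name": args.name, "when": args.when})
        save_tasks(tasks)
    else:
        now = datetime.now()
        for t in tasks:
            due = datetime.strptime(t["when"], "%Y-%m-%d %H:%M")
            status = "DUE!" if due <= now else "upcoming"
            print(f"{t['when']}  {t['name']}  ({status})")

if __name__ == "__main__":
    main()
```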
## Challenges we ran into
Many challenges came up, such as storing the data in a file, along with many bugs in the middle of writing this program.
## Accomplishments that we're proud of
I am proud that I made this real-time project which reminds a person to do their tasks.
## What we learned
I learned more about building command-line utilities in Python.
## What's next for Todo list
Next, I am working on various projects such as a virtual assistant and game development.
Ever join a project only to be overwhelmed by all of the open tickets? Not sure which tasks you should take on to increase the team's overall productivity? As students, we know the struggle. We also know that this does not end when school ends; you may encounter the same situations in many different work environments.
## What it does
tAIket allows project managers to invite collaborators to a project. Once a user joins the project, tAIket analyzes their resume for both soft skills and hard skills. Once the user's resume has been analyzed, tAIket provides the user with a list of tickets sorted by what it has determined that user would be best at. From there, the user can accept a task, work on it, and mark it as complete! This helps increase productivity, as it matches users with tasks that they should be able to complete with relative ease.
## How we built it
Our initial prototype of the UI was designed using Figma. The frontend was then developed using the Vue framework. The backend was done in Python via the Flask framework. The database we used to store users, projects, and tickets was Redis.
## Challenges we ran into
We ran into a few challenges throughout the course of the project. Figuring out how to parse a PDF and using fuzzy searching and cosine similarity analysis to help identify the user's skills were a few of our main challenges. Additionally, working out how to use Redis was another challenge we faced. Thanks to the help from the wonderful mentors and some online resources (documentation, etc.), we were able to work through these problems. We also had some difficulty working out how to make our site look nice and clean, and ended up looking at many different sites to help us identify some key ideas in overall web design.
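To show roughly what that matching step looks like, here is a simplified sketch (the skill list, threshold, and data shapes are illustrative, not our production code): TF-IDF plus cosine similarity ranks tickets against the resume text, and fuzzy matching spots known skill keywords despite typos.

```python
# Sketch of resume-to-ticket matching with TF-IDF, cosine similarity,
# and fuzzy keyword spotting.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from rapidfuzz import fuzz

KNOWN_SKILLS = ["python", "flask", "redis", "vue", "figma"]  # illustrative list

def extract_skills(resume_text, threshold=85):
    # Fuzzy-match each word of the resume against the known skill keywords.
    words = resume_text.lower().split()
    return {s for s in KNOWN_SKILLS for w in words if fuzz.ratio(s, w) >= threshold}

def rank_tickets(resume_text, tickets):
    # tickets: list of dicts like {"id": ..., "description": ...}
    docs = [resume_text] + [t["description"] for t in tickets]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
    scores = cosine_similarity(tfidf[0:1], tfidf[1:]).flatten()
    return sorted(zip(tickets, scores), key=lambda p: p[1], reverse=True)
```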
## Accomplishments that we're proud of
Overall, we have much that we can be proud of from this project. For one, implementing fuzzy searching and cosine similarity analysis is something we are happy to have achieved. Additionally, knowing how long the process of creating a UI normally takes, especially when considering user-centered design, we are proud of the UI that we were able to create in the time that we had.
## What we learned
Each team member has a different skillset and knowledge level. For some of us, this was a great opportunity to learn a new framework while for others this was a great opportunity to challenge and expand our existing knowledge. This was the first time that we have used Redis and we found it was fairly easy to understand how to use it. We also had the chance to explore natural language processing models with fuzzy search and our cosine similarity analysis.
## What's next for tAIket
In the future, we would like to add the ability to assign a task to all members of a project. Some tasks in projects *must* be completed by all members, so we believe that this functionality would be useful. Additionally, we would add the ability for "regular" users to "suggest" a task; sometimes a user may notice something that is broken or needs to be completed before the project manager does. Finally, we would work on implementing the features located in the sidebar of the screen where the tasks are displayed.
We wanted to ease the workload and and increase the organization of students when it comes to scheduling and completing tasks. This way, they have one, organized platform where they can store all of the tasks they need to get done, and they can have fun with it by earning points and purchasing features!
## What it does
The project is a Discord bot that allows users to input tasks, set the number of hours they want to engage in each task for, set due dates, and earn virtual rewards to motivate them to complete these tasks. They receive reminders for their tasks at the appropriate time, too!
## How we built it
We built the bot using the Discord developer tools at our disposal combined with Python code, while using SQL to create a database that stores all user task information.
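A stripped-down sketch of that setup (command names, table layout, and the token placeholder are illustrative, not our full bot), using discord.py with SQLite:

```python
# Sketch: an !add command stores a task in SQLite; !tasks lists them back.
import sqlite3
import discord
from discord.ext import commands

conn = sqlite3.connect("tasks.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS tasks (user_id INTEGER, name TEXT, hours REAL, due TEXT)"
)

intents = discord.Intents.default()
intents.message_content = True  # needed so the bot can read command messages
bot = commands.Bot(command_prefix="!", intents=intents)

@bot.command()
async def add(ctx, hours: float, due: str, *, name: str):
    conn.execute(
        "INSERT INTO tasks VALUES (?, ?, ?, ?)", (ctx.author.id, name, hours, due)
    )
    conn.commit()
    await ctx.send(f"Added '{name}' ({hours}h, due {due}). Points await!")

@bot.command()
async def tasks(ctx):
    rows = conn.execute(
        "SELECT name, hours, due FROM tasks WHERE user_id = ?", (ctx.author.id,)
    ).fetchall()
    await ctx.send("\n".join(f"{n}: {h}h, due {d}" for n, h, d in rows) or "No tasks!")

# bot.run("YOUR_BOT_TOKEN")  # token placeholder
```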
## Challenges we ran into
A large challenge was to relearn the commands, methods, and attributes associated with coding a bot using Python, since it is very different from coding in other areas. We had to relearn basic functions such as printing, user input, and methods to tailor it to the needs of the bot. Another challenge was being able to sync up our Python work with SQL in order to integrate pulling and manipulating information from the database automatically.
## Accomplishments that we're proud of
We're proud of creating a functional prototype linking our code to the Discord application!
## What we learned
We learned more about how to code bots in Discord and coding in Python in general.
## What's next for Discord To-Do List
In the future, we would like to expand our virtual currency and shop to include more items that the user can purchase. | partial |
## Inspiration
Kevin, one of our team members, is an enthusiastic basketball player, and frequently went to physiotherapy for a knee injury. He realized that a large part of the physiotherapy was actually away from the doctors' office - he needed to complete certain exercises with perfect form at home, in order to consistently improve his strength and balance. Through his story, we realized that so many people across North America require physiotherapy for far more severe conditions, be it from sports injuries, spinal cord injuries, or recovery from surgeries. Likewise, they will need to do at-home exercises individually, without supervision. For the patients, any repeated error can actually cause a deterioration in health. Therefore, we decided to leverage computer vision technology, to provide real-time feedback to patients to help them improve their rehab exercise form. At the same time, reports will be generated to the doctors, so that they may monitor the progress of patients and prioritize their urgency accordingly. We hope that phys.io will strengthen the feedback loop between patient and doctor, and accelerate the physical rehabilitation process for many North Americans.
## What it does
Through a mobile app, the patients will be able to film and upload a video of themselves completing a certain rehab exercise. The video then gets analyzed using a machine vision neural network, such that the movements of each body segment is measured. This raw data is then further processed to yield measurements and benchmarks for the relative success of the movement. In the app, the patients will receive a general score for their physical health as measured against their individual milestones, tips to improve the form, and a timeline of progress over the past weeks. At the same time, the same video analysis will be sent to the corresponding doctor's dashboard, in which the doctor will receive a more thorough medical analysis in how the patient's body is working together and a timeline of progress. The algorithm will also provide suggestions for the doctors' treatment of the patient, such as prioritizing a next appointment or increasing the difficulty of the exercise.
## How we built it
At the heart of the application is a Google Cloud Compute instance running together with a blobstore instance. The cloud compute cluster will ingest raw video posted to blobstore, and performs the machine vision analysis to yield the timescale body data.
We used Google App Engine and Firebase to create the rest of the web application and API's for the 2 types of clients we support: an iOS app, and a doctor's dashboard site. This manages day to day operations such as data lookup, and account management, but also provides the interface for the mobile application to send video data to the compute cluster. Furthermore, the app engine sinks processed results and feedback from blobstore and populates it into Firebase, which is used as the database and data-sync.
Finally, In order to generate reports for the doctors on the platform, we used stdlib's tasks and scale-able one-off functions to process results from Firebase over time and aggregate the data into complete chunks, which are then posted back into Firebase.
## Challenges we ran into
One of the major challenges we ran into was interfacing each technology with each other. Overall, the data pipeline involves many steps that, while each in itself is critical, also involve too many diverse platforms and technologies for the time we had to build it.
## What's next for phys.io
<https://docs.google.com/presentation/d/1Aq5esOgTQTXBWUPiorwaZxqXRFCekPFsfWqFSQvO3_c/edit?fbclid=IwAR0vqVDMYcX-e0-2MhiFKF400YdL8yelyKrLznvsMJVq_8HoEgjc-ePy8Hs#slide=id.g4838b09a0c_0_0> | ## Inspiration
Physiotherapy is expensive for what it provides you with, A therapist stepping you through simple exercises and giving feedback and evaluation? WE CAN TOTALLY AUTOMATE THAT! We are undergoing the 4th industrial revolution and technology exists to help people in need of medical aid despite not having the time and money to see a real therapist every week.
## What it does
IMU and muscle sensors strapped onto the arm accurately track the state of the patient's arm as they are performing simple arm exercises for recovery. A 3d interactive GUI is set up to direct patients to move their arm from one location to another by performing localization using IMU data. A classifier is run on this variable-length data stream to determine the status of the patient and how well the patient is recovering. This whole process can be initialized with the touch of a button on your very own mobile application.
## How WE built it
on the embedded system side of things, we used a single raspberry pi for all the sensor processing. The Pi is in charge of interfacing with the IMU while another Arduino interfaces with the other IMU and a muscle sensor. The Arduino then relays this info over a bridged connection to a central processing device where it displays the 3D interactive GUI and processes the ML data. all the data in the backend is relayed and managed using ROS. This data is then uploaded to firebase where the information is saved on the cloud and can be accessed anytime by a smartphone. The firebase also handles plotting data to give accurate numerical feedback of the many values orientation trajectory, and improvement over time.
## Challenges WE ran into
hooking up 2 IMU to the same rpy is very difficult. We attempted to create a multiplexer system with little luck.
To run the second IMU we had to hook it up to the Arduino. Setting up the library was also difficult.
Another challenge we ran into was creating training data that was general enough and creating a preprocessing script that was able to overcome the variable size input data issue.
The last one was setting up a firebase connection with the app that supported the high data volume that we were able to send over and to create a graphing mechanism that is meaningful. | ## Inspiration
One of our team members is a community manager for a real estate development group that often has trouble obtaining certifications in their attempts to develop eco-friendly buildings. The trouble that they go through, leaves them demotivated because of the cost and effort, leading them to avoid the process altogether, choosing to instead develop buildings that are not good for the environment.
If there was some way that they could more easily see what tier of LEED certification they could fall into and furthermore, what they need to do to get to the NEXT tier, they would be more motivated to do so, benefiting both their building practices as well as the Earth.
## What it does
Our product is a model that takes in building specifications and is trained on LEED codes. We take your building specifications and then answer any questions you may have on your building as well as put it into bronze, silver, gold, or platinum tiering!
## How we built it
The project structure is NextJS, React, Tailwind and for the ai component we used a custom openai api contextualized using past building specs and their certification level. We also used stack ai for testing and feature analysis.
## Challenges we ran into
The most difficult part of our project was figuring out how to make the model understand what buildings fall into different tiers.
## Accomplishments that we're proud of
GETTING THIS DONE ON TIME!!
## What we learned
This is our first full stack project using ai.
## What's next for LEED Bud
We're going to bring this to builders across Berkeley for them to use! Starting of course at the company of our team member! | winning |
## Inspiration
There's something about brief glints in the past that just stop you in your tracks: you dip down, pick up an old DVD of a movie while you're packing, and you're suddenly brought back to the innocent and carefree joy of when you were a kid. It's like comfort food.
So why not leverage this to make money? The ethos of nostalgic elements from everyone's favourite childhood relics turns heads. Nostalgic feelings have been repeatedly found in studies to increase consumer willingness to spend money, boosting brand exposure, conversion, and profit.
## What it does
Large Language Marketing (LLM) is a SaaS built for businesses looking to revamp their digital presence through "throwback"-themed product advertisements.
Tinder x Mean Girls? The Barbie Movie? Adobe x Bob Ross? Apple x Sesame Street? That could be your brand, too. Here's how:
1. You input a product description and target demographic to begin a profile
2. LLM uses the data with the Co:here API to generate a throwback theme and corresponding image descriptions of marketing posts
3. OpenAI prompt engineering generates a more detailed image generation prompt featuring motifs and composition elements
4. DALL-E 3 is fed the finalized image generation prompt and marketing campaign to generate a series of visual social media advertisements
5. The Co:here API generates captions for each advertisement
6. You're taken to a simplistic interface where you can directly view, edit, generate new components for, and publish each social media post, all in one!
7. You publish directly to your business's social media accounts to kick off a new campaign 🥳
## How we built it
* **Frontend**: React, TypeScript, Vite
* **Backend**: Python, Flask, PostgreSQL
* **APIs/services**: OpenAI, DALL-E 3, Co:here, Instagram Graph API
* **Design**: Figma
## Challenges we ran into
* **Prompt engineering**: tuning prompts to get our desired outputs was very, very difficult, where fixing one issue would open up another in a fine game of balance to maximize utility
* **CORS hell**: needing to serve externally-sourced images back and forth between frontend and backend meant fighting a battle with the browser -- we ended up writing a proxy
* **API integration**: with a lot of technologies being incorporated over our frontend, backend, database, data pipeline, and AI services, massive overhead was introduced into getting everything set up and running on everyone's devices -- npm versions, virtual environments, PostgreSQL, the Instagram Graph API (*especially*)...
* **Rate-limiting**: the number of calls we wanted to make versus the number of calls we were allowed was a small tragedy
## Accomplishments that we're proud of
We're really, really proud of integrating a lot of different technologies together in a fully functioning, cohesive manner! This project involved a genuinely technology-rich stack that allowed each one of us to pick up entirely new skills in web app development.
## What we learned
Our team was uniquely well-balanced in that every one of us ended up being able to partake in everything, especially things we hadn't done before, including:
1. DALL-E
2. OpenAI API
3. Co:here API
4. Integrating AI data pipelines into a web app
5. Using PostgreSQL with Flask
6. For our non-frontend-enthusiasts, atomic design and state-heavy UI creation :)
7. Auth0
## What's next for Large Language Marketing
* Optimizing the runtime of image/prompt generation
* Text-to-video output
* Abstraction allowing any user log in to make Instagram Posts
* More social media integration (YouTube, LinkedIn, Twitter, and WeChat support)
* AI-generated timelines for long-lasting campaigns
* AI-based partnership/collaboration suggestions and contact-finding
* UX revamp for collaboration
* Option to add original content alongside AI-generated content in our interface | ## Inspiration
As avid readers, we wanted a tool to track our reading metrics. As a child, one of us struggled with concentrating and focusing while reading. Specifically, there was a strong tendency to zone out. Our app provides the ability for a user to track their reading metrics and also quantify their progress in improving their reading skills.
## What it does
By incorporating Ad Hawk’s eye-tracking hardware into our build, we’ve developed a reading performance tracker system that tracks and analyzes reading patterns and behaviours, presenting dynamic second-by-second updates delivered to your phone through our app.
These metrics are calculated through our linear algebraic models, then provided to our users in an elegant UI interface on their phones. We provide an opportunity to identify any areas of potential improvement in a user’s reading capabilities.
## How we built it
We used the Ad Hawk hardware and backend to record the eye movements. We used their Python SDK to collect and use the data in our mathematical models. From there, we outputted the data into our Flutter frontend which displays the metrics and data for the user to see.
## Challenges we ran into
Piping in data from Python to Flutter during runtime was slightly frustrating because of the latency issues we faced. Eventually, we decided to use the computer's own local server to accurately display and transfer the data.
## Accomplishments that we're proud of
Proud of our models to calculate the speed of reading, detection of page turns and other events that were recorded simply through changes of eye movement.
## What we learned
We learned that Software Development in teams is best done by communicating effectively and working together with the same final vision in mind. Along with this, we learned that it's extremely critical to plan out small details as well as broader ones to ensure plan execution occurs seamlessly.
## What's next for SeeHawk
We hope to add more metrics to our app, specifically adding a zone-out tracker which would record the number of times a user "zones out". | ## Inspiration 💡
Our inspiration for this project was to leverage new AI technologies such as text to image, text generation and natural language processing to enhance the education space. We wanted to harness the power of machine learning to inspire creativity and improve the way students learn and interact with educational content. We believe that these cutting-edge technologies have the potential to revolutionize education and make learning more engaging, interactive, and personalized.
## What it does 🎮
Our project is a text and image generation tool that uses machine learning to create stories from prompts given by the user. The user can input a prompt, and the tool will generate a story with corresponding text and images. The user can also specify certain attributes such as characters, settings, and emotions to influence the story's outcome. Additionally, the tool allows users to export the generated story as a downloadable book in the PDF format. The goal of this project is to make story-telling interactive and fun for users.
## How we built it 🔨
We built our project using a combination of front-end and back-end technologies. For the front-end, we used React which allows us to create interactive user interfaces. On the back-end side, we chose Go as our main programming language and used the Gin framework to handle concurrency and scalability. To handle the communication between the resource intensive back-end tasks we used a combination of RabbitMQ as the message broker and Celery as the work queue. These technologies allowed us to efficiently handle the flow of data and messages between the different components of our project.
To generate the text and images for the stories, we leveraged the power of OpenAI's DALL-E-2 and GPT-3 models. These models are state-of-the-art in their respective fields and allow us to generate high-quality text and images for our stories. To improve the performance of our system, we used MongoDB to cache images and prompts. This allows us to quickly retrieve data without having to re-process it every time it is requested. To minimize the load on the server, we used socket.io for real-time communication, it allow us to keep the HTTP connection open and once work queue is done processing data, it sends a notification to the React client.
## Challenges we ran into 🚩
One of the challenges we ran into during the development of this project was converting the generated text and images into a PDF format within the React front-end. There were several libraries available for this task, but many of them did not work well with the specific version of React we were using. Additionally, some of the libraries required additional configuration and setup, which added complexity to the project. We had to spend a significant amount of time researching and testing different solutions before we were able to find a library that worked well with our project and was easy to integrate into our codebase. This challenge highlighted the importance of thorough testing and research when working with new technologies and libraries.
## Accomplishments that we're proud of ⭐
One of the accomplishments we are most proud of in this project is our ability to leverage the latest technologies, particularly machine learning, to enhance the user experience. By incorporating natural language processing and image generation, we were able to create a tool that can generate high-quality stories with corresponding text and images. This not only makes the process of story-telling more interactive and fun, but also allows users to create unique and personalized stories.
## What we learned 📚
Throughout the development of this project, we learned a lot about building highly scalable data pipelines and infrastructure. We discovered the importance of choosing the right technology stack and tools to handle large amounts of data and ensure efficient communication between different components of the system. We also learned the importance of thorough testing and research when working with new technologies and libraries.
We also learned about the importance of using message brokers and work queues to handle data flow and communication between different components of the system, which allowed us to create a more robust and scalable infrastructure. We also learned about the use of NoSQL databases, such as MongoDB to cache data and improve performance. Additionally, we learned about the importance of using socket.io for real-time communication, which can minimize the load on the server.
Overall, we learned about the importance of using the right tools and technologies to build a highly scalable and efficient data pipeline and infrastructure, which is a critical component of any large-scale project.
## What's next for Dream.ai 🚀
There are several exciting features and improvements that we plan to implement in the future for Dream.ai. One of the main focuses will be on allowing users to export their generated stories to YouTube. This will allow users to easily share their stories with a wider audience and potentially reach a larger audience.
Another feature we plan to implement is user history. This will allow users to save and revisit old prompts and stories they have created, making it easier for them to pick up where they left off. We also plan to allow users to share their prompts on the site with other users, which will allow them to collaborate and create stories together.
Finally, we are planning to improve the overall user experience by incorporating more customization options, such as the ability to select different themes, characters and settings. We believe these features will further enhance the interactive and fun nature of the tool, making it even more engaging for users. | winning |
### 💡 Inspiration 💡
We call them heroes, **but the support we give them is equal to the one of a slave.**
Because of the COVID-19 pandemic, a lot of medics have to keep track of their patient's history, symptoms, and possible diseases. However, we've talked with a lot of medics, and almost all of them share the same problem when tracking the patients: **Their software is either clunky and bad for productivity, or too expensive to use on a bigger scale**. Most of the time, there is a lot of unnecessary management that needs to be done to get a patient on the record.
Moreover, the software can even get the clinician so tired they **have a risk of burnout, which makes their disease predictions even worse the more they work**, and with the average computer-assisted interview lasting more than 20 minutes and a medic having more than 30 patients on average a day, the risk is even worse. That's where we introduce **My MedicAid**. With our AI-assisted patient tracker, we reduce this time frame from 20 minutes to **only 5 minutes.** This platform is easy to use and focused on giving the medics the **ultimate productivity tool for patient tracking.**
### ❓ What it does ❓
My MedicAid gets rid of all of the unnecessary management that is unfortunately common in the medical software industry. With My MedicAid, medics can track their patients by different categories and even get help for their disease predictions **using an AI-assisted engine to guide them towards the urgency of the symptoms and the probable dangers that the patient is exposed to.** With all of the enhancements and our platform being easy to use, we give the user (medic) a 50-75% productivity enhancement compared to the older, expensive, and clunky patient tracking software.
### 🏗️ How we built it 🏗️
The patient's symptoms get tracked through an **AI-assisted symptom checker**, which uses [APIMedic](https://apimedic.com/i) to process all of the symptoms and quickly return the danger of them and any probable diseases to help the medic take a decision quickly without having to ask for the symptoms by themselves. This completely removes the process of having to ask the patient how they feel and speeds up the process for the medic to predict what disease their patient might have since they already have some possible diseases that were returned by the API. We used Tailwind CSS and Next JS for the Frontend, MongoDB for the patient tracking database, and Express JS for the Backend.
### 🚧 Challenges we ran into 🚧
We had never used APIMedic before, so going through their documentation and getting to implement it was one of the biggest challenges. However, we're happy that we now have experience with more 3rd party APIs, and this API is of great use, especially with this project. Integrating the backend and frontend was another one of the challenges.
### ✅ Accomplishments that we're proud of ✅
The accomplishment that we're the proudest of would probably be the fact that we got the management system and the 3rd party API working correctly. This opens the door to work further on this project in the future and get to fully deploy it to tackle its main objective, especially since this is of great importance in the pandemic, where a lot of patient management needs to be done.
### 🙋♂️ What we learned 🙋♂️
We learned a lot about CRUD APIs and the usage of 3rd party APIs in personal projects. We also learned a lot about the field of medical software by talking to medics in the field who have way more experience than us. However, we hope that this tool helps them in their productivity and to remove their burnout, which is something critical, especially in this pandemic.
### 💭 What's next for My MedicAid 💭
We plan on implementing an NLP-based service to make it easier for the medics to just type what the patient is feeling like a text prompt, and detect the possible diseases **just from that prompt.** We also plan on implementing a private 1-on-1 chat between the patient and the medic to resolve any complaints that the patient might have, and for the medic to use if they need more info from the patient. | ## Inspiration
Both chronic pain disorders and opioid misuse are on the rise, and the two are even more related than you might think -- over 60% of people who misused prescription opioids did so for the purpose of pain relief. Despite the adoption of PDMPs (Prescription Drug Monitoring Programs) in 49 states, the US still faces a growing public health crisis -- opioid misuse was responsible for more deaths than cars and guns combined in the last year -- and lacks the high-resolution data needed to implement new solutions.
While we were initially motivated to build Medley as an effort to address this problem, we quickly encountered another (and more personal) motivation. As one of our members has a chronic pain condition (albeit not one that requires opioids), we quickly realized that there is also a need for a medication and symptom tracking device on the patient side -- oftentimes giving patients access to their own health data and medication frequency data can enable them to better guide their own care.
## What it does
Medley interacts with users on the basis of a personal RFID card, just like your TreeHacks badge. To talk to Medley, the user presses its button and will then be prompted to scan their ID card. Medley is then able to answer a number of requests, such as to dispense the user’s medication or contact their care provider. If the user has exceeded their recommended dosage for the current period, Medley will suggest a number of other treatment options added by the care provider or the patient themselves (for instance, using a TENS unit to alleviate migraine pain) and ask the patient to record their pain symptoms and intensity.
## How we built it
This project required a combination of mechanical design, manufacturing, electronics, on-board programming, and integration with cloud services/our user website. Medley is built on a Raspberry Pi, with the raspiaudio mic and speaker system, and integrates an RFID card reader and motor drive system which makes use of Hall sensors to accurately actuate the device. On the software side, Medley uses Python to make calls to the Houndify API for audio and text, then makes calls to our Microsoft Azure SQL server. Our website uses the data to generate patient and doctor dashboards.
## Challenges we ran into
Medley was an extremely technically challenging project, and one of the biggest challenges our team faced was the lack of documentation associated with entering uncharted territory. Some of our integrations had to be twisted a bit out of shape to fit together, and many tragic hours spent just trying to figure out the correct audio stream encoding.
Of course, it wouldn’t be a hackathon project without overscoping and then panic as the deadline draws nearer, but because our project uses mechanical design, electronics, on-board code, and a cloud database/website, narrowing our scope was a challenge in itself.
## Accomplishments that we're proud of
Getting the whole thing into a workable state by the deadline was a major accomplishment -- the first moment we finally integrated everything together was a massive relief.
## What we learned
Among many things:
The complexity and difficulty of implementing mechanical systems
How to adjust mechatronics design parameters
Usage of Azure SQL and WordPress for dynamic user pages
Use of the Houndify API and custom commands
Raspberry Pi audio streams
## What's next for Medley
One feature we would have liked more time to implement is better database reporting and analytics. We envision Medley’s database as a patient- and doctor-usable extension of the existing state PDMPs, and would be able to leverage patterns in the data to flag abnormal behavior. Currently, a care provider might be overwhelmed by the amount of data potentially available, but adding a model to detect trends and unusual events would assist with this problem. | ## Inspiration
The inspiration behind Medisync came from observing the extensive time patients and medical staff spend on filling out and processing medical forms. We noticed a significant delay in treatment initiation due to this paperwork. Our goal was to streamline this process, making healthcare more efficient and accessible by leveraging the power of AI. We envisioned a solution that not only saves time but also minimizes errors in patient data, leading to better patient outcomes.
## What it does
Medisync uses AI algorithms to automate the process of filling out medical forms. Patients can speak or type their information into the app, which then intelligently categorizes and inputs the data into the necessary forms. The patient's data will be continually updated with future questions as their medical history progresses. All data will be securely stored on the user's local machine. The user will be able to quickly and securely input their medical data into forms with different formats from different institutions. This results in a faster, more efficient onboarding process for patients.
## How we built it
We built Medisync using Natural Language Processing (NLP). Our development stack includes Python for backend development. We modeled a user-friendly interface that simplifies the data entry process. We uploaded common medical forms from the internet, we then scraped them for information and then using Open Ai's API call we populated the form outputting the final result in an md file.
## Challenges we ran into
Some challenges we faced were parsing the forms correctly and breaking down the questions into the relevant health categories. The output of the program was dependent on the level of understanding of the data that was available to fill the forms. Thus, in depth question generation was a challenge we had to overcome. We also had to understand how our project can be HIPAA compliant so that it can be released to the end user. Medical data is highly sensitive and personal and there are lots of privacy laws to product individuals. Going forward we have a detailed plan on how to make our service HIPAA compliant.
## Accomplishments that we're proud of
We are proud of developing a functional prototype that demonstrates a significant reduction in time spent on medical paperwork. Our pilot tests on random users showed a 70% decrease in patient onboarding time. Receiving positive feedback from patients. Furthermore, there is a huge issue surrounding human error in these forms. As they are repetitive long-form tasks it is easy for people to make a mistake. We are very proud to have made a product that makes patients safer and healthier by reducing error in medical forms.
## What we learned
Throughout this project, we learned the importance of interdisciplinary collaboration, combining expertise in AI, software development, and healthcare to address a common challenge. We gained insights into the complexities of healthcare regulations and the critical role of data privacy. This project also honed our skills in AI and ML, particularly in applying NLP techniques to real-world problems. Overall we have learnt the importance of solving a complex problem from many angles to create a solution in a time efficient manner. Combining new and old skills in a highly effective way.
## What's next for Medisync
Moving forward, we plan to make sure that Medisync fully meets HIPAA compliance and test our service with many more users. We are also exploring partnerships with hospitals and healthcare systems to integrate our solution into their existing workflows. Additionally, we aim to incorporate AI-driven analytics to provide healthcare providers with insights into patient data, further enhancing the quality of care. Allowing our service to fill out more forms faster. We also hope to improve the user experience and workflow of inputting user data by highlighting missing information from forms that users have filled out. In this we will make sure that we gather the right data in the fewest questions and thus in the most efficient way for our end user. | winning |
## Inspiration
Many people feel unconfident, shy, and/or awkward doing interview speaking. It can be challenging for them to know how to improve and what aspects are key to better performance. With Talkology, they will be able to practice in a rather private setting while receiving relatively objective speaking feedback based on numerical analysis instead of individual opinions. We hope this helps more students and general job seekers become more confident and comfortable, crack their behavioral interviews, and land that dream offer!
## What it does
* Gives users interview questions (behavioural, future expansion to questions specific to the job/industry)
* Performs quantitative analysis of users’ responses using speech-to-text & linguistic software package praat to study acoustic features of their speech
* Displays performance metrics with suggestions in a user-friendly, interactive dashboard
## How we built it
* React/JavaScript for the frontend dashboard and Flask/Python for backend server and requests
* My-voice-analysis package for voice analysis in Python
* AssemblyAI APIs for speech-to-text and sentiment analysis
* MediaStream Recording API to get user’s voice recordings
* Figma for the interactive display and prototyping
## Challenges we ran into
We went through many conversations to reach this idea and as a result, only started hacking around 8AM on Saturday. On top of this time constraint layer, we also lacked experience in frontend and full stack development. Many of us had to spend a lot of our time debugging with package setup, server errors, and for some of us even M1-chip specific problems.
## Accomplishments that we're proud of
This was Aidan’s first full-stack application ever. Though we started developing kind of late in the event, we were able to pull most of the pieces together within a day of time on Saturday. We really believe that this product (and/or future versions of it) will help other people with not only their job search process but also daily communication as well. The friendships we made along the way is also definitely something we cherish and feel grateful about <3
## What we learned
* Aidan: Basics of React and Flask
* Spark: Introduction to Git and full-stack development with sprinkles of life advice
* Cathleen: Deeper dive into Flask and React and structural induction
* Helen: Better understanding of API calls & language models and managing many different parts of a product at once
## What's next for Talkology
We hope to integrate computer vision approaches by collecting video recordings (rather than just audio) to perform analysis on hand gestures, overall posture, and body language. We also want to extend our language analysis to explore novel models aimed at performing tone analysis on live speech. Apart from our analysis methods, we hope to improve our question bank to be more than just behavioural questions and better cater to each user's specific job demands. Lastly, there are general loose ends that could be easily tied up to make the project more cohesive, such as integrating the live voice recording functionality and optimizing some remaining components of the interactive dashboard. | >
> Domain.com domain: IDE-asy.com
>
>
>
## Inspiration
Software engineering and development have always been subject to change over the years. With new tools, frameworks, and languages being announced every year, it can be challenging for new developers or students to keep up with the new trends the technological industry has to offer. Creativity and project inspiration should not be limited by syntactic and programming knowledge. Quick Code allows ideas to come to life no matter the developer's experience, breaking the coding barrier to entry allowing everyone equal access to express their ideas in code.
## What it does
Quick Code allowed users to code simply with high level voice commands. The user can speak in pseudo code and our platform will interpret the audio command and generate the corresponding javascript code snippet in the web-based IDE.
## How we built it
We used React for the frontend, and the recorder.js API for the user voice input. We used runkit for the in-browser IDE. We used Python and Microsoft Azure for the backend, we used Microsoft Azure to process user input with the cognitive speech services modules and provide syntactic translation for the frontend’s IDE.
## Challenges we ran into
>
> "Before this hackathon I would usually deal with the back-end, however, for this project I challenged myself to experience a different role. I worked on the front end using react, as I do not have much experience with either react or Javascript, and so I put myself through the learning curve. It didn't help that this hacakthon was only 24 hours, however, I did it. I did my part on the front-end and I now have another language to add on my resume.
> The main Challenge that I dealt with was the fact that many of the Voice reg" *-Iyad*
>
>
> "Working with blobs, and voice data in JavaScript was entirely new to me." *-Isaac*
>
>
> "Initial integration of the Speech to Text model was a challenge at first, and further recognition of user audio was an obstacle. However with the aid of recorder.js and Python Flask, we able to properly implement the Azure model." *-Amir*
>
>
> "I have never worked with Microsoft Azure before this hackathon, but decided to embrace challenge and change for this project. Utilizing python to hit API endpoints was unfamiliar to me at first, however with extended effort and exploration my team and I were able to implement the model into our hack. Now with a better understanding of Microsoft Azure, I feel much more confident working with these services and will continue to pursue further education beyond this project." *-Kris*
>
>
>
## Accomplishments that we're proud of
>
> "We had a few problems working with recorder.js as it used many outdated modules, as a result we had to ask many mentors to help us get the code running. Though they could not figure it out, after hours of research and trying, I was able to successfully implement recorder.js and have the output exactly as we needed. I am very proud of the fact that I was able to finish it and not have to compromise any data." *-Iyad*
>
>
> "Being able to use Node and recorder.js to send user audio files to our back-end and getting the formatted code from Microsoft Azure's speech recognition model was the biggest feat we accomplished." *-Isaac*
>
>
> "Generating and integrating the Microsoft Azure Speech to Text model in our back-end was a great accomplishment for our project. It allowed us to parse user's pseudo code into properly formatted code to provide to our website's IDE." *-Amir*
>
>
> "Being able to properly integrate and interact with the Microsoft Azure's Speech to Text model was a great accomplishment!" *-Kris*
>
>
>
## What we learned
>
> "I learned how to connect the backend to a react app, and how to work with the Voice recognition and recording modules in react. I also worked a bit with Python when trying to debug some problems in sending the voice recordings to Azure’s servers." *-Iyad*
>
>
> "I was introduced to Python and learned how to properly interact with Microsoft's cognitive service models." *-Isaac*
>
>
> "This hackathon introduced me to Microsoft Azure's Speech to Text model and Azure web app. It was a unique experience integrating a flask app with Azure cognitive services. The challenging part was to make the Speaker Recognition to work; which unfortunately, seems to be in preview/beta mode and not functioning properly. However, I'm quite happy with how the integration worked with the Speach2Text cognitive models and I ended up creating a neat api for our app." *-Amir*
>
>
> "The biggest thing I learned was how to generate, call and integrate with Microsoft azure's cognitive services. Although it was a challenge at first, learning how to integrate Microsoft's models into our hack was an amazing learning experience. " *-Kris*
>
>
>
## What's next for QuickCode
We plan on continuing development and making this product available on the market. We first hope to include more functionality within Javascript, then extending to support other languages. From here, we want to integrate a group development environment, where users can work on files and projects together (version control). During the hackathon we also planned to have voice recognition to recognize and highlight which user is inputting (speaking) which code. | ## Inspiration
The inspiration behind FaceWatch was a compelling desire to tackle rising bike thefts in Kingston. We aimed to create a solution that transcended conventional approaches, leveraging facial recognition technology to enhance community safety.
## What it does
FaceWatch is a versatile facial recognition software designed for authorized access. It goes beyond addressing bike thefts, offering a proactive security approach applicable to various scenarios. The system detects faces, granting access exclusively to authorized users, ensuring safety and privacy.
## How we built it
The project began with rigorous research into facial recognition algorithms, utilizing the face\_recognition library. This allowed us to identify faces that would be fed from video cameras and match them with ones that are in the provided database which we hosted in Firebase for this project. The cv2 library was used to capture video and extract faces from it. We prioritized user-friendly interfaces and seamless integration, crafting a robust backend and an intuitive frontend using react. The user side of things shows them location and name of the latest spotting of the face that was recognized which is pulled from the firebase database.
## Challenges we ran into
Challenges primarily revolved around knowledge acquisition and environment setup. Understanding the intricacies of face\_recognition and navigating Linux added technical complexities. The shift from a singular bike theft focus to a broader, authorization-based approach required flexibility in implementation. Adapting to these changes, the team overcame challenges, showcasing the resilience and adaptability crucial in a dynamic project environment.
## Accomplishments that we're proud of
Our proudest achievement lies in the seamless functionality of the project's backend. Through meticulous development and rigorous testing, we've successfully created a robust and reliable backbone for FaceWatch. The backend not only meets but exceeds performance expectations, ensuring the smooth operation of the entire system. This accomplishment underscores our commitment to technical excellence and the ability to create a foundation that can support the diverse needs of our users. The user side of the program stands as a testament to our dedication to user experience.
## What we learned
Embarking on the FaceWatch project provided a wealth of learning experiences. Setting up appointments honed organizational skills, and mastering Git improved version control and collaboration. Exploring the face\_recognition library enriched technical expertise, and delving into Linux enhanced understanding of system environments. Witnessing idea generation dynamics within the team highlighted the iterative nature of project development.
## What's next for FaceWatch
Mastering Bayun SDK for Enhanced Security
While our achievements with FaceWatch are noteworthy, we acknowledge the need for a deeper understanding of the Bayun SDK to fortify data security further. Addressing this gap is a priority for us, ensuring that the integration of the Bayun SDK is not only complete but also comprehensively understood. This commitment stems from our dedication to upholding the highest standards of data safety and privacy.
Seamless Frontend-Backend Integration
The journey doesn't end with a functional backend and an intuitive user interface; our next immediate step is to establish a seamless connection between the frontend and backend. This integration is pivotal for optimizing user interactions, providing a cohesive experience, and ensuring the efficient flow of information. We are excited about refining this connection to elevate the overall performance and responsiveness of FaceWatch.
Real-world Implementation and User Engagement
Our ultimate goal is to bring FaceWatch to real-life scenarios, making a tangible impact on community safety. | partial |
## Inspiration
We are currently living through one of the largest housing crises in human history. As a result, more Canadians than ever before are seeking emergency shelter to stay off the streets and find a safe place to recuperate. However, finding a shelter is still a challenging, manual process, where no digital service exists that lets individuals compare shelters by eligibility criteria, find the nearest one they are eligible for, and verify that the shelter has room in real-time. Calling shelters in a city with hundreds of different programs and places to go is a frustrating burden to place on someone who is in need of safety and healing. Further, we want to raise the bar: people shouldn't be placed in just any shelter, they should go to the shelter best for them based on their identity and lifestyle preferences.
70% of homeless individuals have cellphones, compared to 85% of the rest of the population; homeless individuals are digitally connected more than ever before, especially through low-bandwidth mediums like voice and SMS. We recognized an opportunity to innovate for homeless individuals and make the process for finding a shelter simpler; as a result, we could improve public health, social sustainability, and safety for the thousands of Canadians in need of emergency housing.
## What it does
Users connect with the ShelterFirst service via SMS to enter a matching system that 1) identifies the shelters they are eligible for, 2) prioritizes shelters based on the user's unique preferences, 3) matches individuals to a shelter based on realtime availability (which was never available before) and the calculated priority and 4) provides step-by-step navigation to get to the shelter safely.
Shelter managers can add their shelter and update the current availability of their shelter on a quick, easy to use front-end. Many shelter managers are collecting this information using a simple counter app due to COVID-19 regulations. Our counter serves the same purpose, but also updates our database to provide timely information to those who need it. As a result, fewer individuals will be turned away from shelters that didn't have room to take them to begin with.
## How we built it
We used the Twilio SMS API and webhooks written in express and Node.js to facilitate communication with our users via SMS. These webhooks also connected with other server endpoints that contain our decisioning logic, which are also written in express and Node.js.
We used Firebase to store our data in real time.
We used Google Cloud Platform's Directions API to calculate which shelters were the closest and prioritize those for matching and provide users step by step directions to the nearest shelter. We were able to capture users' locations through natural language, so it's simple to communicate where you currently are despite not having access to location services
Lastly, we built a simple web system for shelter managers using HTML, SASS, JavaScript, and Node.js that updated our data in real time and allowed for new shelters to be entered into the system.
## Challenges we ran into
One major challenge was with the logic of the SMS communication. We had four different outgoing message categories (statements, prompting questions, demographic questions, and preference questions), and shifting between these depending on user input was initially difficult to conceptualize and implement. Another challenge was collecting the distance information for each of the shelters and sorting between the distances, since the response from the Directions API was initially confusing. Lastly, building the custom decisioning logic that matched users to the best shelter for them was an interesting challenge.
## Accomplishments that we're proud of
We were able to build a database of potential shelters in one consolidated place, which is something the city of London doesn't even have readily available. That itself would be a win, but we were able to build on this dataset by allowing shelter administrators to update their availability with just a few clicks of a button. This information saves lives, as it prevents homeless individuals from wasting their time going to a shelter that was never going to let them in due to capacity constraints, which often forced homeless individuals to miss the cutoff for other shelters and sleep on the streets. Being able to use this information in a custom matching system via SMS was a really cool thing for our team to see - we immediately realized its potential impact and how it could save lives, which is something we're proud of.
## What we learned
We learned how to use Twilio SMS APIs and webhooks to facilitate communications and connect to our business logic, sending out different messages depending on the user's responses. In addition, we taught ourselves how to integrate the webhooks to our Firebase database to communicate valuable information to the users.
This experience taught us how to use multiple Google Maps APIs to get directions and distance data for the shelters in our application. We also learned how to handle several interesting edge cases with our database since this system uses data that is modified and used by many different systems at the same time.
## What's next for ShelterFirst
One addition to make could be to integrate locations for other basic services like public washrooms, showers, and food banks to connect users to human rights resources. Another feature that we would like to add is a social aspect with tags and user ratings for each shelter to give users a sense of what their experience may be like at a shelter based on the first-hand experiences of others. We would also like to leverage the Twilio Voice API to make this system accessible via a toll free number, which can be called for free at any payphone, reaching the entire homeless demographic.
We would also like to use Raspberry Pis and/or Arduinos with turnstiles to create a cheap system for shelter managers to automatically collect live availability data. This would ensure the occupancy data in our database is up to date and seamless to collect from otherwise busy shelter managers. Lastly, we would like to integrate into municipalities "smart cities" initiatives to gather more robust data and make this system more accessible and well known. | ## Inspiration
Globally, one in ten people do not know how to interpret their feelings. There's a huge global shift towards sadness and depression. At the same time, AI models like Dall-E and Stable Diffusion are creating beautiful works of art, completely automatically. Our team saw the opportunity to leverage AI image models and the emerging industry of Brain Computer Interfaces (BCIs) to create works of art from brainwaves: enabling people to learn more about themselves and
## What it does
A user puts on a Brain Computer Interface (BCI) and logs in to the app. As they work in front of front of their computer or go throughout their day, the user's brainwaves are measured. These differing brainwaves are interpreted as indicative of different moods, for which key words are then fed into the Stable Diffusion model. The model produces several pieces, which are sent back to the user through the web platform.
## How we built it
We created this project using Python for the backend, and Flask, HTML, and CSS for the frontend. We made use of a BCI library available to us to process and interpret brainwaves, as well as Google OAuth for sign-ins. We made use of an OpenBCI Ganglion interface provided by one of our group members to measure brainwaves.
## Challenges we ran into
We faced a series of challenges throughout the Hackathon, which is perhaps the essential route of all Hackathons. Initially, we had struggles setting up the electrodes on the BCI to ensure that they were receptive enough, as well as working our way around the Twitter API. Later, we had trouble integrating our Python backend with the React frontend, so we decided to move to a Flask frontend. It was our team's first ever hackathon and first in-person hackathon, so we definitely had our struggles with time management and aligning on priorities.
## Accomplishments that we're proud of
We're proud to have built a functioning product, especially with our limited experience programming and operating under a time constraint. We're especially happy that we had the opportunity to use hardware in our hack, as it provides a unique aspect to our solution.
## What we learned
Our team had our first experience with a 'real' hackathon, working under a time constraint to come up with a functioning solution, which is a valuable lesson in and of itself. We learned the importance of time management throughout the hackathon, as well as the importance of a storyboard and a plan of action going into the event. We gained exposure to various new technologies and APIs, including React, Flask, Twitter API and OAuth2.0.
## What's next for BrAInstorm
We're currently building a 'Be Real' like social media plattform, where people will be able to post the art they generated on a daily basis to their peers. We're also planning integrating a brain2music feature, where users can not only see how they feel, but what it sounds like as well | ## Inspiration
In the “new normal” that COVID-19 has caused us to adapt to, our group found that a common challenge we faced was deciding where it was safe to go to complete trivial daily tasks, such as grocery shopping or eating out on occasion. We were inspired to create a mobile app using a database created completely by its users - a program where anyone could rate how well these “hubs” were following COVID-19 safety protocol.
## What it does
Our app allows for users to search a “hub” using a Google Map API, and write a review by rating prompted questions on a scale of 1 to 5 regarding how well the location enforces public health guidelines. Future visitors can then see these reviews and add their own, contributing to a collective safety rating.
## How I built it
We collaborated using Github and Android Studio, and incorporated both a Google Maps API as well integrated Firebase API.
## Challenges I ran into
Our group unfortunately faced a number of unprecedented challenges, including losing a team member mid-hack due to an emergency situation, working with 3 different timezones, and additional technical difficulties. However, we pushed through and came up with a final product we are all very passionate about and proud of!
## Accomplishments that I'm proud of
We are proud of how well we collaborated through adversity, and having never met each other in person before. We were able to tackle a prevalent social issue and come up with a plausible solution that could help bring our communities together, worldwide, similar to how our diverse team was brought together through this opportunity.
## What I learned
Our team brought a wide range of different skill sets to the table, and we were able to learn lots from each other because of our various experiences. From a technical perspective, we improved on our Java and android development fluency. From a team perspective, we improved on our ability to compromise and adapt to unforeseen situations. For 3 of us, this was our first ever hackathon and we feel very lucky to have found such kind and patient teammates that we could learn a lot from. The amount of knowledge they shared in the past 24 hours is insane.
## What's next for SafeHubs
Our next steps for SafeHubs include personalizing user experience through creating profiles, and integrating it into our community. We want SafeHubs to be a new way of staying connected (virtually) to look out for each other and keep our neighbours safe during the pandemic. | winning |
## Inspiration
## What it does
TrackIt! is a universal add-on for tripods that keeps a specified target in view of the camera. Any object can be selected, from humans to animals to even soccer balls, allowing for hands-free recording. Gone are the days when people must view the world through a digital lens while attempting to follow a person, place, or thing.
## How we built it
TrackIt! is divided into two parts: **Object Tracking** and **Servo Control**
**Object Tracking**
Object tracking is done through computer vision, using deep-learning tracking algorithms that can accurately and efficiently follow a user-selected region. After determining where the object is relative to the center of the frame, instructions are sent to the Flask server, which delivers them to the servo control units.
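As a rough illustration of the offset calculation, here is a minimal sketch that swaps in opencv-contrib's CSRT tracker for our deep-learning tracker and posts the result with the `requests` library; the endpoint URL and JSON payload shape are assumptions for illustration, not our exact interface.

```python
import cv2
import requests

FLASK_URL = "http://localhost:5000/offset"  # hypothetical Flask endpoint

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
bbox = cv2.selectROI("Select target", frame, showCrosshair=True)

tracker = cv2.TrackerCSRT_create()          # requires opencv-contrib-python
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, (x, y, w, h) = tracker.update(frame)
    if found:
        # Offset of the tracked box's centre from the frame centre, in pixels
        frame_h, frame_w = frame.shape[:2]
        dx = (x + w / 2) - frame_w / 2
        dy = (y + h / 2) - frame_h / 2
        requests.post(FLASK_URL, json={"dx": dx, "dy": dy})
        cv2.rectangle(frame, (int(x), int(y)), (int(x + w), int(y + h)), (0, 255, 0), 2)
    cv2.imshow("Tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```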
**Servo Control**
Servo control is handled by two Raspberry Pis. Using the built-in GPIO, one Pi can effectively control only one servo at a time, which would have limited the scope of the project. The solution was to have two Pis, one responsible for vertical movement and the other for horizontal. The Raspberry Pis read information from a Python Flask server to determine where to move.
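A sketch of what the horizontal-movement Pi might run, assuming the RPi.GPIO library, a servo signal wire on GPIO 18, and a hypothetical `/pan` endpoint on the Flask server that returns a target angle; the pin number, endpoint, and duty-cycle mapping are illustrative rather than our exact setup.

```python
import time
import requests
import RPi.GPIO as GPIO

SERVO_PIN = 18                                   # hypothetical GPIO pin for the pan servo
SERVER = "http://192.168.0.10:5000/pan"          # hypothetical Flask endpoint

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
pwm = GPIO.PWM(SERVO_PIN, 50)                    # standard 50 Hz hobby-servo signal
pwm.start(7.5)                                   # roughly the centre position

def angle_to_duty(angle):
    # Map 0-180 degrees to a 2.5-12.5 % duty cycle (typical hobby servo range)
    return 2.5 + (angle / 180.0) * 10.0

try:
    while True:
        angle = float(requests.get(SERVER).json()["angle"])
        pwm.ChangeDutyCycle(angle_to_duty(angle))
        time.sleep(0.1)                          # give the servo time to move between polls
finally:
    pwm.stop()
    GPIO.cleanup()
```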
## Challenges we ran into
The original workflow plan did not work because the WiFi in the area was simply too slow to handle real-time video streaming, an essential part of the project. The original idea was to outsource the heavy lifting of image recognition and object tracking to a powerful GPU in the cloud, but that plan was scrapped because video could not upload fast enough on the network. Instead, all of the work was done locally, which causes performance to take a hit, but still works far better than if we had stuck with the online model.
We realized pretty quickly that trying to control two servos on one Pi, running directly off the power of the GPIO board, was a bad idea. Both servos constantly locked up, and true diagonal movement was impossible; we attempted to fake it by alternating small movements between the servos, but this did not yield a satisfactory result. Many ideas were considered, including using a breadboard with an external power supply, and even using an Arduino as a source of power. Eventually, we settled on two Pis, because of the appeal of easily running the motors in sync without having to alternate currents and frequencies the way a single Pi would have had to. The end result works pretty well, as TrackIt! can go up, down, left, right, and diagonally.
## Accomplishments that we're proud of
We are not hardware people, and all of our knowledge is in software, so it was very rewarding to stumble through the wonderful world of hardware and figure out what each thing does. This was the first time that we used a modular design, with multiple tools being designed at the same time and eventually being put together and magically working. It was amazing to see that TrackIt! succeeded, given how ambitious it was.
## What we learned
We gained a lot of knowledge about live video streaming, even if it didn't help for this project in the end. It is always useful to have more in the toolkit for next time. Additionally, we gained a basic understanding of breadboards, the GPIO, and how the Pi can be used to interface with physical devices.
## What's next for TrackIt!
A faster tracking and movement speed for the camera can easily be achieved through a better GPU handling the loads. Right now, approximately 2 or 3 frames are processed per second, but even a mid-level consumer GPU could bump that number up to nearly 100 frames per second. Additionally, the project will be taken to the cloud, where there is more freedom and we will no longer be restricted by hardware, allowing the small quirks of the prototype to be minimized. | ## Inspiration
As some of our team members have little siblings, we understand the struggle of living with them! So, now that we're in university, we've grown to miss all of their little quirks. So, why not bring them back?
## What it does
Our robot searches for people, and once found, will track them and move toward them. When it gets close enough, our robot will spray you with water, before giggling and running away. Ahhhh, feels JUST like home!
## How we built it
We connected an iPhone via Bluetooth to a computer, where we analyze the footage in Python. Using the OpenCV library, our program finds a person and calculates where they are relative to the frame. The laptop then tells an Arduino over Bluetooth where the person is, and the Arduino then changes the velocity of two motors to control the speed and direction of the robot, ensuring that it meets its subject at an optimal distance for spraying. The enclosure is built from a combination of cardboard, 3D-printed supports, and screws.
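The sketch below shows the general shape of the vision-to-motor pipeline, using OpenCV's stock HOG people detector as a stand-in for our detector and a made-up one-character serial protocol for the Arduino commands.

```python
import cv2
import serial

arduino = serial.Serial("/dev/rfcomm0", 9600)    # assumed Bluetooth serial port name
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(boxes) > 0:
        x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])   # chase the largest detected person
        centre = x + w / 2
        third = frame.shape[1] / 3
        command = b"L" if centre < third else b"R" if centre > 2 * third else b"F"
    else:
        command = b"S"                                        # stop when nobody is in view
    arduino.write(command)                                    # the Arduino maps L/R/F/S to motor speeds
```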
## Challenges we ran into
Integrating all of the components together proved to be a challenge. While they seemed to work on their own, communicating between each piece was tricky. For example, we were all relatively new to asynchronous programming, so designing a Python script to both analyze footage and send the results over Bluetooth to the Arduino was more difficult than anticipated.
## Accomplishments that we're proud of
It works! Based on what the camera sees, our motors change direction to put the robot on a perfect spray trajectory!
## What we learned
We improved our programming skills, learned how to communicate between devices over Bluetooth, and operate the Arduino. We were able to use a camera and understand the position of a person using computer vision.
## What's next for Your Annoying Little Sibling
We would love to further improve our robot's tracking skills and incorporate more sibling-like annoyances like slapping, biting, and telling tattle tales. | ## Inspiration:
We wanted to combine our passions of art and computer science to form a product that produces some benefit to the world.
## What it does:
Our app converts measured audio readings into images through integer arrays, as well as value ranges that are assigned specific colors and shapes to be displayed on the user's screen. Our program features two audio options: the first allows the user to speak, sing, or play an instrument into the sound sensor, and the second allows the user to upload an audio file that will automatically be played for our sensor to detect. Our code also features theme options, which are different variations of the shape and color settings. Users can choose an art theme, such as abstract, modern, or impressionist, each of which will produce different images for the same audio input.
## How we built it:
Our first task was using an Arduino sound sensor to detect the voltages produced by an audio file. We began this process by loading Firmata onto our Arduino so that it could be controlled using Python. Then we defined our port and analog pin 2 so that we could take the voltage readings and convert them into an array of decimals.
Once we obtained the decimal values from the Arduino, we used Python's Pygame module to program a visual display. We used the draw attribute to correlate the drawing of certain shapes and colours with certain voltages. Then we used a for loop to iterate through the length of the array so that an image would be drawn for each value that was recorded by the Arduino.
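A stripped-down sketch of that pipeline is below, assuming pyFirmata for the analog reads; the colour thresholds and shapes are illustrative, while the real theme logic maps many more value ranges to shapes.

```python
import time
import pygame
from pyfirmata import Arduino, util

board = Arduino("/dev/ttyUSB0")              # replace with your Arduino's serial port
it = util.Iterator(board)
it.start()                                   # keeps analog readings flowing in the background
board.analog[2].enable_reporting()

pygame.init()
screen = pygame.display.set_mode((800, 600))

readings = []
for _ in range(200):                         # sample the sound sensor for a short clip
    value = board.analog[2].read()           # 0.0-1.0, or None before the first report arrives
    if value is not None:
        readings.append(value)
    time.sleep(0.05)

for i, v in enumerate(readings):             # one shape per recorded value
    colour = (255, 80, 80) if v > 0.6 else (80, 160, 255) if v > 0.3 else (120, 255, 120)
    pygame.draw.circle(screen, colour, (20 + (i * 30) % 760, 50 + 60 * (i // 25)), int(5 + v * 25))
pygame.display.flip()
```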
We also decided to build a figma-based prototype to present how our app would prompt the user for inputs and display the final output.
## Challenges we ran into:
We are all beginner programmers, and we ran into a lot of information roadblocks where we weren't sure how to approach certain aspects of our program. Some of our challenges included figuring out how to work with the Arduino in Python, getting the sound sensor to work, and learning how to work with the Pygame module. A big issue we ran into was that our code functioned but would produce similar images for different audio inputs, making the program appear to function but not achieve our initial goal of producing unique outputs for each audio input.
## Accomplishments that we're proud of
We're proud that we were able to produce an output from our code. We expected to run into a lot of error messages in our initial trials, but we were capable of tackling all the logic and syntax errors that appeared by researching and using our (limited) prior knowledge from class. We are also proud that we got the Arduino board functioning as none of us had experience working with the sound sensor. Another accomplishment of ours was our figma prototype, as we were able to build a professional and fully functioning prototype of our app with no prior experience working with figma.
## What we learned
We gained a lot of technical skills throughout the hackathon, as well as interpersonal skills. We learnt how to optimize our collaboration by incorporating everyone's skill sets and dividing up tasks, which allowed us to tackle the creative, technical and communicational aspects of this challenge in a timely manner.
## What's next for Voltify
Our current prototype is a combination of many components, such as the audio processing code, the visual output code, and the front end app design. The next step would to combine them and streamline their connections. Specifically, we would want to find a way for the two code processes to work simultaneously, outputting the developing image as the audio segment plays. In the future we would also want to make our product independent of the Arduino to improve accessibility, as we know we can achieve a similar product using mobile device microphones. We would also want to refine the image development process, giving the audio more control over the final art piece. We would also want to make the drawings more artistically appealing, which would require a lot of trial and error to see what systems work best together to produce an artistic output. The use of the pygame module limited the types of shapes we could use in our drawing, so we would also like to find a module that allows a wider range of shape and line options to produce more unique art pieces. | losing |
## Inspiration
We were intrigued by the City of London's Open Data portal and wanted to see what we could do with it. We also wanted to give back to the city, which houses UWO and Hack Western, as well as many of our friends. With The London Bridge, we aim to enable communication between the community and its citizens, highlight the most important points of infrastructure to maintain/build upon, and ultimately make London citizens feel involved and proud of their city.
## What it does
The London Bridge is a web app aiming to bridge communication between changemakers and passionate residents in the city of London. Citizens can submit requests for the construction/maintenance of public infrastructure, including street lights, bike lanes, traffic lights, and parks. Using our specially designed algorithm, The London Bridge uses a variety of criteria such as public demand, proximity to similar infrastructure, and proximity to critical social services to determine the most important issues to bring to the attention of city employees, so that they may focus their efforts on what the city truly needs.
## How we built it
First and foremost, we consulted City of London booth sponsors, the City of London Open Data portal, colleagues studying urban planning, and the 2019 edition of the London Plan to determine the most important criteria that would be used in our algorithm.
We created a simple citizen portal where one can submit requests, using PugJS templates. We stored geotagged photos in Google Cloud Storage, and relevant geographical/statistical data in MongoDB Atlas, to be used in our score-calculating algorithm. Finally, we used Node.js to implement our algorithm, calculating scores for incoming requests and sending an email to Ward Councillors once a request meets a threshold score.
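Our backend is Node.js, but the scoring idea itself is language-agnostic; the sketch below illustrates it in Python, with guessed weights and an arbitrary threshold standing in for our real ones.

```python
# Illustrative only: the real backend is Node.js, and these weights and the threshold are guesses.
def score_request(demand_count, dist_to_similar_m, dist_to_service_m):
    """Higher demand, farther from similar infrastructure, closer to social services => higher score."""
    demand = min(demand_count / 50.0, 1.0)                   # saturate at 50 supporting requests
    scarcity = min(dist_to_similar_m / 2000.0, 1.0)          # nothing similar within 2 km => max scarcity
    proximity = max(0.0, 1.0 - dist_to_service_m / 1000.0)   # close to a shelter, school, clinic, etc.
    return 0.5 * demand + 0.3 * scarcity + 0.2 * proximity

THRESHOLD = 0.65
if score_request(demand_count=32, dist_to_similar_m=1500, dist_to_service_m=400) >= THRESHOLD:
    print("Notify the ward councillor")                      # the real app sends an email instead
```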
## Challenges we ran into
Integrating and picking up a variety of new technologies proved to be a difficult challenge, as we had never used any of these technologies before. We also discussed and revised our algorithm many times throughout the hackathon, in hopes of creating a scoring system that would truly reflect London's needs.
## Accomplishments that we're proud of
We're proud of our team's commitment to our hack's vision and goals, especially when things looked hairy.
## What we learned
We learned more about a variety of the aforementioned web technologies, as well as the struggles of integrating them together.
## What's next for The London Bridge
In the future, we'd hope to:
* Refine and add to our algorithm
* Implement additional request types
* Enhance data visualization and add workflow integration
* Add a web interface for city employees
* Create a user login system and impact tracking | **In times of disaster, there is an outpouring of desire to help from the public. We built a platform which connects people who want to help with people in need.**
## Inspiration
Natural disasters are an increasingly pertinent global issue which our team is quite concerned with. So when we encountered the IBM challenge relating to this topic, we took interest and further contemplated how we could create a phone application that would directly help with disaster relief.
## What it does
**Stronger Together** connects people in need of disaster relief with local community members willing to volunteer their time and/or resources. Such resources include but are not limited to shelter, water, medicine, clothing, and hygiene products. People in need may input their information and what they need, and volunteers may then use the app to find people in need of what they can provide. For example, someone whose home is affected by flooding due to Hurricane Florence in North Carolina can input their name, email, and phone number in a request to find shelter so that this need is discoverable by any volunteers able to offer shelter. Such a volunteer may then contact the person in need through call, text, or email to work out the logistics of getting to the volunteer’s home to receive shelter.
## How we built it
We used Android Studio to build the Android app. We deployed an Azure server to handle our backend (Python). We used the Google Maps API in our app. We are currently working on using Twilio for communication and the IBM Watson API to prioritize help requests in a community.
## Challenges we ran into
Integrating the Google Maps API into our app proved to be a great challenge for us. We also realized that our original idea of including a blood donation as one of the resources would require some correspondence with an organization such as the Red Cross in order to ensure the donation would be legal. Thus, we decided to add a blood donation to our future aspirations for this project due to the time constraint of the hackathon.
## Accomplishments that we're proud of
We are happy with our design and with the simplicity of our app. We learned a great deal about writing the server side of an app and designing an Android app using Java (and Google Maps' API) during the past 24 hours. We had huge aspirations, and eventually we created an app that can potentially save people's lives.
## What we learned
We learned how to integrate the Google Maps API into our app. We learned how to deploy a server with Microsoft Azure. We also learned how to use Figma to prototype designs.
## What's next for Stronger Together
We have high hopes for the future of this app. The goal is to add an AI based notification system which alerts people who live in a predicted disaster area. We aim to decrease the impact of the disaster by alerting volunteers and locals in advance. We also may include some more resources such as blood donations. | # Inspiration
I recently got attached to Beat Saber so I thought I'd be fun to build something similar to it.
# Objective
The objective of the game is to score higher than your opponent. Points are scored if a player triggers a hitbox when a note is in contact with it.
**Points Chart:**
**Green:** Perfect hit! The hitbox was triggered when a note was in full contact, **full points + combo bonus**
**Yellow:** Hitbox was triggered when a note was in partial contact, **partial points**
**Red:** Hitbox was triggered when a note was not in contact, **no points**
**Combo (Bonus Points):**
Combos are achieved when a hitbox is triggered **Green** more than once in a row. Combos add a great amount of bonus to your score and progressively increase in value as the pace of the notes increases.
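The sketch below illustrates how a single hit could be scored from the note's distance to the hitbox at trigger time; the pixel windows, point values, and combo bonus here are illustrative and not the game's real numbers, which live in the web client.

```python
# Illustrative scoring sketch in Python; the real game logic lives in the web client.
def score_hit(note_distance_px, combo):
    if note_distance_px <= 10:          # full contact => Green
        combo += 1
        points = 100 + 10 * combo       # combo bonus grows with the streak
    elif note_distance_px <= 40:        # partial contact => Yellow
        combo = 0
        points = 50
    else:                               # no contact => Red
        combo = 0
        points = 0
    return points, combo
```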
# Controls & Info
**HitBox:** The blue circles at the bottom of each player's half of the screen
**Notes:** The orange circles that fall from the top of the screen down to the hitboxes
**Player 1 (Left Side):**
Key "A": Triggers the left hitbox
Key "S": Triggers the center hitbox
Key "D": Triggers the right hitbox
**Player 2 (Right Side):**
Key "J": Triggers the left hitbox
Key "K": Triggers the center hitbox
Key "L": Triggers the right hitbox
# What's next for Rhythm Flow
1. Support for tablets. The game is very much playable on the computer, but the mechanics can also be ported to tablets where the touch screen is large enough to use the controls.
2. More game modes. Currently, there is only one game mode where two people are directly competing against each other. I have ideas for other games modes where instead of competing, two players would have collaborate together to beat the round. | winning |
## Inspiration
As Chinese military general Sun Tzu famously said: "Every battle is won before it is fought."
The phrase implies that planning and strategy - not the battles - win wars. Similarly, successful traders commonly quote the phrase: "Plan the trade and trade the plan."

Just like in war, planning ahead can often mean the difference between success and failure. After the recent events of the election, there was a lot of panic and emotional trading in the financial markets, but there were very few applications that help handle the emotional side of trading and let you trade the plan rather than your emotions.
Investing Hero was created to help investors become aware of and learn more about the risks of emotional trading, and of trading in general.
## What it does
This application is a tool to help investors manage their risk and learn more about the stock market.
This is accomplished in many ways, one of which is tracking each transaction in the market and ensuring that the investor trades their plan.
This application does live analysis on trades, taking in real-time stock market data, processing the data, and delivering the proper guidance through a chat-style, artificially intelligent user experience.
## How we built it
We started a NodeJS server to make a REST API, which our iOS application uses to get a lot of the data shown inside the app.
We also have a Web Front-End (angularJS) which we use to monitor the information on the server, and simulate the oscillation of the prices in the stock market.
Both the iOS app, and the web Front-End are in sync, and as soon as any information is edited/deleted on either one, the other one will also show the changes in real-time.
Nasdaq-On-Demand gets us the stock prices, and we go from there.
## Challenges we ran into
* Real-time database connection using Firebase
* Live stock market data not being available over the weekend, and us having to simulate it
## Accomplishments that we're proud of
We made a seamless platform that is in complete sync at all times.
## What we learned

Learned about Heroku, Firebase & Swift Animations.
We also learned about the different ways a User Experience built on research can help the user get much more out of an application.
## What's next for Investment Hero
Improved AI bot & more advanced ordering options (i.e. limit orders). | ## Inspiration
We want people to make the right decision at the right time so that they can get returns on their investments and trades in the stock market.
## What it does
It will enable users to invest and trade in the stock market using HFT and technical analysis techniques. The product will have large-scale applications in situations where the investor or trader does not have much time for individual analysis, and for professionals who would like to deploy their money on autopilot.
## How we built it
We have only the idea :)
## Challenges we ran into
How to implement the web app and mobile app for auto trading.
## What's next for TradeGO
We want to implement the app so that it can be used for auto trading. | ## Inspiration
Prolonged COVID restrictions have caused immense damage to the economy and local markets alike. Shifts in this economic landscape have led to many individuals seeking alternate sources of income to account for the losses imparted by lack of work or general opportunity. One major sector that has seen a boom, despite local market downturns, is investment in the stock market. While stock market trends, at first glance, seem to be logical and fluid, they're in fact the opposite. Beat earnings expectations? New products on the market? *It doesn't matter!*, because at the end of the day, a stock's value is inflated by speculation and **hype**. Many see the allure of rapidly increasing ticker charts and booming social media trends, and hear the talk of the town about how someone made millions in a matter of a day (*cough* **GameStop** *cough*), but more often than not, individual investors lose money when market trends spiral. It is *nearly* impossible to time the market. Our team sees these challenges and wanted to create a platform that can account for social media trends which may be indicative of early market changes, so that small-time investors can make smart decisions ahead of the curve.
## What it does
McTavish St. Bets is a platform that aims to help small-time investors gain insight on when to buy, sell, or hold a particular stock on the DOW 30 index. The platform uses the recent history of stock data along with tweets from the same time period to estimate the future value of the stock. We assume there is a correlation between tweet sentiment towards a company and its future valuation.
## How we built it
The platform was built using a client-server architecture and is hosted on a remote computer made available to the team. The front end was developed using React.js and Bootstrap for quick and efficient styling, while the backend was written in Python with Flask. The dataset was constructed by the team using a mix of tweets and article headers. The public Twitter API was used to scrape tweets according to popularity, and the tweets were ranked against one another using an engagement scoring function. Tweets were processed using a natural language processing module with BERT embeddings, which was trained for sentiment analysis. Time-series prediction was accomplished through a neural stochastic differential equation which incorporated text information as well. In order to incorporate this text data, the latent representations were combined based on the aforementioned scoring function. This representation is then fed directly to the network at each timepoint in the series estimation in an attempt to guide model predictions.
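The engagement-weighted averaging step looks roughly like the sketch below, where `embed` stands in for the BERT encoder and the engagement formula is an assumed placeholder for our actual scoring function.

```python
import numpy as np

def engagement(tweet):
    # assumed form of our engagement scoring function: likes count a little, retweets count a lot
    return 1.0 + tweet["likes"] + 3.0 * tweet["retweets"]

def daily_text_embedding(tweets, embed):
    """Engagement-weighted average of per-tweet BERT embeddings for one trading day."""
    vectors = np.stack([embed(t["text"]) for t in tweets])   # (n_tweets, dim)
    weights = np.array([engagement(t) for t in tweets])
    weights = weights / weights.sum()
    return weights @ vectors                                 # (dim,) vector fed to the neural SDE
```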
## Challenges we ran into
Obtaining data to train the neural SDE proved difficult. The free Twitter API only provides high engagement tweets for the last seven days. Obtaining older tweets requires an enterprise account costing thousands of dollars per month. Unfortunately, we didn’t feel that we had the data to train an end-to-end model to learn a single representation for each day’s tweets. Instead, we use a weighted average tweet representation, weighing each tweet by its importance computed as a function of its retweets and likes. This lack of data extends to the validation side too, with us only able to validate our model’s buy/sell/hold prediction on this Friday's stock price.
Finally, without more historical data, we can only model the characteristics of the market this week, which has been fairly uncharacteristic of normal market conditions. Adding additional data for the trajectory modeling would have been invaluable.
## Accomplishments that we're proud of
* We used several API to put together a dataset, trained a model, and deployed it within a web application.
* We put together several animations introduced in the latest CSS revision.
* We commissioned McGill-themed banner in keeping with the /r/wallstreetbets culture. Credit to Jillian Cardinell for the help!
* Some jank nlp
## What we learned
Learned to use several new APIs, including Twitter and Web Scrapers.
## What's next for McTavish St. Bets
Obtaining much more historical data by building up a dataset over several months (using Twitters 7-day API). We would have also liked to scale the framework to be reinforcement based which is data hungry. | partial |
## Inspiration
The amount of data in the world today is mind-boggling. We are generating 2.5 quintillion bytes of data every day at our current pace, but the pace is only accelerating with the growth of IoT.
We felt that the world was missing a smart find-feature for videos. To unlock heaps of important data from videos, we decided on implementing an innovative and accessible solution to give everyone the ability to access important and relevant data from videos.
## What it does
CTRL-F is a web application implementing computer vision and natural-language-processing to determine the most relevant parts of a video based on keyword search and automatically produce accurate transcripts with punctuation.
## How we built it
We leveraged the MEVN stack (MongoDB, Express.js, Vue.js, and Node.js) as our development framework, integrated multiple machine learning/artificial intelligence techniques provided by industry leaders shaped by our own neural networks and algorithms to provide the most efficient and accurate solutions.
We perform key-word matching and search result ranking with results from both speech-to-text and computer vision analysis. To produce accurate and realistic transcripts, we used natural-language-processing to produce phrases with accurate punctuation.
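The production ranking lives in our Node backend, but the idea can be sketched in a few lines of Python: each timestamped segment is scored by how many query terms appear in its transcript and in its vision labels, with an assumed weighting between the two kinds of evidence.

```python
# Illustrative ranking sketch in Python; the production logic lives in the Node/Express backend.
def rank_segments(segments, query):
    """segments: [{"start": sec, "end": sec, "transcript": str, "labels": [str]}] -> best matches first."""
    terms = query.lower().split()
    scored = []
    for seg in segments:
        text_hits = sum(seg["transcript"].lower().count(t) for t in terms)
        vision_hits = sum(t in label.lower() for label in seg["labels"] for t in terms)
        score = text_hits + 2 * vision_hits      # assumed weighting between speech and vision evidence
        if score > 0:
            scored.append((score, seg["start"], seg["end"]))
    return sorted(scored, reverse=True)
```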
We used Vue to create our front-end and MongoDB to host our database. We implemented both IBM Watson's speech-to-text API and Google's Computer Vision API along with our own algorithms to perform solid key-word matching.
## Challenges we ran into
Trying to implement both Watson's API and Google's Computer Vision API proved to present many challenges. We originally wanted to host our project on Google Cloud's platform, but after running into many barriers, we decided to create a RESTful API instead.
The number of new technologies we were figuring out caused us to face sleep deprivation. However, staying up for way longer than you're supposed to is the best way to increase your rate of errors and bugs.
## Accomplishments that we're proud of
* Implementation of natural-language-processing to automatically determine punctuation between words.
* Utilizing both computer vision and speech-to-text technologies along with our own rank-matching system to determine the most relevant parts of the video.
## What we learned
* Learning a new development framework a few hours before a submission deadline is not the best decision to make.
* Having a set scope and specification early-on in the project was beneficial to our team.
## What's next for CTRL-F
* Expansion of the product into many other uses (professional education, automate information extraction, cooking videos, and implementations are endless)
* The launch of a new mobile application
* Implementation of a Machine Learning model to let CTRL-F learn from its correct/incorrect predictions | ## Motivation
Coding skills are in high demand and will soon become a necessary skill for nearly all industries. Jobs in STEM have grown by 79 percent since 1990, and are expected to grow an additional 13 percent by 2027, according to a 2018 Pew Research Center survey. This provides strong motivation for educators to find a way to engage students early in building their coding knowledge.
Mixed Reality may very well be the answer. A study conducted at Georgia Tech found that students who used mobile augmented reality platforms to learn coding performed better on assessments than their counterparts. Furthermore, research at Tufts University shows that tangible programming encourages high-level computational thinking. Two of our team members are instructors for an introductory programming class at the Colorado School of Mines. One team member is an interaction designer at the California College of the Arts and is new to programming. Our fourth team member is a first-year computer science student at the University of Maryland. Learning from each other's experiences, we aim to create the first mixed reality platform for tangible programming, one that is grounded in the reality-based interaction framework. This framework has two main principles:
1) First, interaction **takes place in the real world**, so students no longer program behind large computer monitors where they have easy access to distractions such as games, IM, and the Web.
2) Second, interaction behaves more like the real world. That is, tangible languages take advantage of **students’ knowledge of the everyday, non-computer world** to express and enforce language syntax.
Using these two concepts, we bring you MusicBlox!
## What it is
MusicBlox combines mixed reality with introductory programming lessons to create a **tangible programming experience**. In comparison to other products on the market, like the LEGO Mindstorms, our tangible programming education platform **cuts cost in the classroom** (no need to buy expensive hardware!), **increases reliability** (virtual objects never suffer wear and tear), and **allows greater freedom in the design** of the tangible programming blocks (teachers can print out new cards/tiles and map them to new programming concepts).
This platform is currently usable on the **Magic Leap** AR headset, but will soon be expanded to more readily available platforms like phones and tablets.
Our platform is built using the research performed by Google’s Project Bloks and operates under a similar principle of gamifying programming and using tangible programming lessons.
The platform consists of a baseboard where students must place tiles. Each of these tiles is associated with a concrete world item. For our first version, we focused on music. Thus, the tiles include a song note, a guitar, a piano, and a record. These tiles can be combined in various ways to teach programming concepts. Students must order the tiles correctly on the baseboard in order to win the various levels on the platform. For example, on level 1, a student must correctly place a music note, a piano, and a sound in order to reinforce the concept of a method. That is, an input (song note) is fed into a method (the piano) to produce an output (sound).
Thus, this platform not only provides a tangible way of thinking (students are able to interact with the tiles while visualizing augmented objects), but also makes use of everyday, non-computer world objects to express and enforce computational thinking.
## How we built it
Our initial version is deployed on the Magic Leap AR headset. There are four components to the project, which we split equally among our team members.
The first is image recognition, which Natalie worked predominantly on. This required using the Magic Leap API to locate and track various image targets (the baseboard, the tiles) and rendering augmented objects on those tracked targets.
The second component, which Nhan worked on, involved extended reality interaction. This required both Magic Leap and Unity to determine how to interact with buttons and user interfaces in the Magic Leap headset.
The third component, which Casey spearheaded, focused on integration and scene development within Unity. As the user flows through the program, there are different game scenes they encounter, which Casey designed and implemented. Furthermore, Casey ensured the seamless integration of all these scenes for a flawless user experience.
The fourth component, led by Ryan, involved project design, research, and user experience. Ryan tackled user interaction layouts to determine the best workflow for children to learn programming, concept development, and packaging of the platform.
## Challenges we ran into
We faced many challenges with the nuances of the Magic Leap platform, but we are extremely grateful to the Magic Leap mentors for providing their time and expertise over the duration of the hackathon!
## Accomplishments that We're Proud of
We are very proud of the user experience within our product. This feels like a platform that we could already begin testing with children and getting user feedback. With our design expert Ryan, we were able to package the platform to be clean, fresh, and easy to interact with.
## What We learned
Two of our team members were very unfamiliar with the Magic Leap platform, so we were able to learn a lot about mixed reality platforms that we previously did not. By implementing MusicBlox, we learned about image recognition and object manipulation within Magic Leap. Moreover, with our scene integration, we all learned more about the Unity platform and game development.
## What’s next for MusicBlox: Tangible Programming Education in Mixed Reality
This platform is currently only usable on the Magic Leap AR device. Our next big step would be to expand to more readily available platforms like phones and tablets. This would allow for more product integration within classrooms.
Furthermore, we only have one version which depends on music concepts and teaches methods and loops. We would like to expand our versions to include other everyday objects as a basis for learning abstract programming concepts. | ## Inspiration
The idea arose from the current political climate. At a time when there is so much information floating around and it is hard for people to even define what a fact is, it seemed integral to provide context to users during speeches.
## What it does
The program first transcribes the audio of a speech into text. It then analyzes the text for topics relevant to listeners and cross-references them with a database of related facts. In the end, it shows viewers/listeners, in real time, a stream of relevant facts related to what is said in the program.
## How we built it
We built a natural language processing pipeline that begins with a speech to text translation of a YouTube video through Rev APIs. We then utilize custom unsupervised learning networks and a graph search algorithm for NLP inspired by PageRank to parse the context and categories discussed in different portions of a video. These categories are then used to query a variety of different endpoints and APIs, including custom data extraction API's we built with Mathematica's cloud platform, to collect data relevant to the speech's context. This information is processed on a Flask server that serves as a REST API for an Angular frontend. The frontend takes in YouTube URL's and creates custom annotations and generates relevant data to augment the viewing experience of a video.
## Challenges we ran into
None of the team members were very familiar with Mathematica or advanced language processing. Thus, time was spent learning the language and how to accurately parse data, given the huge amount of unfiltered information out there.
## Accomplishments that we're proud of
We are proud that we made a product that can help people become more informed in their everyday life, and hopefully give deeper insight into their opinions. The general NLP pipeline and the technologies we have built can be scaled to work with other data sources, allowing for better and broader annotation of video and audio sources.
## What we learned
We learned from our challenges. We learned how to work around the constraints of a lack of a dataset that we could use for supervised learning and text categorization by developing a nice model for unsupervised text categorization. We also explored Mathematica's cloud frameworks for building custom API's.
## What's next for Nemo
The two big things necessary to expand on Nemo are larger data base references and better determination of topics mentioned and "facts." Ideally this could then be expanded for a person to use on any audio they want context for, whether it be a presentation or a debate or just a conversation. | winning |
## Inspiration
Patients usually have to go through multiple diagnoses before getting the right doctor. With the astonishing computational power we have today, we could use predictive analysis to suggest a patient's potential illness.
## What it does
Clow takes a picture of the patient's face during registration and runs it through an emotion analysis algorithm. With the "scores" that suggest the magnitude of a certain emotional trait, Clow matches these data with the final diagnosis result given by the doctor to predict illnesses.
## How we built it
We integrated machine learning and emotion analysis algorithms from Microsoft Azure cloud services into our Ionic-based app to predict the trends. We "trained" our machine by pairing the "scores" of images of sick patients with their illnesses, allowing it to predict illnesses based on the "scores".
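Conceptually, the training step boils down to fitting a classifier on emotion-score vectors labelled with the doctor's diagnosis. The sketch below shows that idea with scikit-learn and placeholder data; our actual model was trained through Azure's services.

```python
from sklearn.ensemble import RandomForestClassifier

# Each row is the emotion "scores" returned for one patient photo
# (e.g. anger, contempt, disgust, fear, happiness, neutral, sadness, surprise),
# and the label is the doctor's final diagnosis. All values below are placeholders.
X = [
    [0.01, 0.02, 0.01, 0.40, 0.05, 0.30, 0.20, 0.01],
    [0.02, 0.01, 0.03, 0.05, 0.70, 0.15, 0.03, 0.01],
]
y = ["anxiety-related", "routine check-up"]

model = RandomForestClassifier(n_estimators=100).fit(X, y)
new_scores = [[0.02, 0.02, 0.02, 0.35, 0.06, 0.28, 0.24, 0.01]]
print(model.predict(new_scores))       # suggested illness for the new patient
```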
## Challenges we ran into
All of us are new to machine learning, and this proved to be a challenge for all of us. Fortunately, Microsoft's representative was really helpful and guided us through the process. We also had a hard time writing the code to upload the image taken from the camera to a cloud server in order to run it through Microsoft's emotion analysis API, since we had to encode the image before uploading it.
## Accomplishments that we're proud of
Learning a new skill over a weekend and deploying it in a working prototype ain't easy. We did that, not with one but two skills, over a weekend. And it's machine learning and emotion analysis. And they are actually the main components that power our product.
## What we learned
We all came in with zero knowledge of machine learning and now we are able to walk away with a good idea of what it is. Well, at least we can visualize it now, and we are excited to work with machine learning and unleash its potential in the future.
## What's next for Clow
Clow needs the support of medical clinics and hospitals in order to be deployed. As the correlation between emotion and illness is still relatively unproven, research studies have to be done in order to prove its effectiveness. It may not be effectively produce results in the beginning, but if Clow analyzes thousands of patients' emotion and illness, it can actually very accurately yield these results. | ## Inspiration
We want to fix healthcare! 48% of physicians in the US are burned out, which is a driver for higher rates of medical error, lower patient satisfaction, higher rates of depression and suicide. Three graduate students at Stanford have been applying design thinking to the burnout epidemic. A CS grad from USC joined us for TreeHacks!
We conducted 300 hours of interviews and learned iteratively using low-fidelity prototypes, to discover:
i) There was no “check engine” light that went off warning individuals to “re-balance”
ii) Current wellness services weren’t designed for individuals working 80+ hour weeks
iii) Employers will pay a premium to prevent burnout
And Code Coral was born.
## What it does
Our platform helps highly-trained individuals and teams working in stressful environments proactively manage their burnout. The platform captures your phone's digital phenotype to monitor the key predictors of burnout using machine learning. With timely, bite-sized reminders we reinforce individuals' atomic wellness habits and provide personalized services from laundry to life-coaching.
Check out more information about our project goals: <https://youtu.be/zjV3KeNv-ok>
## How we built it
We built the backend using a combination of APIs for Fitbit, Google Maps, Apple Health, and Beiwe; built a machine learning algorithm; and relied on an app builder for the front end.
## Challenges we ran into
APIs not working the way we wanted. Collecting and aggregating "tagged" data for our machine learning algorithm. Trying to figure out which features are the most relevant!
## Accomplishments that we're proud of
We had figured out a unique solution to addressing burnout but hadn't written any lines of code yet! We are really proud to have gotten this project off the ground!
i) Setting up a system to collect digital phenotyping features from a smartphone
ii) Building machine learning experiments to hypothesis-test going from our digital phenotype to metrics of burnout
iii) Figuring out how to detect anomalies using an individual's baseline data on driving, walking, and time at home with the Microsoft Azure platform (see the sketch below)
iv) Building a working front end with actual data!
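For item iii), the core idea is simple baseline deviation: the sketch below flags a day whose value sits far outside an individual's own history, with an assumed z-score threshold and placeholder numbers (our implementation runs on Azure rather than locally).

```python
import numpy as np

def is_anomalous(history, today, z_threshold=2.5):
    """history: past daily values of one feature (e.g. hours at home); flags unusually high/low days."""
    mean, std = np.mean(history), np.std(history)
    if std == 0:
        return False
    return abs(today - mean) / std > z_threshold

week_at_home_hours = [14, 13, 15, 12, 14, 13, 16]    # placeholder baseline
print(is_anomalous(week_at_home_hours, today=22))    # True -> a possible "check engine" signal
```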
Note - login information to codecoral.net: username - test password - testtest
## What we learned
We are learning how to set up AWS and a functioning back end, build supervised learning models, and integrate data from many sources to give new insights. We also flexed our web development skills.
## What's next for Coral Board
We would like to connect the backend data and validating our platform with real data! | ## Inspiration
In this era, with medicines readily available for consumption, people take pills without even consulting a specialist to find out what diagnosis they have. We created this project to find out what specific illnesses a person can be diagnosed with, so that they can seek out the correct treatment without self-medicating with pills that might, in turn, harm them in the long run.
## What it does
This is your personal medical assistant bot which takes in a set of symptoms you are experiencing and returns some illnesses that are most closely matched with that set of symptoms. It is powered by Machine learning which enables it to return more accurate data (tested and verified!) as to what issue the person might have.
## How we built it
We used React for building the front end. We used Python and its vast array of libraries to design the ML model. For building the model, we used scikit-learn. We used pandas for the data processing. To connect the front end with the model, we used FastAPI. We used a Random Forest multi-label classification model to give the diagnosis. Since the model takes in a string, we used the Bag-of-Words representation from scikit-learn to convert it to numerical features.
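A minimal sketch of that pipeline is shown below, with placeholder symptom strings and illness labels; the real model is trained on our full dataset.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import make_pipeline

# Placeholder training rows: free-text symptoms and binary labels for two illnesses.
symptoms = ["fever cough sore throat", "chest pain shortness of breath", "fever chills body ache"]
labels = [[1, 0], [0, 1], [1, 0]]           # columns: [flu-like illness, cardiac issue]

model = make_pipeline(
    CountVectorizer(),                      # Bag-of-Words: text -> token counts
    MultiOutputClassifier(RandomForestClassifier(n_estimators=200)),
)
model.fit(symptoms, labels)
print(model.predict(["cough and high fever"]))   # e.g. [[1 0]] on this toy data
```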
## Challenges we ran into
Since none of us had significant ML experience, we had to learn how to create an ML model, specifically the multi-label classification model, train it, and get it deployed on time. Furthermore, FastAPI does not have good documentation, so we ran into numerous errors while configuring it and interfacing between our front end and back end.
## Accomplishments that we're proud of
Creating a full-stack application that would help the public find a quick diagnosis for the symptoms they experience. Working on the project as a team and brainstorming ideas for the proof of concept and how to get our app working.
We trained the model with use cases, which evaluated to 97% accuracy.
## What we learned
Working with Machine Learning and creating a full-stack App. We also learned how to coordinate with the team to work effectively. Reading documentation and tutorials to get an understanding of how the technologies we used work.
## What's next for Medical Chatbot
The first stage for the Medical Chatbot would be to run tests and validate that it works using different datasets. We also plan about adding more features in the front end such as authentication so that different users can register before using the feature. We can get inputs from professionals in healthcare to increase coverage and add more questions to give the correct prediction. | partial |
# Lumio AI Glass - Enhancing Accessibility for the Visually Impaired
## Inspiration
The inspiration for Lumio AI Glass was born out of a desire to make a meaningful impact on the lives of visually impaired individuals. We recognized the daily challenges faced by people with visual impairments and aimed to leverage cutting-edge technology to enhance accessibility and independence. We were inspired by the idea of creating a smart wearable device that would act as a reliable companion, providing real-time information and assistance in various aspects of life.
## What It Does
Lumio AI Glass is a multifunctional wearable device designed to assist the visually impaired in their daily activities. It combines various technologies to provide a range of features:
* **Object Recognition**: Through deep learning and computer vision, the device can recognize and describe objects in the user's environment. This includes identifying everyday objects and providing audio descriptions.
* **Voice Interaction**: Users can interact with the device using voice commands. Lumio AI Glass can recognize and respond to voice prompts, enabling tasks such as object identification and voice-guided navigation.
* **Gesture Control**: We integrated gesture recognition using the MediaPipe library, allowing users to perform specific actions through hand gestures. This touchless interaction method adds an extra layer of convenience.
* **Database Management**: We've implemented a database system, using PostgreSQL, to securely store relevant data and user information. This will play a crucial role in user customization and data retrieval.
## How We Built It
Building Lumio AI Glass was a multidisciplinary effort that involved multiple technologies and domains:
* **Deep Learning**: We adopted the YOLOv8 model for object detection, fine-tuning it for our specific use case. This was crucial for the device's ability to recognize a wide range of objects (a minimal usage sketch follows this list).
* **Speech Recognition**: To implement voice interaction, we integrated speech recognition using the SpeechRecognition library. This enables users to provide voice commands naturally.
* **Gesture Control**: We utilized the MediaPipe library to recognize hand gestures and translate them into commands for the AI Glass.
* **Database Management**: PostgreSQL is used to store and efficiently manage data, including detecting and storing text, and retrieving matching text from the database.
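Here is a minimal sketch of the detect-and-describe loop using the pretrained `ultralytics` YOLOv8 weights, with `pyttsx3` standing in for our audio output; the weights file, the speech library, and the phrasing are illustrative assumptions rather than our production code.

```python
import cv2
import pyttsx3                       # assumed text-to-speech library; any speech engine would do
from ultralytics import YOLO

model = YOLO("yolov8n.pt")           # pretrained YOLOv8 weights (we fine-tuned our own)
speaker = pyttsx3.init()

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    results = model(frame)[0]
    seen = {model.names[int(c)] for c in results.boxes.cls}   # unique object labels in view
    if seen:
        speaker.say("I can see " + ", ".join(sorted(seen)))
        speaker.runAndWait()
```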
## Challenges We Ran Into
Our journey in developing Lumio AI Glass came with its fair share of challenges:
* **Complex AI Models**: Implementing and fine-tuning complex deep learning models like YOLOv8 required significant effort and expertise.
* **Real-Time Interaction**: Achieving real-time interaction and responsiveness for features like object recognition and voice interaction was a technical challenge.
## Accomplishments That We're Proud Of
We're proud of the milestones we've achieved in developing Lumio AI Glass:
* Successfully implemented deep learning models for object recognition, enabling users to identify and interact with their surroundings.
* Integrated voice recognition technology for natural voice commands, improving accessibility.
* Developed a gesture control system that enhances the device's usability and user experience.
## What We Learned
Our journey with Lumio AI Glass has been a valuable learning experience:
* We gained expertise in deep learning models, including object detection and fine-tuning.
* We mastered the integration of speech recognition and gesture control for an enhanced user interface.
* We honed our teamwork and project management skills, essential for tackling multifaceted projects.
## What's Next for Lumio AI
The journey doesn't end here. We have ambitious plans for the future of Lumio AI Glass:
* **Improved Object Recognition**: We aim to enhance the device's object recognition capabilities, making it even more accurate and versatile.
* **Advanced Voice Interaction**: We plan to integrate natural language processing to enable more complex and context-aware voice interactions.
* **Gesture Control Expansion**: Expanding gesture control features to cover a broader range of actions and gestures for user convenience.
* **Connectivity**: We plan to explore connectivity options, such as Bluetooth and Wi-Fi, to enable seamless integration with other smart devices and platforms.
* **Wearable Design**: Focusing on the physical design of the AI Glass to make it practical, comfortable, and stylish for everyday use. | ## Inspiration
Integration of patients into society: why can't it be achieved? This is due to the lack of attempts to combine medical solutions with the perspectives of patients in daily use. More specifically, we notice that assistive solutions for visual disabilities lack efficiency, as the most common option for patients with blindness is to use a cane and tap as they move forward, which can be slow, dangerous, and limited. These aids are clunky and draw attention from the crowd, leading to possible stigma and inconvenience in use. We attempt to solve this by combining effective healthcare and fashion.
## What it does
* At Signifeye, we have created a pair of shades with I/O sensors that provide audio feedback to the wearer on how far they are from the object they are looking at.
* We help patients build a 3D map of their surroundings so they can move around much more quickly, as opposed to slowly tapping a guide cane forward.
* Signifeye comes with a companion app that serves both the blind user and caretakers. The UI is easy for the blind user to navigate and allows for easier haptic feedback manipulation. Through the app, caretakers can also monitor the blind user and render assistance, being there for them 24/7 through tracking of data and movement without having to be there physically.
## How we built it
* The frame of the sunglasses is inspired by high-street fashion, and was modeled via Rhinoceros 3D to balance aesthetics and functionality. The frame is manufactured using acrylic sheets on a laser cutter for rapid prototyping.
* The sensor arrays consist of an ultrasonic sensor, a piezo speaker, a 5V regulator and a 9V battery, and are powered by the Arduino MKR WiFi 1010.
* The app was created using React Native and Figma for more comprehensive user details, using Expo Go and VSCode for a development environment that could produce testable outputs.
## Challenges we ran into
Difficulty of iterative hardware prototyping under time and resource constraints
* Limited design iterations,
* Shortage of micro-USB cables that transfer power and data, and
* For the frame design, coordinating the hardware with the design for dimensioning.
Implementing hardware data to softwares
* Collecting Arduino data into a file and accommodating that with the function of the application, and
* Altering user and haptic feedback on different mobile operating systems, where different programs had different dependencies that had to be followed.
## What we learned
As most of us were beginner hackers, we learned about multiple aspects that went into creating a viable product.
* Fully integrating hardware and software functionality, including Arduino programming and streamlining.
* The ability to connect cross-platform software, where I had to incorporate features or data pulled from hardware or data platforms.
* Dealing with the transfer of data and the use of computer languages to process different formats, such as audio files or sensor-induced wavelengths.
* Became more proficient in running and debugging code. I was able to adjust to a more independent and local setting, where an emulator or external source was required aside from just an IDE terminal. | ## Inspiration
We wanted to create a proof-of-concept for a potentially useful device that could be used commercially and at a large scale. We ultimately designed to focus on the agricultural industry as we feel that there's a lot of innovation possible in this space.
## What it does
The PowerPlant uses sensors to detect whether a plant is receiving enough water. If it's not, then it sends a signal to water the plant. While our proof of concept doesn't actually receive the signal to pour water (we quite like having working laptops), it would be extremely easy to enable this feature.
All data detected by the sensor is sent to a webserver, where users can view the current and historical data from the sensors. The user is also told whether the plant is currently being automatically watered.
## How I built it
The hardware is built on an Arduino 101, with dampness detectors used to detect the state of the soil. We run custom scripts on the Arduino to display basic info on an LCD screen. Data is sent to the webserver via a program called Gobetwino, and our JavaScript frontend reads this data and displays it to the user.
## Challenges I ran into
After choosing our hardware, we discovered that MLH didn't have an adapter to connect it to a network. This meant we had to work around this issue by writing text files directly to the server using Gobetwino. This was an imperfect solution that caused some other problems, but it worked well enough to make a demoable product.
We also had quite a lot of problems with Chart.js. There are some undocumented quirks to it that we had to deal with - for example, data isn't plotted on the chart unless a label for it is set.
## Accomplishments that I'm proud of
For most of us, this was the first time we'd ever created a hardware hack (and competed in a hackathon in general), so managing to create something demoable is amazing. One of our team members even managed to learn the basics of web development from scratch.
## What I learned
As a team we learned a lot this weekend - everything from how to make hardware communicate with software, to the basics of developing with Arduino, to how to use the Chart.js library. For two of our team members, English isn't their first language, so managing to achieve this is incredible.
## What's next for PowerPlant
We think that the technology used in this prototype could have great real world applications. It's almost certainly possible to build a more stable self-contained unit that could be used commercially. | losing |
## Overview
We created a Smart Glasses hack called SmartEQ! This unique hack leverages the power of machine learning and facial recognition to determine the emotion of the person you are talking to and conducts sentiment analysis on your conversation, all in real time! Since we couldn’t get a pair of Smart Glasses (*hint hint* MLH Hardware Lab?), we created our MVP using a microphone and webcam, acting just like the camera and mic would be on a set of Smart Glasses.
## Inspiration
Millions of children around the world have Autism Spectrum Disorder, and for these kids it can be difficult to understand the emotions of people around them. This can make it very hard to make friends and get along with their peers. As a result, this negatively impacts their well-being and leads to issues including depression. We wanted to help these children understand others' emotions and enable them to create long-lasting friendships. We learned about research studies that wanted to use technology to help kids on the autism spectrum understand the emotions of others and thought, hey, let's see if we could use our weekend to build something that can help out!
## What it does
SmartEQ determines the mood of the person in frame, based on their facial expressions and sentiment analysis of their speech. SmartEQ then determines the most probable emotion from the image analysis and the sentiment analysis of the conversation, and provides a percentage of confidence in its answer. SmartEQ helps a child on the autism spectrum better understand the emotions of the person they are conversing with.
## How we built it
The lovely front end you are seeing on screen is built with React.js, and the back end is a Python Flask server. For the machine learning predictions we used a whole bunch of Microsoft Azure Cognitive Services APIs, including speech-to-text from the microphone, sentiment analysis on the resulting text, and the Face API to predict the emotion of the person in frame.
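Once Azure returns the face-emotion scores and the text sentiment, the two are fused into a single prediction with a confidence percentage. The sketch below shows one way to do that fusion; the emotion mapping and the 70/30 weighting are assumptions for illustration.

```python
# Illustrative fusion step; in the app, both inputs come from Azure Cognitive Services.
SENTIMENT_TO_EMOTION = {            # assumed mapping from text sentiment to a coarse emotion
    "positive": "happiness",
    "neutral": "neutral",
    "negative": "sadness",
}

def fuse(face_scores, sentiment_label, sentiment_confidence, face_weight=0.7):
    """face_scores: {"happiness": 0.6, ...}; returns (emotion, confidence percentage)."""
    combined = {emotion: score * face_weight for emotion, score in face_scores.items()}
    text_emotion = SENTIMENT_TO_EMOTION[sentiment_label]
    combined[text_emotion] = combined.get(text_emotion, 0.0) + (1 - face_weight) * sentiment_confidence
    best = max(combined, key=combined.get)
    return best, round(100 * combined[best], 1)

print(fuse({"happiness": 0.55, "neutral": 0.30, "sadness": 0.15}, "positive", 0.9))  # ('happiness', 65.5)
```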
## Challenges we ran into
Newton and Max came to QHacks as a confident duo with the initial challenge of snagging some more teammates to hack with! For Lee and Zarif, this was their first hackathon and they both came solo. Together, we ended up forming a pretty awesome team :D. But that's not to say everything went as perfectly as our newfound friendships did. Newton and Lee built the front end while Max and Zarif built out the back end, and as you may have guessed, when we went to connect our code together, just about everything went wrong. We kept hitting the maximum Azure requests that our free accounts permitted, encountered very weird Socket.IO bugs that made our entire hack break, and had to make sure Max didn't drink less than 5 Red Bulls per hour.
## Accomplishments that we're proud of
We all worked with technologies that we were not familiar with, and so we were able to learn a lot while developing a functional prototype. We synthesised different forms of machine learning by integrating speech-to-text technology with sentiment analysis, which let us detect a person’s emotions just using their spoken words. We used both facial recognition and the aforementioned speech-to-sentiment-analysis to develop a holistic approach in interpreting a person’s emotions. We used Socket.IO to create real-time input and output data streams to maximize efficiency.
## What we learned
We learnt about web sockets, how to develop a web app using web sockets and how to debug web socket errors. We also learnt how to harness the power of Microsoft Azure's Machine Learning and Cognitive Science libraries. We learnt that "cat" has a positive sentiment analysis and "dog" has a neutral score, which makes no sense what so ever because dogs are definitely way cuter than cats. (Zarif strongly disagrees)
## What's next for SmartEQ
We would deploy our hack onto real Smart Glasses :D. This would allow us to deploy our tech in real life, first into small sample groups to figure out what works and what doesn't work, and after we smooth out the kinks, we could publish it as an added technology for Smart Glasses. This app would also be useful for people with social-emotional agnosia, a condition that may be caused by brain trauma that can cause them to be unable to process facial expressions.
In addition, this technology has many other cool applications! For example, we could make it into a widely used company app that company employees can integrate into their online meeting tools. This is especially valuable for HR managers, who can monitor their employees’ emotional well beings at work and be able to implement initiatives to help if their employees are not happy. | ## Inspiration
Many of us have a hard time preparing for interviews, presentations, and any other social situation. We wanted to sit down and have a real talk... with ourselves.
## What it does
The app will analyse your speech, hand gestures, and facial expressions and give you both real-time feedback as well as a complete rundown of your results after you're done.
## How We built it
We used Flask for the backend and used OpenCV, TensorFlow, and Google Cloud speech to text API to perform all of the background analyses. In the frontend, we used ReactJS and Formidable's Victory library to display real-time data visualisations.
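The video and voice analyses run in parallel threads that push their results onto a shared queue for the Flask layer to drain. Below is a minimal sketch of that pattern, with placeholder payloads standing in for the real OpenCV/TensorFlow and speech-to-text output.

```python
import queue
import threading
import time

results = queue.Queue()          # the Flask layer drains this to push updates to the dashboard

def analyse_video():
    while True:
        time.sleep(0.5)          # placeholder for grabbing a frame and running OpenCV/TensorFlow on it
        results.put({"type": "video", "data": {"gesture": "open_palm", "expression": "neutral"}})

def analyse_audio():
    while True:
        time.sleep(5)            # placeholder for recording a clip and sending it to speech-to-text
        results.put({"type": "audio", "data": {"words_per_minute": 140, "filler_words": 3}})

threading.Thread(target=analyse_video, daemon=True).start()
threading.Thread(target=analyse_audio, daemon=True).start()

while True:                      # in the real app, Flask reads from the queue instead of printing
    print(results.get())
```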
## Challenges we ran into
We had some difficulties on the backend integrating video and voice together using multi-threading. We also ran into some issues streaming data into our dashboard so that results displayed correctly in real time.
## Accomplishments that we're proud of
We were able to build a complete package that we believe is purposeful and gives users real feedback that is applicable to real life. We also managed to finish the app slightly ahead of schedule, giving us time to regroup and add some finishing touches.
## What we learned
We learned that planning ahead is very effective because we had a very smooth experience for a majority of the hackathon since we knew exactly what we had to do from the start.
## What's next for RealTalk
We'd like to transform the app into an actual service where people could log in and save their presentations so they can look at past recordings and results, and track their progress over time. We'd also like to implement a feature in the future where users could post their presentations online for real feedback from other users. Finally, we'd also like to re-implement the communication endpoints with websockets so we can push data directly to the client rather than spamming requests to the server.

Tracks movement of hands and face to provide real-time analysis on expressions and body-language.
 | ## Introduction
Our innovative AI utilizes cutting-edge technology to analyze your facial expressions and speech through your webcam and microphone. Once you start interacting, the AI will adapt its responses according to your emotions and engagement level, providing a unique, immersive, and engaging conversational experience. It's not just a chat; it's a dynamic interaction crafted just for you to boost your confidence and mental health.
## Inspiration
As a group, we decided to address the issue of impostor syndrome and lack of confidence among students. We built the project with the intention of creating a unique way to boost self-esteem and emotional well-being by providing feedback the user can act on to improve.
## What it does
Our project combines webcam analysis with chatbot interactions. Through these means, Fake:It aims to provide all students with valuable insight, support, and encouragement. Whether the user is struggling with self-doubt or anxiety, the main goal is to contribute to a student's well-being and personal growth.
## Technologies Employed
### Backend
#### [Flask](https://flask.palletsprojects.com/en/2.1.x/)
### Frontend
#### [React](https://reactjs.org/)
#### [Tailwind CSS](https://tailwindcss.com/)
#### [Vite](https://vitejs.dev/)
## Additional Integrations
### [TensorFlow Face API](https://www.tensorflow.org/)
### [OpenAI API](https://beta.openai.com/)
## Challenges we ran into
To begin, setting up local development environments with Flask took longer than expected. In addition, we faced trouble managing global React states once the project grew in complexity. Finally, our team had trouble trying to implement a 'mood report' that would graph a user's facial expressions over time after their session. Facing these challenges made this hackathon engaging and memorable.
## Accomplishments that we're proud of
We are very happy with our achievements, including successfully tackling the issue of boosting self-confidence by tracking various moods and making an effort to quantify them, which enhanced the user experience. These accomplishments reflect our commitment to providing a more empathetic and personalized platform, one that tailors its responses to users' moods to help them improve. It's a significant step towards fostering emotional well-being and support within our community.
## What we learned
We gained technical expertise throughout the hackathon with tools like React, Flask, and RESTful APIs. On top of this, we found that adapting to and overcoming challenges through teamwork and collaboration was paramount to this project's success.
## What's next for FakeIt
Next steps would be adding analytical features to the project, such as a way to display users' moods throughout their session. This could be compounded with an account system, where users could create accounts to track long term improvement in moods. Finally, adding more ai personalities the users can choose would boost engagement with our platform. | winning |
## Inspiration
Let's face it: Museums, parks, and exhibits need some work in this digital era. Why lean over to read a small plaque when you can get a summary and details by tagging exhibits with a portable device?
There is a solution for this of course: NFC tags are a fun modern technology, and they could be used to help people appreciate both modern and historic masterpieces. Also there's one on your chest right now!
## The Plan
Whenever a tour group, such as a student body, visits a museum, they can streamline their activities with our technology. When a member visits an exhibit, they can scan an NFC tag to get detailed information and receive a virtual collectible based on the artifact. The goal is to facilitate interaction amongst the museum patrons for collective appreciation of the culture. At any time, the members (or, as an option, group leaders only) will have access to a live Slack feed of the interactions, keeping track of each other's whereabouts and learning.
## How it Works
When a user tags an exhibit with their device, the Android mobile app (built in Java) sends a request to the StdLib service (built in Node.js), which registers the action in our MongoDB database and adds a public notification to the real-time feed on Slack.
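Our actual service is written in Node.js on StdLib, but the flow is simple enough to sketch in a few lines of Python. The database names and webhook URL below are made up for illustration:
```python
# Python/Flask mirror of the flow described above (the real service is Node.js on StdLib):
# store the scan in MongoDB, then post a notification to a Slack incoming webhook.
# The database/collection names and webhook URL are placeholders.
import datetime, requests
from flask import Flask, request, jsonify
from pymongo import MongoClient

app = Flask(__name__)
scans = MongoClient("mongodb://localhost:27017")["museum"]["scans"]
SLACK_WEBHOOK = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

@app.route("/scan", methods=["POST"])
def scan():
    data = request.get_json()                 # {"user": "...", "exhibit": "..."}
    data["time"] = datetime.datetime.utcnow().isoformat()
    scans.insert_one(dict(data))              # register the action
    requests.post(SLACK_WEBHOOK, json={       # notify the group's live feed
        "text": f"{data['user']} just visited '{data['exhibit']}'!"
    })
    return jsonify(ok=True)

if __name__ == "__main__":
    app.run()
```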
## The Hurdles and the Outcome
Our entire team was green to every technology we used, but our prior programming experience and relentless dedication let us persevere. Along the way, we gained experience with deployment-oriented web service development, and we will put it towards our numerous future projects. Based on our work, we believe this technology could be a substantial improvement for the museum industry.
## Extensions
Our product can be easily tailored for ecotourism, business conferences, and even larger scale explorations (such as cities and campus). In addition, we are building extensions for geotags, collectibles, and information trading. | ## Inspiration
As college students learning to be socially responsible global citizens, we realized that it's important for all community members to feel a sense of ownership, responsibility, and equal access toward shared public spaces. Often, our interactions with public spaces inspire us to take action to help others in the community by initiating improvements and bringing up issues that need fixing. However, these issues don't always get addressed efficiently, in a way that empowers citizens to continue feeling that sense of ownership, or sometimes even at all! So, we devised a way to help FixIt for them!
## What it does
Our app provides a way for users to report Issues in their communities with the click of a button. They can also vote on existing Issues that they want Fixed! This crowdsourcing platform leverages the power of collective individuals to raise awareness and improve public spaces by demonstrating a collective effort for change to the individuals responsible for enacting it. For example, city officials who hear in passing that a broken faucet in a public park restroom needs fixing might not perceive a significant sense of urgency to initiate repairs, but they would get a different picture when 50+ individuals want them to FixIt now!
## How we built it
We started out by brainstorming use cases for our app and discussing the populations we want to target. Next, we discussed the main features needed to ensure full functionality for these populations. We collectively decided to use Android Studio to build an Android app and the Google Maps API for an interactive map display.
## Challenges we ran into
Our team had little to no exposure to the Android SDK before, so we experienced a steep learning curve while developing a functional prototype in 36 hours. The Google Maps API took a lot of patience to get working, as did figuring out certain UI elements. We are very happy with our end result and all the skills we learned in 36 hours!
## Accomplishments that we're proud of
We are most proud of what we learned, how we grew as designers and programmers, and what we built with limited experience! As we were designing this app, we not only learned more about app design and technical expertise with the Google Maps API, but we also explored our roles as engineers that are also citizens. Empathizing with our user group showed us a clear way to lay out the key features of the app that we wanted to build and helped us create an efficient design and clear display.
## What we learned
As we mentioned above, this project helped us learn more about the design process, Android Studio, the Google Maps API, and also what it means to be a global citizen who wants to actively participate in the community! The technical skills we gained put us in an excellent position to continue growing!
## What's next for FixIt
An Issue’s Perspective
\* Progress bar, fancier rating system
\* Crowdfunding
A Finder’s Perspective
\* Filter Issues, badges/incentive system
A Fixer’s Perspective
\* Filter Issues off scores, Trending Issues | ## Inspiration
Whenever I go on vacation, what I always fondly look back on is the sights and surroundings of specific moments. What if there was a way to remember these associations by putting them on a map to look back on? We strived to locate a problem, and then find a solution to build up from. What if instead of sorting pictures chronologically and in an album, we did it on a map which is easy and accessible?
## What it does
This app allows users to collaborate in real time on making maps over shared moments. The moments that we treasure were all made in specific places, and being able to connect those moments to the settings of those physical locations makes them that much more valuable. Users from across the world can upload pictures to be placed onto a map, fundamentally physically mapping their favorite moments.
## How we built it
The project is built off a simple React template. We added functionality a bit at a time, focusing on creating multiple iterations of designs that were improved upon. We integrated several APIs, including Google Gemini and Firebase. With the intention of making the application accessible to a wide audience, we spent a lot of time refining the UI and keeping the app simple yet genuinely useful.
## Challenges we ran into
We had a difficult time deciding the precise focus of our app and which features we wanted to have and which to leave out. When it came to actually creating the app, it was also difficult to deal with niche errors not addressed by the APIs we used. For example, Google Photos was severely lacking in its documentation and error reporting, and even after we asked several experienced industry developers, they could not find a way to work around it. This wasted a decent chunk of our time, and we had to move in a completely different direction to get around it.
## Accomplishments that we're proud of
We're proud of being able to make a working app within the given time frame. We're also happy that this event gave us the chance to better understand the technologies we work with, including how to manage merge conflicts on Git (those dreaded merge conflicts). For all but one of us, this was our first hackathon, and it was beyond our expectations. Being able to realize such a bold and ambitious idea, albeit with a few shortcuts, tells us just how capable we are.
## What we learned
We learned a lot about how to do merges on Git as well as how to use a new API, the Google Maps API. We also gained a lot more experience using web development technologies like JavaScript, React, and Tailwind CSS. Away from the screen, we also learned to work together in coming up with ideas and making decisions that were agreed upon by the majority of the team. Even as friends, we struggled to get along perfectly smoothly while working through our issues. We believe this experience put just enough pressure on us to learn when to make concessions and how to be better team players.
## What's next for Glimpses
Glimpses isn't as simple as just a map with pictures, it's an album, a timeline, a glimpse into the past, but also the future. We want to explore how we can encourage more interconnectedness between users on this app, so we want to allow functionality for tagging other users, similar to social media, as well as providing ways to export these maps into friendly formats for sharing that don't necessarily require using the app. We also seek to better merge AI into our platform by using generative AI to summarize maps and experiences, but also help plan events and new memories for the future. | winning |
## Inspiration
Our inspiration for **steersafe** came from a desire to make a tangible impact, particularly in the dense urban areas where we live and commute daily as students. Car crashes, especially those caused by distracted driving, are a universal issue that we as college students relate to firsthand, often witnessing them happen on our campuses. Distracted driving leads to countless accidents, injuries, and fatalities each year. We wanted to create a solution that rewards those making an effort, all while promoting the concept of safer, smarter cities.
[Watch our hype video!](https://www.youtube.com/watch?v=68n9w_0jZnM)
## What It Does
**steersafe** is a mobile app that detects phone usage while driving, encouraging drivers to stay focused by rewarding distraction-free habits. Users earn coins for safe, uninterrupted driving, which can be redeemed for real-world rewards like gift cards, with more rewards to come as we begin to partner with brands such as Dunkin’ Donuts. The app also features a **leaderboard** system, fostering friendly competition among users to see who can be the safest driver on the road. It provides an engaging, gamified experience where safe driving habits become a fun challenge rather than a chore.
## How We Built It
We used a combination of technologies we were completely unfamiliar with to build **steersafe**:
* **Swift + SwiftUI** for the frontend, creating a modern, user-friendly interface for iPhones.
* **Firebase** for backend infrastructure, including **Firebase Authentication** for secure user login and **Firebase Realtime Database** for real-time data storage which updates driving information with low latency.
* **Apple CoreMotion** and **CoreLocation** APIs to accurately detect phone usage and track driving activity.
* To detect phone usage, we leveraged Apple’s APIs, which grant access to the smartphone’s motion sensors. Through rigorous testing, trial and error, and ultimate teamwork, we found and fine-tuned an algorithm that accurately detects phone usage via accelerometer data across 3 dimensions (a simplified sketch of the idea follows this list).
* **TomTom API** to integrate map data and ensure seamless navigation support.
* **Figma** for designing a clean, intuitive user interface focused on ease of use and aesthetics.
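The production detection logic lives in Swift on top of CoreMotion, but the core idea is easy to sketch. In the toy Python version below, the window size, threshold, and sample data are invented values, not our tuned ones:
```python
# Toy illustration of the detection idea (the real version is Swift + CoreMotion):
# when the phone rests in a mount, the user-acceleration magnitude stays nearly flat;
# picking it up and tapping on it makes the signal jittery. The window size, threshold,
# and sample data below are invented for the sketch.
from math import sqrt
from statistics import pstdev

WINDOW = 50          # roughly 1 second of samples at 50 Hz
THRESHOLD = 0.08     # amount of jitter (in g) treated as "phone in hand"

def is_phone_in_use(samples):
    """samples: list of (x, y, z) user-acceleration readings with gravity removed."""
    magnitudes = [sqrt(x * x + y * y + z * z) for x, y, z in samples[-WINDOW:]]
    return len(magnitudes) == WINDOW and pstdev(magnitudes) > THRESHOLD

resting = [(0.001, 0.002, 0.001)] * 60
handling = [(0.0, 0.1, 0.0) if i % 2 else (0.4, 0.1, 0.2) for i in range(60)]
print(is_phone_in_use(resting), is_phone_in_use(handling))   # False True
```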
## Challenges We Ran Into
One of the biggest challenges was learning a completely new language, **Swift**, and the unfamiliar realm of mobile development. Despite having the option to turn this into a website or change ideas, we decided as a team to face the challenge head-on and implement the best possible version of this solution. Furthermore, none of us had worked with **Xcode** or iOS development tools before, so the early stages of development were riddled with crashes and bugs.
As for the app's main task – detecting phone usage while driving – accomplishing it efficiently and accurately with zero external hardware proved to be extremely challenging. However, we quickly realized that picking the approach was only half of the task. After deciding to use the smartphone’s natively equipped motion sensors, we began extensively reading documentation to learn how to navigate this realm of hardware that was new to us.
Another key challenge was **gamifying** the driving experience in a way that was both engaging and rewarding. Finding the right balance between encouraging safe driving and not distracting the driver in the process proved to be difficult, but ultimately, we are confident in our solution.
## Accomplishments That We’re Proud Of
We’re extremely proud of not only successfully creating an app that detects phone usage while driving and turns safe driving into a rewarding experience, but also of learning so many new languages and frameworks in such a short time. Despite our inexperience with mobile development, we managed to integrate complex APIs like **CoreMotion**, **CoreLocation**, and **Firebase** to bring our vision to life.
Another major achievement was building something that we’re all truly passionate about. We believe **steersafe** has the potential to make a real impact on the community, helping to foster safer roads and encouraging drivers to adopt distraction-free habits that can save lives.
## What We Learned
This experience has been an incredible learning journey for us. One of the biggest takeaways has been mastering mobile app development, particularly for iOS using **Swift**. We dove into key concepts like building declarative UI with **SwiftUI**, utilizing **Apple APIs** for motion and location tracking, and handling real-time data with Firebase.
Beyond the technical side, we learned the importance of planning every step before jumping into development. From designing the user interface in **Figma** to figuring out how the frontend and backend would work together, having a solid blueprint kept us focused and ensured that what we wanted to build was both achievable and effective. This approach made our development process smoother and more efficient.
## What’s Next for steersafe
We plan to introduce a **friend system**, allowing users to connect, compete, and motivate each other. We envision a social tab where users can see each other’s recently redeemed rewards, drives, and personal records (e.g. longest time driven with no distractions). We also aim to collaborate with insurance companies and businesses to further incentivize safe driving, saving money for both parties. We envision partnerships where users can earn insurance discounts, special offers, or even loyalty rewards to favorites such as Starbucks for maintaining safe driving habits. **steersafe** aims to continue evolving and contribute to the development of smart, safe cities. | ## Inspiration
Did you know that traffic accidents are a leading cause of mortality in America? According to the US Department of Transportation, there are over 6 million automotive crashes each year. 60% of these 6 million could have been prevented had the driver been alerted a mere half second before the collision. Last summer, our teammate Colin drove 5 hours a day on what is known as "America's deadliest highway." He wished he was able to notify other cars when he wanted to pass them or merge into another lane.
## What it does
We created an app called "Aware" that allows vehicles to notify other vehicles if they wish to pass or merge. The app is controlled purely by voice commands because driving should be hands-free with minimal distractions.
## How we built it
We built the hands-free interface using Android Text-to-Speech and Google Speech-to-Text, the Android app using Android Studio, and the message sending and receiving using Firebase. The reason we chose to do this as an app rather than something integrated into the car is that not everyone's car has the technology to communicate with other cars (also because we're broke college students and don't have Teslas to test on). If many cars can't send/receive messages, then that defeats the purpose of our idea. However, almost everyone has a phone, meaning that the majority of drivers on the road will immediately be able to download our app and start using it to communicate with each other.
## Challenges we ran into
This is our first time using Google APIs, Android, and Firebase so there was a lot of time spent figuring out how these technologies worked.
## Accomplishments that we're proud of
Brainstorming an impactful idea! Learning new skills! Great teamwork!
## What we learned
Lots about Android development, Google APIs, Firebase, and voice integration!
## What's next for Aware
We plan to implement proactive warnings, for example if there is a pedestrian walking behind when a car is reversing. Additionally, Aware could interact with infrastructure, like detecting how much longer a light will be red or where the nearest empty parking lot is. | ## Problem Statement
As the number of the elderly population is constantly growing, there is an increasing demand for home care. In fact, the market for safety and security solutions in the healthcare sector is estimated to reach $40.1 billion by 2025.
The elderly, disabled, and vulnerable people face a constant risk of falls and other accidents, especially in environments like hospitals, nursing homes, and home care environments, where they require constant supervision. However, traditional monitoring methods, such as human caregivers or surveillance cameras, are often not enough to provide prompt and effective responses in emergency situations. This potentially has serious consequences, including injury, prolonged recovery, and increased healthcare costs.
## Solution
The proposed app aims to address this problem by providing a real-time monitoring and alert system, using a camera and cloud-based machine learning algorithms to detect any signs of injury or danger and immediately notify designated emergency contacts, such as healthcare professionals, with information about the user's condition and collected personal data.
We believe that the app has the potential to revolutionize the way vulnerable individuals are monitored and protected, by providing a safer and more secure environment in designated institutions.
## Developing Process
Prior to development, our designer used Figma to create a prototype which was used as a reference point when the developers were building the platform in HTML, CSS, and ReactJs.
For the cloud-based machine learning algorithms, we used computer vision with OpenCV, NumPy, and Flask to train the model on a dataset of various poses and movements and to detect any signs of injury or danger in real time.
Because of limited resources, we decided to use our phones in place of dedicated cameras to provide the live streams for the real-time monitoring.
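As a rough illustration of the detection loop (our actual model was trained on pose data, so the heuristic below is a much simpler stand-in, and the stream URL is a placeholder):
```python
# Much simpler stand-in for our trained model: run OpenCV's built-in HOG person
# detector on each frame of the phone's stream and flag a possible fall when the
# person's bounding box becomes wider than it is tall. The stream URL is a placeholder.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

stream = cv2.VideoCapture("http://192.168.0.42:8080/video")   # phone camera feed
while True:
    ok, frame = stream.read()
    if not ok:
        break
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        if w > 1.2 * h:        # person lying horizontally -> possible fall
            print("ALERT: possible fall detected, notify emergency contacts")
```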
## Impact
* **Improved safety:** The real-time monitoring and alert system provided by the app helps to reduce the risk of falls and other accidents, keeping vulnerable individuals safer and reducing the likelihood of serious injury.
* **Faster response time:** The app triggers an alert and sends notifications to designated emergency contacts in case of any danger or injury, which allows for a faster response time and more effective response.
* **Increased efficiency:** Using cloud-based machine learning algorithms and computer vision techniques allows the app to analyze the user's movements and detect any signs of danger without constant human supervision.
* **Better patient care:** In a hospital setting, the app could be used to monitor patients and alert nurses if they are in danger of falling or if their vital signs indicate that they need medical attention. This could lead to improved patient care, reduced medical costs, and faster recovery times.
* **Peace of mind for families and caregivers:** The app provides families and caregivers with peace of mind, knowing that their loved ones are being monitored and protected and that they will be immediately notified in case of any danger or emergency.
## Challenges
One of the biggest challenges has been integrating all the different technologies, such as live streaming and machine learning algorithms, and making sure they work together seamlessly.
## Successes
The project was a collaborative effort between a designer and developers, which highlights the importance of cross-functional teams in delivering complex technical solutions. Overall, the project was a success and resulted in a cutting-edge solution that can help protect vulnerable individuals.
## Things Learnt
* **Importance of cross-functional teams:** As there were different specialists working on the project, it helped us understand the value of cross-functional teams in addressing complex challenges and delivering successful results.
* **Integrating different technologies:** Our team learned the challenges and importance of integrating different technologies to deliver a seamless and effective solution.
* **Machine learning for health applications:** After doing the research and completing the project, our team learned about the potential and challenges of using machine learning in the healthcare industry, and the steps required to build and deploy a successful machine learning model.
## Future Plans for SafeSpot
* First of all, the usage of the app could be extended to other settings, such as elderly care facilities, schools, kindergartens, or emergency rooms to provide a safer and more secure environment for vulnerable individuals.
* Apart from the web, the platform could also be implemented as a mobile app. In this scenario, the alert would pop up privately on the user’s phone and notify only the people who have been given access to it.
* The app could also be integrated with wearable devices, such as fitness trackers, which could provide additional data and context to help determine if the user is in danger or has been injured. | losing |
## Inspiration
Have you ever been bored at work, but your boss is constantly peering over your shoulder? RetroReddit is here to save you!!! Now you look like you're doing work, but you're actually browsing Reddit consuming the dankest of memes. Be the bon vivant of fine memes in your office today!!! Look busy while doing nothing, all while being lulled by the retro music from your childhood.
## What it does
Reddit on your console... nuff said.
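For a taste of what that means in practice, here is a bare-bones sketch of the idea, pulling a subreddit's public JSON feed and drawing the titles with Python's built-in curses. Our actual build sits on a cursebox-style wrapper, so this is illustrative only:
```python
# Bare-bones illustration only: fetch a subreddit's public JSON feed and render the
# post titles with built-in curses (the real app uses a cursebox-style wrapper).
import curses, requests

def front_page(subreddit="dankmemes", limit=15):
    url = f"https://www.reddit.com/r/{subreddit}/hot.json?limit={limit}"
    posts = requests.get(url, headers={"User-Agent": "retroreddit/0.1"}).json()
    return [p["data"]["title"] for p in posts["data"]["children"]]

def main(stdscr):
    curses.curs_set(0)
    for i, title in enumerate(front_page()):
        stdscr.addstr(i, 0, f"{i + 1:2d}. {title[:curses.COLS - 5]}")
    stdscr.addstr(curses.LINES - 1, 0, "q to quit (and look busy)")
    while stdscr.getch() != ord("q"):
        pass

curses.wrapper(main)
```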
## Challenges I ran into
Several of the libraries that we worked with weren't well documented and weren't the friendliest to work with, but they were the only options we had, save writing in Bash, which none of us were all that experienced with. In one instance we spent several hours scratching our heads over why a library wasn't functioning properly, even though everything was properly installed. It turned out the author of CurseBox had not registered his work on PyPI, and we had instead pip-installed a different library under the same name.
## What's next for RetroReddit
Adding more macro bindings for an easier browsing experience. And adding more options for user customization. | ## Inspiration
We were all chronically addicted to Reddit, and all the productivity extensions out there took a "cold turkey" approach. We felt like our method of gradual addiction treatment would be more effective.
## What it does
Over time, we slowly remove elements on Reddit that are addicting (CTAs, voting counters, comment counts, etc.). This way, the user willingly develops indifference towards the platform as opposed to fighting the hard-wired behaviour that Reddit expertly ingrained.
## How we built it
We researched how drug rehabilitation is performed at rehab centres and incorporated those practices into the design of the extension. We used jQuery to manipulate the DOM elements.
## Challenges we ran into
Time was the main constraint, as we spent the first half of the hackathon discarding dead-end ideas midway.
## Accomplishments that we're proud of
Finishing in time and shipping an application that can be used immediately by other addicted redditors.
## What we learned
jQuery and better familiarity with building chrome extensions. | ## Inspiration
YouTube money $$ encourages content creators to use clickbait titles and other SEO tactics to receive the most views. That results in a less favorable search experience.
## What it does
Allows you to search for videos based on what is actually SAID in the video. Imagine if videos were actually just web pages and you could Google-search them.
## How I built it
We indexed over 300k YouTube videos (35 GB of transcript data), grabbed their transcriptions (we limited the scope for the hackathon, but at scale there are no limitations), and used Elasticsearch for searching.
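The indexing and search flow is simple at its core. In the sketch below, the index name, field names, and local Elasticsearch instance are illustrative placeholders:
```python
# Illustrative only: index/field names and the local Elasticsearch instance are
# placeholders, but this is the shape of the indexing/search flow described above.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def index_video(video_id, title, transcript):
    es.index(index="transcripts", id=video_id,
             document={"title": title, "transcript": transcript})

def search(query, size=10):
    hits = es.search(index="transcripts", size=size,
                     query={"match": {"transcript": query}})["hits"]["hits"]
    return [(h["_id"], h["_source"]["title"], h["_score"]) for h in hits]

index_video("abc123", "Some cooking video", "today we make a double-double at home ...")
print(search("double-double"))
```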
## Challenges I ran into
Aggregating large amounts of data is tough at hackathons, since the WiFi is flaky.
## Accomplishments that I'm proud of
Being able to aggregate the data
## What I learned
Data is fun
## What's next for Invidia
We will be launching after the hackathon! | losing |
## Inspiration
At reFresh, we are a group of students looking to revolutionize the way we cook and use our ingredients so they don't go to waste. Today, America faces a problem of food waste. Wasted food contributes to the acceleration of global warming, as more produce is needed to maintain the same levels of demand. In a startling report from The Atlantic, "the average value of discarded produce is nearly $1,600 annually" for an American family of four. In terms of Double-Doubles from In-n-Out, that comes to around 400 burgers. At reFresh, we believe that this level of waste is unacceptable in our modern society: imagine every family in America throwing away 400 perfectly fine burgers. Therefore we hope that our product can help reduce food waste and help the environment.
## What It Does
reFresh offers users the ability to input ingredients they have lying around and to find the corresponding recipes that use those ingredients, making sure nothing goes to waste! Then, from the ingredients left over from a recipe we suggested, more recipes utilizing those same ingredients are suggested to you so you get the most usage possible. Users have the ability to build weekly meal plans from our recipes, and we also offer a way to search for specific recipes. Finally, we provide an easy way to view how much of an ingredient you need and the cost of those ingredients.
## How We Built It
To make our idea come to life, we utilized the Flask framework to create a web application that users can navigate easily and smoothly. In addition, we utilized a Walmart Store API to retrieve ingredient information such as prices, and a Spoonacular API to retrieve recipe information such as the ingredients needed. All of this ingredient, recipe, and meal data is then stored with SQLAlchemy.
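A condensed sketch of how those pieces fit together is shown below. The model fields are simplified and the Spoonacular API key is a placeholder:
```python
# Condensed sketch of the pieces described above: a SQLAlchemy model for ingredients
# and a helper that asks Spoonacular for recipes using them. The model fields are
# simplified and the API key is a placeholder.
import requests
from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///refresh.db"
db = SQLAlchemy(app)

class Ingredient(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(80), unique=True)
    price = db.Column(db.Float)          # filled in from the Walmart price lookup

def recipes_for(ingredient_names, n=5):
    """Ask Spoonacular for recipes that use up the given ingredients."""
    resp = requests.get(
        "https://api.spoonacular.com/recipes/findByIngredients",
        params={"ingredients": ",".join(ingredient_names),
                "number": n, "apiKey": "YOUR_KEY"})
    return [(r["title"], r["missedIngredientCount"]) for r in resp.json()]

with app.app_context():
    db.create_all()
print(recipes_for(["chicken", "rice", "broccoli"]))
```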
## Challenges We Ran Into
Throughout the process, we ran into various challenges that helped us grow as a team. In a broad sense, some of us struggled with learning a new framework in such a short period of time and using that framework to build something. We also had issues with communication and ensuring that the features we wanted implemented were made clear. There were times that we implemented things that could have been better done if we had better communication. In terms of technical challenges, it definitely proved to be a challenge to parse product information from Walmart, to use the SQLAlchemy database to store various product information, and to utilize Flask's framework to continuously update the database every time we added a new recipe.
However, these challenges definitely taught us a lot of things, ranging from a better understanding to programming languages, to learning how to work and communicate better in a team.
## Accomplishments That We're Proud Of
Together, we are definitely proud of what we have created. Highlights of this project include the implementation of a SQLAlchemy database, a pleasing and easy to look at splash page complete with an infographic, and being able to manipulate two different APIs to feed of off each other and provide users with a new experience.
## What We Learned
This was the first hackathon for all of us, and needless to say, we learned a lot. As we tested our physical and mental limits, we familiarized ourselves with web development, became more comfortable with stitching together multiple platforms to create a product, and gained a better understanding of what it means to collaborate and communicate effectively in a team. Members of our team gained more knowledge in databases, UI/UX work, and popular frameworks like Bootstrap and Flask. We also definitely learned the value of concise communication.
## What's Next for reFresh
There are a number of features that we would like to implement going forward. Possible avenues of improvement would include:
* User accounts to allow ingredients and plans to be saved and shared
* Improvement in our search to fetch more mainstream and relevant recipes
* Simplification of ingredient selection page by combining ingredients and meals in one centralized page | # Omakase
*"I'll leave it up to you"*
## Inspiration
On numerous occasions, we have each found ourselves staring blankly into the fridge with no idea of what to make. Given some combination of ingredients, what type of good food can I make, and how?
## What It Does
We have built an app that recommends recipes based on the food that is in your fridge right now. Using the Google Cloud Vision API and the Food.com database, we are able to detect the food that the user has in their fridge and recommend recipes that use their ingredients.
## What We Learned
Most of the members in our group were inexperienced in mobile app development and backend work. Through this hackathon, we learned a lot of new skills in Kotlin, HTTP requests, setting up a server, and more.
## How We Built It
We started with an Android application with access to the user’s phone camera. This app was created using Kotlin and XML. Android’s ViewModel Architecture and the X library were used. This application uses an HTTP PUT request to send the image to a Heroku server through a Flask web application. This server then leverages machine learning and food recognition from the Google Cloud Vision API to split the image up into multiple regions of interest. These images were then fed into the API again, to classify the objects in them into specific ingredients, while circumventing the API’s imposed query limits for ingredient recognition. We split up the image by shelves using an algorithm to detect more objects. A list of acceptable ingredients was obtained. Each ingredient was mapped to a numerical ID and a set of recipes for that ingredient was obtained. We then algorithmically intersected each set of recipes to get a final set of recipes that used the majority of the ingredients. These were then passed back to the phone through HTTP.
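A trimmed-down sketch of the server-side idea is below. The Vision call, the ingredient whitelist, and the recipe lookup table are simplified stand-ins for the real shelf-splitting and Food.com flow:
```python
# Trimmed-down stand-in for the server side described above: label one shelf crop
# with the Vision API, keep only labels recognised as ingredients, then intersect
# the per-ingredient recipe sets. The whitelist and recipe table are toy data.
from google.cloud import vision

ACCEPTABLE_INGREDIENTS = {"salsa", "hot sauce", "orange", "black beans"}
RECIPES_BY_INGREDIENT = {              # ingredient -> recipe ids (from Food.com data)
    "salsa": {11, 42, 97},
    "orange": {42, 97, 310},
    "hot sauce": {42, 55},
}

client = vision.ImageAnnotatorClient()

def ingredients_in(shelf_bytes):
    response = client.label_detection(image=vision.Image(content=shelf_bytes))
    labels = {label.description.lower() for label in response.label_annotations}
    return labels & ACCEPTABLE_INGREDIENTS

def suggest(found_ingredients):
    sets = [RECIPES_BY_INGREDIENT[i] for i in found_ingredients if i in RECIPES_BY_INGREDIENT]
    if not sets:
        return set()
    common = set.intersection(*sets)
    return common or set.union(*sets)   # fall back if nothing uses every ingredient

print(suggest({"salsa", "orange", "hot sauce"}))   # {42}
```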
## What We Are Proud Of
We were able to gain skills in Kotlin, HTTP requests, servers, and using APIs. The moment that made us most proud was when we fed in an image of a fridge that had only salsa, hot sauce, and fruit, and the app provided us with three tasty-looking recipes, including a Caribbean black bean and fruit salad that uses oranges and salsa.
## Challenges You Faced
Our largest challenge came from creating a server and integrating the API endpoints for our Android app. We also had a challenge with the Google Vision API since it is only able to detect 10 objects at a time. To move past this limitation, we found a way to segment the fridge into its individual shelves. Each of these shelves was analysed one at a time, often increasing the number of potential ingredients by a factor of 4-5x. Configuring the Heroku server was also difficult.
## What's Next
We have big plans for our app in the future. Some next steps we would like to implement is allowing users to include their dietary restriction and food preferences so we can better match the recommendation to the user. We also want to make this app available on smart fridges, currently fridges, like Samsung’s, have a function where the user inputs the expiry date of food in their fridge. This would allow us to make recommendations based on the soonest expiring foods. | ## Inspiration
Looking around you in your day-to-day life, you see so many people eating so much food. Trust me, this is going somewhere. All that stuff we put in our bodies, what is it? What are all those ingredients that seem more like chemicals that belong in nuclear missiles than in your 3-year-old cousin's Coke? Answering those questions is what we set out to accomplish with this project. But answering a question doesn't mean anything if you don't answer it well, meaning your answer raises as many or more questions than it answers. We wanted everyone, from pre-teens to senior citizens, to be able to understand it. So, in summary, we wanted to give all these lazy couch potatoes (us included) an easy, efficient, and most importantly, comprehensible method of knowing what it is exactly that we're consuming by the metric ton on a daily basis.
## What it does
Our code takes input in the form of either text or an image and uses it as input for an API, from which we extract our final output using specific prompts. Some of our outputs are the nutritional values, a nutritional summary, the amount of exercise required to burn off the calories gained from the meal, its recipe, and how healthy it is in comparison to other foods.
## How we built it
Using Flask, HTML, CSS, and Python for the backend.
## Challenges we ran into
We are all first-timers, so none of us had any idea how the whole thing worked. Individually, we all faced our fair share of struggles with our food, our sleep schedules, and our timidness, which led to miscommunication.
## Accomplishments that we're proud of
Making it through the week and keeping our love of tech intact. Other than that, we really did meet some amazing people and got to know so many cool folks. As a collective group, we really are proud of our teamwork and our ability to compromise, work with each other, and build on each other's ideas. For example, we all started off with different ideas and goals for the hackathon, but we managed to find a project we all liked and found it in ourselves to bring it to life.
## What we learned
How hackathons work and what they are. We also learned so much more about building projects within a small team: what it is like, and what should be done when the scope of what to build is so wide.
## What's next for NutriScan
-Working ML
-Use of camera as an input to the program
-Better UI
-Responsive
-Release | partial |
## Inspiration
Our team was inspired to develop Argumate by YouTube channels such as Jubilee, which bring together people with diametrically opposing views and give them a platform to argue their side. With Argumate, we seek to provide a similar platform online that, contrary to most social media nowadays, brings together people with different perspectives instead of leaving them stuck in echo chambers of their own beliefs and values.
## What it does
Argumate has 2 different debate options: a private 1-on-1 section (for every user), and a public debate section (user must opt-in). Every week Argumate will have 3 different debate topics for users to participate in (both privately and publicly) and choose their initial side.
* Then, for the private 1-on-1's, we will match users who chose different sides of the same debate topic and have them argue their points on their own to try to convince their "Argumate" to change their mind. For successfully changing your "Argumate's" mind, the user will gain "braincells" that, like Reddit "karma", serve no real purpose.
* However, for the public debates, users must opt in and get selected to become their side's Public Representative. For each debate topic, there will be 2 public debates, for a total of 6 public debates (6 Public Representatives from each side) every week (1 public debate on each of 6 days of the week). From the viewers of these debates, Argumate will take an equal number of users from each side to vote for which side won that debate, and at the end of the 2nd public debate for each debate topic, we will reveal which side has won by the total # of votes.
* As an extra addition, there is a public chat that all users can participate in, with each user's background color shown in their side's color. There will be 3 different chats, one for each debate topic.
## How we built it
We created a React web app that handles data through Firebase cloud storage. Sign-in and OAuth were handled by Firebase. We created individual chat rooms to manage messaging with others, ways to view chats, and a way to automatically assign users to a side.
## Challenges we ran into
We were originally going to implement a video call feature, but we ran into lots of issues regarding data streaming and formatting the user videos.
## Accomplishments that we're proud of
Real-time chat rooms and design of the front-end.
## What we learned
WebRTC and Socket.IO, Firebase and MUI
## What's next for Argumate
We want to implement live video broadcasting for the public debates and possibly for private 1-on-1's if both parties agree. Also our team was thinking of adding reactions that the viewers of the public debate can send on-screen. | ## Inspiration
Though social media has made it easier and faster than ever to connect people, the U.S. is now more divided than ever – politically, economically, and socially. While social media is likely only one of many contributing factors to this, our goal for this project was to try to reimagine how people traditionally interact on social media and design it in a way that can better limit potential polarization or “echo chamber” effects. We challenged ourselves to think of a way to promote more balanced civil discourse among users, while still keeping them engaged and informed. This led our team to create a social media app called Civil Discord for the social interconnectivity challenge.
## What it does
Civil Discord aims to encourage open, respectful discussions among users with a variety of opinions on topics of their choosing. In addition to providing a public discussion board and private chat functionality, our app differs from traditional social media in that it offers users the option to "friend" and chat with not only others who share similar opinions, but also users who hold opposing viewpoints. The app applies NLP and sentiment analysis to suggest new connections who are similarly engaged on the app and hold similar and/or different perspectives on topics they are mutually interested in.
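To make that concrete, here is a toy-sized sketch of the matching idea. The three users and their messages are made up, and the real app also folds in Google Cloud Natural Language and TextBlob scores alongside VADER:
```python
# Toy-sized sketch of the matching idea: score each user's messages with VADER,
# build a per-topic sentiment vector, and use nearest neighbours to surface both
# like-minded users and (by flipping the sign) users with opposing views.
# The users below are made up; the real app also uses Google Cloud NL and TextBlob.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

vader = SentimentIntensityAnalyzer()

def topic_vector(messages_by_topic):
    """Average compound sentiment per debate topic -> one vector per user."""
    return np.array([np.mean([vader.polarity_scores(m)["compound"] for m in msgs])
                     for msgs in messages_by_topic])

users = {
    "ana":  topic_vector([["I love the new transit plan"], ["Remote work is great"]]),
    "ben":  topic_vector([["The transit plan is a disaster"], ["Remote work is great"]]),
    "cori": topic_vector([["The transit plan seems wonderful"], ["I enjoy working remotely"]]),
}
names, matrix = list(users), np.vstack(list(users.values()))
knn = NearestNeighbors(n_neighbors=len(users)).fit(matrix)

def best_match(vector, exclude):
    _, idx = knn.kneighbors([vector])
    return next(names[i] for i in idx[0] if names[i] != exclude)

print("like-minded with ana:", best_match(users["ana"], "ana"))   # cori
print("opposing ana:", best_match(-users["ana"], "ana"))          # ben
```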
## Ethical considerations
To promote a more open and honest discussion between users, we’ve set up the chat interaction so that it is 1-on-1 and users are anonymous to each other. Given the polarizing nature of politics today, our hope is that this anonymity protects users and ensures they aren’t attacked for their ideas. However, while users can feel comfortable sharing their opinions, our chat functionality prevents users from sending messages containing profanity and vulgarity to ensure discussions remain respectful.
## Challenges we ran into
On the front end, though several members of our group were familiar with Flutter, we found the design difficult to execute well in a short time period. On the back end, we had to learn how to work backwards and recommend similar users based on several scores given by other algorithms run on the users' messages. We also had to find a way to recommend similar users in real time, which was difficult since our original models were trained on a dataset of arbitrary users that would not be on the platform.
## Accomplishments that we're proud of
We’re proud that we were able to pull together a working app in a short amount of time that can potentially positively influence a relevant issue all our team members care about. A few members of our team are also currently involved in school research related to machine learning models so building this app was valuable and relevant practical experience in applying what they’ve learned so far in their research.
## What we learned
Hacking this app required many new technologies and skillsets for all our team members. Even for parts of the project that implemented technologies that we already had some familiarity with, this project stretched us to go much further beyond what we already knew, whether it involved creating a real-time front-end posting board or chat application or implementing practical machine learning models to analyze user inputs and make recommendations.
## How we built it
* Frontend: Flutter web
* Backend: Firestore/Python, Google Cloud Natural Language API, VADER, TextBlob, K-nearest-neighbors algorithm, and a reinforcement learning model | ## Inspiration
It feels like elections are taking up an increasing portion of our mindshare, and with every election cycle it only gets more insane. Constantly flooded by news, by opinions – it's like we never get a break. And yet, paradoxically, voters feel increasingly less connected to their politicians. They hear soliloquies about the terrible deeds of the politician's opponent, but rarely about how their policies will affect them personally. That changes today.
It's time we come back to a democracy which lives by the words *E pluribus unum* – from many, one. Citizens should understand exactly how politicians' policies will affect them and their neighbours, and from that, a general consensus may form. And campaigners should be given the tools to help them do so.
Our team's been deeply involved in community and politics for years – which is why we care so much about a healthy democracy. Between the three of us over the years, we've spoken with campaign / PR managers at 70+ campaigns, PACs, and lobbies, and 40+ ad teams – all in a bid to understand how technology can help propel a democracy forward.
## What it does
Rally helps politicians meet voters where they are – in a figurative and digital sense. Politicians and campaign teams can use our platform to send geographically relevant campaign advertisements to voters – tailored towards issues they care deeply about. We thoroughly analyze the campaigner's policies to give a faithful representation of their ideas through AI-generated advertisements – using their likeness – and cross-correlate them with issues the voter is likely to care about and want to learn more about. We avoid the uncanny valley with our content, we maintain compliance, and we produce content that drives voter engagement.
## How we built it
Rally is a web app powered by a complex multi-agent chain system, which uses natural language to understand both current local events and campaign policy in real-time, and advanced text-to-speech and video-to-video lip sync/facetune models to generate a faithful personalised campaign ad, with the politician speaking to voters about issues they truly care about.
* We use Firecrawl and the Perplexity API to scrape news and economic data about the town a voter is from, and to understand a politician's policies, and store GPT-curated insights on a Supabase database.
* Then, we use GPT4o-mini to parse through all that data and generate an ad speech, faithful to the politician's style, which'll cover issues relevant to the voter.
* This speech is sent to Cartesia.ai's excellent Sonic text-to-speech model which has already been trained on short clips of the politician's voice.
* Simultaneously, GPT4o-mini decides which parts of the ad should have B-roll/stock footage displayed, and about what.
* We use this to query Pexels for stock footage to be used during the ad.
* Once the voice narration has been generated, we send it to SyncLabs for lipsyncing over existing ad/interview footage of the politician.
* Finally, we overlay the B-roll footage (at the previously decided timestamps) on the lip-synced videos to create a convincing campaign advertisement (a rough sketch of this step follows the list).
* All of this is packaged in a beautiful and modern UI built using NextJS and Tailwind.
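Here is the rough shape of that final overlay step. The file names and the 5 s to 9 s window are placeholders standing in for the timestamps chosen by the LLM step, so treat this as an illustration rather than our exact command:
```python
# Shape of the final assembly step: overlay a (scaled) stock clip on top of the
# lip-synced narration video during a chosen window, keeping the original audio.
# File names and the 5s-9s window are placeholders for the LLM-chosen timestamps.
import subprocess

def overlay_broll(base, broll, start, end, out="ad_final.mp4"):
    filter_graph = (
        f"[1:v]scale=1280:720,setpts=PTS-STARTPTS+{start}/TB[br];"
        f"[0:v][br]overlay=eof_action=pass:enable='between(t,{start},{end})'[v]"
    )
    subprocess.run([
        "ffmpeg", "-y", "-i", base, "-i", broll,
        "-filter_complex", filter_graph,
        "-map", "[v]", "-map", "0:a?", "-c:a", "copy", out,
    ], check=True)

overlay_broll("lipsynced_speech.mp4", "stock_infrastructure.mp4", start=5, end=9)
```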
And all of this is done in the space of just a few minutes! So whether the voter comes from New York City or Fairhope, Alabama, you can be sure that they'll receive a faithful campaign ad which puts the politician's best foot forward while helping the voter understand how their policies might affect them.
## Wait a minute, is this even legal?
***YES***. Currently, bills are being passed in states around the country limiting the use of deepfakes in political campaigns – and for good reason. The potential damage is obvious. However, in every single proposed bill, a campaigner is absolutely allowed to create deepfakes of themselves for their own gains. Indeed, while we absolutely support all regulation on nefarious uses of deepfakes, we also deeply believe in the technology's immense potential for good. We've even built a database of all the bills that are in debate or have been enacted regarding AI political advertisement as part of this project; feel free to take a look!
Voter apathy is an epidemic, which slowly eats away at a democracy – Rally believes that personalised campaign messaging is one of the most promising avenues to battle against that.
## Challenges we ran into
The surface area for errors increases hugely with the number of services we integrate with. Undocumented or badly documented APIs were huge time sinks – some of these services are so new that questions haven't been asked about them online. Another very large time sink was the video manipulation through FFMPEG. It's an extremely powerful tool, and doing simple things is very easy, but doing more complicated tasks ended up being very difficult to get right.
However, the biggest challenge by far was creating advertisements that maintained state-specific compliance, meaning they had different rules and regulations for each state and had to avoid negativity, misinformation, etc. This can be very hard as LLM outputs are often very subjective and hard to evaluate deterministically. We combatted this by building a chain of LLMs informed by data from the NCSL, OpenFEC, and other reliable sources to ensure all the information that we used in the process was thoroughly sourced, leading to better outcomes for content generation. We also used validator agents to verify results from particularly critical parts of the content generation flow before proceeding.
## Accomplishments that we're proud of
We're deeply satisfied with the fact that we were able to get the end-to-end voter-to-ad pipeline going while also creating a beautiful web app. It seemed daunting at first, with so many moving pieces, but through intelligent separation of tasks we were able to get through everything.
## What we learned
Premature optimization is the killer of speed – we initially tried to do some smart splicing of the base video data so that we wouldn't lip sync parts of the video that were going to be covered by the B-roll (and thus do the lip syncing generation in parallel) – but doing all the splicing and recombining ended up taking almost as long as simply passing the whole video to the lip syncing model, and with many engineer-hours lost. It's much better to do things that don't scale in these environments (and most, one could argue).
## What's next for Rally
Customized campaign material is a massive market – scraping data online has become almost trivial thanks to tools like Firecrawl – so a more holistic solution for helping campaigns/non-profits/etc. ideate, test, and craft campaign material (with AI solutions already part of it) is a huge opportunity. | losing |
## Inspiration
With the recent coronavirus outbreak, we noticed a major issue with charitable donations of equipment/supplies ending up in the wrong hands or getting lost in transit. How can donors know their support is really reaching those in need? At the same time, those in need would benefit from a way of customizing what they require, almost like a purchasing experience.
With these two needs in mind, we created Promise. A charity donation platform to ensure the right aid is provided to the right place.
## What it does
Promise has two components. First, a donation request view for submitting aid requests and confirming aid was received. Second, a donor world map view of where donation requests are coming from.
The request view allows aid workers, doctors, and responders to specify the quantity/type of aid required (for our demo we've chosen quantities of syringes, medicine, and face masks as examples) after verifying their identity by taking a picture with their government IDs. We verify identities through Microsoft Azure's face verification service. Once a request is submitted, it and all previous requests will be visible in the donor world view.
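The identity check itself boils down to two Face API calls: detect a faceId in the selfie and in the photo ID, then ask the verify endpoint whether they belong to the same person. The sketch below is illustrative only, with placeholder endpoint, key, and file names:
```python
# Rough shape of the identity check described above. Endpoint, key, and file names
# are placeholders, not our real configuration.
import requests

ENDPOINT = "https://canadacentral.api.cognitive.microsoft.com/face/v1.0"   # placeholder
HEADERS = {"Ocp-Apim-Subscription-Key": "YOUR_FACE_KEY",
           "Content-Type": "application/octet-stream"}

def face_id(image_path):
    with open(image_path, "rb") as f:
        faces = requests.post(f"{ENDPOINT}/detect", headers=HEADERS, data=f.read()).json()
    return faces[0]["faceId"] if faces else None

def same_person(selfie, id_photo):
    body = {"faceId1": face_id(selfie), "faceId2": face_id(id_photo)}
    result = requests.post(f"{ENDPOINT}/verify",
                           headers={**HEADERS, "Content-Type": "application/json"},
                           json=body).json()
    return result["isIdentical"], result["confidence"]

print(same_person("selfie.jpg", "government_id.jpg"))
```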
The donor world view provides a Google Map overlay for potential donors to look at all current donation requests as pins. Donors can select these pins, see their details and make the choice to make a donation. Upon donating, donors are presented with a QR code that would be applied to the aid's packaging.
Once the aid has been received by the requesting individual(s), they would scan the QR code, confirm it has been received or notify the donor there's been an issue with the item/loss of items. The comments of the recipient are visible to the donor on the same pin.
## How we built it
Frontend: React
Backend: Flask, Node
DB: MySQL, Firebase Realtime DB
Hosting: Firebase, Oracle Cloud
Storage: Firebase
API: Google Maps, Azure Face Detection, Azure Face Verification
Design: Figma, Sketch
## Challenges we ran into
Some of the APIs we used had outdated documentation.
Finding a good way of ensuring correct information flow (so the correct request was referred to each time) for both the donor and recipient.
## Accomplishments that we're proud of
We utilized a good number of new technologies and created a solid project in the end which we believe has great potential for good.
We've built a platform that is design-led and that we believe works well in practice, both for end users and for the overall experience.
## What we learned
Utilizing React states in a way that benefits a multi-page web app
Building facial recognition authentication with MS Azure
## What's next for Promise
Improve the detail of the information provided by a recipient on QR scan. Give donors a statistical view of how much aid is being received so both donors and recipients can take better action going forward.
Add location-based package tracking similar to Amazon/Arrival by Shopify for transparency | ## Inspiration
One of our own member's worry about his puppy inspired us to create this project, so he could keep an eye on him.
## What it does
Our app essentially monitors your dog(s) and determines their mood/emotional state based on their sound and body language, and optionally notifies the owner about any changes in it. Specifically, if the dog becomes agitated for any reasons, manages to escape wherever they are supposed to be, or if they fall asleep or wake up.
## How we built it
We built the behavioral detection using OpenCV and TensorFlow with a publicly available neural network. The notification system utilizes the Twilio API to notify owners via SMS. The app's user interface was created using JavaScript, CSS, and HTML.
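The notification side is only a few lines. In this sketch the detector's output is assumed to arrive as a simple state string, and the Twilio credentials and phone numbers are placeholders:
```python
# Sketch of the notification side only: the `state` values are assumed to come from
# the OpenCV/TensorFlow detector, and the Twilio credentials / numbers are placeholders.
from twilio.rest import Client

client = Client("ACXXXXXXXXXXXXXXXX", "your_auth_token")     # placeholder credentials

MESSAGES = {
    "agitated": "PupTrack: your dog seems agitated!",
    "escaped":  "PupTrack: your dog has left its area!",
    "asleep":   "PupTrack: your dog just fell asleep.",
    "awake":    "PupTrack: your dog just woke up.",
}

def notify(owner_number, state):
    if state in MESSAGES:
        client.messages.create(to=owner_number,
                               from_="+15550001111",          # placeholder Twilio number
                               body=MESSAGES[state])

notify("+15551234567", "escaped")
```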
## Challenges we ran into
We found it difficult to identify the emotional state of the dog using only a camera feed. Designing and writing a clean and efficient UI that worked with both desktop and mobile platforms was also challenging.
## Accomplishments that we're proud of
Our largest achievement was using computer vision to determine whether the dog was agitated, sleeping, or had just escaped. We are also very proud of our UI design.
## What we learned
We learned some more about utilizing computer vision and neural networks.
## What's next for PupTrack
KittyTrack, possibly
Improving the detection, so it is more useful for our team member | ## Inspiration
**Addressing a Dual-Faceted Issue in Philanthropy to Support UN Sustainability Goals**
Charity Chain was inspired by the pervasive issues of mistrust and inefficiency in the philanthropic sector - a problem affecting both donors and charities. By recognizing this gap, we aimed to create a solution that not only empowers individual giving but also aligns with the broader vision of the United Nations Sustainable Development Goals, particularly Goal 17: Revitalize the global partnership for sustainable development.
## What it does
**Charity Chain: A Platform for Transparent and Effective Giving**
For Donors: Provides a transparent, traceable path for their donations, ensuring that their contributions are used effectively and for the intended purposes.
For Charities: Offers a platform to demonstrate their commitment to transparency and effectiveness, helping them build trust and secure more funding.
## How we built it
**Tech Stack**: Utilized React for a dynamic front-end, Node.js/Express.js for a robust server-side solution, and blockchain technology for secure, transparent record-keeping.
**Collaborative Design**: Engaged with charity organizations, donors, and technology experts to create a user-friendly platform that meets the needs of all stakeholders.
## Challenges we ran into
**Integrating blockchain**: As we were all new to blockchain and web3, we didn't have a basis of what to expect or where to start.
**Onramping fiat currency**: Something we wished to do was for donors to be able to donate using their government currency (CAD, USD, etc.) and have it converted to a stable cryptocurrency, such as USDT. This way, any donation that was made would be tracked and made visible on the blockchain, and donors would be able to see where their money was being used. However, to get access to API keys we would have to apply, regardless of the API we were using (thirdfy, changelly, ramp network, kraken, etc.), which would take several business days for approval.
## Accomplishments that we're proud of
Accomplishing the various aspects separately
- Learned what a blockchain is
- Learned how to create our own cryptocurrencies
- Learned React and Tailwind CSS
- Learned ethers.js with Solidity to connect the frontend to web3
## What we learned
**Blockchains**, and how we could use them to create our own cryptocurrency using Solidity and the Remix IDE.
**Ethers.js**: We learned how to use ethers.js in order to connect our frontend to web3. This allowed us to incorporate Smart Contracts into our front-end
**How to query live transactions** from a front-end interface to a backend blockchain.
## What's next for Charity Chain
**Onramping fiat currency**: This allows for a simpler, more inclusive donor side that wouldn't require their knowledge of blockchain and could donate simply with paypal or a banking card.
**Purchases through CharityChain**: Hosting purchases on behalf of hosted charities (similar to how Rakuten does) | winning |
## Inspiration
Our inspiration for Sustain-ify came from observing the current state of our world. Despite incredible advancements in technology, science, and industry, we've created a world that's becoming increasingly unsustainable. This has a domino effect, not just on the environment, but on our own health and well-being as well. With rising environmental issues and declining mental and physical health, we asked ourselves: *How can we be part of the solution?*
We believe that the key to solving these problems lies within us—humans. If we have the power to push the world to its current state, we also have the potential to change it for the better. This belief, coupled with the idea that *small, meaningful steps taken together can lead to a big impact*, became the core principle of Sustain-ify.
## What it does
Sustain-ify is an app designed to empower people to make sustainable choices for the Earth and for themselves. It provides users with the tools to make sustainable choices in everyday life. The app focuses on dual sustainability—a future where both the Earth and its people thrive.
Key features include:
1. **Eco Shopping Assistant**: Guides users through eco-friendly shopping.
2. **DIY Assistant**: Offers DIY sustainability projects.
3. **Health Reports**: Helps users maintain a healthy lifestyle.
## How we built it
Sustain-ify was built with a range of technologies and frameworks to deliver a smooth, scalable, and user-friendly experience.
Technical Architecture:
Frontend Technologies:
* Frameworks: Flutter (Dart), Streamlit (Python) were used for the graphical user interface (GUI/front-end).
* Future services: Integration with third-party services such as Twilio, Lamini, and Firebase for added functionality like messaging and real-time updates.
Backend & Web Services:
* Node.js & Express.js: For the backend API services.
* FastAPI: RESTful API pipeline used for HTTP requests and responses.
* Appwrite: Backend server for authentication and user management.
* MongoDB Atlas: For storing pre-processed data chunks into a vector index.
Data Processing & AI Models:
* ScrapeGraph.AI: LLM-powered web scraping framework used to extract structured data from online resources.
* Langchain & LlamaIndex: Used to preprocess scraped data and split it into chunks for efficient vector storage.
* BGE-Large Embedding Model: From Hugging Face, used for embedding textual content.
* Neo4j: For building a knowledge graph to improve data retrieval and structuring.
* Gemini, GPT-4o & Groq: Large language models used for inference, running on LPUs (Language Processing Units) for a sustainable inference mechanism.
Additional Services:
* Serper: Provides real-time data crawling and extraction from the internet, powered by LLMs that generate queries based on the user's input.
* Firebase: Used for storing and analyzing user-uploaded medical reports to generate personalized recommendations.
Authentication & Security:
* JWT (JSON Web Tokens): For secure data transactions and user authentication.
## Challenges we ran into
Throughout the development process, we faced several challenges:
1. Ensuring data privacy and security during real-time data processing.
2. Handling large amounts of scraped data from various online sources and organizing it for efficient querying and analysis.
3. Scaling the inference mechanisms using LPUs to provide sustainable solutions without compromising performance.
## Accomplishments that we're proud of
We're proud of creating an app that:
1. Addresses both environmental sustainability and personal well-being.
2. Empowers people to make sustainable choices in their everyday lives.
3. Provides practical tools like the Eco Shopping Assistant, DIY Assistant, and Health Reports.
4. Has the potential to create a big impact through small, collective actions.
## What we learned
Through this project, we learned that:
1. Sustainability isn't just about making eco-friendly choices; it's about making *sustainable lifestyle* choices too, focusing on personal health and well-being.
2. Small, meaningful steps taken together can lead to a big impact.
3. People have the power to change the world for the better, just as they have the power to impact it negatively.
## What's next for Sustain-ify
Moving forward, we aim to:
1. Continue developing and refining our features to better serve our users.
2. Expand our user base to increase our collective impact.
3. Potentially add more features that address other aspects of sustainability.
4. Work towards our vision of creating a sustainable future where both humans and the planet can flourish.
Together, we believe we can create a sustainable future where both humans and the planet can thrive. That's the ongoing mission of Sustain-ify, and we're excited to continue bringing this vision to life! | Mafia with LLMs! Built with Python/React/Typescript/Websockets and Cartesia/Groq/Gemini. | ## Inspiration
Learning about the environmental impact of the retail industry led us to wonder what companies have aimed for in terms of sustainability goals. The textile industry is notorious for its carbon and water footprints, with statistics widely available. How does a company promote sustainability? Do people know about and support these movements?
With many movements by certain retail companies to have more sustainable clothes and supply-chain processes, we wanted people to know and support these sustainability movements, all through an interactive and fun UI :)
## What it does
We built an application to help users select suitable outfit pairings that meet environmental standards. The user is prompted to upload a picture of a piece of clothing they currently own. Based on this data, we generate potential outfit pairings from a database of environmentally friendly retailers. Users are shown prices, means of purchase, reasons the company is sustainable, as well as an environmental rating.
## How we built it
**Backend**: Google Vision API, MySQL, AWS, Python with Heroku and Flask deployment
Using the Google Vision API, we learn the features (labels, company, type of clothing and colour) from pictures of clothes. With these features, we use Python to interact with our MySQL database of clothes to select both a recommended outfit and additional recommended clothes for other potential outfit combinations.
To generate more accurate label results, we additionally perform image segmentation in Keras (with a TensorFlow backend) to crop out the background, allowing the Google Vision API to extract more accurate features.
**Frontend**: JavaScript, React, Firebase
We built the front-end with React, using Firebase to handle user authentications and act as a content delivery network.
## Challenges we ran into
The most challenging part of the project was learning to use the Google Vision API, and deploying the API on Heroku with all its dependencies.
## Accomplishments that we're proud of
Intuitive and clean UI for users that allows ease of mix and matching while raising awareness of sustainability within the retail industry, and of course, the integration and deployment of our technology stack.
## What we learned
After viewing some misfit outfit recommendations, such as a jacket with shorts, we realized that had we added a "seasonal" label, and furthermore a "dress code" label (perhaps by integrating transfer learning to label the images), we could have given better outfit recommendations. This made us realize the importance of brainstorming and planning.
## What's next for Fabrical
Deploy more sophisticated clothes matching algorithms, saving the user's outfits into a closet, in addition to recording the user's age, and their preferences as they like / dislike new outfit combinations; incorporate larger database, more metrics, and integrate the machine learning matching / cropping techniques. | winning |
## Inspiration
Our inspiration for MakeCents originated from our shared passion for both finance and software. We understand how good we have it here at Western and the availability of information, but it's important to look at the world around you from time to time and realize what we take for granted. We understand that, for a variety of reasons, millennials in general are not considered financially literate, and when we saw that only 28% of this demographic is considered financially literate, we knew this was a statistic we wanted to change.
This is where MakeCents comes in. With the objective of making financial literacy available for all, we help users learn hundreds of financial terms and realize the path to financial literacy actually isn't as long as it seems.
## What it does
MakeCents is a Chrome extension for anyone who is looking to become more financially literate. For readers, especially those who are curious about business and finance, it may be overwhelming to see a variety of complex jargon and terminology while reading a business article. To save the tedious hassle of constantly Googling these terms, MakeCents will web-scrape the page you are on and create a popup that lists all the financial terms on the page along with their corresponding definitions. To make it more user friendly, these words are highlighted on the page, and the user can hover over them to see their definitions as they read along. We also built a website that has all these terms listed for a user to search through as well.
As new hackers, we were amazed and curious about the functionality of Dasha.ai. To make MakeCents more accessible, we envisioned users who may lack data, WiFi and/or internet, but would want to learn more about financial terminology. Thus, we decided to create a Dasha.ai component where a user can call Dasha and ask her to define a term over the phone to enhance the accessibility of our product.
## How we built it
**Chrome extension:**
The Chrome extension was built entirely using JavaScript, CSS, HTML and Node. We started off with a very basic Chrome extension consisting of HTML and a manifest that said hello when selected, and slowly worked our way up. The first functionality we added was scraping the current page for the key terms we were looking for. We furthered this by manipulating the HTML to add inline spans that highlight the desired terms. With those found terms in mind, we then added the list functionality when clicking on the extension in the top right. The list displayed shows all key words to look for on the page. Lastly, we added the hover feature, which quickly shows you the definition of any highlighted word from our dictionary to truly make your reading experience seamless.
**Website:**
The website was built using an initial React template, then adding some more components and functionality, such as the cards with definitions on it and dynamic searching throughout the glossary.
**Dasha.ai:**
Using the tutorials and sample projects provided on Dasha's website, we were able to configure Dasha to suit the needs and requirements of our project as we envisioned it. Users can place a call with Dasha to receive information about any defined finance terms. We populated Dasha's vocabulary reference file to include a large variety of financial terms and their definitions. Dasha knows what term the user is requesting by referencing the synonymous words and phrases for all these vocabulary entries, and assuming the user asked for the associated term. The functionality and applicability of Dasha in this instance comes to any users who don't want to use internet/data, have trouble navigating the internet, or prefer a humanistic, old-fashioned way of getting information.
## Challenges we ran into
**Chrome extension:**
The biggest issue by far when building the extension was having to find creative ways to solve just about every obstacle in our way. The first problem we had, not being able to find the text on the page, required the use of XPathResult, something we were all completely unfamiliar with, to find all text that matches the keywords. There were a lot of resources online for searching for one word on a page but almost none for finding terms that could include spaces. The biggest problem we had when developing the Chrome extension was talking between the context.js file, which interacts with the current webpage, and popup.js, which exists in a completely different scope. Through communication and teamwork, we were able to come up with solutions to nearly all of our problems and come out with a product we are proud to call ours.
**Web development:**
Deploying the website was our biggest roadblock towards the end of the hackathon. Unfortunately, we were unable to fully deploy the website (although we got some cool Domain.com domains to use :)), but we hope to deploy it later on.
**Dasha.ai:**
Being new to AI in general, taking on and training Dasha was definitely a learning experience. We definitely struggled but learned a lot along the way, and we feel more confident now if presented with the opportunity to work with AI again. The biggest challenge we faced, however, despite our many attempts, was trying to host Dasha in a way where users could actually call the program through an assigned phone number. Unfortunately, due to time constraints, we settled for the compromise of the user being able to request a call from Dasha.
## Accomplishments that we're proud of
The accomplishment that we are most proud of is finishing our first hackathon. It was our first time working as a team, most of our first time ever at a hackathon, and one person's very first time coding. We all picked up so many new skills this weekend, like how to train an AI, launch a Google Chrome extension, or even write someone's first program. We went in with no expectations and now have a product on the Chrome Web Store, which we all think is pretty cool.
## What we learned
During this hackathon, which was the first for all but one of us, our team learned so much, not only about coding and software, but also in areas like teamwork, productivity and working under pressure on something new. We made it a personal objective to complete our project while utilizing tools we had never worked with before.
Because of this, we all learned many new things. Members who have never worked with AI before now know how a fully functional and coherent conversational AI works enough to manipulate it. Members who have never worked with chrome extensions and minimal web development experience, now know how to create a sophisticated, and real-world-useable chrome extension. Members who have never even coded before now know infinitely more and may have even discovered an undiscovered passion. We all learned how to be better learners, and did so in a new, exciting and prideful way.
## What's next for MakeCents
The functionality of MakeCents isn't just limited to financial terms. We envision a product that can be used to define a variety of terms, such as medical, scientific, political or technological terminology. The accessibility of information is a core value we believe in as a group, and hope MakeCents can benefit any reader, student, or person looking to become more literate in a field they are passionate and curious about. | ## Inspiration 🚀
Every student knows that feeling of utter panic when you're in your 8AM lecture, severely sleep deprived and blanking out...what did the prof just say? I guess we'll never know. Based on our *highly researched* data, we concluded quickly that students nowadays are some of the most easily distracted students to have existed in student kind (in fact, most students didn't even finish our survey). At Hack the North, inspired by some of the universal problems that students face, we wanted to develop a tool to revolutionize the way we learn and take notes.
Enter *KeyNote*, the personalized notes device leveraged by AI– made for students, by students.
## What it does ⚙️
KeyNote is a powerful note-taking device paired with a web app that leverages AI to expand the lesson beyond the lecture. Think of it as the best friend who'll always share their notes and answer the questions that you have from class. All users need to do is record the lecture through our device, controlling it with the "Start/Stop" buttons. KeyNote will then process the transcript and provide a summary of key concepts taken from the lecture, as well as the option to ask lecture-specific questions through our chat feature, with all previous transcripts available through our curated calendar!
KeyNote allows students not only to review and ask about their lecture, but also to make their own annotations and edits on top of the key concepts, just as they would on paper. There are additional features allowing users to add, delete and modify the notes, giving students power over their own learning.
## How we built it 🛠️
KeyNote, our innovative web app, was constructed using TypeScript and Next.js for the front-end, while Python with FastAPI powered the backend. This blend ensured a reliable and scalable platform. Hardware integration was achieved through Python, facilitating seamless connections with our devices. Google Cloud Services played a pivotal role, enabling the retrieval and transmission of recordings. Assembly AI was leveraged for transcription tasks, and Cohere handled extensive data processing. This synergy of technologies has made KeyNote a game-changing solution for our users.
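To make that flow concrete, here is a minimal sketch of the upload-and-transcribe step; this is our illustration rather than KeyNote's actual code, the `/lectures` route name is an assumption, and the Cohere summarization step is left as a stub.

```python
# Minimal sketch of the upload -> transcribe flow (illustrative, not the project's code).
# Assumes the AssemblyAI Python SDK and a hypothetical summarize_with_cohere() helper.
import assemblyai as aai
from fastapi import FastAPI, File, UploadFile

aai.settings.api_key = "YOUR_ASSEMBLYAI_KEY"
app = FastAPI()


def summarize_with_cohere(text: str) -> str:
    # Placeholder: the transcript is batched and sent to Cohere for key concepts.
    raise NotImplementedError


@app.post("/lectures")
async def upload_lecture(file: UploadFile = File(...)):
    # Save the recording sent from the device, then transcribe it.
    path = f"/tmp/{file.filename}"
    with open(path, "wb") as f:
        f.write(await file.read())
    transcript = aai.Transcriber().transcribe(path)
    return {"transcript": transcript.text,
            "summary": summarize_with_cohere(transcript.text)}
```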
## Challenges we ran into 🔥
While developing KeyNote, we ran into several challenges over the weekend. Given our idea, we were incredibly excited to integrate Cohere's different services within our platform. In the context of Cohere, most of the features we used were experimental services, which sometimes did not have the support to maintain context across larger sets of data. In turn, we had to do batch-wise processing in order to ensure that we had full data coverage.
Another challenge was integrating the front end and back end. All of our components were programmed with extensibility in mind, meaning we created a dynamic interface that constantly interacts with the backend in order to pass in data. Figuring out how to program our calendar feature was a challenge given that we were not familiar with our tech stack. On a similar note, we had to learn how routing works in Next.js, a specific challenge being learning dynamic routing to create unique endpoints for each note.
## Accomplishments that we're proud of 💖
We're incredibly proud that we were able to build a functioning device and platform that we most certainly find useful and relevant to our day-to-day lives. Specifically, it was incredibly exciting exploring brand-new, cutting-edge technologies that helped us make this idea a reality; what initially seemed like a wildly ambitious idea was made possible through meticulous planning, designing and learning on the go! Over this past weekend, we learnt how to create a software-hardware project; exploring how to integrate the front end, back end, databases and hardware with each other was no easy feat! We are incredibly excited to show you what we have in store.
## What we learned 🌍
In reflecting over the past 36 hours, we all agree that we are taking away not only technical learnings but also best practices for working in a collaborative environment under time constraints! We found ourselves using GitHub's multifaceted services and assigning ongoing tasks to each other in order to stay on track. In addition, working with many of Cohere's services was both incredibly exciting and challenging. In hindsight, we really benefited from the variety of workshops and mentorship available throughout the event. As a team, arguably our biggest takeaway was the impact that modern technology has and will have on society: many of the world's problems can be solved with the right technology, and a little bit of sleep deprivation!
## What's next for Keynotes 💫
Looking ahead, some of the immediate additions we would like to make to KeyNotes would be adding an additional feature to allow users to jump to specific parts of the transcript based on their notes and questions. We would also love to explore some of Cohere’s other features such as Rerank, in order to make KeyNote a more personalized and efficient platform. | ## ✨ Inspiration
Driven by the goal of more accessible and transformative education, our group set out to find a viable solution. Stocks are rarely taught in school, and even less so in developing countries, even though, if used right, investing can help many people rise above the poverty line. We seek to help students and adults learn more about stocks and what drives a company's stock value up or down, and to use that information to make more informed decisions.
## 🚀 What it does
Users are guided to a search bar where they can search for a company's stock, for example "AAPL", and almost instantly they can see the stock price over the last two years as a graph, with green and red dots spread along the line. When they hover over a dot, a green dot explains why there is a generally increasing trend in the stock, with a news article to back it up, along with the price change from the previous day and what it is predicted to be. An image of the company also appears beside the graph.
## 🔧 How we built it
When a user enters a stock name, the app calls the Yahoo Finance API to get the stock's price data from the last three years. It converts the data to a JSON file served on localhost:5000. Then, using Flask, this is exposed as our own API, which populates the Chart.js graph with the data on the stock. Using a MATLAB server, we then analyze that data to find the areas of most significance (where the absolute value of the slope exceeds a certain threshold). Those data points are marked green if the change is positive or red if it is negative. The specific dates of those data points are fed back to Gemini, which is asked why it thinks the stock shifted as it did and why the price changed on that day. At the same time, Gemini also receives another request for a phrase that the JSON Search API can easily use to find a photo of that company, which is then shown on the screen.
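As a rough sketch of this pipeline (not the project's actual code), the Flask endpoint below uses the `yfinance` package to stand in for the Yahoo Finance API and inlines a simple threshold check in place of the MATLAB significance analysis; the route name and threshold value are assumptions.

```python
# Illustrative Flask endpoint serving price history plus green/red significance markers.
import yfinance as yf
from flask import Flask, jsonify

app = Flask(__name__)
SLOPE_THRESHOLD = 2.0  # assumed threshold for a "significant" day-to-day move


@app.route("/stock/<ticker>")
def stock_history(ticker):
    closes = yf.Ticker(ticker).history(period="3y")["Close"]
    points = []
    for date, prev, curr in zip(closes.index[1:], closes.iloc[:-1], closes.iloc[1:]):
        change = float(curr - prev)
        points.append({
            "date": date.strftime("%Y-%m-%d"),
            "price": round(float(curr), 2),
            # green/red dots are the days whose move exceeds the threshold
            "marker": ("green" if change > 0 else "red") if abs(change) > SLOPE_THRESHOLD else None,
        })
    return jsonify(points)


if __name__ == "__main__":
    app.run(port=5000)
```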
## 🤯 Challenges we ran into
Using the number of APIs we did, and using them properly, was VERY hard, especially making our own API and incorporating Flask. As well, getting stock data to a MATLAB server took a lot of time, as it was the first time any of us had used it. POST and fetch commands were new for us and took a lot of time to get used to.
## 🏆 Accomplishments that we're proud of
Connecting a prompt to a well-crafted stocks portfolio.
Learning MATLAB in a time crunch.
Connecting all of our APIs successfully.
Making a website that we believe has serious positive implications for this world.
## 🧠 What we learned
MATLAB integration
Flask Integration
Gemini API
## 🚀What's next for StockSee
* Incorporating it on different mediums such as VR, so users can see in real time how stocks shift in front of them in an interactive way.
* Making a small questionnaire on different parts of the stocks to ask whether it is good to buy at the time.
* Use modern portfolio theory (MPT) and other common stock-buying algorithms to see how much money you would have made using them. | losing |
## Inspiration
Every year, millions of people around the world choose not to recycle because they don't know how! We wanted to simplify recycling for the public.
## What it does
Robot Recycler uses a Kinect to search for recyclables, then either puts things in the trash, or the recycling! It's that easy!
## How I built it
We used the Kinect, Robot Operating System, and Arduino to build Robot Recycler.
## What's next for Robot Recycler
In the future, we'd like to add more models to our Kinect's library so that no recycling ever gets put in the trash! | ## Inspiration
Are you a student? Have you experienced the struggle of trawling through rooms on campus to find a nice, quiet space to study? Well, worry no more because Study Space aims to create an intelligent solution to this decade old problem!
## What it does
Study Space is an app that keeps track of the number of people in specific locations on campus in real time. It lets the user of the app figure out which rooms in campus are the least busy thus allowing for easier access to quiet study spots.
## How we built it
To build this app we used Android Studio to create client facing android apps for users phones as well as an app to be displayed on AndroidThings screens. We also used the 'Android Nearby' feature that is a part of AndroidThings to sniff the number of wireless devices in an area, and Firebase to store the number of devices, thus determining the occupancy, within an area.
## Challenges we ran into
We ran into many issues with AndroidThings not connecting to the internet (after 8 hours we realized it was a simple configuration issue where the internet connection in the AndroidManifest.xml was set to 'no'). We also had trouble figuring out the best way to sniff out devices with WiFi connectivity in a certain area since there are privacy concerns associated with getting this kind of data. In the end, we decided that ideally this app would be incorporated within existing University affiliated apps (e.g. PennMobile App) where the user would need to accept a condition stating that the app will anonymously log the phone's location strictly for this purpose.
## What we learned
We learned that sometimes working with hardware can be a pain in the butt. However, in the end, we found this hack to be very rewarding as it allowed us to create an end product that is only able to function due to the capabilities of the hardware included in AndroidThings. We also learned how to make native android apps (it was the first time that 2 members of our group ever created an android app with native code).
## What's next for Study Space
In the future, we would like to incorporate trends into our app in order to show users charts about when study areas are at their maximum/minimum occupancy. This would allow users to better plan future study session accordingly. We would also like to include push notifications with the app so that users are informed, at a time of their choosing, of the least busy places to study on campus. | ## Inspiration
Waste Management: Despite having bins with specific labels, people often put waste into the wrong bins, which leads to unnecessary plastic/recyclables in landfills.
## What it does
Uses Raspberry Pi, Google vision API and our custom classifier to categorize waste and automatically sorts and puts them into right sections (Garbage, Organic, Recycle). The data collected is stored in Firebase, and showed with respective category and item label(type of waste) on a web app/console. The web app is capable of providing advanced statistics such as % recycling/compost/garbage, your carbon emissions as well as statistics on which specific items you throw out the most (water bottles, bag of chips, etc.). The classifier is capable of being modified to suit the garbage laws of different places (eg. separate recycling bins for paper and plastic).
## How We built it
Raspberry pi is triggered using a distance sensor to take the photo of the inserted waste item, which is identified using Google Vision API. Once the item is identified, our classifier determines whether the item belongs in recycling, compost bin or garbage. The inbuilt hardware drops the waste item into the correct section.
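A simplified sketch of that control loop might look like the following; it assumes a gpiozero ultrasonic sensor, a Pi camera, and Google Cloud Vision credentials, and the GPIO pins and label-to-bin rules are placeholder assumptions rather than the project's real values.

```python
# Simplified sketch of the bin's control loop (ours, not the project's exact code).
from gpiozero import DistanceSensor
from google.cloud import vision
from picamera import PiCamera

RECYCLE = {"plastic", "bottle", "paper", "cardboard", "tin", "glass"}
ORGANIC = {"food", "fruit", "vegetable", "banana", "apple"}

sensor = DistanceSensor(echo=24, trigger=23)   # assumed GPIO pins
camera = PiCamera()
client = vision.ImageAnnotatorClient()


def classify(labels):
    # Map Vision labels to a bin; these rules can be swapped per local garbage laws.
    words = {label.description.lower() for label in labels}
    if words & ORGANIC:
        return "organic"
    if words & RECYCLE:
        return "recycle"
    return "garbage"


while True:
    sensor.wait_for_in_range()                 # an item is inserted close to the sensor
    camera.capture("/tmp/item.jpg")
    with open("/tmp/item.jpg", "rb") as f:
        image = vision.Image(content=f.read())
    labels = client.label_detection(image=image).label_annotations
    print("Drop into:", classify(labels))      # here the servos would rotate the chute
```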
## Challenges We ran into
Combining IoT and AI was tough. We had never used Firebase. Separation of concerns was a difficult task, as was deciding the mechanics and design of the bin (we are not mechanical engineers :D).
## Accomplishments that We're proud of
Combining the entire project. Staying up for 24+ hours.
## What We learned
Different technologies: Firebase, IoT, Google Cloud Platform, Hardware design, Decision making, React, Prototyping, Hardware
## What's next for smartBin
Improving the efficiency. Build out of better materials (3D printing, stronger servos). Improve mechanical movement. Add touch screen support to modify various parameters of the device. | partial |
## 🌟**Inspiration**
There are over **7.2 million** people in the U.S. who are legally blind, many of whom rely on others to help them navigate and understand their environment. While technology holds the promise of increased independence, current solutions for the visually impaired often fall short—either lacking accessibility features like text-to-speech or offering overly complex interfaces.
Optica was born out of a desire to bridge **this gap**. Our app empowers visually impaired individuals by giving them a simple, intuitive tool to perceive the world independently. Through clear, human-like descriptions of their surroundings, Optica provides not just information, but confidence, autonomy, and a deeper connection to their environment.
## 🛠️ **What it does**
Optica transforms a smartphone into a tool of empowerment for the visually impaired, enabling users to independently understand their surroundings. With the press of a button, users receive clear, succinct, vivid audio descriptions of what the phone’s camera captures. Optica doesn’t just list objects; it paints a picture—communicating the relationships between objects and creating a true sense of place. Optica enables its users to engage with their environment without outside assistance.
## 🧱 **How we built it**
We developed Optica using the ML Kit Object Detection API, which enabled us to identify and classify objects in real-time. These object classifications were then fed into a custom Large Language Model (LLM) powered by TuneStudio and Cerebras, which we trained to generate coherent, natural-language descriptions. The output from this LLM was integrated with Google Cloud’s text-to-speech API to provide users with real-time audio feedback. Throughout development, we maintained a user-first mindset, ensuring that the interface was intuitive and fully accessible.
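A minimal sketch of the detection-to-speech pipeline is shown below; the Google Cloud Text-to-Speech call follows the standard client library, while the custom LLM call is left as a stub because the TuneStudio/Cerebras specifics are not reproduced here.

```python
# Back-of-the-envelope sketch of the detection -> description -> speech pipeline.
from google.cloud import texttospeech


def describe_scene(labels: list[str]) -> str:
    # Placeholder for the custom LLM that turns raw object labels into a natural description.
    prompt = "In one short sentence, describe this scene for a blind user: " + ", ".join(labels)
    raise NotImplementedError("send `prompt` to the fine-tuned model and return its reply")


def speak(text: str) -> bytes:
    # Synthesize the description so it can be played back on the device.
    client = texttospeech.TextToSpeechClient()
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=text),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3
        ),
    )
    return response.audio_content
```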
## ⚔️ **Challenges we ran into**
Developing Optica presented numerous technical and logistical challenges, particularly when it came to integrating various cutting-edge technologies. Deploying our object detection model in Android Studio took longer than anticipated, which limited the time we had to refine other components.
Communicating between our computer vision model and TuneStudio’s LLM proved to be complex, requiring us to overcome issues with API integration and SDK compatibility. Additionally, managing the project across GitHub repositories introduced git-related challenges, particularly when merging contributions from different team members.
However, these difficulties only strengthened our resolve and pushed us to learn new skills—especially in debugging, collaboration, and working across frameworks. Mentors played a crucial role in helping us push through these roadblocks, and the experience has made us better engineers and problem solvers!
## 🎖️ **Our Accomplishments**
We are incredibly proud of our **integration of computer vision and natural language processing**, a combination that allows Optica to go beyond standard object recognition! Starting from a basic CV-based idea, we pushed the boundaries by incorporating an LLM to enhance the descriptions and truly serve the visually impaired community. None of us had experience with these APIs and learned so much on this journey!
Our ability to bring together these powerful technologies to create a tool that can have a tangible, positive impact on people’s lives is an accomplishment we hold in high regard. Successfully deploying this onto a user-friendly platform was a milestone we are excited about.
## 📖 **What we learned**
Although we might have learned new languages, APIs, and git commands on a technical level, the lessons we've learned **go beyond the pages**:
* Setbacks are an inevitable part of the creative process, and staying adaptable allows you to turn challenges into opportunities!
* Starting without all the answers taught us that taking the first step is crucial for personal and project development. We learned to not get ahead of ourselves and take it slow!
* Reaching out for help from our mentors showed us the power of collaboration and shared knowledge. We would like to specifically mention Nifaseth and Harsh Deep for their help!
## ⏭️ **What's next for Optica**
We plan to continually enhance the app by improving the accuracy and breadth of the image classification model, training it on more diverse datasets that include non-conventional settings and real-world complexity. Additionally, we aim to incorporate advanced depth sensing with Google AR’s depth API to provide even more nuanced scene descriptions. On the accessibility front, we will refine the voice activation and gesture-based navigation to make the app even more intuitive. We also look forward to partnering with organizations and sponsors, like Cerebras and TuneStudio, to ensure that **Optica continues to push the boundaries of AI for social good**, helping us realize our vision of full independence for the visually impaired. | ## Inspiration
We wanted to take advantage of AR and object-detection technologies to help people gain safer walking experiences, and to communicate distance information that helps people with vision loss navigate.
## What it does
It augments the world with beeping sounds that change depending on your proximity to obstacles, and it identifies surrounding objects and converts the results to speech to alert the user.
## How we built it
ARKit and RealityKit use the LiDAR sensor to detect distance; AVFoundation provides the text-to-speech; Core ML runs a YOLOv3 real-time object-detection model; SwiftUI builds the interface.
## Challenges we ran into
Computational efficiency. Going through all pixels in the LiDAR sensor in real time wasn’t feasible. We had to optimize by cropping sensor data to the center of the screen
## Accomplishments that we're proud of
It works as intended.
## What we learned
We learned how to combine AR, AI, LiDar, ARKit and SwiftUI to make an iOS app in 15 hours.
## What's next for SeerAR
Expand to Apple watch and Android devices;
Improve the accuracy of object detection and recognition;
Connect with Firebase and Google cloud APIs; | ## Inspiration
The inspiration for this project came from the group's passion to build health related apps. While blindness is not necessarily something we can heal, it is something that we can combat with technology.
## What it does
This app gives blind individuals the ability to live life with the same ease as any other person. Using beacon software, we are able to provide users with navigational information in heavily populated areas such as subways or or museums. The app uses a simple UI that includes the usage of different numeric swipes or taps to launch certain features of the app. At the opening of the app, the interface is explained in its entirety in a verbal manner. One of the most useful portions of the app is a camera feature that allows users to snap a picture and instantly receive verbal cues depicting what is in their environment. The navigation side of the app is what we primarily focused on, but as a fail safe method the Lyft API was implemented for users to order a car ride out of a worst case scenario.
## How we built it
## Challenges we ran into
We ran into several challenges during development. One of our challenges was attempting to use the Alexa Voice Services API for Android. We wanted to create a skill to be used within the app; however, there was a lack of documentation at our disposal and minimal time to bring it to fruition. Rather than eliminating this feature all together, we collaborated to develop a fully functional voice command system that can command their application to call for a Lyft to their location through the phone rather than the Alexa.
Another issue we encountered was in dealing with the beacons. In a large area like what would be used in a realistic public space and setting, such as a subway station, the beacons would be placed at far enough distances to be individually recognized. Whereas, in such a confined space, the beacon detection overlapped, causing the user to receive multiple different directions simultaneously. Rather than using physical beacons, we leveraged a second mobile application that allows us to create beacons around us with an Android Device.
## Accomplishments that we're proud of
As always, we are a team of students who strive to learn something new at every hackathon we attend. We chose to build an ambitious series of applications within a short and concentrated time frame, and the fact that we were successful in making our idea come to life is what we are the most proud of. Within our application, we worked around as many obstacles that came our way as possible. When we found out that Amazon Alexa wouldn't be compatible with Android, it served as a minor setback to our plan, but we quickly brainstormed a new idea.
Additionally, we were able to develop a fully functional beacon navigation system with built in voice prompts. We managed to develop a UI that is almost entirely nonvisual, rather used audio as our only interface. Given that our target user is blind, we had a lot of difficulty in developing this kind of UI because while we are adapted to visual cues and the luxury of knowing where to tap buttons on our phone screens, the visually impaired aren't. We had to keep this in mind throughout our entire development process, and so voice recognition and tap sequences became a primary focus. Reaching out of our own comfort zones to develop an app for a unique user was another challenge we successfully overcame.
## What's next for Lantern
With a passion for improving health and creating easier accessibility for those with disabilities, we plan to continue working on this project and building off of it. The first thing we want to recognize is how easily adaptable the beacon system is. In this project we focused on the navigation of subway systems: knowing how many steps down to the platform, when they've reached the safe distance away from the train, and when the train is approaching. This idea could easily be brought to malls, museums, dorm rooms, etc. Anywhere that could provide a concern for the blind could benefit from adapting our beacon system to their location.
The second future project we plan to work on is a smart walking stick that uses sensors and visual recognition to detect and announce what elements are ahead, what could potentially be in the user's way, what their surroundings look like, and provide better feedback to the user to assure they don't get misguided or lose their way. | winning |
## Inspiration
Proximity was born from the realization that despite our hyper-connected digital world, many people feel disconnected from those physically around them. We wanted to create a platform that bridges the gap between online social networking and real-world interactions, encouraging people to form meaningful connections with others in their immediate vicinity.
## What it does
Proximity allows users to discover and interact with people within a one-mile radius. Users can set up profiles with different labels for various contexts (professional, dating, casual chatting), and the app matches them with nearby users who have similar interests or goals. Once connected, users can send meetup requests and engage in real-time chats, all while maintaining control over their privacy and visibility.
## How we built it
We developed Proximity using a modern web stack:
Frontend: React for a dynamic and responsive user interface
Backend: Node.js with Express for a robust API
Database: MongoDB for flexible data storage
Mapping: MapboxGL for location visualization
Real-time Communication: WebSocket for instant messaging
We integrated geolocation services to track and update user positions and implemented a matching algorithm based on user preferences and proximity.
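Although the real backend is Node/Express, the shape of the proximity query is easiest to show with a small Python/pymongo sketch; the collection and field names are assumptions, and a 2dsphere index on the location field is required for `$near`.

```python
# Illustrative sketch of the nearby-user query the matching algorithm builds on.
from pymongo import MongoClient

users = MongoClient()["proximity"]["users"]  # assumed database and collection names


def nearby_matches(lng: float, lat: float, label: str, radius_m: int = 1609):
    """Return users within ~1 mile who share the requested context label."""
    return list(users.find({
        "labels": label,                      # assumed field storing profile labels
        "location": {                         # assumed GeoJSON field with a 2dsphere index
            "$near": {
                "$geometry": {"type": "Point", "coordinates": [lng, lat]},
                "$maxDistance": radius_m,
            }
        },
    }))
```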
## Challenges we ran into
1. Balancing user privacy with location-based functionality
2. Optimizing performance for real-time updates of nearby users on the map
3. Ensuring reliable WebSocket connections for the chat feature
4. Managing complex states across components for chat sessions
5. Implementing an efficient algorithm for proximity-based user matching
## Accomplishments that we're proud of
1. Creating a seamless, intuitive user experience for location-based social discovery
2. Implementing a real-time chat system that works smoothly within the app's context
3. Developing a flexible user profile system that adapts to different social contexts
4. Successfully integrating mapping functionality with user discovery features
5. Building a scalable backend architecture capable of handling real-time data updates
## What we learned
1. Advanced React state management techniques
2. Real-time data handling with WebSockets
3. Geolocation API integration and optimization
4. Best practices for user data privacy and security in location-based apps
5. Efficient database querying for location-based data
6. Collaborative development workflows and version control
## What's next for Proximity
1. Implementing push notifications for new nearby users and messages
2. Expanding the matching algorithm to include more sophisticated preferences
3. Adding features for creating and discovering local events
4. Developing mobile apps for iOS and Android for increased accessibility
5. Implementing AI-driven suggestions for potential connections
6. Enhancing privacy features with optional anonymity modes
7. Integrating with other social platforms for easier onboarding and profile creation | ## Inspiration
The inspiration for Hangout came during my trip to Toronto for a hackathon. I (Rishi) spent three days feeling bored, believing I didn't have any friends nearby. After leaving, I realized I had several friends in the area and could have had a much more enjoyable time if I had known. This experience sparked the idea for Hangout, an app that connects friends who are close by.
## What it does
Hangout notifies users when their friends are within a 1 km radius, giving them the opportunity to meet up and hang out if they choose. It’s a simple way to stay connected with friends in real-time and avoid missing out on spontaneous hangouts.
## How we built it
We built Hangout using Flutter for the frontend to ensure a smooth and cross-platform experience. Firebase was used for real-time data, managing friend lists and location updates. The app monitors friends' locations and triggers a notification if they are nearby, giving users the option to connect.
## Challenges we ran into
One of the major challenges was optimizing the app’s location tracking without heavily draining the battery. It was tricky to balance the need for accurate, real-time updates with maintaining a light resource footprint. Another challenge was safeguarding users’ privacy, particularly when handling location data, which required encryption and strict permission handling.
## Accomplishments that we're proud of
We’re proud of building a fully functional and user-friendly mobile app that solves a real-world problem. The optimization of geolocation services to balance performance and accuracy is a key accomplishment, as well as successfully integrating real-time notifications and privacy protections.
## What we learned
We learned a lot about geolocation services, notifications, and real-time database interactions. Working with Flutter and Firebase taught us valuable lessons in efficient app performance and creating a seamless user experience. Additionally, understanding the importance of privacy when dealing with location data was crucial.
## What's next for Hangouts
Next, we plan to implement additional features like customizable radius settings, group notifications for multiple friends nearby, and better integration with social media to enhance the user experience. We’re also considering ways to enhance user privacy even further and improve battery optimization for long-term use. | ## Inspiration
It’s no secret that the COVID-19 pandemic ruined most of our social lives. ARoom presents an opportunity to boost your morale by supporting you to converse with your immediate neighbors and strangers in a COVID safe environment.
## What it does
Our app is designed to help you bring your video chat experience to the next level. By connecting to your webcam and microphone, ARoom allows you to chat with people living near you virtually. Coupled with an augmented reality system, our application also allows you to view 3D models and images for more interactivity and fun. Want to chat with new people? Open the map offered by ARoom to discover the other rooms available around you and join one to start chatting!
## How we built it
The front-end was created with Svelte, HTML, CSS, and JavaScript. We used Node.js and Express.js to design the backend, constructing our own voice chat API from scratch. We used VS Code’s Live Share plugin to collaborate, as many of us worked on the same files at the same time. We used the A-Frame web framework to implement Augmented Reality and the Leaflet JavaScript library to add a map to the project.
## Challenges we ran into
From the start, Svelte and A-Frame were brand new frameworks for every member of the team, so we had to devote a significant portion of time just to learn them. Implementing many of our desired features was a challenge, as our knowledge of the frameworks simply wasn't comprehensive enough in the beginning. We encountered our first major problem when trying to implement the AR interactions with 3D models in A-Frame. We couldn't track the objects on camera without using markers, and adding our most desired feature, interactions with users, was simply out of the question. We tried to use MediaPipe to detect the hand's movements to manipulate the positions of the objects, but after spending all of Friday night working on it we were unsuccessful and ended up changing the trajectory of our project.
Our next challenge materialized when we attempted to add a map to our function. We wanted the map to display nearby rooms, and allow users to join any open room within a certain radius. We had difficulties pulling the location of the rooms from other files, as we didn’t understand how Svelte deals with abstraction. We were unable to implement the search radius due to the time limit, but we managed to add our other desired features after an entire day and night of work.
We encountered various other difficulties as well, including updating the rooms when new users join, creating and populating icons on the map, and configuring the DNS for our domain.
## Accomplishments that we're proud of
Our team is extremely proud of our product, and the effort we’ve put into it. It was ¾ of our members’ first hackathon, and we worked extremely hard to build a complete web application. Although we ran into many challenges, we are extremely happy that we either overcame or found a way to work around every single one. Our product isn’t what we initially set out to create, but we are nonetheless delighted at its usefulness, and the benefit it could bring to society, especially to people whose mental health is suffering due to the pandemic. We are also very proud of our voice chat API, which we built from scratch.
## What we learned
Each member of our group has learned a fair bit over the last 36 hours. Using new frameworks, plugins, and other miscellaneous development tools allowed us to acquire heaps of technical knowledge, but we also learned plenty about more soft topics, like hackathons and collaboration. From having to change the direction of our project nearly 24 hours into the event, we learned that it’s important to clearly define objectives at the beginning of an event. We learned that communication and proper documentation is essential, as it can take hours to complete the simplest task when it involves integrating multiple files that several different people have worked on. Using Svelte, Leaflet, GitHub, and Node.js solidified many of our hard skills, but the most important lessons learned were of the other variety.
## What's next for ARoom
Now that we have a finished, complete, usable product, we would like to add several features that were forced to remain in the backlog this weekend. We plan on changing the map to show a much more general location for each room, for safety reasons. We will also prevent users from joining rooms more than an arbitrary distance away from their current location, to promote a more of a friendly neighborhood vibe on the platform. Adding a video and text chat, integrating Google’s Translation API, and creating a settings page are also on the horizon. | losing |
## Inspiration
Everyone in society is likely going to buy a home at some point in their life. They will most likely meet realtors, see a million listings, gather all the information they can about the area, and then make a choice. But why make the process so complicated?
MeSee lets users pick and recommend regions of potential housing interest based on their input settings, and returns details such as: crime rate, public transportation accessibility, number of schools, ratings of local nearby business, etc.
## How we built it
Data was sampled by an online survey on what kind of things people looked for when house hunting. The most repeated variables were then taken and data on them was collected. Ratings were pulled from Yelp, crime data was provided by CBC, public transportation data by TTC, etc. The result is a very friendly web-app.
## Challenges we ran into
Collecting data in general was difficult because it was hard to match different datasets with each other and present them consistently, since they were all from different sources. It's still a little patchy now, but the data is now there!
## Accomplishments that we're proud of
Finally choosing an idea six hours into the hackathon, getting the data, getting at least four hours of sleep, and establishing open communication with each other, as we didn't really know each other until today!
## What we learned
Our backend dev learned to use different callbacks, our front-end dev learned that the Google Maps API is definitely out to get him, and our designer learned Adobe XD to better illustrate what the design looked like and how it functioned.
## What's next for MeSee
There's still a long way to go before MeSee can cover more regions, but if it continues, it'd definitely be something our team would look into. Furthermore, collecting more sampling data would definitely be beneficial in improving the variables available to users through MeSee. Finally, making MeSee mobile would also be a huge plus. | ## Inspiration
As recent or soon to be graduates, we personally understand the desire to relocate and expand our world views. There is so much potential out there, but it's hard to know what city is best as we all have unique needs and wants.
## What it does
By gathering aspects that students care about when researching a city, we visualize the data based on selected preferences and suggest potential cities. Clicking on a city shows more information about that city and how it compares to others.
## How I built it
We initially narrowed our focus to a set of users: recently graduated students. Then, we discussed several user journeys and sought out specific pain points. We conducted some research to find out what types of criteria people look into when deciding where to move, and then found open datasets from Statistics Canada and other online sources to support these criteria.
We pulled the 2016 Canadian Census Data information on the biggest cities in Canada. We sorted this data into specific categories, and compiled static JSON files of the cities. We then fed this information into our web app powered by React where we visualized it using Mapbox and different graphing techniques.
## Challenges I ran into
Going from a well designed static prototype to an implemented version is a big jump as the data had to be manipulated to fit the visualization library we used. The Stats Canada data was also unreliable and oddly formatted, leading to a lot of difficulties.
## Accomplishments that I'm proud of
We managed to design a data visualization that makes use of multiple datasets and combined them in a cohesive way that helps students make an informed decision.
## What I learned
We learned that the quality of the data sets was not only dependent on the source it came from, but also the richness of the data in providing value in visualization. In certain fields the data was especially shallow which made it difficult to draw any useful visualizations.
## What's next for LeaveTheNest
We would love to explore how students can learn from our visualization and where to expand next. Right now we focused on Canadian data, but the next step would be to include American cities and beyond. We would also love to explore more intricate data visualizations that can dig deeper into the data and provide more value. | ## Inspiration
The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand.
## What it does
Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked.
## How we built it
To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process.
## Challenges we ran into
One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process.
## Accomplishments that we're proud of
Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives.
## What we learned
We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application.
## What's next for Winnur
Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur. | partial |
## We wanted to help the invisible people of Toronto, many homeless people do not have identification and often have a hard time keeping it due to their belongings being stolen. This prevents many homeless people to getting the care that they need and the access to resources that an ordinary person does not need to think about.
**How**
Our application would be set up as a booth or kiosks within pharmacies or clinics so homeless people can be verified easily.
We wanted to keep information of our patients to be secure and tamper-proof so we used the Ethereum blockchain and would compare our blockchain with the information of the patient within our database to ensure they are the same otherwise we know there was edits or a breach.
**Impact**
This would solve problems such as homeless people getting the prescriptions they need at local clinics and pharmacies. As well shelters would benefit from this as our application can track the persons: age, medical visits, allergies and past medical history experiences.
**Technologies**
For our facial recognition we used Facenet and tensor flow to train our models
For our back-end we used Python-Flask to communicate with Facenet and Node.JS to handle our routes on our site.
As well Ether.js handled most of our back-end code that had to deal with our smart contract for our blockchain.
We used Vue.JS for our front end to style our site. | ## Inspiration
Suppose we go out for a run early in the morning without our wallet and cellphone, our service enables banking systems to use facial recognition as a means of payment enabling us to go cashless and cardless.
## What it does
It uses deep neural networks in the back end to detect faces at point of sale terminals and match them with those stored in the banking systems database and lets the customer purchase a product from a verified seller almost instantaneously. In addition, it allows a bill to be divided between customers using recognition of multiple faces. It works in a very non-invasive manner and hence makes life easier for everyone.
## How we built it
Used dlib as the deep learning framework for face detection and recognition, along with Flask for the web API and plain JS on the front end. The front end uses AJAX to communicate with the back end server. All requests are encrypted using SSL (self-signed for the hackathon).
## Challenges we ran into
We attempted to incorporate gesture recognition into the service, but it would cause delays in the transaction due to extensive training/inference based on hand features. This is a feature to be developed in the future, and has the potential to distinguish and popularize our unique service
## Accomplishments that we're proud of
Within 24 hours, we are able to pull up a demo for payment using facial recognition simply by having the customer stand in front of the camera using real-time image streaming. We were also able to enable payment splitting by detection of multiple faces.
## What we learned
We learned to set realistic goals and pivot in the right times. There were points where we thought we wouldn't be able to build anything but we persevered through it to build a minimum viable product. Our lesson of the day would therefore be to never give up and always keep trying -- that is the only reason we could get our demo working by the end of the 24 hour period.
## What's next for GazePay
We plan on associating this service with bank accounts from institutions such as Scotiabank. This will allow users to also see their bank balance after payment, and help us expand our project to include facial recognition ATMs, gesture detection, and voice-enabled payment/ATMs for them to be more accessible and secure for Scotiabank's clients. | ## Inspiration
When visiting a clinic, two big complaints that we have are the long wait times and the necessity to use a kiosk that thousands of other people have already touched. We also know that certain methods of filling in information are not accessible to everyone (For example, someone with Parkinsons disease writing with a pen). In response to these problems, we created Touchless.
## What it does
* Touchless is an accessible and contact-free solution for gathering form information.
* Allows users to interact with forms using voices and touchless gestures.
* Users use different gestures to answer different questions.
* Ex. Raise 1-5 fingers for 1-5 inputs, or thumbs up and down for yes and no.
* Additionally, users are able to use voice for two-way interaction with the form. Either way, surface contact is eliminated.
* Applicable to doctor’s offices and clinics where germs are easily transferable and dangerous when people touch the same electronic devices.
## How we built it
* Gesture and voice components are written in Python.
* The gesture component uses OpenCV and Mediapipe to map out hand joint positions, where calculations could be done to determine hand symbols.
* SpeechRecognition recognizes user speech
* The form outputs audio back to the user by using pyttsx3 for text-to-speech, and beepy for alert noises.
* We use AWS Gateway to open a connection to a custom lambda function which has been assigned roles using AWS Iam Roles to restrict access. The lambda generates a secure key which it sends with the data from our form that has been routed using Flask, to our noSQL dynmaoDB database.
## Challenges we ran into
* Tried to set up a Cerner API for FHIR data, but had difficulty setting it up.
* As a result, we had to pivot towards using a noSQL database in AWS as our secure backend database for storing our patient data.
## Accomplishments we’re proud of
This was our whole team’s first time using gesture recognition and voice recognition, so it was an amazing learning experience for us. We’re proud that we managed to implement these features within our project at a level we consider effective.
## What we learned
We learned that FHIR is complicated. We ended up building a custom data workflow that was based on FHIR models we found online, but due to time constraints we did not implement certain headers and keys that make up industrial FHIR data objects.
## What’s next for Touchless
In the future, we would like to integrate the voice and gesture components more seamlessly into one rather than two separate components. | partial |
## Check it out on GitHub!
The machine learning and web app segments are split into 2 different branches. Make sure to switch to these branches to see the source code! You can view the repository [here](https://github.com/SuddenlyBananas/be-right-back/).
## Inspiration
Inspired in part by the Black Mirror episode of the same title (though we had similar thoughts before we made the connection).
## What it does
The goal of the project is to be able to talk to a neural net simulation of your Facebook friends you've had conversations with. It uses a standard base model and customizes it based on message upload input. However, we ran into some struggles that prevented the full achievement of this goal.
The user downloads their message history data and uploads it to the site. Then, they can theoretically ask the bot to emulate one of their friends and the bot customizes the neural net model to fit the friend in question.
## How we built it
Tensor Flow for the machine learning aspect, Node JS and HTML5 for the data-managing website, Python for data scraping. Users can interact with the data through a Facebook Messenger Chat Bot.
## Challenges we ran into
AWS wouldn't let us rent a GPU-based E2 instance, and Azure didn't show anything for us either. Thus, training took much longer than expected.
In fact, we had to run back to an apartment at 5 AM to try to run it on a desktop with a GPU... which didn't end up working (as we found out when we got back half an hour after starting the training set).
The Facebook API proved to be more complex than expected, especially negotiating the 2 different user IDs assigned to Facebook and Messenger user accounts.
## Accomplishments that we're proud of
Getting a mostly functional machine learning model that can be interacted with live via a Facebook Messenger Chat Bot.
## What we learned
Communication between many different components of the app; specifically the machine learning server, data parsing script, web server, and Facebook app.
## What's next for Be Right Back
We would like to fully realize the goals of this project by training the model on a bigger data set and allowing more customization to specific users. | ## Inspiration
Falls are the leading cause of injury and death among seniors in the US and cost over $60 billion in medical expenses every year. With every one in four seniors in the US experiencing a fall each year, attempts at prevention are badly needed and are currently implemented through careful monitoring and caregiving. However, in the age of COVID-19 (and even before), remote caregiving has been a difficult and time-consuming process: caregivers must either rely on updates given by the senior themselves or monitor a video camera or other device 24/7. Tracking day-to-day health and progress is nearly impossible, and maintaining and improving strength and mobility presents unique challenges.
Having personally experienced this exhausting process in the past, our team decided to create an all-in-one tool that helps prevent such devastating falls from happening and makes remote caregivers' lives easier.
## What it does
NoFall enables smart ambient activity monitoring, proactive risk assessments, a mobile alert system, and a web interface to tie everything together.
### **Ambient activity monitoring**
NoFall continuously watches and updates caregivers with the condition of their patient through an online dashboard. The activity section of the dashboard provides the following information:
* Current action: sitting, standing, not in area, fallen, etc.
* How many times the patient drank water and took their medicine
* Graph of activity throughout the day, annotated with key events
* Histogram of stand-ups per hour
* Daily activity goals and progress score
* Alerts for key events
### **Proactive risk assessment**
Using the powerful tools offered by Google Cloud, a proactive risk assessment can be activated with a simple voice query to a smart speaker like Google Home. When starting an assessment, our algorithms begin analyzing the user's movements against a standardized medical testing protocol for screening a patient's risk of falling. The screening consists of two tasks:
1. Timed Up-and-Go (TUG) test. The user is asked to sit up from a chair and walk 10 feet. The user is timed, and the timer stops when 10 feet has been walked. If the user completes this task in over 12 seconds, the user is said to be of at a high risk of falling.
2. 30-second Chair Stand test: The user is asked to stand up and sit down on a chair repeatedly, as fast as they can, for 30 seconds. If the user not is able to sit down more than 12 times (for females) and 14 times (for males), they are considered to be at a high risk of falling.
The videos of the tests are recorded and can be rewatched on the dashboard. The caregiver can also view the results of tests in the dashboard in a graph as a function of time.
### **Mobile alert system**
When the user is in a fallen state, a warning message is displayed in the dashboard and texted using SMS to the assigned caregiver's phone.
## How we built it
### **Frontend**
The frontend was built using React and styled using TailwindCSS. All data is updated from Firestore in real time using listeners, and new activity and assessment goals are also instantly saved to the cloud.
Alerts are also instantly delivered to the web dashboard and caretakers' phones using IFTTT's SMS Action.
We created voice assistant functionality through Amazon Alexa skills and Google home routines. A voice command triggers an IFTTT webhook, which posts to our Flask backend API and starts risk assessments.
### **Backend**
**Model determination and validation**
To determine the pose of the user, we utilized Google's MediaPipe library in Python. We decided to use the BlazePose model, which is lightweight and can run on real-time security camera footage. The BlazePose model is able to determine the pixel location of 33 landmarks of the body, corresponding to the hips, shoulders, arms, face, etc. given a 2D picture of interest. We connected the real-time streaming from the security camera footage to continuously feed frames into the BlazePose model. Our testing confirmed the ability of the model to determine landmarks despite occlusion and different angles, which would be commonplace when used on real security camera footage.
**Ambient sitting, standing, and falling detection**
To determine if the user is sitting or standing, we calculated the angle that the knees make with the hips and set a threshold, where angles (measured from the horizontal) less than that number are considered sitting. To account for the angle where the user is directly facing the camera, we also determined the ratio of the hip-to knee length to the hip-to-shoulder length, reasoning that the 2D landmarks of the knees would be closer to the body when the user is sitting. To determine the fallen status, we determined if the center of the shoulders and the center of the knees made an angle less than 45 degrees for over 20 frames at once. If the legs made an angle greater than a certain threshold (close to 90 degrees), we considered the user to be standing. Lastly, if there was no detection of landmarks, we considered the status to be unknown (the user may have left the room/area). Because of the different possible angles of the camera, we also determined the perspective of the camera based on the convergence of straight lines (the straight lines are determined by a Hough transformation algorithm). The convergence can indicate how angled the camera is, and the thresholds for the ratio of lengths can be mathematically transformed accordingly.
**Proactive risk assessment analysis**
To analyze timed up-and-go tests, we first determined if the user is able to change his or her status from sitting to standing, and then determined the distance the user has traveled by determining the speed from finite difference calculation of the velocity from the previous frame. The pixel distance was then transformed based on the distance between the user's eyes and the height of the user (which is pre-entered in our website) to determine the real-world distance the user has traveled. Once the user reaches 10 meters cumulative distance traveled, the timer stops and is reported to the server.
To analyze 30-second chair stand tests, the number of transitions between sitting and standing were counted. Once 30 seconds has been reached, the number of times the user sat down is half of the number of transitions, and the data is sent to the server.
## Challenges we ran into
* Figuring out port forwarding with barebones IP camera, then streaming the video to the world wide web for consumption by our model.
* Calibrating the tests (time limits, excessive movements) to follow the standards outlined by research. We had to come up with a way to mitigate random errors that could trigger fast changes in sitting and standing.
* Converting recorded videos to a web-compatible format. The videos saved by python's video recording package was only compatible with saving .avi videos, which was not compatible with the web. We had to use scripted ffmpeg to dynamically convert the videos into .mp4
* Live streaming the processed Python video to the front end required processing frames with ffmpeg and a custom streaming endpoint.
* Determination of a model that works on realtime security camera data: we tried Openpose, Posenet, tf-pose-estimation, and other models, but finally we found that MediaPipe was the only model that could fit our needs
## Accomplishments that we're proud of
* Making the model ignore the noisy background, bad quality video stream, dim lighting
* Fluid communication from backend to frontend with live updating data
* Great team communication and separation of tasks
## What we learned
* How to use IoT to simplify and streamline end-user processes.
* How to use computer vision models to analyze pose and velocity from a reference length
* How to display data in accessible, engaging, and intuitive formats
## What's next for NoFall
We're proud of all the features we have implemented with NoFall and are eager to implement more. In the future, we hope to generalize to more camera angles (such as a bird's-eye view), support lower-light and infrared ambient activity tracking, enable obstacle detection, monitor for signs of other conditions (heart attack, stroke, etc.) and detect more therapeutic tasks, such as daily cognitive puzzles for fighting dementia. | ## Inspiration: As per the Stats provided by Annual Disability Statistics Compendium, 19,344,883 civilian veterans ages 18 years and over live in the community in 2013, of which 5,522,589 were individuals with disabilities . DAV - Disabled American Veterans organization has spent about $ 61.8 million to buy and operate vehicles to act as a transit service for veterans but the reach of this program is limited.
Following these stats we wanted to support Veterans with something more feasible and efficient.
## What it does: It is a web application that will serve as a common platform between DAV and Uber. Instead of spending a huge amount on buying cars the DAV instead pay Uber and Uber will then provide free rides to veterans. Any veteran can register with his Veteran ID and SSN. During the application process our Portal matches the details with DAV to prevent non-veterans from using this service. After registration, Veterans can request rides on our website, that uses Uber API and can commute free.
## How we built it: We used the following technologies:
Uber API ,Google Maps, Directions, and Geocoding APIs, WAMP as local server.
Boot-Strap to create website, php-MyAdmin to maintain SQL database and webpages are designed using HTML, CSS, Javascript, Python script etc.
## Challenges we ran into: Using Uber API effectively, by parsing through data and code to make javascript files that use the API endpoints. Also, Uber API has problematic network/server permission issues.
Another challenge was to figure out the misuse of this service by non-veterans. To save that, we created a dummy Database, where each Veteran-ID is associated with corresponding 4 digits SSN. The pair is matched when user registers for free Uber rides. For real-time application, the same data can be provided by DAV and that can be used to authenticate a Veteran.
## Accomplishments that we're proud of: Finishing the project well in time, almost 4 hours before. From a team of strangers, brainstorming ideas for hours and then have a finished product in less than 24 hours.
## What we learned: We learnt to use third party APIs and gained more experience in web-development.
## What's next for VeTransit: We plan to launch a smartphone app that will be developed for the same service.
It will also include Speech recognition. We will display location services for nearby hospitals and medical facilities based on veteran’s needs. Using APIs of online job providers, veterans will receive data on jobs.
To access the website, Please register as user first.
During that process, It will ask Veteran-ID and four digits of SSN.
The pair should match for successful registration.
Please use one of the following key pairs from our Dummy Data, to do that:
VET00104 0659
VET00105 0705
VET00106 0931
VET00107 0978
VET00108 0307
VET00109 0674 | winning |
# cirkvito
Ever wanna quickly check your Intro to CS Logic course homework without firin' up the ol' Quartus-Beast? Love circuits, but hate Java applets? Or maybe you're dying to make yourself a shiny half-adder but don't have the time (or dexterity) to fashion one out of physical components?
## Inspo
Enter *cirkvito* (serk-vee-toh)! Circuits are awesome but finding good software to simulate them is hard. We chose to build a tool to provide ourselves with an alternative to clunky and overly-complex Flash or Java applets. (Appropriately, its name is a translation of the word "circuits" in Esperanto.) Ableson's *The Structure and Interpretation of Computer Programs* mentioned this idea, too.
## What it Does
cirkvito is a drag and drop single-page-app that simulates simple circuits. You click and drag components on the page and connect them in whatever way you want and end up with something cool. Click buttons to zoom in and out, hold the space bar to pan around, and shift-click to select or delete multiple components.
## Some Challenges
We built cirkvito with a lot of JavaScript by taking advantage of *canvas* and *jQuery*. We learned that UI is really hard to get right by rolling our own buttons and zoom/recenter system.
## What We're Proud of
We're proud that cirkvito successfully simulates not only combinational circuits like adders, but also sequential circuits with flip flops made from elementary logic gates (albeit with a manual "clock"). Also, we designed the data structures to represent circuit components and the way they're connected on our own.
## What's Next?
A fully-responsive app that's able to resize components depending on the window size would have been nice to have. Also, being able to support custom circuits (allowing builders to "bundle" custom configurations of logic gates into black boxes) would make it easier to build more complex circuits. | ## Inspiration
Ideas for interactions from:
* <http://paperprograms.org/>
* <http://dynamicland.org/>
but I wanted to go from the existing computer down, rather from the bottom up, and make something that was a twist on the existing desktop: Web browser, Terminal, chat apps, keyboard, windows.
## What it does
Maps your Mac desktop windows onto pieces of paper + tracks a keyboard and lets you focus on whichever one is closest to the keyboard. Goal is to make something you might use day-to-day as a full computer.
## How I built it
A webcam and pico projector mounted above desk + OpenCV doing basic computer vision to find all the pieces of paper and the keyboard.
## Challenges I ran into
* Reliable tracking under different light conditions.
* Feedback effects from projected light.
* Tracking the keyboard reliably.
* Hooking into macOS to control window focus
## Accomplishments that I'm proud of
Learning some CV stuff, simplifying the pipelines I saw online by a lot and getting better performance (binary thresholds are great), getting a surprisingly usable system.
Cool emergent things like combining pieces of paper + the side ideas I mention below.
## What I learned
Some interesting side ideas here:
* Playing with the calibrated camera is fun on its own; you can render it in place and get a cool ghost effect
* Would be fun to use a deep learning thing to identify and compute with arbitrary objects
## What's next for Computertop Desk
* Pointing tool (laser pointer?)
* More robust CV pipeline? Machine learning?
* Optimizations: run stuff on GPU, cut latency down, improve throughput
* More 'multiplayer' stuff: arbitrary rotations of pages, multiple keyboards at once | ## Inspiration
The inspiration of our game came from the arcade game Cyclone, where the goal is to click the button when the LED lands on a signaled part of the circle.
## What it does
The goal of our game is to click the button when the LED reaches a designated part of the circle (the very last LED). Upon successfully doing this it will add 1 to your score, as well as increasing the speed of the LED, continually making it harder and harder to achieve this goal. The goal is for the player to get as high of a score as possible, as the higher your score is, the harder it will get. Upon clicking the wrong designated LED, the score will reset, as well as the speed value, effectively resetting the game.
## How we built it
The project was split into two parts; one was the physical building of the device and another was the making of the code.
In terms of building the physical device, at first we weren’t too sure what we wanted to do, so we ended up with a mix up of parts we could use. All of us were pretty new to using the Arduino, and its respective parts, so it was initially pretty complicated, before things started to fall into place. Through the use of many Youtube videos, and tinkering, we were able to get the physical device up and running. Much like our coding process, the building process was very dynamic. This is because at first, we weren’t completely sure which parts we wanted to use, so we had multiple components running at once, which allowed for more freedom and possibilities. When we figured out which components we would be using, everything sort of fell into place.
For the code process, it was quite messy at first. This was because none of us were completely familiar with the Arduino libraries, and so it was a challenge to write the proper code. However, with the help of online guides and open source material, we were eventually able to piece together what we needed. Furthermore, our coding process was very dynamic. We would switch out components constantly, and write many lines of code that was never going to be used. While this may have been inefficient, we learned much throughout the process, and it kept our options open and ideas flowing.
## Challenges we ran into
In terms of main challenges that we ran into along the way, the biggest challenge was getting our physical device to function the way we wanted it to. The initial challenge came from understanding our device, specifically the Arduino logic board, and all the connecting parts, which then moved to understanding the parts, as well as getting them to function properly.
## Accomplishments that we're proud of
In terms of main accomplishments, our biggest accomplishment is overall getting the device to work, and having a finished product. After running into many issues and challenges regarding the physical device and its functions, putting our project together was very satisfying, and a big accomplishment for us. In terms of specific accomplishments, the most important parts of our project was getting our physical device to function, as well as getting the initial codebase to function with our project. Getting the codebase to work in our favor was a big accomplishment, as we were mostly reliant on what we could find online, as we were essentially going in blind during the coding process (none of us knew too much about coding with Arduino).
## What we learned
During the process of building our device, we learned a lot about the Arduino ecosystem, as well as coding for it. When building the physical device, a lot of learning went into it, as we didn’t know that much about using it, as well as applying programs for it. We learned how important it is to have a strong connection for our components, as well as directly linking our parts with the Arduino board, and having it run proper code.
## What's next for Cyclone
In terms of what’s next for Cyclone, there are many possibilities for it. Some potential changes we could make would be making it more complex, and adding different modes to it. This would increase the challenge for the player, and give it more replay value as there is more to do with it. Another potential change we could make is to make it on a larger scale, with more LED lights and make attachments, such as the potential use of different types of sensors. In addition, we would like to add an LCD display or a 4 digit display to display the player’s current score and high score. | winning |
## Inspiration
There are very small but impactful ways to be eco-conscious 🌱 in your daily life, like using reusable bags, shopping at thrift stores, or carpooling. We know one thing for certain; people love rewards ✨. So we thought, how can we reward people for eco-conscious behaviour such as taking the bus or shopping at sustainable businesses?
We wanted a way to make eco-consciousness simple, cost-effective, rewarding, and accessible to everyone.
## What it does
Ecodes rewards you for every sustainable decision you make. Some examples are: shopping at sustainable partner businesses, taking the local transit, and eating at sustainable restaurants. Simply scanning an Ecode at these locations will allow you to claim EcoPoints that can be converted into discounts, coupons or gift cards to eco-conscious businesses. Ecodes also sends users text-based reminders when acting sustainably is especially convenient (ex. take the bus when the weather is unsafe for driving). Furthermore, sustainable businesses also get free advertising, so it's a win-win for both parties! See the demo [here](https://drive.google.com/file/d/1suT7tPila3rz4PSmoyl42G5gyAwrC_vu/view?usp=sharing).
## How we built it
We initially prototyped UI/UX using Figma, then built onto a React-Native frontend and a Flask backend. QR codes were generated for each business via python and detected using a camera access feature created in React-Native. We then moved on to use the OpenWeatherMaps API and the Twilio API in the backend to send users text-based eco-friendly reminders.
## Challenges we ran into
Implementing camera access into the app and actually scanning specific QR codes that corresponded to a unique business and number of EcoPoints was a challenge. We had to add these technical features to the front-end seamlessly without much effort from the user but also have it function correctly. But after all, there's nothing a little documentation can't solve! In the end, we were able to debug our code and successfully implement this key feature.
## Accomplishments that we're proud of
**Kemi** is proud that she learned how to implement new features such as camera access in React Native. 😙
**Akanksha** is proud that she learnt Flask and interfacing with Google Maps APIs in python. 😁
**Vaisnavi** is proud that she was able to generate multiple QR codes in python, each with a unique function. 😝
**Anna** is proud to create the logistics behind the project and learnt about frontend and backend development. 😎
Everyone was super open to working together as a team and helping one another out. As as a team, we learnt a lot from each other in a short amount of time, and the effort was worth it!
## What we learned
We took the challenge to learn new skills outside of our comfort zone, learning how to add impressive features to an app such as camera access, QR code scanning, counter updates, and aesthetic UI. Our final hack turned out to be better than we anticipated, and inspired us to develop impactful and immensely capable apps in the future :)
## What's next for Ecodes
Probably adding a location feature to send users text-based reminders to the user, informing them that an Ecode is nearby. We can use the Geolocation Google Maps API and Twilio API to implement this. Additionally, we hope to add a carpooling feature which enables users to earn points together by carpooling with one another!! | ## Inspiration
With the effects of climate change becoming more and more apparent, we wanted to make a tool that allows users to stay informed on current climate events and stay safe by being warned of nearby climate warnings.
## What it does
Our web app has two functions. One of the functions is to show a map of the entire world that displays markers on locations of current climate events like hurricanes, wildfires, etc. The other function allows users to submit their phone numbers to us, which subscribes the user to regular SMS updates through Twilio if there are any dangerous climate events in their vicinity. This SMS update is sent regardless of whether the user has the app open or not, allowing users to be sure that they will get the latest updates in case of any severe or dangerous weather patterns.
## How we built it
We used Angular to build our frontend. With that, we used the Google Maps API to show the world map along with markers, with information we got from our server. The server gets this climate data from the NASA EONET API. The server also uses Twilio along with Google Firebase to allow users to sign up and receive text message updates about severe climate events in their vicinity (within 50km).
## Challenges we ran into
For the front end, one of the biggest challenges was the markers on the map. Not only, did we need to place markers on many different climate event locations, but we wanted the markers to have different icons based on weather events. We also wanted to be able to filter the marker types for a better user experience. For the back end, we had challenges to figure out Twilio to be able to text users, Google firebase for user sign-in, and MongoDB for database operation. Using these tools was a challenge at first because this was our first time using these tools. We also ran into problems trying to accurately calculate a user's vicinity to current events due to the complex nature of geographical math, but after a lot of number crunching, and the use of a helpful library, we were accurately able to determine if any given event is within 50km of a users position based solely on the coordiantes.
## Accomplishments that we're proud of
We are really proud to make an app that not only informs users but can also help them in dangerous situations. We are also proud of ourselves for finding solutions to the tough technical challenges we ran into.
## What we learned
We learned how to use all the different tools that we used for the first time while making this project. We also refined our front-end and back-end experience and knowledge.
## What's next for Natural Event Tracker
We want to perhaps make the map run faster and have more features for the user, like more information, etc. We also are interested in finding more ways to help our users stay safer during future climate events that they may experience. | ## Inspiration
Some things can only be understood through experience, and Virtual Reality is the perfect medium for providing new experiences. VR allows for complete control over vision, hearing, and perception in a virtual world, allowing our team to effectively alter the senses of immersed users. We wanted to manipulate vision and hearing in order to allow players to view life from the perspective of those with various disorders such as colorblindness, prosopagnosia, deafness, and other conditions that are difficult to accurately simulate in the real world. Our goal is to educate and expose users to the various names, effects, and natures of conditions that are difficult to fully comprehend without first-hand experience. Doing so can allow individuals to empathize with and learn from various different disorders.
## What it does
Sensory is an HTC Vive Virtual Reality experience that allows users to experiment with different disorders from Visual, Cognitive, or Auditory disorder categories. Upon selecting a specific impairment, the user is subjected to what someone with that disorder may experience, and can view more information on the disorder. Some examples include achromatopsia, a rare form of complete colorblindness, and prosopagnosia, the inability to recognize faces. Users can combine these effects, view their surroundings from new perspectives, and educate themselves on how various disorders work.
## How we built it
We built Sensory using the Unity Game Engine, the C# Programming Language, and the HTC Vive. We imported a rare few models from the Unity Asset Store (All free!)
## Challenges we ran into
We chose this project because we hadn't experimented much with visual and audio effects in Unity and in VR before. Our team has done tons of VR, but never really dealt with any camera effects or postprocessing. As a result, there are many paths we attempted that ultimately led to failure (and lots of wasted time). For example, we wanted to make it so that users could only hear out of one ear - but after enough searching, we discovered it's very difficult to do this in Unity, and would've been much easier in a custom engine. As a result, we explored many aspects of Unity we'd never previously encountered in an attempt to change lots of effects.
## What's next for Sensory
There's still many more disorders we want to implement, and many categories we could potentially add. We envision this people a central hub for users, doctors, professionals, or patients to experience different disorders. Right now, it's primarily a tool for experimentation, but in the future it could be used for empathy, awareness, education and health. | partial |

## Inspiration
Social anxiety affects hundreds of thousands of people and can negatively impact social interaction and mental health. Around campuses and schools, we were inspired by bulletin boards with encouraging anonymous messages, and we felt that these anonymous message boards were an inspiring source of humanity. With Bulletin, we aim to bring this public yet anonymous way of spreading words of wisdom to as many people as possible. Previous studies have even shown that online interaction decreased social anxiety in people with high levels of anxiety or depression.
## What it does
Bulletin is a website for posting anonymous messages. Bulletin's various boards are virtual reality spaces for users to enter messages. Bulletin uses speech-to-text to create a sense of community within the platform, as everything you see has been spoken by other users. To ensure anonymity, Bulletin does not store any of its users data, and only stores a number of recent messages. Bulletin uses language libraries to detect and filter negative words and profanity. To try Bulletin (<https://bulletinvr.online>), simply enter one of the bulletin boards and double tap or press the enter key to start recording your message.

## What is WebVR?
WebVR, or Web-based virtual reality, allows users to experience a VR environment within a web browser. As a WebVR app, Bulletin can also be accessed on the Oculus Rift, Oculus Go, HTC Vive, Windows Mixed Reality, Samsung Gear VR, Google Cardboard, and your computer or mobile device. As the only limit is having an internet connection, Bulletin is available to all and seeks to bring people together through the power of simple messages.
## How we built it
We use the A-Frame JavaScript framework to create WebVR experiences. Voice recognition is handled with the HTML Speech Recognition API.
The back-end service is written in Python. Our JS scripts use AJAX to make requests to the Flask-powered server, which queries the database and returns the messages that the WebVR front-end should display. When the user submits a message, we run it through the Python `fuzzy-wuzzy` library, which uses the Levenshtein metric to make sure it is appropriate and then save it to the database.
## Challenges we ran into
**Integrating A-Frame with our back-end was difficult**. A-Frame is simple of itself to create very basic WebVR scenes, but creating custom JavaScript components which would communicate with the Flask back-end proved time-consuming. In addition, many of the community components we tried to integrate, such as an [input mapping component](https://github.com/fernandojsg/aframe-input-mapping-component), were outdated and had badly-documented code and installation instructions. Kevin and Hamilton had to resort to reading GitHub issues and pull requests to get some features of Bulletin to work properly.
## Accomplishments that we're proud of
We are extremely proud of our website and how our WebVR environment turned out. It's exceeded all expectations, and features such as multiple bulletin boards and recording by voice were never initially planned, but work consistently well. Integrating the back-end with the VR front-end took time, but was extremely satisfying; when a user sends a message, other users will near-instantaneously see their bulletin update.
We are also proud of using a client-side speech to text service, which improves security and reduces website bandwith and allows for access via poor internet connection speeds.
Overall, we're all proud of building an awesome website.
## What we learned
Hamilton learned about the A-Frame JavaScript library (and JavaScript itself), which he had no experience with previously. He developed the math involved with rendering text in the WebVR environment.
Mykyta and Kevin learned how to use the HTML speech to text API and integrate the WebVR scenes with the AJAX server output.
Brandon learned to use the Google App Engine to host website back-ends, and learned about general web deployment.
## What's next for Bulletin
We want to add more boards to Bulletin, and expand possible media to also allowing images to be sent. We're looking into more sophisticated language libraries to try and better block out hate speech.
Ultimately, we would like to create an adaptable framework to allow for anyone to include a private Bulletin board in their own website. | ## Inspiration
We wanted to create a device that ease the life of people who have disabilities and with AR becoming mainstream it would only be proper to create it.
## What it does
Our AR Headset converts speech to text and then displays it realtime on the monitor to allow the user to read what the other person is telling them making it easier for the first user as he longer has to read lips to communicate with other people
## How we built it
We used IBM Watson API in order to convert speech to text
## Challenges we ran into
We have attempted to setup our system using the Microsoft's Cortana and the available API but after struggling to get the libraries ti work we had to resort to using an alternative method
## Accomplishments that we're proud of
Being able to use the IBM Watson and unity to create a working prototype using the Kinect as the Web Camera and the Oculus rift as the headset thus creating an AR headset
## What we learned
## What's next for Hear Again
We want to make the UI better, improve the speed to text recognition and transfer our project over to the Microsoft Holo Lens for the most nonintrusive experience. | ## Inspiration
The idea for VenTalk originated from an everyday stressor that everyone on our team could relate to; commuting alone to and from class during the school year. After a stressful work or school day, we want to let out all our feelings and thoughts, but do not want to alarm or disturb our loved ones. Releasing built-up emotional tension is a highly effective form of self-care, but many people stay quiet as not to become a burden on those around them. Over time, this takes a toll on one’s well being, so we decided to tackle this issue in a creative yet simple way.
## What it does
VenTalk allows users to either chat with another user or request urgent mental health assistance. Based on their choice, they input how they are feeling on a mental health scale, or some topics they want to discuss with their paired user. The app searches for keywords and similarities to match 2 users who are looking to have a similar conversation. VenTalk is completely anonymous and thus guilt-free, and chats are permanently deleted once both users have left the conversation. This allows users to get any stressors from their day off their chest and rejuvenate their bodies and minds, while still connecting with others.
## How we built it
We began with building a framework in React Native and using Figma to design a clean, user-friendly app layout. After this, we wrote an algorithm that could detect common words from the user inputs, and finally pair up two users in the queue to start messaging. Then we integrated, tested, and refined how the app worked.
## Challenges we ran into
One of the biggest challenges we faced was learning how to interact with APIs and cloud programs. We had a lot of issues getting a reliable response from the web API we wanted to use, and a lot of requests just returned CORS errors. After some determination and a lot of hard work we finally got the API working with Axios.
## Accomplishments that we're proud of
In addition to the original plan for just messaging, we added a Helpful Hotline page with emergency mental health resources, in case a user is seeking professional help. We believe that since this app will be used when people are not in their best state of minds, it's a good idea to have some resources available to them.
## What we learned
Something we got to learn more about was the impact of user interface on the mood of the user, and how different shades and colours are connotated with emotions. We also discovered that having team members from different schools and programs creates a unique, dynamic atmosphere and a great final result!
## What's next for VenTalk
There are many potential next steps for VenTalk. We are going to continue developing the app, making it compatible with iOS, and maybe even a webapp version. We also want to add more personal features, such as a personal locker of stuff that makes you happy (such as a playlist, a subreddit or a netflix series). | partial |
## Inspiration
I'm lazy and voice recognition / nlp continues to blow my mind with its accuracy.
## What it does
Using Voice recognition and Natural Language Processing you can talk to your browser and it will do your bidding, no hands required!
I also built in "Demonstration" so if ever the AI doesn't do what you want you can give it a sample command and the Demonstrate what to click on / type while the bot watches! All of these training demonstrations get added to a centralized database so that everyone together makes the bot smarter!
## How I built it
Chrome Extension, Nuance APIs MIX.NLU and Voice Recognition, Angular JS, Firebase
## Challenges I ran into
Nuance API took a little while to figure out, also sending inputs into the browser on the right elements is tricky.
## Accomplishments that I'm proud of
Making is all work together and in such a short time! :D
## What I learned
## What's next for AI-Browser
I want to take the time to properly implement the training portion | ## Inspiration
We were inspired by the recent interest of many companies in drone delivery and drone search. In particular, we wanted to bring drone abilities to the consumer – and we ended up doing even more.
There are many applications that can stem from our work, from search and rescue missions, drone delivery, or just finding your keys. In addition, we’ve brought the ability for a consumer to, with just their voice, train a classifier for object recognition.
[Voice Controlled Delivery Drone](https://youtu.be/8HKiQQVDcKQ)
[Real Time Search Drone](https://www.youtube.com/watch?v=CjqaV1Kw308)
## What it does
We build a pipeline that allows anyone to visually search for objects using a drone and simplified computer vision and machine learning.
It consists of mainly 3 parts:
1) A search drone (controlled normally with your phone) that performs image classification in real time for a given object
2) Being able to train an image classifier on any object by just using your voice.
3) A voice-controlled drone that can perform targeted delivery
## How we built it
We used an Amazon Echo to handle voice input, and the transcribed input was sent to a AWS Lambda server. Depending on the text, it was classified into one of several categories (such as commands). This server updated a Firebase database with the appropriate commands/information. Next, our local computers were notified whenever the database changed, and executed appropriate commands -- whether that be train an image classifier or fly a drone.
To get the non-programmable drone to become a search drone, we had it live stream its video feed to an Android phone, and we had a script that constantly took screenshots of the Android phone and stored them on our computer. Then we could use this images either for training data or to classify them in real time, using image segmentation and IBM Watson.
To train a classifier with only your voice, we would take the search term and use the Bing Search API to get images associated with that term. This served as the training data. We would then feed this training data into IBM Watson to build a classifier. This classifier could later be used for the search drone. All the consumer had to do was use their voice to do computer vision -- we took care of getting the data and applying the machine learning applications.
## Challenges we ran into
We are working with sandboxed technologies meant for the average consumer – but that are not developer friendly. It wasn't possible to take pictures or move the drone programmatically. We had to hack creative ways to enable this new capabilities for technologies, such as the screenshot pulling described above.
Additionally, the stack of coordinating communication with Alexa’s server, databases, and sending commands to the drone was quite a relay.
## Accomplishments that we're proud of
-Being able to create a super consumer-friendly way of training image classifiers.
-Taking a non-programmable drone and being able to hack with it still
-Being able to do voice control in general!
## What we learned
Hardware is finnicky.
## What's next for Recon
Even more precise control of the drone as well as potentially control over multiple drones. | ## Inspiration
Long waiting queues! Who would like to speed up processes that take forevvveeerrrrrrrrrr....
We were imagining to ourselves:
* What is the single-handed most annoying thing that everyone has to deal with from time-to-time?
* Would speeding up the process make it more efficient for others?
* How important will it be for the person to make this process speedy?
* Who would this benefit? Both the person and organizations/companies related to them?
WE SCREAMED AT THE SAME TIME "ALLHEALTH". Not really, but you get the gist. This is when we came up with an idea
## What it does
Our app is an AI-powered diagnostic tool that collects multi-modal user input—such as images, audio, and text—and processes this data to offer potential health diagnoses and advice. It’s designed to work like a virtual medical assistant, allowing users to input symptoms through a conversational interface and receive quick diagnostic feedback.
We have a family built in functionality to accommodate multiple users on one app. It also helps doctors to quickly access data from users if users wish to share to particular hospitals, providing invaluable data sets and easy response from doctors.
## How we built it
## Frontend
The frontend is built using React, offering a dynamic and responsive user interface where users can interact with a chatbot-like system. Users can submit data such as:
Audio files (e.g., for cough detection) via the HTML file input and conversion to base64 format.
Image files (e.g., to analyze visual injuries or conditions like rashes, bruises) using a similar image upload mechanism.
Text-based responses to questions about symptoms and conditions.
A styled-components library is used to create customizable and adaptive UI elements, such as a body-part selector that highlights affected areas dynamically.
Backend
The backend is powered by Flask, a lightweight Python web framework, that handles:
Processing user inputs: audio, images, and text are analyzed using separate AI models.
The communication between the React frontend and backend via API routes.
Librosa and Pydub are used for audio processing. These libraries convert audio files into frequency and decibel information, which are then analyzed to detect abnormalities (e.g., cough sound analysis).
For image analysis, the app uses a pre-trained Convolutional Neural Network (CNN) model to classify injuries based on visual input. The images are decoded and processed as inputs to the CNN.
The text-based input relies on a combination of sentiment analysis and natural language processing (NLP) models. We use the OpenAI API to analyze the text and match it with relevant diagnostic outcomes.
Data Handling
Images: After receiving base64-encoded image data from the frontend, the backend decodes and pre-processes the images, which are passed through the CNN for classification.
Audio: The base64-encoded audio files are converted back into waveform data, and features such as decibel levels and frequencies are extracted to detect patterns related to specific ailments (e.g., cough detection).
Text: User text inputs are analyzed through sentiment analysis models to evaluate the severity of symptoms, and NLP models match these symptoms with known conditions.
Diagnosis and Confidence Scores
The app provides a list of potential diagnoses with confidence scores based on the user’s input data. The backend leverages machine learning models to assess the likelihood of different conditions and offers suggestions for treatment or further medical consultation.
Tech Stack
Frontend: React, styled-components, HTML5, CSS3, JavaScript
Backend: Flask (Python)
Audio processing: Pydub, Librosa
Image processing: Pre-trained CNN for classification, PyTorch, Tensor Flow
Text processing: OpenAI API for NLP and sentiment analysis
Data transmission: RESTful APIs using Axios for communication between React frontend and Flask backend
## Challenges we ran into
With this being our first ever hackathon, we faced several challenges, from time management to technical hurdles. One of the biggest obstacles was integrating multiple technologies—such as audio processing with Pydub, image classification with pre-trained models, and using the OpenAI API for NLP—into a cohesive app. We also encountered difficulties in ensuring seamless communication between the React frontend and Flask backend, especially when handling large files like audio and images. Debugging these real-time interactions and ensuring cross-browser compatibility added additional complexity, but we persisted and learned a lot throughout the process.
## Accomplishments that we're proud of
This hackathon was both of our first, and the fact that as duo, we were able to come out with a working product, that we believe has the potential to become something much bigger than just a hackathon submission is an accomplishment in itself. From waking up 13 hours late into the competition to finishing our final working demo only 30 minutes before the submission, we're proud to announce the release of Doctor Doctor. We're proud to have accomplished a convoluted model integrated into a chatgpt wrapper with a touch a Fourier on the side... We're proud to have created a front end which is stylish and cool and wow.... Lastly we're proud to have made a tool that may one contribute to making the world a better place (Ooo dramatic)
## What we learned
Nothing... we're the best(Lol jk). In all seriousness, we've learned quite a bit over the last 36 hours:
1) Don't oversleep at the start of the competition(take shifts if need be)
2) Maybe have a team of 4 damn it was rough with only 2 people
3) Come to the competition with some level of understanding of whos doing what
4) General new technical concepts such as: random-forest, most of react cause wow leetcode != building a product from scratch, the fundamentals of working with and storing data to use at a later date, and a lot of css concepts I never though I'd stuggle so much on.
Okay, for real we really did learn:
* We learned how to collaborate as a team and be good with version control like GitHub (HATE MERGE CONFLICTS)
* Learning how to seamlessly integrate a tech stack while still including ML models was definitely a hassle, but a rewarding one
* Using React and Flask for the first time to create a project was a tough hurdle, but we passed it
* Scraping APIs from the most random resources and learning how to use them effectively felt like discovering fire
## What's next for Doctor Doctor
Doctor Doctor has the potential to shake up healthcare in a big way, especially for hospitals and underserved regions. Imagine reducing those crazy long waiting times at hospitals by using quick, accurate assessments to figure out who needs urgent care first. Plus, it could bring AI-powered medical knowledge to places that really need it, like third-world countries where access to doctors is limited. This app could help doctors with initial diagnoses, give personalized health advice, and empower people to take control of their health. The idea is to make healthcare more accessible, especially for those who usually get left out, and help ease the load on already overwhelmed systems. It’s all about making healthcare faster, smarter, and available to everyone who needs it. Sponsorships from hospitals and markets could improve the drug market and make it much more accessible. | partial |
## Inspiration
The inspiration for InstaPresent came from our frustration with constantly having to create presentations for class and being inspired by the 'in-game advertising' episode on Silicon Valley.
## What it does
InstaPresent is a tool that uses your computer's microphone to generate a presentation in real-time. It can retrieve images and graphs and summarize your words into bullet points.
## How we built it
We used Google's Speech-to-Text API to process audio from the laptop's microphone. Speech is transcribed as the user talks, and when they stop speaking, the aggregated text is sent to the server via WebSockets to be processed.
## Challenges we ran into
Summarizing text into bullet points was a particularly difficult challenge as there are not many resources available for this task. We ended up developing our own pipeline for bullet-point generation based on part-of-speech and dependency analysis. We also had plans to create an Android app for InstaPresent, but were unable to do so due to limited team members and time constraints. Despite these challenges, we enjoyed the opportunity to work on this project.
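The write-up does not include the pipeline itself, so the snippet below is only a sketch of the general idea using spaCy (an assumed stand-in for the team's own part-of-speech and dependency analysis): keep each sentence's root verb plus its subject and key arguments, and drop everything else.

```python
import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

KEEP_DEPS = {"nsubj", "nsubjpass", "dobj", "pobj", "attr", "acomp"}

def to_bullets(text: str) -> list[str]:
    """Compress each sentence to its subject, root verb, and key arguments."""
    bullets = []
    for sent in nlp(text).sents:
        root = sent.root  # main verb of the sentence
        kept = [tok for tok in sent
                if tok is root
                or tok.dep_ in KEEP_DEPS
                or (tok.head is root and tok.dep_ == "neg")]
        kept = sorted(set(kept), key=lambda tok: tok.i)
        if kept:
            bullets.append(" ".join(tok.text for tok in kept))
    return bullets

print(to_bullets("The mitochondria is the powerhouse of the cell. "
                 "Photosynthesis converts sunlight into chemical energy."))
```

The output is deliberately telegraphic; a real pipeline would also reorder and inflect the kept tokens, which is the word-reordering work mentioned in the roadmap below.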
## Accomplishments that we're proud of
We are proud of creating a web application that utilizes a variety of machine learning and non-machine learning techniques. We also enjoyed the challenge of working on an unsolved machine learning problem (sentence simplification) and being able to perform real-time text analysis to determine new elements.
## What's next for InstaPresent
In the future, we hope to improve InstaPresent by predicting what the user intends to say next and improving the text summarization with word reordering. | ## Check it out on GitHub!
The machine learning and web app segments are split into 2 different branches. Make sure to switch to these branches to see the source code! You can view the repository [here](https://github.com/SuddenlyBananas/be-right-back/).
## Inspiration
Inspired in part by the Black Mirror episode of the same title (though we had similar thoughts before we made the connection).
## What it does
The goal of the project is to be able to talk to a neural net simulation of your Facebook friends you've had conversations with. It uses a standard base model and customizes it based on message upload input. However, we ran into some struggles that prevented the full achievement of this goal.
The user downloads their message history data and uploads it to the site. Then, they can theoretically ask the bot to emulate one of their friends and the bot customizes the neural net model to fit the friend in question.
## How we built it
TensorFlow for the machine learning aspect, Node.js and HTML5 for the data-managing website, and Python for data scraping. Users can interact with the data through a Facebook Messenger chat bot.
## Challenges we ran into
AWS wouldn't let us rent a GPU-based EC2 instance, and Azure didn't turn up anything for us either. Thus, training took much longer than expected.
In fact, we had to run back to an apartment at 5 AM to try to run it on a desktop with a GPU... which didn't end up working (as we found out when we got back half an hour after starting the training set).
The Facebook API proved to be more complex than expected, especially negotiating the 2 different user IDs assigned to Facebook and Messenger user accounts.
## Accomplishments that we're proud of
Getting a mostly functional machine learning model that can be interacted with live via a Facebook Messenger Chat Bot.
## What we learned
Communication between many different components of the app; specifically the machine learning server, data parsing script, web server, and Facebook app.
## What's next for Be Right Back
We would like to fully realize the goals of this project by training the model on a bigger data set and allowing more customization to specific users. | ## Where we got the spark?
**No one is born without talents**.
Many of us faced this situation in our childhood: no one gets a chance to reveal their skills or to be guided on their ideas. Some talents are buried without proper guidance, and we often don't even have mates to talk to while developing our skills in the respective field. Even in college, beginners have trouble with implementation. So we started working on a solution to help others who find themselves in this same crisis.
## How it works?
**Connect with neuron of your same kind**
Starting from the problem we faced, we bridge bloomers in each field with experts and with people in the same field who need a teammate (or a friend) to develop the idea alongside. With guidance from experts and experienced professors, they can become aware of the resources needed to grow in that field.
Users can also connect with people from all over the globe using a language translator, which makes everyone feel native.
## How we built it
**1. Problem analysis:**
We looked through problems in the field of education from all over the globe, came across several of them, and chose one whose solution addresses several problems at once.
**2. Idea development:**
We examined the problems, the missing features, and the existing solutions for the topic we chose, resolved as many open questions as possible, and developed the idea as far as we could.
**3. Prototype development:**
We developed a working prototype and gained good experience building it.
## Challenges we ran into
Our plan is to get our application to every bloomer and expert, but what will make them join our community? It will be hard to convince them that our application will help them learn new things.
## Accomplishments that we're proud of
The jobs that are popular today may or may not be popular in 10 years. The world always looks for a better version of its current self. We are satisfied that our idea will help hundreds of children like us who don't yet know about the new things in today's world. Our application may help them learn these things earlier than usual, which may help them follow a path in their own interest. We are proud to be part of their development.
## What we learned
We learnt that many people suffer from a lack of help with their ideas and projects, and we felt helpless when we learnt this. So we planned to build a web application that helps them with their project or idea, together with experts and peers of their own kind. In short, **guidance is important. No one is born a pro.**
We also learnt how to help people understand new things based on their interests by guiding them along the path of their dreams.
## What's next for EXPERTISE WITH
We're planning to advertise about our web application through all social medias and help all the people who are not able to get help for development their idea/project and implement from all over the world. to the world. | winning |
## What it does
Think "virtual vision stick on steroids"! It is a wearable device that AUDIBLY provides visually impaired people with information on the objects in front of them as well as their proximity.
## How we built it
We used computer vision from Python and OpenCV to recognize objects such as "chair" and "person" and then we used an Arduino to interface with an ultrasonic sensor to receive distance data in REAL TIME. On top of that, the sensor was mounted on a servo motor, connected to a joystick so the user can control where the sensor scans in their field of vision.
## Challenges we ran into
The biggest challenge we ran into was integrating the ultrasonic sensor data from the Arduino with the OpenCV live object-detection data. This is because we had to grab data from the Arduino (whose code is in C++) and use it in our OpenCV program (written in Python). We solved this by using PySerial and calling our friends Phoebe, Simon, Ryan, and Olivia from the Anti Anti Masker Mask project for help!
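A minimal sketch of that PySerial bridge, assuming the Arduino simply prints one distance reading per line; the port name and message format are assumptions for illustration, not the team's actual code.

```python
import cv2
import serial

# Port name and the "<distance_cm>\n" message format are assumptions.
arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=0.1)
cap = cv2.VideoCapture(0)

def read_distance_cm():
    """Grab the most recent ultrasonic reading the Arduino printed over serial."""
    line = arduino.readline().decode(errors="ignore").strip()
    try:
        return float(line)
    except ValueError:
        return None  # no fresh reading this frame

while True:
    ok, frame = cap.read()
    if not ok:
        break
    distance = read_distance_cm()
    if distance is not None:
        cv2.putText(frame, f"{distance:.0f} cm", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    # ... run the object-detection model on `frame` here and speak the result ...
    cv2.imshow("All Eyez On Me", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```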
## Accomplishments that we're proud of
Using hardware and computer vision for the first time!
## What we learned
How to interface with hardware, work as a team, and be flexible (we changed our idea and mechanisms like 5 times).
## What's next for All Eyez On Me
Refine our design so it's more STYLISH :D | ## Inspiration
The idea was to help people who are blind discreetly gather context during social interactions and general day-to-day activities.
## What it does
The glasses take a picture and analyze it using Microsoft's, Google's, and IBM Watson's vision recognition APIs to try to understand what is happening. They then form a sentence and let the user know. There's also a neural network at play that discerns between the two dens and can tell who is in the frame.
## How I built it
We took an RPi Camera and increased the length of the cable. We then made a hole in the lens of the glasses and fit it in there. We added a touch sensor to discreetly control the camera as well.
## Challenges I ran into
The biggest challenge we ran into was Natural Language Processing, as in trying to parse together a human-sounding sentence that describes the scene.
## What I learned
I learnt a lot about the different vision APIs out there and about creating/training your own neural network.
## What's next for Let Me See
We want to further improve our analysis and reduce our analyzing time. | ## Inspiration
About 0.2 - 2% of the population suffers from deaf-blindness and many of them do not have the necessary resources to afford accessible technology. This inspired us to build a low cost tactile, braille based system that can introduce accessibility into many new situations that was previously not possible.
## What it does
We use 6 servo motors controlled by an Arduino to mimic a braille-style display, raising or lowering levers based on which character to display. By doing this twice per second, even long sentences can be transmitted to the person. All the person needs to do is put their palm on the device. We believe this method is easier to learn and comprehend, as well as way cheaper than refreshable braille displays, which usually cost more than $5,000 on average.
## How we built it
We use an Arduino, and to send commands we use PySerial, which is a Python library. To simulate the reader, we have also built a smart bot that relays information to the device; for that we used Google's Dialogflow.
We believe that the production cost of this MVP is less than $25 so this product is commercially viable too.
## Challenges we ran into
It was a huge challenge to get the ports working with Arduino. Even with the code right, pyserial was unable to send commands to Arduino. We later realized after long hours of struggle that the key to get it to work is to give some time to the port to open and initialize. So by adding a wait of two seconds and then sending the command, we finally got it to work.
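A minimal PySerial sketch of that fix, plus one way the six-lever braille patterns could be sent: the two-second wait is the point, while the port name, braille table, and one-byte-per-character protocol are assumptions rather than the team's actual firmware contract.

```python
import time

import serial

# 6-dot braille cells encoded as bits (dot 1 = least significant bit); a few letters shown.
BRAILLE = {"a": 0b000001, "b": 0b000011, "c": 0b001001}

def open_arduino(port="/dev/ttyACM0", baud=9600):
    ser = serial.Serial(port, baud)
    time.sleep(2)  # the key fix: give the port time to open and the Arduino to reset
    return ser

def display_text(ser, text, chars_per_second=2):
    """Send one pattern byte per character; the Arduino raises/lowers the six levers."""
    for ch in text.lower():
        pattern = BRAILLE.get(ch, 0)
        ser.write(bytes([pattern]))  # one byte per cell is an assumed protocol
        time.sleep(1 / chars_per_second)

if __name__ == "__main__":
    arduino = open_arduino()
    display_text(arduino, "abc")
```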
## Accomplishments that we're proud of
This was our first hardware hack, so pulling something like that together was a lot of fun!
## What we learned
There were a lot of things learnt, including the Arduino port problem. We learnt a lot about hardware too and how serial ports function. We also learnt about pulses and how, by sending a certain number of pulses, we are able to set the servo to a particular position.
## What's next for AddAbility
We plan to extend this to other businesses by promoting it. Many kiosks and ATMs can be integrated with this device at a very low cost and this would allow even more inclusion in the society. We also plan to reduce the prototype size by using smaller motors and using steppers to move the braille dots up and down. This is believed to further bring the cost down to around $15. | winning |
## Inspiration and what it does
I understand that medical conditions for some people can already be a pain in their daily life. What doesn't make it better is when you have to be extra cautious while shopping since some ingredients may have an adverse reaction due to your medical condition.
That’s why I created CareScan. Just hold your camera to the ingredients list of the product and we'll tell you if it's okay to buy.
## How it works
When you scan the ingredient list of a product, the app uses computer vision to parse out the ingredients and an AI-driven algorithm to search the web for the overall sentiment about each ingredient and its effects on your medical condition. Depending on this sentiment, the different ingredients are highlighted using augmented reality in real time: the ingredients that are bad for you are highlighted red and the good ones are highlighted green. The app processes the entire analysis and gives the product a score out of 100, to make it easy for the user to compare between products.
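One illustrative way to roll per-ingredient sentiment into that 0 to 100 score; the weighting here is an assumption, not the app's actual formula.

```python
def product_score(ingredient_sentiments: dict[str, float]) -> int:
    """Map per-ingredient sentiment in [-1, 1] (bad..good) to a 0-100 product score.

    A single strongly negative ingredient should drag the score down more than
    several mildly positive ones lift it, so negatives are weighted more heavily.
    """
    if not ingredient_sentiments:
        return 50  # nothing recognised: neutral score
    total, weight = 0.0, 0.0
    for sentiment in ingredient_sentiments.values():
        w = 2.0 if sentiment < 0 else 1.0
        total += w * sentiment
        weight += w
    avg = total / weight               # still in [-1, 1]
    return round((avg + 1) / 2 * 100)  # rescale to [0, 100]

print(product_score({"sugar": -0.4, "oat fibre": 0.6, "peanut": -0.9}))  # -> 30
```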
## What I learned and some improvements
I've learned a lot about app design and what does and doesn't make a good-looking app. This project has definitely inspired me to look more into the entire process of app development! Something that will eventually need to be added to the app is a way to take into account how much of a specific ingredient is being used because right now, the app cannot tell the difference between an ingredient that is 99% of a product and 1% of a product. | ## Inspiration
Many of us, including our peers, struggle to decide what to cook. We usually have a fridge full of items but are not sure what exactly to make with them. This leads us to eat out or buy even more groceries just to follow a recipe.
* We want to be able to use what we have
* Reduce our waste
* Get new and easy ideas
## What it does
The user first takes a picture of the items in their fridge. They can then upload the image to our application. Using computer vision technology, we detect exactly which items are present in the picture (their fridge). After obtaining the list of ingredients the user has in their fridge, this data is passed along and processed against a database of 1000 quick and easy recipes.
## How we built it
* We designed the mobile and desktop website using Figma
* The website was developed using JavaScript and node.js
* We use Google Cloud Vision API to detect items in the picture
* This list of items is then processed against a database of recipes
* Best matching recipes are returned to the user
## Challenges we ran into
We ran through a lot of difficulties and challenges while building this web app most of which we were able to overcome with help from each other and learning on the fly.
The first challenge we ran into was building and training a machine learning model to apply multi-class object detection to the images the user inputs. This is tricky, as there is no proper dataset containing images of vegetables, fruits, meats, condiments, and other grocery items all together. After various experiments with our own from-scratch models, we tried multiple pre-existing models and tools for our case, and found that the Google Cloud Vision API did the best job of everything available. Thus, we settled on Google Cloud Vision and currently use its API in our prototype.
The second challenge was getting the correct recipes according to the data received from the artificial intelligence. We are using a database of 1000 recipes and set a threshold for the minimum number of items that need to match (ingredients the user has versus ingredients the recipe requires). Our assumption is that the user already has basic ingredients such as salt, pepper, butter, oil, etc.
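As a rough sketch of that matching rule (shown in Python for brevity even though the project's logic lives in its JavaScript backend, with made-up recipe data):

```python
STAPLES = {"salt", "pepper", "butter", "oil", "water"}

def matching_recipes(detected_items, recipes, min_matches=3):
    """Return recipes whose required ingredients overlap enough with what was detected."""
    pantry = {item.lower() for item in detected_items} | STAPLES
    results = []
    for recipe in recipes:
        required = {ing.lower() for ing in recipe["ingredients"]}
        matched = required & pantry
        if len(matched) >= min_matches:
            results.append((recipe["name"], len(required - pantry)))  # name + missing count
    # Fewest missing ingredients first.
    return sorted(results, key=lambda pair: pair[1])

recipes = [{"name": "Veggie stir fry",
            "ingredients": ["broccoli", "carrot", "soy sauce", "oil", "garlic"]}]
print(matching_recipes(["Broccoli", "Carrot", "Garlic"], recipes))
# -> [('Veggie stir fry', 1)]
```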
## Accomplishments that we're proud of
* Coming up with an idea that solves a problem every member of our team and many peers we interviewed face
* Using modern artificial intelligence to solve a major part of our problem (detecting ingredients/groceries) from a given image
* Designing a very good-looking and user-friendly UI with an excellent user experience (quick and easy)
## What we learned
Each team member learned a new or enhanced a current skill during this hackathon which is what we were here for. We learned to use newer tools, such as google cloud, figma, others to streamline our product development.
## What's next for Xcellent Recipes
**We truly believe in our product and its usefulness for customers. We will continue working on Xcellent Recipes with a product launch in the future. The next steps include:**
1. Establishing a backend server
2. Create or obtain our own data for training a ML model for our use case
3. Fine tune recipes
4. Company Launch | ## Inspiration
Almost 2.2 million tonnes of edible food is discarded each year in Canada alone, resulting in over 17 billion dollars in waste. A significant portion of this is due to the simple fact that keeping track of expiry dates for the wide range of groceries we buy is, put simply, a huge task. While brainstorming ideas to automate the management of these expiry dates, discussion came to the increasingly outdated usage of barcodes for Universal Product Codes (UPCs); when the largest QR codes can store [thousands of characters](https://stackoverflow.com/questions/12764334/qr-code-max-char-length), why use so much space for a 12 digit number?
By building upon existing standards and the everyday technology in our pockets, we're proud to present **poBop**: our answer to food waste in homes.
## What it does
Users are able to scan the barcodes on their products, and enter the expiration date written on the packaging. This information is securely stored under their account, which keeps track of what the user has in their pantry. When products have expired, the app triggers a notification. As a proof of concept, we have also made several QR codes which show how the same UPC codes can be encoded alongside expiration dates in a similar amount of space, simplifying this scanning process.
In addition to this expiration date tracking, the app is also able to recommend recipes based on what is currently in the user's pantry. In the event that no recipes are possible with the provided list, it will instead recommend recipes in which has the least number of missing ingredients.
## How we built it
The UI was made with native Android, with the exception of the scanning view, which made use of the [code scanner library](https://github.com/yuriy-budiyev/code-scanner).
Storage and hashing/authentication were taken care of by [MongoDB](https://www.mongodb.com/) and [Bcrypt](https://github.com/pyca/bcrypt/) respectively.
Finally in regards to food, we used [Buycott](https://www.buycott.com/)'s API for UPC lookup and [Spoonacular](https://spoonacular.com/) to look for valid recipes.
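For illustration, a minimal Flask endpoint in the spirit of that setup might forward the pantry list to Spoonacular's findByIngredients search; the parameter and field names follow Spoonacular's public docs as best recalled, so treat them as assumptions, and the API key is a placeholder.

```python
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
SPOONACULAR_KEY = os.environ.get("SPOONACULAR_KEY", "your-api-key-here")

@app.route("/recipes", methods=["POST"])
def recipes_from_pantry():
    """Suggest recipes for the ingredients currently in the user's pantry."""
    ingredients = request.get_json().get("ingredients", [])
    resp = requests.get(
        "https://api.spoonacular.com/recipes/findByIngredients",
        params={
            "ingredients": ",".join(ingredients),
            "number": 5,
            "ranking": 2,  # prefer recipes that minimise missing ingredients
            "apiKey": SPOONACULAR_KEY,
        },
        timeout=10,
    )
    resp.raise_for_status()
    slim = [{"title": r["title"], "missed": r["missedIngredientCount"]}
            for r in resp.json()]
    return jsonify(slim)

if __name__ == "__main__":
    app.run(debug=True)
```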
## Challenges we ran into
As there is no official API or publicly accessible database for UPCs, we had to test and compare multiple APIs before determining Buycott had the best support for Canadian markets. This caused some issues, as a lot of processing was needed to connect product information between the two food APIs.
Additionally, our decision to completely segregate the Flask server and apps occasionally resulted in delays when prioritizing which endpoints should be written first, how they should be structured etc.
## Accomplishments that we're proud of
We're very proud that we were able to use technology to do something about an issue that not only bothers everyone on our team on a frequent basis (something something cooking is hard) but also has large-scale impacts at the macro level. Some of our members were also very new to working with REST APIs, but coded well nonetheless.
## What we learned
Flask is a very easy to use Python library for creating endpoints and working with REST APIs. We learned to work with endpoints and web requests using Java/Android. We also learned how to use local databases on Android applications.
## What's next for poBop
We thought of a number of features that unfortunately didn't make the final cut. Things like tracking nutrition of the items you have stored and consumed. By tracking nutrition information, it could also act as a nutrient planner for the day. The application may have also included a shopping list feature, where you can quickly add items you are missing from a recipe or use it to help track nutrition for your upcoming weeks.
We were also hoping to allow the user to add more details to the items they are storing, such as notes for what they plan on using it for or the quantity of items.
Some smaller features that didn't quite make it included getting a notification when the expiry date is getting close, and data sharing for people sharing a household. We were also thinking about creating a web application as well, so that it would be more widely available.
Finally, we strongly encourage you, if you are able and willing, to consider donating to food waste reduction initiatives and hunger elimination charities.
One of the many ways to get started can be found here:
<https://rescuefood.ca/>
<https://secondharvest.ca/>
<https://www.cityharvest.org/>
# Love,
# FSq x ANMOL | losing |
## Inspiration
Being students who are always on the move between internships and school terms, finding good housing for the semester has always been a tough task. We wanted to streamline the process and minimize the hassle, allowing users to have a more pleasant experience finding their next home.
## What it does
The user signs up for Housr and enters criteria for their ideal housing; Housr then scrapes Kijiji (the Canadian Craigslist). It also looks at the images linked to postings and uses a computer vision model based on GoogleNet Inception V2 to generate an overall rating for each posting.
## How we built it
Housr is built on Node.js with a native, boot-strapped front end; authentication is handled by Firebase, and document signing is handled with the DocuSign APIs. The machine learning model was created in TensorFlow Keras, using image augmentation on a pretrained model to train it to recognize apartment quality.
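A minimal sketch of that transfer-learning setup in TensorFlow Keras: keras.applications does not ship "GoogleNet Inception V2" directly, so InceptionV3 is used here as a stand-in, and the directory layout and three quality classes are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (299, 299)

# Assumed layout: photos/<rating_class>/*.jpg, e.g. classes "poor", "ok", "great".
train_ds = tf.keras.utils.image_dataset_from_directory(
    "photos", image_size=IMG_SIZE, batch_size=32)

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
])

base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                         input_shape=IMG_SIZE + (3,))
base.trainable = False  # keep the pretrained features frozen

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)
x = tf.keras.applications.inception_v3.preprocess_input(x)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(3, activation="softmax")(x)  # 3 assumed quality classes

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```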
## Challenges we ran into
The DocuSign APIs were hard to set up and use. The online documentation was extremely vague and misleading, with code samples that are outdated and do not work. In addition, finding a good model to analyze pictures was very difficult. We initially had our own deep convolutional model in Keras, but our hardware was not good enough to make it sufficiently complex, so this model did not achieve the necessary accuracy.
## Accomplishments that we're proud of
The model using Inception V2 achieved much better accuracy than the convolutional model
## What we learned
## What's next for housr | ## Inspiration: We were looking for housing around our college when we thought of this idea.
## What does it do? An interactive map suggests housing according to the office/college location. It shows the time to travel and the approximate cost of commuting via Driving.
## How we built it? NPM, React.js, and several Google Maps APIs.
## Challenges we ran into: Dynamically populating housing options after every search. Optimize the housing locater algorithm to suggest better housing around the campus and office.
## Accomplishments that we're proud of: Being able to successfully code an application that directly caters to the need of several people.
## What we learned: We learned a lot about NPM, React, and Google Maps APIs.
## What's next for KnowMyHousing: Optimizing the housing locater algorithm to suggest housing based on availability and other filters we plan to add. | ## Inspiration
Living in the big city, we're often conflicted between the desire to get more involved in our communities, with the effort to minimize the bombardment of information we encounter on a daily basis. NoteThisBoard aims to bring the user closer to a happy medium by allowing them to maximize their insights in a glance. This application enables the user to take a photo of a noticeboard filled with posters, and, after specifying their preferences, select the events that are predicted to be of highest relevance to them.
## What it does
Our application uses computer vision and natural language processing to filter the information on any notice board, delivering pertinent and relevant information to our users based on their selected preferences. This mobile application lets users first choose the different categories they are interested in, and then either take or upload photos, which are processed using Google Cloud APIs. The labels generated from the APIs are compared with the chosen user preferences to display only applicable postings.
## How we built it
The mobile application is made in a React Native environment with a Firebase backend. The first screen collects the categories specified by the user, which are written to Firebase once the user advances. Then, they are prompted to either upload or capture a photo of a notice board. The photo is processed using Google Cloud Vision text detection to obtain blocks of text, which are then labelled appropriately with the Google Natural Language Processing API. The categories this returns are compared to user preferences, and matches are returned to the user.
## Challenges we ran into
One of the earlier challenges encountered was properly parsing the fullTextAnnotation retrieved from Google Vision. We found that two posters whose text was aligned, despite being contrasting colours, were mistaken as being part of the same paragraph. The JSON object had many subfields, which took a while to make sense of from the terminal in order to parse it properly.
We further encountered trouble retrieving data back from Firebase as we switched from the first to the second screen in React Native, and in finding the proper method of comparing the categories to the labels before the final component is rendered. Finally, some discrepancies in loading these Google APIs in a React Native environment, as opposed to Python, limited access to certain technologies, such as ImageAnnotation.
## Accomplishments that we're proud of
We feel accomplished in having been able to use RESTful APIs with React Native for the first time. We kept energy high and incorporated two levels of intelligent processing of data, in addition to smoothly integrating the various environments, yielding a smooth experience for the user.
## What we learned
We were at most familiar with ReactJS; all other technologies were new experiences for us. Most notable were the opportunities to learn how to use Google Cloud APIs and what it entails to develop a RESTful API. Integrating Firebase with React Native exposed the nuances of passing user data between them. Non-relational database design was also a shift in perspective, and finally, deploying the app with a custom domain name taught us more about DNS protocols.
## What's next for notethisboard
Included in the fullTextAnnotation object returned by the Google Vision API were bounding boxes at various levels of granularity. The natural next step for us would be to enhance the performance and user experience of our application by annotating the images for the user manually, utilizing other Google Cloud API services to obtain background colour, enabling us to further distinguish posters on the notice board to return more reliable results. The app can also be extended to identifying logos and timings within a poster, again catering to the filters selected by the user. On another front, this app could be extended to preference-based information detection from a broader source of visual input. | losing |
## Inspiration
Selina was desperately trying to get to PennApps on Friday after she disembarked her Greyhound. Alas, she had forgotten that only Bolt Bus and Megabus end their journeys directly next to the University of Pennsylvania, so she was a full 45 minute walk away from Penn Engineering. Full of hope, she approached the SEPTA stop marked on Google Maps, but was quickly rebuffed by the lack of clear markings and options for ticket acquirement. It was dark and cold, so she figured she might as well call a $5 Lyft. But when she opened the app, she was met with the face of doom: "Poor Network Connectivity". But she had five bars! If only, she despaired as she hunted for wifi, there were some way she could book that Lyft with just a phone call.
## What it does
Users can call 1-888-970-LYFF, where an automated chatbot will guide them through the process of ordering a Lyft to their final destination. Users can simply look at the street name and number of the closest building to acquire their current location.
## How I built it
We used the Nexmo API from Vonage to handle the voice aspect, Amazon Lex to create a chatbot and parse the speech input, Amazon Lambda to implement the internal application logic, the Lyft API for obvious reasons, and Google Maps API to sanitize the locations.
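For illustration, the Lambda fulfillment glue for a Lex intent could be shaped like the sketch below; the Lex V1 event and response format follows the AWS docs as best recalled, the slot names are made up, and the Lyft call is stubbed out since it needs OAuth credentials and the sandbox.

```python
def lambda_handler(event, context):
    """Fulfil the 'RequestRide' intent once Lex has collected the slots."""
    slots = event["currentIntent"]["slots"]   # Lex V1 event shape (assumption)
    pickup = slots.get("PickupAddress")
    dropoff = slots.get("DropoffAddress")

    # In the full flow the addresses are sanitised with the Google Maps API and
    # a ride request is sent to Lyft's sandbox API with an OAuth bearer token.
    ride_id = request_lyft_ride(pickup, dropoff)

    return {
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "PlainText",
                "content": f"Your Lyft from {pickup} to {dropoff} is on its way (ride {ride_id}).",
            },
        }
    }


def request_lyft_ride(pickup, dropoff):
    # Placeholder stub; the real call needs Lyft OAuth credentials.
    return "sandbox-ride-123"
```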
## Challenges I ran into
Nexmo's code to connect the phone API to Amazon Lex was overloading the buffer, causing the bot to become unstable. We fixed this issue, submitting a pull request for Nexmo's review.
## Accomplishments that I'm proud of
We got it to work end to end!
## What I learned
How to use AWS Lambda functions, how to set up an EC2 instance, and that APIs don't always do what the documentation says they do.
## What's next for Lyff
Instead of making calls in Lyft's sandbox environment, we'll try booking a real Lyft on our phone without using the Lyft app :) Just by making a call to 1-888-970-LYFF. | ## Inspiration
*"I have an old Nokia mobile phones, that doesn't have internet access nor acess to download & install the Lyft app; How can I still get access to Lyft?"*
>
> Allow On-Demand Services Like Uber, Lyft to be more mainstream in developing world where there is limitied to no internet access. Lyft-powered SMS.
>
>
>
## What it does
>
> Have all the functionalities that a Lyft Application would have via SMS only. No wifi or any type of internet access. Functionalities include and are not limited to request a ride, set origin and destination, pay and provide review/feedback.
>
>
>
## How I built it
>
> Used Google Polymer to build the front end. For the backend we used the Lyft API to take care of rides. The location/address have been sanitize using Google Places API before it gets to the Lyft API. The database is powered by MongoDB, spun off the application using Node.js via Cloud9 cloud IDE. Finally, Twilio API which allow user/client to interface with only SMS.
>
>
>
## Challenges I ran into
>
> The Lyft API did not have a NodeJS wrapper so we had to create our own such that we were able to perform all the necessary functions needed for our project.
>
>
>
## Accomplishments that I'm proud of
>
> Our biggest accomplishment has to be that we completed all of our objectives for this project. We completed this project such that it is in a deployable state and anybody can test out the application from their own device. In addition, all of us learned new technologies such as Google Polymer, Twilio API, Lyft API, and NodeJS.
>
>
>
## What I learned
>
> Emerging markets
>
>
>
## What's next for Lyft Offline
>
> We plan to polish the application and fix any bugs found as well as get approval from Lyft to launch our application for consumers to use.
>
>
>
## Built With
* Google Polymer
* Node.js
* Express
* MongoDB
* Mongoose
* Passport
* Lyft API & Auth
* Google API & user end-points
* Twilio API | *EduQueue presents a clean and minimalistic solution to waiting in line for office hours!*
## Inspiration
Overcrowded office hours are frustrating for **both students and instructors**
Neglected students and overwhelmed instructors do not create an effective learning environment.
## What it does
* EduQueue is a webpage that **features a question form for students to submit a question based on category and follower count.**
* They are then added to the **waitlist queue,** which can be viewed in full in the queue viewer.
* The queue viewer features a **real-time clock display and a list of entries,** which are expandable for a more convenient view.
* You can **delete the question entry** in the queue with a press of a button.
* You can also **follow other questions you find interesting,** and get answers even faster.
* When your turn in the queue is up, **you will be notified.**
## How we built it
We developed this webpage through the use of **HTML, CSS, and JavaScript.** We used **HTTP** to connect our front end with the back end by transfering data. For collaboration, we used **VS Code, Repl.it, and Git** for compiling and comitting codes. **JSON** was used to save data to local storage. Additional tools for design involved **Font Awesome CSS Library, Figma and InkScape.**
## Challenges we ran into
* Joining front end and back end together as a team (we worked in 2 parts separately)
* Git commits, merge conflicts, and pulls
* Time manangement and shaping the scale of our problem that we wish to solve
## Accomplishments that we're proud of
* Active use of HTTP and application of asynchronous programming
* Clean and Dynamic UI/UX design
* Implementing notification for students who reach the front of the queue
## What we learned
* Efficient collaboration with teammates
* Intricate management of simulatneous front end and back end programming
* Understanding and utilizing local storage
* Streamlined design and implementation of UX/UI
* Effective time and resource management
## What's next for EduQueue
* **Implementation of API for school credentials log-in, mathematic notations, and/or code block markdown**
* **Use of LLM to detect similar questions and suggetions** for following questions
* Adding **option for students to yield when they are not ready** to ask questions when reached the front of the queue
* Additional **UI/UX expansion for TAs/Instructors,** so they can also use the webpage to manage their availability during office hours | partial |
## Inspiration
We wanted to bring financial literacy into being a part of your everyday life while also bringing in futuristic applications such as augmented reality to really motivate people into learning about finance and business every day. We were looking at a fintech solution that didn't look towards enabling financial information to only bankers or the investment community but also to the young and curious who can learn in an interesting way based on the products they use everyday.
## What it does
Our mobile app looks at company logos, identifies the company, and grabs the company's financial information, recent news, and financial statements, displaying the data in an augmented reality dashboard. Furthermore, we include speech recognition to better help those unfamiliar with financial jargon to save and invest.
## How we built it
Built using wikitude SDK that can handle Augmented Reality for mobile applications and uses a mix of Financial data apis with Highcharts and other charting/data visualization libraries for the dashboard.
## Challenges we ran into
Augmented reality is very hard, especially when combined with image recognition. There were many workarounds and long delays spent debugging near-unknown issues. Moreover, we were building using Android, something that none of us had prior experience with which made it harder.
## Accomplishments that we're proud of
Circumventing challenges as a professional team and maintaining a strong team atmosphere no matter what to bring something that we believe is truly cool and fun-to-use.
## What we learned
Lots of things about Augmented Reality, graphics and Android mobile app development.
## What's next for ARnance
Potential to build in more charts, financials and better speech/chatbot abilities into out application. There is also a direction to be more interactive with using hands to play around with our dashboard once we figure that part out. | ## Inspiration
As university students, emergency funds may not be on the top of our priority list however, when the unexpected happens, we are often left wishing that we had saved for an emergency when we had the chance. When we thought about this as a team, we realized that the feeling of putting a set amount of money away every time income rolls through may create feelings of dread rather than positivity. We then brainstormed ways to make saving money in an emergency fund more fun and rewarding. This is how Spend2Save was born.
## What it does
Spend2Save allows the user to set up an emergency fund. The user inputs their employment status, baseline amount, and goal for the emergency fund, and the app will create a plan for them to achieve their goal! Users create custom in-game avatars that they can take care of. The user can unlock avatar skins, accessories, pets, etc. by "buying" them with funds they deposit into their emergency fund. The user will have milestones or achievements for reaching certain sub-goals, plus extra motivation if their emergency fund falls below the baseline amount they set up. Users will also be able to change their employment status after creating an account in the case of a new job or career change, and the app will adjust their deposit plan accordingly.
## How we built it
We used Flutter to build the interactive prototype of our Android Application.
## Challenges we ran into
None of us had prior experience using Flutter, let alone mobile app development. Learning to use Flutter in a short period of time can easily be agreed upon to be the greatest challenge that we faced.
We originally had more features planned, with an implementation of data being stored using Firebase, so having to compromise our initial goals and focus our efforts on what is achievable in this time period proved to be challenging.
## Accomplishments that we're proud of
This was our first mobile app we developed (as well as our first hackathon).
## What we learned
This being our first Hackathon, almost everything we did provided a learning experience. The skills needed to quickly plan and execute a project were put into practice and given opportunities to grow. Ways to improve efficiency and team efficacy can only be learned through experience in a fast-paced environment such as this one.
As mentioned before, with all of us using Flutter for the first time, anything we did involving it was something new.
## What's next for Spend2Save
There is still a long way for us to grow as developers, so the full implementation of Spend2Save will rely on our progress.
We believe there is potential for such an application to appeal to its target audience and so we have planned projections for the future of Spend2Save. These projections include but are not limited to, plans such as integration with actual bank accounts at RBC. | ## Inspiration
Inspiration
As a junior software engineer, I've always been fascinated by the intersection of artificial intelligence and creative expression. This project was born out of my passion for both music and visual art, and the desire to explore how AI can bridge these two worlds. I saw it as perfect for Interactive Media.
## What it does
The Spotify A.I. Art Generator is a unique application that leverages the Dall-E API for image generation and the Spotify API for music analysis. It takes a user-provided playlist name, retrieves relevant data from Spotify, and generates a one-of-a-kind artwork inspired by the musical essence of the song.
## How we built it
This project was built using Javascript as the primary programming language with a FERN stack. I integrated the Dall-E API to handle image generation and the Spotify API to extract key musical features. The user interface was developed using a React app for simplicity and ease of use.
## Challenges we ran into
While developing this project, I faced a few challenges. One significant hurdle was ensuring seamless integration between the Dall-E and Spotify APIs. Additionally, fine-tuning the generated images through prompt engineering to truly capture the essence of a song proved to be a delicate task.
## Accomplishments that we're proud of
I'm particularly proud of achieving a harmonious fusion between the worlds of music and visual art through the power of AI. Additionally, successfully implementing a user-friendly interface and ensuring a smooth user experience was a significant accomplishment.
## What we learned
Through this project, I gained a deeper understanding of working with APIs, especially in the context of complex tasks like image generation and music analysis. I also honed my skills in Javascript and learned valuable techniques for handling and processing large amounts of data as well as using OAuth flows.
## What's next for Spotify A.I. Art Generator
Moving forward, I plan to enhance the project by incorporating more advanced image-generation techniques and refining the music-to-art translation process. Additionally, I aim to explore options for deploying this application as a web-based service to reach a wider audience. This project has immense potential for further innovation and I'm excited to see where it goes. | winning |
## Hackers
Scott Blender and Jackie Gan
## Inspiration
This project was inspired by a book called "Hacking Healthcare". It discussed innovations and changes that were needed in the healthcare field and explained how human-centered design can make a positive impact in the field. This project seeks to use human-centered design as the inspiration for how the platform is built.
## What it does
Our application creates an easy-to-use communication pipeline to provide sustainable healthcare alternatives for patients. Our client utilizes Sonr technology to privatize sensitive patient data, allowing patients to control who has access to their records. In addition, the client helps facilitate communication between patients and doctors, allowing doctors to recommend sustainable alternatives to coming into the office and reducing the need for, and impact of, transportation. These can include telehealth services, non-pharmaceutical interventions, and other sustainable options. By reducing the need for transportation to and from healthcare providers and pharmacies, more effective and sustainable approaches to patient recovery can be advanced in the healthcare space.
## How we built it
This app is primarily built using golang and utilizes the Motor API built through Sonr. Due to the sensitive content shared across this site, Sonr is a great way to maximize patient privacy and provide confidential communication between patients and doctors.
## Challenges we ran into
Our team was the first ever to develop a Golang app that uses the Sonr platform in a Windows operating environment. This presented many difficulties and ultimately led us to focus on finishing the design of the backend in Sonr for the web application, since a persistent error in login authentication blocked us. Through this, though, we persisted and continued to develop our backend system to integrate patient and primary-care-provider data transfer.
## Accomplishments that we're proud of
We were able to build a semi-working back-end in Sonr! Learning about blockchain, Web3, and what Sonr does inspired me and my teammate to work on developing an app that relies on sensitive data transfer. In addition, we already pitched to the Sonr team at PennApps and received positive feedback on the idea and plan for implementation using Sonr.
## What we learned
We learned a lot about back-end, schemas, objects, and buckets. Schemas, objects, and buckets are the primary ways data is structured in Sonr. By learning the building blocks of how to store, pass, and collect data, we learned how to appropriately construct a data storage solution. In addition, this was our teams first time ever using golang and competing at PennApps, so it was a great experience to learn and new language and make new connections.
## What's next for Sustainabilicare
The future looks bright. With continued support and debugging in using Sonr, we can continue to elevate our project and make it an actual backend solution. We plan on creating a formal pitch, building out a fully functional front-end, and learning more about Sonr structures to enhance the way our backend works. | ## We wanted to help the invisible people of Toronto, many homeless people do not have identification and often have a hard time keeping it due to their belongings being stolen. This prevents many homeless people to getting the care that they need and the access to resources that an ordinary person does not need to think about.
**How**
Our application would be set up as a booth or kiosks within pharmacies or clinics so homeless people can be verified easily.
We wanted to keep information of our patients to be secure and tamper-proof so we used the Ethereum blockchain and would compare our blockchain with the information of the patient within our database to ensure they are the same otherwise we know there was edits or a breach.
**Impact**
This would solve problems such as homeless people getting the prescriptions they need at local clinics and pharmacies. As well shelters would benefit from this as our application can track the persons: age, medical visits, allergies and past medical history experiences.
**Technologies**
For our facial recognition we used Facenet and tensor flow to train our models
For our back-end we used Python-Flask to communicate with Facenet and Node.JS to handle our routes on our site.
As well Ether.js handled most of our back-end code that had to deal with our smart contract for our blockchain.
We used Vue.JS for our front end to style our site. | **Inspiration**
Brought up in his rural hometown of Faizabad in India, one of our team members has seen several villagers unable to access modern medical advice. This often led to unfortunate catastrophes that could have been prevented easily. To answer the medical information needs of the lower-income section of society, we created a forum for health-related questions such as medications and diet. We hoped to leverage the decentralized power of blockchain technology to help bridge the gap between quality healthcare advice and the resources of the less fortunate.
**What it does**
Faiza provides a decentralized user experience for posting and answering important medical questions. Each question asked by a user will create a tradeable NFT that is stored on a decentralized database available on the Sonr ecosystem. The prices of these posts are determined by the “karma,” or the number of upvotes that it has. Higher popularity signals increased karma and a larger associated intrinsic value of the “post” NFT. Hence, we create a self-regulating token marketplace to openly trade the created NFTs.
Additionally, we distribute the initial ownership between the post creator and the most upvoted comment to incentivize doctors to provide high-value, relevant information to patients.
Lastly, by using state-of-the-art natural language processing systems, we efficiently perform sentiment analysis on the posts and categorize them by certain tags. Additionally, we filter out relevant comments, deleting ones that are considered spam or derogatory.
**How we built it**
In order to establish a P2P NFT network, we employed Sonr.io, a decentralized network program. Our NFTs are stored on schemas that are pre-defined by us. The Sonr team was very helpful in helping us learn and implement their tech. In particular, we’d like to give a huge shout-out to Ian for helping us understand his speedway API, it really helped speed production :)
To incorporate the NLP, we utilized Cohere to create robust and easily deployable models. We created multiple models such as one to classify the toxicity of comments and another to perform sentiment analysis on a post and categorize it.
Furthermore, we utilized Node for the implementation of the backend and Bootstrap React to build out an aesthetically pleasing front end. Moreover, to facilitate the multiple API calls between our servers, we used Heroku to host them in the cloud and Postman to validate them.
**Challenges we ran into**
To be honest, we weren’t completely familiar with web3 and blockchain. So, it took us a while to conceptually understand Sonr and the integration of its ecosystem. We have to once again thank Ian for the tremendous amount of support he has provided to us on this journey!
Additionally, as we were making API calls between multiple servers (NLP, Node, Web3), they were often conflicting requests. Dealing with the sheer number of requests was difficult to handle and test using Postman.
**Accomplishments that we're proud of**
We’re proud to officially be Web 3.0 Developers :)
Prior to this hackathon, we had little to no experience working with web3 and blockchain technologies. However, this hackathon was a HUGE learning curve. While extremely difficult at first, we are proud to deploy a fully functional decentralized database, with the capability of storing tradeable NFTs and all the elements of a dApp.
**What we learned**
All of us have had a remarkable learning experience. While one of us became proficient in web3, the others learned about testing API calls using Postman, throttling speeds through developer tools, and deploying servers through Heroku CLI.
**What's next for Faiza**
Given the current functionality of Faiza, we hope to include all relevant features in the coming weeks. For instance, we haven’t been able to create a currency that can be liquidated into tangible assets.
Additionally, we hope to implement credibility levels for the users providing medical advice to a post. Using sorting algorithms, we can determine the rank of the importance of comments while displaying them on the user interface.
Lastly, we would love to simplify Faiza and deploy it in the native village of Faizabad, which was the initial fuel for the motivation of the project. We hope to see Faiza making a tangible difference at the grassroots level, changing lives one comment at a time. | partial |
## Inspiration
Discord is a text and voice communication tool that has quickly grown in popularity. It started out as a communication tool for online gaming and has since shifted its focus over the years to becoming an all-purpose communication tool. With the rise of "work from home" culture and online socializing, Discord is being used for purposes other than gaming and has turned into a platform that brings entire communities together.
Fast forward a couple of years, I'm imagining a future where health care professionals use Discord and other similar communication platforms as a tool. I built MediBot, a Discord bot, that takes advantage of Discord's powerful functionalities and allows physicians and pharmacists to connect with their patients.
## What it does
* Allows the health care professional to set a medical prescription reminder for their patients
* Sends a private message as a reminder to the patient that it is time to take their medication
* Allows the health care professional to keep track of the patient's adherence to the prescription
* Gathers medical information about the patient and displays it as a table
## How we built it
* Bot built with Python + discord.py API
* Patient info display functionality built with Matplotlib
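A minimal sketch of the reminder flow described above using discord.py; the command name, 24-hour schedule, and in-memory storage are illustrative choices rather than the bot's actual implementation.

```python
import discord
from discord.ext import commands, tasks

intents = discord.Intents.default()
intents.message_content = True
bot = commands.Bot(command_prefix="!", intents=intents)

reminders = {}  # user_id -> medication name (a real bot would persist this)

@bot.command()
async def remind(ctx, member: discord.Member, *, medication: str):
    """Usage: !remind @patient Amoxicillin 500mg"""
    reminders[member.id] = medication
    await ctx.send(f"Reminder set: {member.display_name} -> {medication}")

@tasks.loop(hours=24)
async def send_reminders():
    for user_id, medication in reminders.items():
        user = await bot.fetch_user(user_id)
        await user.send(f"Time to take your medication: {medication}")

@bot.event
async def on_ready():
    if not send_reminders.is_running():
        send_reminders.start()

bot.run("YOUR_BOT_TOKEN")  # placeholder token
```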
## Challenges we ran into
* Trying to learn discord.py and Matplotlib in a short amount of time
* Finding a way to store multiple patients' info
* Getting the bot to handle multiple patients' reminders at once
## Accomplishments that we're proud of
* As an inexperienced coder attending my first hackathon, I'm happy that I was able to get a working prototype!
## What we learned
* discord.py and Matplotlib
## What's next for MediBot
* Implementing a database to store patients' information
* More customization options for reminders (multiple reminders a day, set helpful images, etc.)
* Being able to set more than one prescription to a patient
* Further developing the Matplotlib patient info functionality (display a patient's full medical history, plot a patient's medication schedule, plot when the medication will have maximum effect, etc.) | ## Inspiration
Being certified in Standard First Aid, we personally know the great difference that quick intervention can have on a victim. For example, according to the American Heart Association, if CPR is performed in the first few minutes of cardiac arrest, it can double or triple a person's chance of survival. Unfortunately, only 46% of people who suffer Cardiac Arrest outside of hospitals get the immediate aid they need before professional help arrives. Thus was the creation of Heal-A-Kit.
## What it does
Heal-A-Kit is an app that can identify serious health issues like Heart Attacks, Strokes, Seizures, etc. through the input of various symptoms and provide multiple treatment plans. Additionally, it offers users a step-by-step guide of CPR and encourages them to continue compressions until professional help arrives.
## How we built it
To program our Discord bot we used Python with Discord.py, which is a Python wrapper for Discord's API. We chose Discord.py as opposed to Discord.js because Python is a more familiar language for us than JavaScript and its runtimes such as Node.js.
## Challenges we ran into
Aside from simple syntax errors, we had an issue at first making the bots' messages aesthetically pleasing. We then realized that using embeds would be a clean way to display messages from the bot. Another challenge we ran into was with our logic in the compare command. Originally it would break out the loop once finding one condition that matches with the symptom and to combat this we created a for loop that would keep searching through our data if there were any more conditions and if the string was empty (i.e there were no conditions for the symptoms) then instead of returning an empty string it would return “No conditions match the given symptom."
## Accomplishments that we're proud of
Being able to make a Discord bot for the first time was a massive accomplishment. In addition, the fact that our Discord bot is fully functional and helpful in a real life scenario is something to be proud of.
## What we learned
We learnt how to use Discord’s API wrapper for Python (Discord.py) and how to use modules in Python. In addition, learning how to use object-oriented programming instead of functional programming for the Discord bot which makes the code cleaner and more efficient.
## What's next for Heal-A-Kit
Currently, we have a fully functioning Discord bot, and a prototype for our mobile application. In the future, we plan on adding more health conditions and their respective symptoms and treatments to “heal” as many people as possible. We also plan on further optimizing our search bar, so that even if the scientific or exact wordings are not used, the app can detect the symptom and thus be able to help the victim. | View presentation at the following link: <https://youtu.be/Iw4qVYG9r40>
## Inspiration
During our brainstorming stage, we found that, interestingly, two-thirds (a majority, if I could say so myself) of our group took medication for health-related reasons, and as a result, had certain external medications that result in negative drug interactions. More often than not, one of us is unable to have certain other medications (e.g. Advil, Tylenol) and even certain foods.
Looking at a statistically wider scale, the use of prescription drugs is at an all-time high in the UK, with almost half of the adults on at least one drug and a quarter on at least three. In Canada, over half of Canadian adults aged 18 to 79 have used at least one prescription medication in the past month. The more the population relies on prescription drugs, the more interactions can pop up between over-the-counter medications and prescription medications. Enter Medisafe, a quick and portable tool to ensure safe interactions with any and all medication you take.
## What it does
Our mobile application scans barcodes of medication and outputs to the user what the medication is, and any negative interactions that follow it to ensure that users don't experience negative side effects of drug mixing.
## How we built it
Before we could return any details about drugs and interactions, we first needed to build a database that our API could access. This was done through Java and stored in a CSV file for the API to read when requests were made. The API was then integrated with a Python backend and Flutter frontend to create our final product. When the user takes a picture, the image is sent to the API through a POST request, which scans the barcode and sends the drug information back to the Flutter mobile application.
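A rough sketch of what that barcode-lookup endpoint could look like is below; the use of pyzbar, the CSV field names, and the route are illustrative assumptions rather than our exact implementation:

```python
# Sketch of the barcode-lookup endpoint (library choices and field names are assumptions).
import csv
from io import BytesIO
from flask import Flask, request, jsonify
from PIL import Image
from pyzbar.pyzbar import decode  # barcode decoding

app = Flask(__name__)

# Load the drug database that the Java scraper wrote out as CSV.
with open("drugs.csv", newline="") as f:
    DRUGS = {row["barcode"]: row for row in csv.DictReader(f)}

@app.route("/scan", methods=["POST"])
def scan():
    image = Image.open(BytesIO(request.files["photo"].read()))
    codes = decode(image)
    if not codes:
        return jsonify({"error": "no barcode found"}), 404
    barcode = codes[0].data.decode("utf-8")
    drug = DRUGS.get(barcode)
    if drug is None:
        return jsonify({"error": "unknown drug"}), 404
    return jsonify(drug)  # e.g. {"name": ..., "interactions": ...}
```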
## Challenges we ran into
The most consistent challenge we ran into was integrating our individual parts.
Another challenge was that one group member's laptop imploded (and stopped working) halfway through the competition. Windows recovery did not pull through, so they had to grab a backup laptop and set up their entire environment again to keep coding smoothly.
## Accomplishments that we're proud of
During this hackathon, we felt that we *really* stepped out of our comfort zone, with the time crunch of only 24 hours no less. Approaching new things like Flutter, Android mobile app development, and REST APIs was daunting, but we managed to persevere and create a project in the end.
Another accomplishment that we're proud of is using git fully throughout our hackathon experience. Although we ran into issues with merges and vanishing files, all problems were resolved in the end with efficient communication and problem-solving initiative.
## What we learned
Throughout the project, we gained valuable experience with skills such as Flask integration, Flutter, Kotlin, RESTful APIs, Dart, and Java web scraping. These were all things we had only seen or heard of elsewhere, and learning and then applying them was a new experience altogether. We also encountered various challenges, and each one gave us a new outlook on software development. Overall, it was a great learning experience and we are grateful for the opportunity to work with such a diverse set of technologies.
## What's next for Medisafe
Being the baby app that it is, Medisafe has plenty of room to expand in every dimension. Our main focus would be to integrate its features into the normal camera application or Google Lens; we realize that a standalone app for a seemingly minuscule function is disadvantageous, so being part of a bigger application would boost its usage. We'd also like to add the option to pick an image from the gallery instead of taking one fresh from the camera. Lastly, we hope to implement settings like a default drug to compare against, dosage dependency, etc.
## Inspiration
Hearing loss consistently ranks among the top five causes of years lived with a disability in Canada, and overall 60% of Canadians aged 19–79 have a hearing health problem. This can hinder participation in educational and employment opportunities. Currently, almost all universities rely on student volunteers to write notes for hearing-impaired students during lectures so that they do not miss out on the content. This can be a huge barrier for students with disabilities and can restrict them from making use of all the resources that everyone else has access to.
## What it does
'Notes for All' hopes to improve the participation of hearing impaired students in their educational activities and help provide them with an inclusive way of participating in lectures.
It does so by letting you transcribe lectures in real time and storing the transcriptions under your account. The app can also summarize each lecture, highlighting the important points. Another feature is that every transcription and summary gets a unique identifier, which other students in the class can use to access the same transcriptions and summary notes.
## Challenges we ran into
Running the real-time transcription and the summary generation simultaneously was challenging but rewarding. We made use of the AssemblyAI API that was provided to us to accomplish this.
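A rough sketch of how a recorded lecture can be transcribed and summarized through the AssemblyAI REST API is below; the endpoint paths and parameter names are from memory and should be checked against the current documentation, and the polling is simplified:

```python
# Rough sketch: transcribe a recorded lecture and request a summary from AssemblyAI.
# Endpoint paths and parameter names may differ from the current API.
import time
import requests

API_KEY = "YOUR_ASSEMBLYAI_KEY"
HEADERS = {"authorization": API_KEY}

def transcribe_and_summarize(audio_url: str) -> tuple[str, str]:
    job = requests.post(
        "https://api.assemblyai.com/v2/transcript",
        headers=HEADERS,
        json={"audio_url": audio_url, "summarization": True, "summary_type": "bullets"},
    ).json()
    # Poll until the transcript is ready.
    while True:
        result = requests.get(
            f"https://api.assemblyai.com/v2/transcript/{job['id']}", headers=HEADERS
        ).json()
        if result["status"] in ("completed", "error"):
            break
        time.sleep(3)
    return result.get("text", ""), result.get("summary", "")
```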
## Accomplishments that we're proud of
We are definitely proud of completing this project and actually bringing it to life.
## What's next for Notes For All
We want to make this a centralized system where each user can have their own account, be able to add notes to their own account from other students through the unique ids associated with each note that is taken using Notes for All. We also hope to incorporate annotation functionality to make it all in one place for taking notes for anyone and everyone. | The Wolf helps you follow the social scent of stocks. Curating sentiment data from Twitter and matching this against stock valuations, the Wolf looks for ongoing correlations between social sentiments and stock action. Like any loyal companion, the Wolf provides information on correlations, tracks daily baselines, and alerts you to hot trends! | ## Inspiration
Databases are wonderfully engineered for specific tasks. Every time someone wants to add a different type of data or use their data with a different access pattern, they either need to use a sub-optimal choice of database (one they already support) or stand up a totally new database. The former damages performance, while the latter is extremely costly in both complexity and engineering effort. For example, Druid on 100GB of time-series data is about 100x faster than MySQL, but it's slower on other types of data.
## What it does
We set up a simple database auto-selector that makes the decision of whether to use Druid or MySQL. We set up a metaschema for data -- thus we can accept queries and then direct them to the database containing the relevant data.
Our core technical contributions are a tool that assigns data to the appropriate database based on the input data and a high-level schema for incoming data.
We demonstrated our approach by building a web app, StockSolver, that shows these trade-offs and the advantages of using 1DB for database selection. It has both time-series data and text data. Using our metaschema, 1DB can easily mix and match data between Druid and MongoDB: it finds that the time-series data should be stored on Druid, while MongoDB should store the text data. We show the results of making these decisions in our demo!
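A simplified sketch of the kind of heuristic 1DB applies is below; the profile fields and thresholds are illustrative, not the exact rules in our system:

```python
# Simplified sketch of the database-selection heuristic; thresholds are illustrative.
from dataclasses import dataclass

@dataclass
class DatasetProfile:
    has_timestamp_column: bool   # from the metaschema
    mostly_numeric: bool         # e.g. prices, volumes
    avg_text_length: float       # characters per text field

def choose_database(profile: DatasetProfile) -> str:
    # Time-series data (timestamped, numeric, append-heavy) goes to Druid.
    if profile.has_timestamp_column and profile.mostly_numeric:
        return "druid"
    # Long, loosely structured text (news, filings) goes to MongoDB.
    if profile.avg_text_length > 100:
        return "mongodb"
    return "mongodb"  # default for the demo

print(choose_database(DatasetProfile(True, True, 12.0)))     # druid
print(choose_database(DatasetProfile(False, False, 240.0)))  # mongodb
```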
## How we built it
We created a web app for NASDAQ financial data, using React and Node.js to build the website. We set up MongoDB on Microsoft's Cosmos DB and Druid on Google Cloud Platform.
## Challenges we ran into
It was challenging just to set up each of these databases and load large amounts of data into them. It was even more challenging to load data and build queries that a database was not necessarily made for, in order to make clear comparisons between the databases' performance across different use cases. Building the queries to back the metaschema was also quite challenging.
## Accomplishments that we're proud of
Building an end-to-end system from databases to 1DB to our data visualizations.
## What we learned
We collectively had relatively little database experience and thus we learned how to better work with different databases.
## What's next for 1DB: One Database to rule them all
We would like to support more databases and to experiment with using more complex heuristics to select among databases. An extension that follows naturally from our work is to have 1DB track query usage statistics and over time, make the decision to select among supported databases. The extra level of indirection makes these switches natural and can be potentially automated. | losing |
## Inspiration
In 2010, when Haiti was rocked by an earthquake that killed over 150,000 people, aid workers manned SMS help lines where victims could reach out for help. Even with the international humanitarian effort, there was not enough manpower to effectively handle the volume of communication. We set out to fix that.
## What it does
EmergAlert takes the place of a humanitarian volunteer at the phone lines, automating basic contact. It allows victims to request help, tell their location, place calls and messages to other people, and inform aid workers about their situation.
## How we built it
We used Mix.NLU to create a Natural Language Understanding model that categorizes and interprets text messages, paired with the Smooch API to handle SMS and Slack contact. We use FHIR to search for an individual's medical history to give more accurate advice.
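A stripped-down sketch of the dispatch loop is below; `classify_intent()` and `send_reply()` are hypothetical stand-ins for the Mix.NLU and Smooch SDK calls (which are not reproduced here), and the intents and canned responses are illustrative:

```python
# Stripped-down sketch of the message-handling loop; intents and responses are illustrative.
from flask import Flask, request

app = Flask(__name__)

RESPONSES = {
    "request_help": "Help is on the way. Can you describe your injuries?",
    "report_location": "Thank you, your location has been logged for responders.",
    "contact_person": "We will try to relay your message to that person.",
}

def classify_intent(text: str) -> str:
    # Stand-in for the Mix.NLU model: a naive keyword fallback.
    lowered = text.lower()
    if "help" in lowered or "hurt" in lowered:
        return "request_help"
    if "street" in lowered or "near" in lowered:
        return "report_location"
    if "tell" in lowered or "call" in lowered:
        return "contact_person"
    return "unknown"

def send_reply(user_id: str, text: str) -> None:
    print(f"[to {user_id}] {text}")  # stand-in for the Smooch send-message call

@app.route("/sms", methods=["POST"])
def incoming_sms():
    message = request.get_json()
    intent = classify_intent(message["text"])
    send_reply(message["user_id"], RESPONSES.get(intent, "An aid worker will follow up shortly."))
    return "", 200
```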
## Challenges we ran into
Mentoring first time hackers was both a challenge and a joy.
## Accomplishments that we're proud of
Coming to Canada.
## What we learned
Project management is integral to a good hacking experience, as is realistic goal-setting.
## What's next for EmergAlert
Bringing more depth to the NLU responses and available actions would improve the app's helpfulness in disaster situations, and is a good next step for our group. | ## Inspiration
As college students learning to be socially responsible global citizens, we realized that it's important for all community members to feel a sense of ownership, responsibility, and equal access toward shared public spaces. Often, our interactions with public spaces inspire us to take action to help others in the community by initiating improvements and bringing up issues that need fixing. However, these issues don't always get addressed efficiently, in a way that empowers citizens to continue feeling that sense of ownership, or sometimes even at all! So, we devised a way to help FixIt for them!
## What it does
Our app provides a way for users to report Issues in their communities with the click of a button. They can also vote on existing Issues that they want Fixed! This crowdsourcing platform leverages the power of collective individuals to raise awareness and improve public spaces by demonstrating a collective effort for change to the individuals responsible for enacting it. For example, city officials who hear in passing that a broken faucet in a public park restroom needs fixing might not perceive a significant sense of urgency to initiate repairs, but they would get a different picture when 50+ individuals want them to FixIt now!
## How we built it
We started out by brainstorming use cases for our app and discussing the populations we wanted to target. Next, we discussed the main features the app needed in order to fully serve these populations. We collectively decided to use Android Studio to build an Android app and the Google Maps API for an interactive map display.
## Challenges we ran into
Our team had little to no prior exposure to the Android SDK, so we experienced a steep learning curve while developing a functional prototype in 36 hours. The Google Maps API took a lot of patience to get working, as did figuring out certain UI elements. We are very happy with our end result and all the skills we learned in 36 hours!
## Accomplishments that we're proud of
We are most proud of what we learned, how we grew as designers and programmers, and what we built with limited experience! As we were designing this app, we not only learned more about app design and technical expertise with the Google Maps API, but we also explored our roles as engineers that are also citizens. Empathizing with our user group showed us a clear way to lay out the key features of the app that we wanted to build and helped us create an efficient design and clear display.
## What we learned
As we mentioned above, this project helped us learn more about the design process, Android Studio, the Google Maps API, and also what it means to be a global citizen who wants to actively participate in the community! The technical skills we gained put us in an excellent position to continue growing!
## What's next for FixIt
An Issue’s Perspective
* Progress bar, fancier rating system
* Crowdfunding

A Finder’s Perspective
* Filter Issues, badges/incentive system

A Fixer’s Perspective
* Filter Issues off scores, Trending Issues
In times of disaster, the capacity of rigid networks like cell service and internet dramatically decreases at the same time demand increases as people try to get information and contact loved ones. This can lead to crippled telecom services which can significantly impact first responders in disaster struck areas, especially in dense urban environments where traditional radios don't work well. We wanted to test newer radio and AI/ML technologies to see if we could make a better solution to this problem, which led to this project.
## What it does
Device nodes in the field connect to each other and to the command node over LoRa to send messages, which increases range and resiliency as more device nodes join. The command & control center receives summaries of reports coming from the field, which are visualized on the map.
## How we built it
We built the local devices using Wio Terminals and LoRa modules provided by Seeed Studio; we also integrated magnetometers into the devices to provide a basic sense of direction. We used Whisper for speech-to-text and Prediction Guard for summarization, keyword extraction, and command extraction, and we trained a neural network on Intel Developer Cloud to perform binary image classification that distinguishes damaged from undamaged buildings.
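A sketch of the command-node pipeline is below: the Whisper call is real, but `send_to_llm()` is a placeholder for the Prediction Guard request, and the prompt wording is illustrative rather than the exact one we used:

```python
# Sketch of the command-node pipeline: transcribe a field report with Whisper, then build
# the summarization prompt. The Prediction Guard call itself is omitted.
import whisper

stt_model = whisper.load_model("base")  # small enough to run at the command node

def transcribe_report(wav_path: str) -> str:
    return stt_model.transcribe(wav_path)["text"]

def build_summary_prompt(report_text: str) -> str:
    return (
        "Summarize this field report in two sentences, then list keywords for "
        f"location, casualties and hazards:\n\n{report_text}"
    )

def send_to_llm(prompt: str) -> str:
    raise NotImplementedError("sent to Prediction Guard in the real system")

if __name__ == "__main__":
    report = transcribe_report("field_report.wav")
    print(send_to_llm(build_summary_prompt(report)))
```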
## Challenges we ran into
The limited RAM and storage of microcontrollers made it more difficult to record audio and run TinyML as we intended. Many modules, especially the LoRa module and magnetometer, did not have existing libraries, so these needed to be coded as well, which added to the complexity of the project.
## Accomplishments that we're proud of:
* We wrote a library so that LoRa modules can communicate with each other across long distances
* We integrated Intel's optimization of AI models to make efficient, effective AI models
* We worked together to create something that works
## What we learned:
* How to prompt AI models
* How to write drivers and libraries from scratch by reading datasheets
* How to use the Wio Terminal and the LoRa module
## What's next for Meshworks - NLP LoRa Mesh Network for Emergency Response
* We will improve the audio quality captured by the Wio Terminal and move speech-to-text processing to the edge to increase transmission speed and reduce bandwidth use.
* We will add a high-speed LoRa network to allow for faster communication between first responders in a localized area
* We will integrate the microcontroller and the LoRa modules onto a single board with GPS in order to improve ease of transportation and reliability | winning |
## Inspiration
Our journey with PathSense began with a deeply personal connection. Several of us have visually impaired family members, and we've witnessed firsthand the challenges they face navigating indoor spaces. We realized that while outdoor navigation has seen remarkable advancements, indoor environments remained a complex puzzle for the visually impaired.
This gap in assistive technology sparked our imagination. We saw an opportunity to harness the power of AI, computer vision, and indoor mapping to create a solution that could profoundly impact lives. We envisioned a tool that would act as a constant companion, providing real-time guidance and environmental awareness in complex indoor settings, ultimately enhancing independence and mobility for visually impaired individuals.
## What it does
PathSense, our voice-centric indoor navigation assistant, is designed to be a game-changer for visually impaired individuals. At its heart, our system aims to enhance mobility and independence by providing accessible, spoken navigation guidance in indoor spaces.
Our solution offers the following key features:
1. Voice-Controlled Interaction: Hands-free operation through intuitive voice commands.
2. Real-Time Object Detection: Continuous scanning and identification of objects and obstacles.
3. Scene Description: Verbal descriptions of the surrounding environment to build mental maps.
4. Precise Indoor Routing: Turn-by-turn navigation within buildings using indoor mapping technology.
5. Contextual Information: Relevant details about nearby points of interest.
6. Adaptive Guidance: Real-time updates based on user movement and environmental changes.
What sets PathSense apart is its adaptive nature. Our system continuously updates its guidance based on the user's movement and any changes in the environment, ensuring real-time accuracy. This dynamic approach allows for a more natural and responsive navigation experience, adapting to the user's pace and preferences as they move through complex indoor spaces.
## How we built it
In building PathSense, we embraced the challenge of integrating multiple cutting-edge technologies. Our solution is built on the following technological framework:
1. Voice Interaction: Voiceflow
* Manages conversation flow
* Interprets user intents
* Generates appropriate responses
2. Computer Vision Pipeline:
* Object Detection: Detectron
* Depth Estimation: DPT (Dense Prediction Transformer)
* Scene Analysis: GPT-4 Vision (mini)
3. Data Management: Convex database
* Stores CV data and mapping information in JSON format
4. Semantic Search: Cohere's Rerank API
* Performs semantic search on CV tags and mapping data
5. Indoor Mapping: MappedIn SDK
* Provides floor plan information
* Generates routes
6. Speech Processing:
* Speech-to-Text: Groq model (based on OpenAI's Whisper)
* Text-to-Speech: Unreal Engine
7. Video Input: Multiple TAPO cameras
* Stream 1080p video of the environment over Wi-Fi
To tie it all together, we leveraged Cohere's Rerank API for semantic search, allowing us to find the most relevant information based on user queries. For speech processing, we chose a Groq model based on OpenAI's Whisper for transcription, and Unreal Engine for speech synthesis, prioritizing low latency for real-time interaction. The result is a seamless, responsive system that processes visual information, understands user requests, and provides spoken guidance in real-time.
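A condensed sketch of that semantic-search step is below; the model name, document strings and response handling are indicative and may differ from the exact Cohere SDK version we used:

```python
# Sketch of the semantic-search step: rank stored CV tags / map entries against the
# user's spoken query with Cohere Rerank. Documents and model name are indicative only.
import cohere

co = cohere.Client("YOUR_COHERE_KEY")

documents = [
    "door, exit sign, hallway - north wall",
    "vending machine, bench - east wall",
    "elevator bank, stairs - west wall",
]

def best_match(query: str) -> str:
    results = co.rerank(query=query, documents=documents, top_n=1, model="rerank-english-v3.0")
    return documents[results.results[0].index]

print(best_match("take me to the nearest exit"))
```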
## Challenges we ran into
Our journey in developing PathSense was not without its hurdles. One of our biggest challenges was integrating the various complex components of our system. Combining the computer vision pipeline, Voiceflow agent, and MappedIn SDK into a cohesive, real-time system required careful planning and countless hours of debugging. We often found ourselves navigating uncharted territory, pushing the boundaries of what these technologies could do when working in concert.
Another significant challenge was balancing the diverse skills and experience levels within our team. While our diversity brought valuable perspectives, it also required us to be intentional about task allocation and communication. We had to step out of our comfort zones, often learning new technologies on the fly. This steep learning curve, coupled with the pressure of working on parallel streams while ensuring all components meshed seamlessly, tested our problem-solving skills and teamwork to the limit.
## Accomplishments that we're proud of
Looking back at our journey, we're filled with a sense of pride and accomplishment. Perhaps our greatest achievement is creating an application with genuine, life-changing potential. Knowing that PathSense could significantly improve the lives of visually impaired individuals, including our own family members, gives our work profound meaning.
We're also incredibly proud of the technical feat we've accomplished. Successfully integrating numerous complex technologies - from AI and computer vision to voice processing - into a functional system within a short timeframe was no small task. Our ability to move from concept to a working prototype that demonstrates the real-world potential of AI-driven indoor navigation assistance is a testament to our team's creativity, technical skill, and determination.
## What we learned
Our work on PathSense has been an incredible learning experience. We've gained invaluable insights into the power of interdisciplinary collaboration, seeing firsthand how diverse skills and perspectives can come together to tackle complex problems. The process taught us the importance of rapid prototyping and iterative development, especially in a high-pressure environment like a hackathon.
Perhaps most importantly, we've learned the critical importance of user-centric design in developing assistive technology. Keeping the needs and experiences of visually impaired individuals at the forefront of our design and development process not only guided our technical decisions but also gave us a deeper appreciation for the impact technology can have on people's lives.
## What's next for PathSense
As we look to the future of PathSense, we're brimming with ideas for enhancements and expansions. We're eager to partner with more venues to increase our coverage of mapped indoor spaces, making PathSense useful in a wider range of locations. We also plan to refine our object recognition capabilities, implement personalized user profiles, and explore integration with wearable devices for an even more seamless experience.
In the long term, we envision PathSense evolving into a comprehensive indoor navigation ecosystem. This includes developing community features for crowd-sourced updates, integrating augmented reality capabilities to assist sighted companions, and collaborating with smart building systems for ultra-precise indoor positioning. With each step forward, our goal remains constant: to continually improve PathSense's ability to provide independence and confidence to visually impaired individuals navigating indoor spaces. | ## Inspiration
During our brainstorming phase, we cycled through a lot of useful ideas that later turned out to be actual products on the market or completed projects. After four separate instances of this and hours of scouring the web, we finally found our true calling at QHacks: building a solution that determines whether an idea has already been done before.
## What It Does
Our application, called Hack2, is an intelligent search engine that uses Machine Learning to compare the user’s ideas to products that currently exist. It takes in an idea name and description, aggregates data from multiple sources, and displays a list of products with a percent similarity to the idea the user had. For ultimate ease of use, our application has both Android and web versions.
## How We Built It
We started off by creating a list of websites where we could find ideas that people have done. We came up with four sites: Product Hunt, Devpost, GitHub, and Google Play Store. We then worked on developing the Android app side of our solution, starting with mock-ups of our UI using Adobe XD. We then replicated the mock-ups in Android Studio using Kotlin and XML.
Next was the Machine Learning part of our solution. Although there exist many machine learning algorithms that can compute phrase similarity, devising an algorithm to compute document-level similarity proved much more elusive. We ended up combining Microsoft’s Text Analytics API with an algorithm known as Sentence2Vec in order to handle multiple sentences with reasonable accuracy. The weights used by the Sentence2Vec algorithm were learned by repurposing Google's word2vec ANN and applying it to a corpus containing technical terminology (see Challenges section). The final trained model was integrated into a Flask server and uploaded onto an Azure VM instance to serve as a REST endpoint for the rest of our API.
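A condensed sketch of the scoring step is below; the real Sentence2Vec model applies learned weights to each word vector, while this sketch uses a plain average for brevity, and the vector file path is illustrative:

```python
# Condensed sketch of the similarity scoring; a plain average stands in for the
# weighted Sentence2Vec embedding used in the real model.
import numpy as np
from gensim.models import KeyedVectors

# Custom word2vec vectors trained on our "hacker-friendly" corpus (path is illustrative).
vectors = KeyedVectors.load("hacker_word2vec.kv")

def embed(text: str) -> np.ndarray:
    words = [w for w in text.lower().split() if w in vectors]
    return np.mean([vectors[w] for w in words], axis=0)

def similarity(idea: str, product: str) -> float:
    a, b = embed(idea), embed(product)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

score = similarity(
    "an app that compares your hackathon idea to existing products",
    "search engine that finds similar projects on Devpost and GitHub",
)
print(f"{score:.0%} similar")
```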
We then set out to build the web scraping functionality of our API, which would query the aforementioned sites, pull relevant information, and pass that information to the pre-trained model. Having already set up a service on Microsoft Azure, we decided to “stick with the flow” and build this architecture using Azure’s serverless compute functions.
After finishing the Android app and backend development, we decided to add a web app to make the service more accessible, made using React.
## Challenges We Ran Into
From a data perspective, one challenge was obtaining an accurate vector representation of words appearing in quasi-technical documents such as Github READMEs and Devpost abstracts. Since these terms do not appear often in everyday usage, we saw a degraded performance when initially experimenting with pretrained models. As a result, we ended up training our own word vectors on a custom corpus consisting of “hacker-friendly” vocabulary from technology sources. This word2vec matrix proved much more performant than pretrained models.
We also ran into quite a few issues getting our backend up and running, as it was our first time using Microsoft Azure. Specifically, Azure Functions do not currently support Python fully, meaning that we did not have the developer tools we expected to leverage and could not run the web scraping scripts we had written. We also had issues with efficiency, as the Python libraries we worked with did not easily support asynchronous action. We ended up resolving this by refactoring our cloud compute functions with multithreaded capabilities.
## What We Learned
We learned a lot about Microsoft Azure’s Cloud Service, mobile development and web app development. We also learned a lot about brainstorming, and how a viable and creative solution could be right under our nose the entire time.
On the machine learning side, we learned about the difficulty of document similarity analysis, especially when context is important (an area of our application that could use work).
## What’s Next for Hack2
The next step would be to explore more advanced methods of measuring document similarity, especially methods that can “understand” semantic relationships between different terms in a document. Such a tool might allow for more accurate, context-sensitive searches (e.g. learning the meaning of “uber for…”). One particular area we wish to explore is LSTM Siamese Neural Networks, which “remember” previous classifications moving forward.
We started by thinking about one of the biggest privileges we have—*the ability to see*. There are so many people out there who don’t have that privilege, and for those who are visually impaired, even daily tasks like navigating their environment can be challenging. Sure, there are tools like smart canes or AI vision systems that can help, but we felt those solutions are still not enough. Relying on a cane means you’re always hoping it’ll detect what’s around the corner!!
It’s often said that blind people have a heightened sense of hearing, and we thought, why not use that to give them a spatial sense of their surroundings—sort of like a 3D soundscape. Instead of relying on sight, we could use sound to guide them, like having your ears tell you where to go and what to avoid. That way, we can offer more independence and make navigation more intuitive.
## What does our project do? 🤔
There’s a reason we called our project **Path4Me**—it’s about making navigation personalized and intuitive for visually impaired users. Let us walk you through how it works.
At its core, Path4Me is a cap with tech mounted on it. When a visually impaired person wears it, they start by rotating in place, completing a full 360-degree turn. This initial spin helps the sensors calibrate and build a clear map of the area around them. (Normally we would have four cameras and more sensors, but we were limited on hardware and time, so we have users do a spin so that we know what's around them.)
Next, the user sets a target. For the hackathon, we’re focusing on finding doors, but imagine expanding this to other destinations—different objects or points in a room. The smart tech in the cap identifies the location of the door based on that initial calibration.
Now, here’s where it gets exciting. The user wears a headset that plays **3D spatial sound**. This sound guides them toward the door. If the sound is coming from the left, the user moves in that direction until the sound shifts to the front, indicating they’re heading straight towards their destination.

The beauty of this system is that it eliminates the need for a cane, and it’s **all happening in real-time**. There’s no delay, no waiting for a large language model (LLM) or AI to analyze the environment and tell them where to move. The sound feedback is instant, guiding the user every millisecond, making navigation seamless and smooth.
## How we built it 👨💻
This project was one of the most math-heavy and technically demanding tasks we've tackled so far. From sensor calibration to spatial sound calculation, everything was built on fundamental math principles. Let us walk you through how we implemented the system, step by step.
#### 1️⃣ Configuring the Raspberry Pi
First, we set up a Raspberry Pi and made sure everyone on the team could SSH into it remotely, allowing us to work directly on the device without unnecessary network calls. Since minimizing latency was a priority, we decided to avoid external API calls. Real-time responsiveness is key in a project like this, so we needed all computations to happen locally.
#### 2️⃣ Sensor Integration
We mounted a camera and an **MPU6050 module** (which combines an accelerometer and gyroscope) onto the Raspberry Pi. This is where the math challenge began. The MPU6050 outputs raw data in terms of acceleration and angular velocity, but this data isn't immediately useful for navigation.
We had to convert these raw sensor readings into spatial coordinates. This involved a lot of matrix transformations and trigonometry, as the sensor gives you orientation in terms of Euler angles (roll, pitch, and yaw). Once we calculated the user’s position relative to the surrounding space, we were ready for the next step.
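A simplified version of that orientation math is below; the filter constant and sample rate are illustrative, and the real code also handles unit conversion and sensor offsets:

```python
# Simplified orientation math: fuse accelerometer and gyroscope readings with a
# complementary filter to get stable pitch/roll and an integrated yaw. Constants are illustrative.
import math

ALPHA = 0.98   # how much we trust the integrated gyro vs. the accelerometer
DT = 0.01      # sample period in seconds (100 Hz)

def update_angles(pitch, roll, yaw, accel, gyro):
    # Pitch/roll hints from gravity (accelerometer), in degrees.
    ax, ay, az = accel
    accel_pitch = math.degrees(math.atan2(ay, math.sqrt(ax * ax + az * az)))
    accel_roll = math.degrees(math.atan2(-ax, az))
    # Integrate angular velocity (deg/s) from the gyroscope, then blend.
    gx, gy, gz = gyro
    pitch = ALPHA * (pitch + gx * DT) + (1 - ALPHA) * accel_pitch
    roll = ALPHA * (roll + gy * DT) + (1 - ALPHA) * accel_roll
    yaw = (yaw + gz * DT) % 360  # yaw has no gravity reference, so drift is corrected elsewhere
    return pitch, roll, yaw
```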
#### 3️⃣ 360-Degree Rotation for Calibration
For calibration, the user rotates 360 degrees, and we take images every 90 degrees (4 total). Using **OpenCV**, we capture these images and overlay visual markers that indicate angles in the environment. This step also required understanding coordinate transformations to properly align the captured data with the real-world space.
#### 4️⃣ Image Processing and Door Detection
Once the images were captured, we sent them to the cloud (using **Cloudinary** for image storage) asynchronously, which helped reduce lag. We then used **ChatGPT's LLM API** to analyze the images. The LLM was trained to recognize objects and deduce the location of the door by analyzing the angles present in the images.
The LLM provided us with the exact angle at which the door was situated, relative to the user’s starting position. This process, although relying on machine learning, was still deeply rooted in the geometric relationships between objects, requiring knowledge of linear algebra and coordinate geometry.
#### 5️⃣ Spatial Sound Calculations
Once we had the angle of the door, we needed to calculate how to guide the user toward it using 3D spatial sound. This involved some complex trigonometry—specifically using **sine** and **cosine** waves to generate the audio signals that simulate the directionality of sound.
* **Sine and cosine functions** were used to simulate the sound direction. For example, if the door was to the left, we would generate a sound that plays more strongly in the left ear and gradually shift it to both ears as the user aligns with the door.
* We created a custom algorithm for determining whether the sound should guide the user to the left, right, or straight ahead. This real-time sound guidance system required heavy math on how to modify audio waves to mimic spatial sound.
The algorithms to create this 3D spatial sound system from scratch involved figuring out how to map the angle of the door to a directional sound cue, ensuring that as the user moves, the sound adjusts in real-time based on their new orientation.
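The core of that panning math looks roughly like the constant-power sketch below; it is a stand-in for our full algorithm, which also shapes the sound for targets behind the user:

```python
# Map the angle between the user's heading and the door to left/right channel gains
# with a constant-power pan law. A stand-in for the full spatial-sound algorithm.
import math

def stereo_gains(heading_deg: float, target_deg: float) -> tuple[float, float]:
    # Signed angle from the user's heading to the target, in (-180, 180].
    diff = (target_deg - heading_deg + 180) % 360 - 180
    # Map -90..90 degrees onto a 0..1 pan position (0 = hard left, 1 = hard right).
    pan = (max(-90.0, min(90.0, diff)) + 90.0) / 180.0
    left = math.cos(pan * math.pi / 2)
    right = math.sin(pan * math.pi / 2)
    return left, right

print(stereo_gains(0, -45))  # door to the left: louder left channel
print(stereo_gains(0, 0))    # straight ahead: equal gains
```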
#### 6️⃣ Sound Output
Finally, we connected a pair of wired headphones to the Raspberry Pi, allowing the user to receive these spatial sound cues. Since we created our own 3D spatial sound system, every sound played is calculated based on the user’s current orientation and the direction of the target (in this case, the door). This immediate feedback is crucial for real-time navigation, ensuring no delays as the user moves through space.
In summary, we built everything from the ground up, using raw sensor data, image processing, trigonometric calculations, and real-time sound manipulation—all based on complex mathematical principles. Every step, from calibration to navigation, was carefully designed to ensure a seamless, real-time experience for visually impaired users.
## Challenges 😫
#### 1️⃣ Challenge 01
One of the main challenges we faced was using the gyroscope to get accurate readings for the **z-axis** (which represents the rotation around the vertical axis). Gyroscopes often suffer from **drift**, a problem where even small measurement errors accumulate over time, leading to increasingly inaccurate values.
This drift could seriously affect our tool’s ability to correctly track the user’s orientation, which is crucial for navigation. We had to address minute details such as **sensor noise** and **temperature fluctuations**, which unexpectedly caused these drifts. We implemented filtering techniques like complementary filters to correct the drift and improve stability. These filters help merge data from both the accelerometer and gyroscope to provide more reliable angle measurements.
#### 2️⃣ Challenge 02
Another challenge was generating real-time **3D spatial sound**, which was more difficult than expected. Most of the popular libraries like OpenAL or FMOD handle 3D sound but they render the audio in advance before playback, which wasn’t suitable for our needs.
We required dynamic sound adjustments to match the user’s movement and position instantly. Therefore, we built our own 3D audio framework from scratch, allowing us to fine-tune the sound cues based on real-time data from the sensors.
The most difficult aspect, though, was creating an algorithm that accurately calculated the **angle of the sound** based on both the gyroscope’s orientation and the destination’s angle (what we called `Angle A`). Unexpected edge cases kept emerging that we hadn't initially considered. For instance, we found that small errors in the gyroscope’s readings could cause significant deviations in determining the direction of the sound. This required us to adjust our algorithm to handle cases where the user was facing odd angles, or when the destination was directly behind them.
## What we learned 🧠
This project taught us a lot, both technically and personally. Here's what we learned:
1. We realized how crucial it is to handle sensor data carefully.
2. We understood the importance of optimizing for **low-latency performance**, especially when working with the Raspberry Pi. Minimizing network calls and building our own 3D audio system pushed us to think about real-time processing in new ways.
3. Creating a custom 3D sound system from scratch was both challenging and rewarding. We learned how **sinusoidal waves** and other principles of sound physics work to create directional audio, which was key to our project’s success.
## What's next for Path4Me 🔮
For our next steps, we plan to integrate LiDAR sensors into the project. Currently, our system provides users with directional guidance (left, right, etc.), but it lacks the ability to convey distance information. Adding LiDAR sensors will allow us to measure distances to objects in real-time, enabling a more immersive and practical navigation experience.
For instance, if an object is far away, the sound would be softer, and as the user moves closer, the intensity of the sound would increase. This would simulate a more natural 3D soundscape, giving visually impaired users not only directional cues but also depth perception through sound. The combination of directional and distance-based sound would make navigating both indoor and outdoor environments much easier and safer. | winning |
## Inspiration
---
I recently read Ben Sheridan's paper on Human Centered Artificial Intelligence, where he argues that AI is best used as a tool that accelerates humans rather than trying to *replace* them. We wanted to design a "super-tool" that meaningfully augments a user's workday. We felt that current calendar apps are a convoluted mess of grids, flashing lights, alarms and events all vying for the user's attention. The chief design idea behind Line is simple: **your workday and your time are linear, so why shouldn't your calendar be linear?** We then take this base and augment it with *just the right amount* of AI.
## What it does
You get a calendar that tells you about an upcoming lunch with a person at a restaurant, gives you some information about the restaurant along with links to reviews and directions that you can choose to view. No voice-text frustration, no generative clutter.
## How we built it
We used React.js for our frontend, along with a Docker image for certain backend tasks and for hosting a small language model that does on-metal event summarization (**you can self-host this too for an off-the-cloud experience**). If provided, the you.com API key is used to get up-to-date and accurate information via the smart search query.
## Challenges we ran into
We tackled a lot of challenges, particularly around the interoperability of our tech stack. One was a potential multi-database system that would let users choose which database they wanted to use; we simply ran out of time to implement this, so for our demo we stuck with a Firebase implementation. We also wanted to make sure the option to host your own Docker image for some of the backend functions was present, and as a result a lot of time went into making both an appealing frontend and a solid backend.
## Accomplishments that we're proud of
We're really happy to have been able to use the powerful you.com smart and research search APIs to obtain precise data! Currently, even voice assistants like Siri or Google use a generative approach, and if quizzed on subjects outside their domain of knowledge they are likely to just make things up (including reviews and addresses), which could be super annoying on a busy workday; we're glad we've avoided this pitfall. We're also really happy with how transparent our tech stack is, leaving the door open for the open source community to help improve our product!
## What we learned
We learnt a lot over the course of two days: everything from RAG technology to Dockerization, Hugging Face Spaces, React.js, Python and so much more!
## What's next for Line Calendar
Improvements to the UI, the ability to swap out databases, and connections via the Google Calendar and Notion APIs to import and transform calendars from other software. Better context awareness for the you.com integration. Better backend support to allow organizations to deploy and scale on their own hardware.
### The problem:
• Calendars are incredibly rigid.
• Todo lists need to be used in tandem with something else to keep track of timing.
• Solutions to this (specialized productivity apps) always seem to have incredibly large learning curves.
### The solution:
We need a combined calendar and to-do list that abstracts everything behind ambient AI (no prompting required; it acts on profiled information about the user and their tendencies) and adapts to the complexity the user needs using natural language input.
## How we built it
• Designed Figma prototype
• Built landing page
• Implemented with Next.js and Tailwind
## Challenges we ran into
• Implementing NLI
## Accomplishments that we're proud of
• Design concept and prototype
## What's next for Sundial Lite
• Seek a better approach in development and try Swift for further implementation | ## Inspiration
We always that moment when you're with your friends, have the time, but don't know what to do! Well, SAJE will remedy that.
## What it does
We are an easy-to-use website that will take your current location, interests, and generate a custom itinerary that will fill the time you have to kill. Based around the time interval you indicated, we will find events and other things for you to do in the local area - factoring in travel time.
## How we built it
This webapp was built using a `MEN` stack. The frameworks used include: MongoDB, Express, and Node.js. Outside of the basic infrastructure, multiple APIs were used to generate content (specifically events) for users. These APIs were Amadeus, Yelp, and Google-Directions.
## Challenges we ran into
Some challenges we ran into revolved around using APIs, reading documentation and getting acquainted with someone else's code. Merging the frontend and backend also proved tough, as members had to find ways of integrating their individual components while ensuring all functionality was maintained.
## Accomplishments that we're proud of
We are proud of a final product that we legitimately think we could use!
## What we learned
We learned how to write recursive asynchronous fetch calls (trust me, after 16 straight hours of code, it's really exciting)! Outside of that we learned to use APIs effectively.
## What's next for SAJE Planning
In the future we can expand to include more customizable parameters, better form styling, or querying more APIs to be a true event aggregator. | losing |
## Inspiration
Since the pandemic, millions of people worldwide have turned to online alternatives to replace public fitness facilities and other physical activities. At-home exercises have become widely acknowledged, but the problem is that there is no way of telling whether people are doing the exercises accurately and whether they notice potentially physically damaging bad habits they may have developed. Even now, those habits may continuously affect and damage their bodies if left unnoticed. That is why we created **Yudo**.
## What it does
Yudo is an exercise web app that uses **TensorFlow AI**, a custom-developed exercise detection algorithm, and **pose detection** to help users improve their form while doing various exercises.
Once you open the web app, select your desired workout and Yudo will provide a quick exercise demo video. The closer your form matches the demo, the higher your accuracy score will be. After completing an exercise, Yudo will provide feedback generated via **ChatGPT** to help users identify and correct the discrepancies in their form.
## How we built it
We first developed the connection between **TensorFlow** and streaming live video via **BlazePose** and **JSON**. We sent the video's data to TensorFlow, which returned a JSON object of the different nodes and their coordinates; we used these to draw the nodes onto a 2D canvas that updates every frame and is projected on top of the video element. The continuous flow of JSON data from TensorFlow helped us create a series of data sets of what different plank forms look like. We took the relative positions of the relevant nodes in our own data sets and then created mathematical formulas that matched them.
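To illustrate the kind of relative-position formula we derived (shown here in Python for readability, even though the app itself runs on TensorFlow.js in the browser; the keypoint names and threshold are illustrative):

```python
# Check that the shoulder, hip and ankle keypoints are roughly collinear for a plank.
import math

def angle(a, b, c):
    """Angle at point b (degrees) formed by keypoints a-b-c, each an (x, y) pair."""
    ab = (a[0] - b[0], a[1] - b[1])
    cb = (c[0] - b[0], c[1] - b[1])
    dot = ab[0] * cb[0] + ab[1] * cb[1]
    norm = math.hypot(*ab) * math.hypot(*cb)
    return math.degrees(math.acos(dot / norm))

def plank_accuracy(keypoints: dict) -> float:
    hip_angle = angle(keypoints["shoulder"], keypoints["hip"], keypoints["ankle"])
    # A straight body line is ~180 degrees; the score drops as the hips sag or pike.
    return max(0.0, 1.0 - abs(180.0 - hip_angle) / 45.0)

pose = {"shoulder": (100, 200), "hip": (200, 210), "ankle": (300, 225)}
print(f"accuracy: {plank_accuracy(pose):.0%}")
```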
After a discussion with Sean, an MLH member, we decided to integrate OpenAI into our project by having it provide feedback based on how good your plank form is. We did so by using the **ExpressJS** back end to handle requests to the AI-response endpoint. In the process, we also used **nodemon**, a tool that automatically restarts the server on code changes, to help with development. We also used **Axios** to send data back and forth between the front end and back end.
The front end was designed using **Figma** and **Procreate** to create a framework that we could base our **React** components on. Since it was our first time using React and TensorFlow, it took a lot of trial and error to get CSS and HTML elements to work with our React components.
## Challenges we ran into
* Learning and implementing TensorFlow AI and React for the first time during the hackathon
* Creating a mathematical algorithm that accurately measures the form of a user while performing a specific exercise
* Making visual elements appear and move smoothly on a live video feed
## Accomplishments that we're proud of
* This is our 2nd hackathon (except Darryl)
* Efficient and even work distribution between all team members
* Creation of our own data set to accurately model a specific exercise
* A visually aesthetic, mathematically accurate and working application!
## What we learned
* How to use TensorFlow AI and React
* Practical applications of mathematics in computer science algorithms
## What's next for Yudo
* Implementation of more exercises
* Faster and more accurate live video feed and accuracy score calculations
* Provide live feedback during the duration of the exercise
* Integrate a database for users to save their accuracy scores and track their progress | ## Inspiration
We wanted to get better at sports, but we don't have that much time to perfect our moves.
## What it does
Compares your athletic abilities to other users by building skeletons of both people and showing you where you can improve.
Uses ML to compare your form to a professional's form.
Tells you improvements.
## How I built it
We used OpenPose to train on a dataset we found online and added our own members to train for certain skills. The backend was made in Python; it takes the skeletons and compares them to our database of trained models to see how you perform. The skeletons from both videos are combined side by side in a video and sent to our React frontend.
## Challenges I ran into
Having multiple libraries out of date and having to compare skeletons.
## Accomplishments that I'm proud of
## What I learned
## What's next for trainYou | ## 💡 Inspiration
* The pandemic has restricted us to stay at home and has taken a huge toll in our physical well-being
* Exercising within our house boundaries is a real challenge.
* We've developed a novel application to accurately track the count of certain curated indoor exercises and get the amount of calories burnt
* This is a cheap, free-to-use alternative to measure the effectivness of your workout session
## 💻 What it does
* The website uses AI to recognise the number of *pushups/squats and bicep curls*
* It then calculates the calories burnt and notifies the user in their mobile phones
* The user can select any kind of excerises and do them.
![Model]()
## ⚙️ How we built it
* The site runs on mediapipe, posenet, js.
* We've used mediapipe to detect user motion and then calculate the number of reps.
* A report is generated and sent as a message using the Twilio API.
* The user can end the session anytime if they wanted, just by clicking "Stop".

## 🧠 Challenges we ran into
* Application hangs, screen freezes because the tensorflow was blocking the camera.
* Organising the structure of the project.
* Tweaking with the mediapipe AI model to accurately detect the type of motion.
## 📖 What we learned
* Mediapipe using Javascript
* Running AI models for posture detection.
* Using Twilio for sending messages.
* AssemblyAI API for posting user datas and result in CockroachDB
## 📧 Use of Google Cloud
* Google Cloud offers text to voice conversion.
* We used google cloud speech conversation for voice control exercise web application.
## 📧 Use of Assembly.AI API
* We used Assembly.AI API for storing user info in CochroachDB.
* It used for a safe and secure transformation of data.
* We will be using user authentication for user login in future.
## 📖 Use of Deso
* Deso is a decentralized social application and it is open source & on chain open data
* We used deso for login, logout purpose and also for transactions occurs in our website.
## 📧 Use of Twilio
* We used Twilio to send users report to our users.
* Twilio is safe and secure API for sending text messages.

## 🚀 What's next for FitnessZone
* Parsing the voice commands using NLP.
* Smart execrise recommendation system.
* Accurate detection using deep learning models.
* More exercise recognization.
## 🏅 Accomplishments that we're proud of
* We're glad to sucessfully complete this project!
* The end goal was achieved to a satisfactory level and the outcome would help us as well to excerise at home.
## 🔨 How to run
* Fork repo
* Run index.html file in the html folder | winning |
## Inspiration
Post operative care drastically improves the quality of life of individuals who have undergone surgery. When completed routinely, post operative care not only increases the process of healing but also helps individuals regain their range of motion and return to their daily activities faster.
So why do people need an app for this? Why not just call mom?
People often forget about the numerous medications (frequency, refills...), rehabilitation instructions, physiotherapy appointments and the different types of exercises they require before and after surgery. OpBuddy is here to solve that!
## What it does
OpBuddy organizes a patient's rehabilitation details and medication in one simple app. Users can take a photo of their operative care instructions (pre and post) and prescription forms. This information becomes reminders for the patient to take their medication, refill their prescriptions, and prepare for operative care. Users can see their calculated coverage for more transparency around the hidden costs of medical procedures, including pre- and post-op medication and rehabilitation appointments. OpBuddy calculates third-party insurance coverage and keeps a cumulative total of running costs to help users budget better. OpBuddy also reminds users of upcoming doctor's appointments with the date, location and time to ensure a user never misses an appointment!
## How we built it
Prototype completed on Adobe XD.
Programmed using React, Node.js, JavaScript and the Azure Computer Vision API.
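A rough sketch of the OCR call behind the photo capture is below, using the Azure Computer Vision Read API over plain HTTP; the API version, endpoint and response shape should be checked against the current Azure documentation, and the key is a placeholder:

```python
# Rough sketch of the OCR step: submit an image to the Azure Read API and collect the text lines.
import time
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "YOUR_AZURE_KEY"

def read_prescription(image_bytes: bytes) -> list[str]:
    submit = requests.post(
        f"{ENDPOINT}/vision/v3.2/read/analyze",
        headers={"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/octet-stream"},
        data=image_bytes,
    )
    result_url = submit.headers["Operation-Location"]
    while True:
        result = requests.get(result_url, headers={"Ocp-Apim-Subscription-Key": KEY}).json()
        if result["status"] in ("succeeded", "failed"):
            break
        time.sleep(1)
    lines = []
    for page in result["analyzeResult"]["readResults"]:
        lines.extend(line["text"] for line in page["lines"])
    return lines
```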
## Challenges we ran into
Difficulties with API integration.
## Accomplishments that we're proud of
Integrating the Azure Computer Vision API to successfully function with the app.
## What we learned
Front-end design, API integration, using Azure APIs.
## What's next for OpBuddy
Calendar syncing with the Google Calendar API such that users can ensure that their appointments are not clashing with any other aspect of their life. We also hope to enable family sharing between patients and their loved ones for more seamless healthcare. For more geriatric populations, loved ones can check whether the patient has taken their medication, when their appointments are and when to go for refills. | # Check out our [slides](https://docs.google.com/presentation/d/1K41ArhGy6HgdhWuWSoGtBkhscxycKVTnzTSsnapsv9o/edit#slide=id.g30ccbcf1a6f_0_150) and come over for a demo!
## Inspiration
The inspiration for EYEdentity came from the need to enhance patient care through technology. Observing the challenges healthcare professionals face in quickly accessing patient information, we envisioned a solution that combines facial recognition and augmented reality to streamline interactions and improve efficiency.
## What it does
EYEdentity is an innovative AR interface that scans patient faces to display their names and critical medical data in real-time. This technology allows healthcare providers to access essential information instantly, enhancing the patient experience and enabling informed decision-making on the spot.
## How we built it
We built EYEdentity using a combination of advanced facial recognition and facial tracking algorithms and the new Snap Spectacles. The facial recognition component was developed using machine learning techniques to ensure high accuracy, while the AR interface was created using cutting-edge software tools that allow for seamless integration of data visualization in a spatial format. Building on the Snap Spectacles provided us with a unique opportunity to leverage their advanced AR capabilities, resulting in a truly immersive user experience.
## Challenges we ran into
One of the main challenges we faced was ensuring the accuracy and speed of the facial recognition system in various lighting conditions and angles. Additionally, integrating real-time data updates into the AR interface required overcoming technical hurdles related to data synchronization and display.
## Accomplishments that we're proud of
We are proud of successfully developing a prototype that demonstrates the potential of our technology in a real-world healthcare setting. The experience of building on the Snap Spectacles allowed us to create a user experience that feels natural and intuitive, making it easier for healthcare professionals to engage with patient data.
## What we learned
Throughout the development process, we learned the importance of user-centered design in healthcare technology. Communicating with healthcare professionals helped us understand their needs and refine our solution to better serve them. We also gained valuable insights into the technical challenges of integrating AR with real-time data.
## What's next for EYEdentity
Moving forward, we plan to start testing in clinical environments to gather more feedback and refine our technology. Our goal is to enhance the system's capabilities, expand its features, and ultimately deploy EYEdentity in healthcare facilities to revolutionize patient care. | ## Inspiration
Every year hundreds of thousands of preventable deaths occur due to the lack of first aid knowledge in our societies. Many lives could be saved if the right people are in the right places at the right times. We aim towards connecting people by giving them the opportunity to help each other in times of medical need.
## What it does
It is a mobile application that is aimed towards connecting members of our society together in times of urgent medical need. Users can sign up as respondents which will allow them to be notified when people within a 300 meter radius are having a medical emergency. This can help users receive first aid prior to the arrival of an ambulance or healthcare professional greatly increasing their chances of survival. This application fills the gap between making the 911 call and having the ambulance arrive.
## How we built it
The app is Android native and relies heavily on the Google Cloud Platform. User registration and authentication is done through the use of Fireauth. Additionally, user data, locations, help requests and responses are all communicated through the Firebase Realtime Database. Lastly, the Firebase ML Kit was also used to provide text recognition for the app's registration page. Users could take a picture of their ID and their information can be retracted.
## Challenges we ran into
There were numerous challenges in terms of handling the flow of data through the Firebase Realtime Database and providing the correct data to authorized users.
## Accomplishments that we're proud of
We were able to build a functioning prototype! Additionally we were able to track and update user locations in a MapFragment and ended up doing/implementing things that we had never done before. | partial |
## Inspiration
All of us have gone through the painstaking and difficult process of onboarding as interns and making sense of huge repositories with many layers of folders and files. We hoped to shorten this or remove it completely through the use of Code Flow.
## What it does
Code Flow exists to speed up onboarding and make code easy to understand for non-technical people such as Project Managers and Business Analysts. Once the user has uploaded the repo, it has 2 main features. First, it can visualize the entire repo by showing how different folders and files are connected and providing a brief summary of each file and folder. It can also visualize a file by showing how the different functions are connected for a more technical user. The second feature is a specialized chatbot that allows you to ask questions about the entire project as a whole or even specific files. For example, "Which file do I need to change to implement this new feature?"
## How we built it
We used React to build the front end. Any folders uploaded by the user through the UI are stored using MongoDB. The backend is built using Python-Flask. If the user chooses a visualization, we first summarize what every file and folder does and display that in a graph data structure using the library pyvis. We analyze whether files are connected in the graph based on an algorithm that checks features such as the functions imported, etc. For the file-level visualization, we analyze the file's code using an AST and figure out which functions are interacting with each other. Finally for the chatbot, when the user asks a question we first use Cohere's embeddings to check the similarity of the question with the description we generated for the files. After narrowing down the correct file, we use its code to answer the question using Cohere generate.
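A condensed version of that file-matching step is sketched below; the file descriptions, model defaults and SDK details are indicative rather than our exact deployed configuration:

```python
# Embed the generated file descriptions once, embed the user's question, and pick the
# most similar file by cosine similarity. Descriptions and SDK defaults are indicative.
import numpy as np
import cohere

co = cohere.Client("YOUR_COHERE_KEY")

file_descriptions = {
    "auth/routes.py": "Flask routes handling login, signup and session tokens.",
    "models/user.py": "MongoDB user schema and password hashing helpers.",
    "graph/visualize.py": "Builds the pyvis graph of folders and files.",
}

doc_vectors = np.array(co.embed(texts=list(file_descriptions.values())).embeddings)

def best_file(question: str) -> str:
    q = np.array(co.embed(texts=[question]).embeddings[0])
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    return list(file_descriptions)[int(np.argmax(scores))]

print(best_file("Which file do I change to add two-factor login?"))
```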
## Challenges we ran into
We struggled a lot with narrowing down which file to use to answer the user's questions. We initially thought to simply use Cohere generate to reply with the correct file but knew that it isn't specialized for that purpose. We decided to use embeddings and then had to figure out how to use those numbers to actually get a valid result. We also struggled with getting all of our tech stacks to work as we used React, MongoDB and Flask. Making the API calls seamless proved to be very difficult.
## Accomplishments that we're proud of
This was our first time using Cohere's embeddings feature and accurately analyzing the result to match the best file. We are also proud of being able to combine various different stacks and have a working application.
## What we learned
We learned a lot about NLP, how embeddings work, and what they can be used for. In addition, we learned how to problem solve and step out of our comfort zones to test new technologies.
## What's next for Code Flow
We plan on adding annotations for key sections of the code, possibly using a new UI so that the user can quickly understand important parts without wasting time. | ## Inspiration
During my last internship, I worked on an aging product with numerous security vulnerabilities, but identifying and fixing these issues was a major challenge. One of my key projects was to implement CodeQL scanning to better locate vulnerabilities. While setting up CodeQL wasn't overly complex, it became repetitive as I had to manually configure it for every repository, identifying languages and creating YAML files. Fixing the issues proved even more difficult as many of the vulnerabilities were obscure, requiring extensive research and troubleshooting. With that experience in mind, I wanted to create a tool that could automate this process, making code security more accessible and ultimately improving internet safety
## What it does
AutoLock automates the security of your GitHub repositories. First, you select a repository and hit install, which triggers a pull request with a GitHub Actions configuration to scan for vulnerabilities and perform AI-driven analysis. Next, you select which vulnerabilities to fix, and AutoLock opens another pull request with the necessary code modifications to address the issues.
## How I built it
I built AutoLock using Svelte for the frontend and Go for the backend. The backend leverages the Gin framework and Gorm ORM for smooth API interactions, while the frontend is powered by Svelte and styled using Flowbite.
## Challenges we ran into
One of the biggest challenges was navigating GitHub's app permissions. Understanding which permissions were needed and ensuring the app was correctly installed for both the user and their repositories took some time. Initially, I struggled to figure out why I couldn't access the repos even with the right permissions.
## Accomplishments that we're proud of
I'm incredibly proud of the scope of this project, especially since I developed it solo. The user interface is one of the best I've ever created—responsive, modern, and dynamic—all of which were challenges for me in the past. I'm also proud of the growth I experienced working with Go, as I had very little experience with it when I started.
## What we learned
While the unstable CalHacks WiFi made deployment tricky (basically impossible, terraform kept failing due to network issues 😅), I gained valuable knowledge about working with frontend component libraries, Go's Gin framework, and Gorm ORM. I also learned a lot about integrating with third-party services and navigating the complexities of their APIs.
## What's next for AutoLock
I see huge potential for AutoLock as a startup. There's a growing need for automated code security tools, and I believe AutoLock's ability to simplify the process could make it highly successful and beneficial for developers across the web. | ## Off The Grid
Super awesome offline, peer-to-peer, real-time canvas collaboration iOS app
# Inspiration
Most people around the world will experience limited or no Internet access at times during their daily lives. We could be underground (on the subway), flying on an airplane, or simply be living in areas where Internet access is scarce and expensive. However, so much of our work and regular lives depend on being connected to the Internet. I believe that working with others should not be affected by Internet access, especially knowing that most of our smart devices are peer-to-peer Wifi and Bluetooth capable. This inspired me to come up with Off The Grid, which allows up to 7 people to collaborate on a single canvas in real-time to share work and discuss ideas without needing to connect to Internet. I believe that it would inspire more future innovations to help the vast offline population and make their lives better.
# Technology Used
Off The Grid is a Swift-based iOS application that uses Apple's Multi-Peer Connectivity Framework to allow nearby iOS devices to communicate securely with each other without requiring Internet access.
# Challenges
Integrating the Multi-Peer Connectivity Framework into our application was definitely challenging, along with managing memory of the bit-maps and designing an easy-to-use and beautiful canvas
# Team Members
Thanks to Sharon Lee, ShuangShuang Zhao and David Rusu for helping out with the project! | partial |
## Inspiration
According to the Canadian Centre for Injury Prevention and Control, in Canada, an average of 4 people are killed every day as a result of impaired driving. In 2019, there were 1,346 deaths caused by impaired driving, which represents approximately 30% of all traffic-related deaths in Canada. Impaired driving is also a leading criminal cause of death in Canada and it is estimated that it costs the country around $20 billion per year. These statistics show that impaired driving is a significant problem in Canada and highlights the need for continued efforts to reduce the number of deaths and injuries caused by this preventable behavior.
## What it does
This program calculates the users blood alcohol concentration using a few input parameters to determine whether it would be safe for the user to drive their vehicle (Using Ontario's recommended blood alcohol concentraition 0.08). The program uses the users weight, sex, alcohol consumed (grams) (shots [1.5oz], wine glass [5oz], beer cup [12oz]), alcoholic beverage. The alcoholic beverage is a local database constructed using some of the most popoular drinks.
With the above parameters, we used the Widmark formula described in the following paper (<https://www.yasa.org/upload/uploadedfiles/alcohol.pdf>). The Widmark is a rough estimate of the blood alcohol concentration and shouldn't be taken as a definitive number. The Widmark formula is:
BAC=(Alcohol consumed in grams / (Body weight in grams \* r)) \* 100
## How we built it
We used ReactJS for front-end and Firebase for the backend. For the Google technology we decided to integrate the Firebase Realtime Database. We store all of the drinks on there so that whenever we reload the page or access the website on different devices, we can continue from where we left off. Your alcohol blood concentration also depends on how much time has passed since you drank the drink, so we are able to store the time and update it continuously to show more accurate results.
## Challenges we ran into
* Incorporating time into elapsed time calculations
* Use State hooks constantly delayed
* Reading data from database
## Accomplishments that we're proud of
* Our first hackathon!
## What we learned
* How to fetch data from a database.
## What's next for Alcohol Monitor
* Photo scan of drink
* More comprehensive alcohol database
* More secure database implementation
* User logins
* Mobile implementation
* BOOTSTRAP! | ## PENNAPPS SEMI FINALIST
---
## What is BOMO?
BOMO is an AR mobile app that allows clinicians, coaches, and patients alike to easily quantify body movement.
*You’re playing a game of intense basketball. As you are about to shoot the tie-breaking shot, you tear your ACL. You go to the doctor sulking in defeat, and during weeks of physical therapy, your PT arbitrarily asks you to “try a little harder than last week” during each session. However, your PT soon hears about the app BOMO, where they are now able to objectively track your knee recovery with just a smart phone. With BOMO, you are now able to have extremely detailed records of your rehabilitation and see your progress in an immersive and personalized way without the use of expensive hardware or software.*
BOMO can not only track joint flexion angles, which are useful for measuring flexibility progress, but it can also track joint movement in *3d*, allowing users to analyze common dysfunctions such as lateral knee movement during walking and running, tracking overall joint stability, and measuring muscular imbalances…all with just a smartphone.
## What BOMO really solves
The 3 C’s of Healthcare: Cost, convenience, consistency.
**COST**:
Software to accurately track joint flexion, walking patterns, and body movement is extremely costly and hard to setup in a normal environment: e.g., a home, gym, or small doctor’s office.
**CONVENIENCE**:
In order to actually get accurate measurmenets, you likely have to go to a motion analysis lab and have the ability to access one in the first place.
**CONSISTENCY**:
Physical therapists and doctors often don’t take enough care to consistently take accurate measurments and often “eyeball” their results.
## How we built it
Unfortunately, ARKit doesn't offer support for image/marker tracking, so we used Vuforia as the base library for our computer vision.
The entire thing is built in Objective-C, C++, and Swift, and BOMO processes everything locally. Currently, our prototype includes the use of several physical markers to track joint movement, but given enough time in the future, pose estimation technology and improved phone hardware will allow us to track joint movement natively, without any markers–– similar to the Xbox kinect.
## Inspiration
*Tyler* previously did physical therapy research in a Motion Analysis lab, constantly monitoring the walking rehabilitation progress of patients and using cumbersome, expensive software and hardware to get relatively simple data. *Jake* works in a neuromechanics lab and designs Virtual Reality experiences to experiment with anxiety, trauma, and movement disorders. Combining this with the fact that there are currently no reliable real-time motion tracking apps on the app store, the need to make a medical and mobile-first, AR motion tracking app became obvious.
## The implications of BOMO
Aside from basic tracking joint movement, we can track the velocity of different limb movements and plan on implementing a feature that can calculate power produced based on the test subject’s weight or weight lifted during a movement, allowing BOMO to easily make its way into the fitness market not only as an injury prevention and analysis tool, but as a sports metrics tracker.
Because BOMO exists purely in a smartphone based environment, it can potentially drastically reduce the cost of gathering useful physical metrics from patients and athletes.
BOMO also potentially expands the medical community’s ability to acquire field data and design mobile-first, travel-friendly research. In an ideal scenario, BOMO exists within an entire Telehealth-based system, where patients can log data at home and contribute remotely to a huge database of medical data– further contributing to the development of machine learning’s impact in healthcare. | ## Inspiration
To spread the joy of swag collecting.
## What it does
A Hack the North Simulator, specifically of the Sponsor Bay on the second floor. The player will explore sponsor booths, collecting endless amounts of swag along the way. A lucky hacker may stumble upon the elusive, exclusive COCKROACH TROPHY, or a very special RBC GOLD PRIZE!
## How we built it
Unity, Aseprite, Cakewalk
## What we learned
We learned the basics of Unity and git, rigidbody physics, integrating audio into Unity, and the creation of game assets. | losing |
## Inspiration
EV vehicles are environment friendly and yet it does not receive the recognition it deserves. Even today we do not find many users driving electric vehicles and we believe this must change. Our project aims to provide EV users with a travel route showcasing optimal (and functioning) charging stations to enhance the use of Electric Vehicles by resolving a major concern, range anxiety. We also believe that this will inherently promote the usage of electric vehicles amongst other technological advancements in the car industry.
## What it does
The primary aim of our project is to display the **ideal route** to the user for the electric vehicle to take along with the **optimal (and functional) charging stations** using markers based on the source and destination.
## How we built it
Primarily, in the backend, we integrated two APIs. The **first API** call is used to fetch the longitude as well as latitude coordinates of the start and destination addresses while the **second API** was used to locate stations within a **specific radius** along the journey route. This computation required the start and destination addresses leading to the display of the ideal route containing optimal (and functioning) charging points along the way. Along with CSS, the frontend utilizes **Leaflet (SDK/API)** to render the map which not only recommends the ideal route showing the source, destination, and optimal charging stations as markers but also provides a **side panel** displaying route details and turn-by-turn directions.
## Challenges we ran into
* Most of the APIs available to help develop our application were paid
* We found a **scarcity of reliable data sources** for EV charging stations
* It was difficult to understand the documentation for the Maps API
* Java Script
## Accomplishments that we're proud of
* We developed a **fully functioning app in < 24 hours**
* Understood as well as **integrated 3 APIs**
## What we learned
* Team work makes the dream work: we not only played off each others strengths but also individually tried things that are out of our comfort zone
* How Ford works (from the workshop) as well as more about EVs and charging stations
* We learnt about new APIs
* If we have a strong will to learn and develop something new, we can no matter how hard it is; We just have to keep at it
## What's next for ChargeRoute Navigator: Enhancing the EV Journey
* **Profile** | User Account: Display the user's profile picture or account details
* **Accessibility** features (e.g., alternative text)
* **Autocomplete** Suggestions: Provide autocomplete suggestions as users type, utilizing geolocation services for accuracy
* **Details on Clicking the Charging Station (on map)**: Provide additional information about each charging station, such as charging speed, availability, and user ratings
* **Save Routes**: Allow users to save frequently used routes for quick access.
* **Traffic Information (integration with GMaps API)**: Integrate real-time traffic data to optimize routes
* **User feedback** about (charging station recommendations and experience) to improve user experience | ## Inspiration
As a team we wanted to pursue a project that we could see as a creative solution to an important issue currently and may have a significant impact for the future. GM's sponsorship challenge provided us with the most exciting problem to tackle - overcoming the limitations in electric vehicle (EV) charging times. We as a team believe that EVs are the future in transportation and our project reflects those ideals.
## What it does
Due to the rapid adoption of EVs in the near future and the slower progress of charging station infrastructure, waiting for charging time could become a serious issue. Sharger is a mobile/web application that connects EV drivers with EV owners. If charging stations are full and waitlists are too long, EV drivers cannot realistically wait for other drivers for hours to finish charging. Hence, we connect them to EV owners who rent their charging stations from their home. Drivers have access to a map with markers of nearby homes made available by the owners. Through the app they can book availability at the homes and save time from waiting at public charging stations. Home owners are able to fully utilize their home chargers by using them during the night for their own EVs and renting them out to other EV owners during the day.
## How we built it
The app was written in JavaScript and built using React, Express, and MongoDB technologies . It starts with a homepage with a login and signup screen. From there, drivers can utilize the map that was developed with Google Cloud API . The map allows drivers to find nearby homes by displaying markers for all nearby homes made available by the owners. The drivers can then book a time slot. The owners have access to a separate page that allows them to list their homes similar to Airbnb's format. Instead of bedrooms and bathrooms though, they can list their charger type, charger voltage, bedding availability in addition to a photo of the home and address. Home owners have the option to make their homes available whenever they want. Making a home unavailable will remove the marker from the drivers' map.
## Challenges we ran into
As a team we faced many challenges both technical and non-technical. The concept of our project is complex so we were heavily constrained by time. Also all of us learned a new tool in order to adapt to our project requirements.
## Accomplishments that we're proud of
As a team we are really proud of our idea and our team effort. We truly believe that our idea, through its capability to utilize a convenient resource in unused home chargers, will help contribute to the widespread adoption of EVs in the near future. All of our team members worked very hard and learned new technologies and skills to overcome challenges, in order to develop the best possible product we could in our given time.
## What we learned
* Express.js
* Bootstrap
* Google Cloud API
## What's next for Sharger
* implement a secure authorization feature
* implement a built-in navigation system or outsource navigation to google maps
* outsource bedding feature to Airbnb
* home rating feature
* develop a bookmark feature for drivers to save home locations
* implement an estimated waiting time based off past home charging experiences | ## Inspiration - That annoying feeling when i ran out snack at midnight and have to go out in cold to buy them.
## What it does - Fully automates the process of buying grocery items from finding what is missing to ordering and delivering them at your doorstep
## How I built - Built the backend services with node.js, AWS, GCP Vision API and front end with React-native
## Challenges I ran into - No resources available to
## Accomplishments that we are proud of - built the frontend components and backend sevice which detects the items through an image
## What I learned - Vision API, AWS, GCP, Node.js, React-Native
## What's next for SmartO
Build a backend service that will order the groceries from store and deliver them to you at your desired time. | partial |
# [Try it out!](https://ipfe.elguindi.xyz)
## Inspiration
Intrigued by the ability to access a large amount of data on the distributed network, we set out to classify files by similarity for interesting exploration. While there were powerful search tools like Google to access information on the web, we were unsure how to explore all of the data available on the distributed network.
## What it does
This project visualizes the vast quantities of data stored on the InterPlanetary File System using an intuitive 3D graph. Connected nodes are nearby and have similar content and all nodes are colour-coded based on their file type, with larger nodes representing larger files. Hovering over a node gives more information about the file and clicking on the node downloads the file.
Nodes can be dragged with the cursor and the view of the graph can be zoomed in or out with the scroll wheel.
## How we built it
We built this application using 3 important technologies: Golang, Python, and Three.js (JavaScript). Behind the scenes we used the powerful technologies of Estuary in order to interface and get files from the IPFS and Co:here's Embed platform in order to quantify the similarity of two files.
Our pipeline consists of fetching the headers of around 2000 files on the IPFS, embedding the texts into vectors, performing a reduction in vector space dimension with principal component analysis, classifying texts based on k nearest neighbors, and visualizing the resulting neighbors as a 3D graph.
## Challenges we ran into
* The data in the IPFS was too large to download and process so we embedded the files based only on their metadata.
* Co:here's embed model was unable to process more than 500 lines in one request.
* Data retrieval from IPFS was slower than centralized systems.
* Determining the best way to summarize the multi-dimensional data into 3-dimensional data.
* We were unable to fine-tune the Co:here command model.
## Accomplishments that we're proud of
* Reverse engineering the Estuary API to be able to access all files hosted on the IPFS through the miners with multiple scripts in Go using parallel processing.
* Performance with concurrence while fetching and formatting the file headers from the network.
* The handling of large data in an efficient pipeline.
* The use of Co:here embeddings in order to generate 3D vectors with minimal information loss with principal component analysis.
* The efficient and intuitive representation of the collected data which was categorized with k nearest neighbors.
## What we learned
This hackathon has served as an opportunity to learn uncountable things, but I would like to highlight a couple. To begin, we were able to learn about useful and important technologies that facilitated us to make the project possible, including the Estuary and Co:here APIs, and we improved in our abilities to code in Python, Golang, and Javascript. Furthermore, the presentations hosted by various sponsors were a nice opportunity to be able to talk with and meet successful individuals in the field of technology and get their advice on the future of technology, and how to improve ourselves as members of a team and technically.
## What's next for IPFE: InterPlanetary File Explorer
Since we were unable to process all of the file content during the vector embedding process due to file space and time feasibility limitations, IPFE can be improved by using the file content to influence the vector embedding of the files for a more accurate graph. Additionally, we were only able to scratch the surface of the number of files on the IPFS. This project can be scaled up to many more files, where individual "InterPlanetary Cluster" could consist of similar files and make up a whole "galaxy" of files that can be visually inspected and explored. | # 🎓 **Inspiration**
Entering our **junior year**, we realized we were unprepared for **college applications**. Over the last couple of weeks, we scrambled to find professors to work with to possibly land a research internship. There was one big problem though: **we had no idea which professors we wanted to contact**. This naturally led us to our newest product, **"ScholarFlow"**. With our website, we assure you that finding professors and research papers that interest you will feel **effortless**, like **flowing down a stream**. 🌊
# 💡 **What it Does**
Similar to the popular dating app **Tinder**, we provide you with **hundreds of research articles** and papers, and you choose whether to approve or discard them by **swiping right or left**. Our **recommendation system** will then provide you with what we think might interest you. Additionally, you can talk to our chatbot, **"Scholar Chat"** 🤖. This chatbot allows you to ask specific questions like, "What are some **Machine Learning** papers?". Both the recommendation system and chatbot will provide you with **links, names, colleges, and descriptions**, giving you all the information you need to find your next internship and accelerate your career 🚀.
# 🛠️ **How We Built It**
While half of our team worked on **REST API endpoints** and **front-end development**, the rest worked on **scraping Google Scholar** for data on published papers. The website was built using **HTML/CSS/JS** with the **Bulma** CSS framework. We used **Flask** to create API endpoints for JSON-based communication between the server and the front end.
To process the data, we used **sentence-transformers from HuggingFace** to vectorize everything. Afterward, we performed **calculations on the vectors** to find the optimal vector for the highest accuracy in recommendations. **MongoDB Vector Search** was key to retrieving documents at lightning speed, which helped provide context to the **Cerebras Llama3 LLM** 🧠. The query is summarized, keywords are extracted, and top-k similar documents are retrieved from the vector database. We then combined context with some **prompt engineering** to create a seamless and **human-like interaction** with the LLM.
# 🚧 **Challenges We Ran Into**
The biggest challenge we faced was gathering data from **Google Scholar** due to their servers blocking requests from automated bots 🤖⛔. It took several hours of debugging and thinking to obtain a large enough dataset. Another challenge was collaboration – **LiveShare from Visual Studio Code** would frequently disconnect, making teamwork difficult. Many tasks were dependent on one another, so we often had to wait for one person to finish before another could begin. However, we overcame these obstacles and created something we're **truly proud of**! 💪
# 🏆 **Accomplishments That We're Proud Of**
We’re most proud of the **chatbot**, both in its front and backend implementations. What amazed us the most was how **accurately** the **Llama3** model understood the context and delivered relevant answers. We could even ask follow-up questions and receive **blazing-fast responses**, thanks to **Cerebras** 🏅.
# 📚 **What We Learned**
The most important lesson was learning how to **work together as a team**. Despite the challenges, we **pushed each other to the limit** to reach our goal and finish the project. On the technical side, we learned how to use **Bulma** and **Vector Search** from MongoDB. But the most valuable lesson was using **Cerebras** – the speed and accuracy were simply incredible! **Cerebras is the future of LLMs**, and we can't wait to use it in future projects. 🚀
# 🔮 **What's Next for ScholarFlow**
Currently, our data is **limited**. In the future, we’re excited to **expand our dataset by collaborating with Google Scholar** to gain even more information for our platform. Additionally, we have plans to develop an **iOS app** 📱 so people can discover new professors on the go! | ## Inspiration
With the ubiquitous and readily available ML/AI turnkey solutions, the major bottlenecks of data analytics lay in the consistency and validity of datasets.
**This project aims to enable a labeller to be consistent with both their fellow labellers and their past self while seeing the live class distribution of the dataset.**
## What it does
The UI allows a user to annotate datapoints from a predefined list of labels while seeing the distribution of labels this particular datapoint has been previously assigned by another annotator. The project also leverages AWS' BlazingText service to suggest labels of incoming datapoints from models that are being retrained and redeployed as it collects more labelled information. Furthermore, the user will also see the top N similar data-points (using Overlap Coefficient Similarity) and their corresponding labels.
In theory, this added information will motivate the annotator to remain consistent when labelling data points and also to be aware of the labels that other annotators have assigned to a datapoint.
## How we built it
The project utilises Google's Firestore realtime database with AWS Sagemaker to streamline the creation and deployment of text classification models.
For the front-end we used Express.js, Node.js and CanvasJS to create the dynamic graphs. For the backend we used Python, AWS Sagemaker, Google's Firestore and several NLP libraries such as SpaCy and Gensim. We leveraged the realtime functionality of Firestore to trigger functions (via listeners) in both the front-end and back-end. After K detected changes in the database, a new BlazingText model is trained, deployed and used for inference for the current unlabeled datapoints, with the pertinent changes being shown on the dashboard
## Challenges we ran into
The initial set-up of SageMaker was a major timesink, the constant permission errors when trying to create instances and assign roles were very frustrating. Additionally, our limited knowledge of front-end tools made the process of creating dynamic content challenging and time-consuming.
## Accomplishments that we're proud of
We actually got the ML models to be deployed and predict our unlabelled data in a pretty timely fashion using a fixed number of triggers from Firebase.
## What we learned
Clear and effective communication is super important when designing the architecture of technical projects. There were numerous times where two team members were vouching for the same structure but the lack of clarity lead to an apparent disparity.
We also realized Firebase is pretty cool.
## What's next for LabelLearn
Creating more interactive UI, optimizing the performance, have more sophisticated text similarity measures. | partial |
## Inspiration
Enabling Accessible Transportation for Those with Disabilities
AccessRide is a cutting-edge website created to transform the transportation experience for those with impairments. We want to offer a welcoming, trustworthy, and accommodating ride-hailing service that is suited to the particular requirements of people with mobility disabilities since we are aware of the special obstacles they encounter.
## What it does
Our goal is to close the accessibility gap in the transportation industry and guarantee that everyone has access to safe and practical travel alternatives. We link passengers with disabilities to skilled, sympathetic drivers who have been educated to offer specialised assistance and fulfill their particular needs using the AccessRide app.
Accessibility:-
The app focuses on ensuring accessibility for passengers with disabilities by offering vehicles equipped with wheelchair ramps or lifts, spacious interiors, and other necessary accessibility features.
Specialized Drivers:-
The app recruits drivers who are trained to provide assistance and support to passengers with disabilities. These drivers are knowledgeable about accessibility requirements and are
committed to delivering a comfortable experience.
Customized Preferences:-
Passengers can specify their particular needs and preferences within the app, such as requiring a wheelchair-accessible vehicle, additional time for boarding and alighting, or any specific assistance required during the ride.
Real-time Tracking:-
Passengers can track the location of their assigned vehicle in real-time, providing peace of mind and ensuring they are prepared for pick-up.
Safety Measures:-
The app prioritizes passenger safety by conducting driver background checks, ensuring proper vehicle maintenance, and implementing safety protocols to enhance the overall travel experience.
Seamless Payment:-
The app offers convenient and secure payment options, allowing passengers to complete their transactions electronically, reducing the need for physical cash handling
## How we built it
We built it using django, postgreSQL and Jupyter Notebook for driver selection
## Challenges we ran into
Ultimately, the business impact of AccessRide stems from its ability to provide a valuable and inclusive service to people with disabilities. By prioritizing their needs and ensuring a comfortable and reliable transportation experience, the app can drive customer loyalty, attract new users, and make a positive social impact while growing as a successful business.
To maintain quality service, AccessRide includes a feedback and rating system. This allows passengers to provide feedback on their experience and rate drivers based on their level of assistance, vehicle accessibility, and overall service quality. It was a challenging part in this event.
## Accomplishments that we're proud of
We are proud that we completed our project. We look forward to develop more projects.
## What we learned
We learned about the concepts of django and postgreSQL. We also learnt many algorithms in machine learning and implemented it as well.
## What's next for Accessride-Comfortable ride for all abilities
In conclusion, AccessRide is an innovative and groundbreaking project that aims to transform the transportation experience for people with disabilities. By focusing on accessibility, specialized driver training, and a machine learning algorithm, the app sets itself apart from traditional ride-hailing services. It creates a unique platform that addresses the specific needs of passengers with disabilities and ensures a comfortable, reliable, and inclusive transportation experience.
## Your Comfort, Our Priority "Ride with Ease, Ride with Comfort“ | ## Inspiration
As university students and soon to be graduates, we understand the financial strain that comes along with being a student, especially in terms of commuting. Carpooling has been a long existing method of getting to a desired destination, but there are very few driving platforms that make the experience better for the carpool driver. Additionally, by encouraging more people to choose carpooling at a way of commuting, we hope to work towards more sustainable cities.
## What it does
FaceLyft is a web app that includes features that allow drivers to lock or unlock their car from their device, as well as request payments from riders through facial recognition. Facial recognition is also implemented as an account verification to make signing into your account secure yet effortless.
## How we built it
We used IBM Watson Visual Recognition as a way to recognize users from a live image; After which they can request money from riders in the carpool by taking a picture of them and calling our API that leverages the Interac E-transfer API. We utilized Firebase from the Google Cloud Platform and the SmartCar API to control the car. We built our own API using stdlib which collects information from Interac, IBM Watson, Firebase and SmartCar API's.
## Challenges we ran into
IBM Facial Recognition software isn't quite perfect and doesn't always accurately recognize the person in the images we send it. There were also many challenges that came up as we began to integrate several APIs together to build our own API in Standard Library. This was particularly tough when considering the flow for authenticating the SmartCar as it required a redirection of the url.
## Accomplishments that we're proud of
We successfully got all of our APIs to work together! (SmartCar API, Firebase, Watson, StdLib,Google Maps, and our own Standard Library layer). Other tough feats we accomplished was the entire webcam to image to API flow that wasn't trivial to design or implement.
## What's next for FaceLyft
While creating FaceLyft, we created a security API for requesting for payment via visual recognition. We believe that this API can be used in so many more scenarios than carpooling and hope we can expand this API into different user cases. | ## Inspiration
During last year's World Wide Developers Conference, Apple introduced a host of new innovative frameworks (including but not limited to CoreML and ARKit) which placed traditionally expensive and complex operations such as machine learning and augmented reality in the hands of developers such as myself. This incredible opportunity was one that I wanted to take advantage of at PennApps this year, and Lyft's powerful yet approachable API (and SDK!) struck me as the perfect match for ARKit.
## What it does
Utilizing these powerful technologies, Wher integrates with Lyft to further enhance the process of finding and requesting a ride by improving on ease of use, safety, and even entertainment. One issue that presents itself when using overhead navigation methods is, quite simply, a lack of the 3rd dimension. A traditional overhead view tends to complicate on foot navigation more than it may help, and even more importantly, requires the user to bury their face in their phone. This pulls attention from the users surroundings, and poses a threat to their safety- especially in busy cities. Wher resolves all of these concerns by bringing the experience of Lyft into Augmented Reality, which allows users to truly see the location of their driver and destination, pay more attention to where they are going, and have a more enjoyable and modern experience in the process.
## How I built it
I built Wher using several of Apple's Frameworks including ARKit, MapKit, CoreLocation, and UIKit, which allowed me to build the foundation for the app and the "scene" necessary to create and display an Augmented Reality plane. Using the Lyft API I was able to gather information regarding available drivers in the area, including their exact position (real time), cost, ETA, and the service they offered. This information was used to populate the scene and deep link into the Lyft app itself to request a ride and complete the transaction.
## Challenges I ran into
While both Apple's well documented frameworks and Lyft's API simplified the learning required to take on the project, there were still several technical hurdles that had to be overcome to complete the project. The first issue that I faced was Lyft's API itself; While it was great in many respects, Lyft has yet to create a branch fit for use with Swift 4 and iOS 11 (required to use ARKit), which meant I had to rewrite certain portions of their LyftURLEncodingScheme and LyftButton classes in order to continue with the project. Another challenge was finding a way to represent a variance in coordinates and 'simulate distance', so to make the AR experience authentic. This, similar to the first challenge, became manageable with enough thought and math. One of the last significant challenges I encountered and overcame was with drawing driver "bubbles" in the AR Plane without encountering graphics glitches.
## Accomplishments that I'm proud of
Despite the many challenges that this project presented, I am very happy that I persisted and worked to complete it. Most importantly, I'm proud of just how cool it is to see something so simple represented in AR, and how different it is from a traditional 2D View. I am also very proud to say that this is something I can see myself using any time I need to catch a Lyft.
## What I learned
With PennApps being my first Hackathon, I was unsure what to expect and what exactly I wanted to accomplish. As a result, I greatly overestimated how many features I could fit into Wher and was forced to cut back on what I could add. As a result, I learned a lesson in managing expectations.
## What's next for Wher (with Lyft)
In the short term, adding a social aspect and allowing for "friends" to organize and mark designated meet up spots for a Lyft, to greater simply the process of a night out on the town. In the long term, I hope to be speaking with Lyft! | winning |
## Inspiration
We were trying for an IM cross MS paint experience, and we think it looks like that.
## What it does
Users can create conversations with other users by putting a list of comma-separated usernames in the To field.
## How we built it
We used Node JS combined with the Express.js web framework, Jade for templating, Sequelize as our ORM and PostgreSQL as our database.
## Challenges we ran into
Server-side challenges with getting Node running, overloading the server with too many requests, and the need for extensive debugging.
## Accomplishments that we're proud of
Getting a (mostly) fully up-and-running chat client up in 24 hours!
## What we learned
We learned a lot about JavaScript, asynchronous operations and how to properly use them, as well as how to deploy a production environment node app.
## What's next for SketchWave
We would like to improve the performance and security of the application, then launch it for our friends and people in our residence to use. We would like to include mobile platform support via a responsive web design as well, and possibly in the future even have a mobile app. | ## Inspiration
Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order.
## What it does
You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision.
## How we built it
The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors.
## Challenges we ran into
One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle.
## Accomplishments that we're proud of
We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Trying to work with Python data type was difficult to manage, and we were proud to navigate around that. We are also extremely proud to meet a bunch of new people and tackle new challenges that we were not previously comfortable with.
## What we learned
We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment.
## What's next for Harvard Burger
Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains. | ## Inspiration
Our inspiration for this project is the hot new app, Clubhouse.
## What it does
Create individual rooms for different topics where people can have live discussions.
## How we built it
We built it using Express and postgresql for the back end and React.js for the front end.
## Challenges we ran into
Since this is a beginner project, we used a tutorial as an inspiration. However, the tutorial isn't that complete and there were some dependency problems
## Accomplishments that we're proud of
Starting an app from scratch. Using Git. Learned a lot.
## What we learned
Learned how to collaborate on a software project.
## What's next for ChatHouse
Bug fix. | winning |
* [Deployment link](https://unifymd.vercel.app/)
* [Pitch deck link](https://www.figma.com/deck/qvwPyUShfJbTfeoPSjVIGX/UnifyMD-Pitch-Deck?node-id=4-71)
## 🌟 Inspiration
Long lists of patient records make it challenging to locate **relevant health data**. This can lead to doctors providing **inaccurate diagnoses** due to insufficient or disorganized information. Unstructured data, such as **progress notes and dictated information**, are not stored properly, and smaller healthcare facilities often **lack the resources** or infrastructure to address these issues.
## 💡 What it does
UnifyMD is a **unified health record system** that aggregates patient data and historical health records. It features an **AI-powered search bot** that leverages a patient's historical data to help healthcare providers make more **informed medical decisions** with ease.
## 🛠️ How we built it
* We started with creating an **intuitive user interface** using **Figma** to map out the user journey and interactions.
* For **secure user authentication**, we integrated **PropelAuth**, which allows us to easily manage user identities.
* We utilized **LangChain** as the large language model (LLM) framework to enable **advanced natural language processing** for our AI-powered search bot.
* The search bot is powered by **OpenAI**'s API to provide **data-driven responses** based on the patient's medical history.
* The application is built using **Next.js**, which provides **server-side rendering** and a full-stack JavaScript framework.
* We used **Drizzle ORM** (Object Relational Mapper) for seamless interaction between the application and our database.
* The core patient data and records are stored **securely in Supabase**.
* For front-end styling, we used **shadcn/ui** components and **TailwindCSS**.
## 🚧 Challenges we ran into
One of the main challenges we faced was working with **LangChain**, as it was our first time using this framework. We ran into several errors during testing, and the results weren't what we expected. It took **a lot of time and effort** to figure out the problems and learn how to fix them as we got more familiar with the framework.
## 🏆 Accomplishments that we're proud of
* Successfully integrated **LangChain** as a new large language model (LLM) framework to **enhance the AI capabilities** of our system.
* Implemented all our **initial features on schedule**.
* Effectively addressed key challenges in **Electronic Health Records (EHR)** with a robust, innovative solution to provide **improvements in healthcare data management**.
## 📚 What we learned
* We gained a deeper understanding of various patient safety issues related to the limitations and inefficiencies of current Electronic Health Record (EHR) systems.
* We discovered that LangChain is a powerful tool for Retrieval-Augmented Generation (RAG), and it can effectively run SQL queries on our database to optimize data retrieval and interaction.
## 🚀 What's next for UnifyMD
* **Partnership with local clinics** to kick-start our journey into improving **healthcare services** and **patient safety**.
* **Update** to include **speech-to-text** feature to increase more time **patient and healthcare provider’s satisfaction**. | # **MedKnight**
#### Professional medical care in seconds, when the seconds matter
## Inspiration
Natural disasters often put emergency medical responders (EMTs, paramedics, combat medics, etc.) in positions where they must assume responsibilities beyond the scope of their day-to-day job. Inspired by this reality, we created MedKnight, an AR solution designed to empower first responders. By leveraging cutting-edge computer vision and AR technology, MedKnight bridges the gap in medical expertise, providing first responders with life-saving guidance when every second counts.
## What it does
MedKnight helps first responders perform critical, time-sensitive medical procedures on the scene by offering personalized, step-by-step assistance. The system ensures that even "out-of-scope" operations can be executed with greater confidence. MedKnight also integrates safety protocols to warn users if they deviate from the correct procedure and includes a streamlined dashboard that streams the responder’s field of view (FOV) to offsite medical professionals for additional support and oversight.
## How we built it
We built MedKnight using a combination of AR and AI technologies to create a seamless, real-time assistant:
* **Meta Quest 3**: Provides live video feed from the first responder’s FOV using a Meta SDK within Unity for an integrated environment.
* **OpenAI (GPT models)**: Handles real-time response generation, offering dynamic, contextual assistance throughout procedures.
* **Dall-E**: Generates visual references and instructions to guide first responders through complex tasks.
* **Deepgram**: Enables speech-to-text and text-to-speech conversion, creating an emotional and human-like interaction with the user during critical moments.
* **Fetch.ai**: Manages our system with LLM-based agents, facilitating task automation and improving system performance through iterative feedback.
* **Flask (Python)**: Manages the backend, connecting all systems with a custom-built API.
* **SingleStore**: Powers our database for efficient and scalable data storage.
## SingleStore
We used SingleStore as our database solution for efficient storage and retrieval of critical information. It allowed us to store chat logs between the user and the assistant, as well as performance logs that analyzed the user’s actions and determined whether they were about to deviate from the medical procedure. This data was then used to render the medical dashboard, providing real-time insights, and for internal API logic to ensure smooth interactions within our system.
## Fetch.ai
Fetch.ai provided the framework that powered the agents driving our entire system design. With Fetch.ai, we developed an agent capable of dynamically responding to any situation the user presented. Their technology allowed us to easily integrate robust endpoints and REST APIs for seamless server interaction. One of the most valuable aspects of Fetch.ai was its ability to let us create and test performance-driven agents. We built two types of agents: one that automatically followed the entire procedure and another that responded based on manual input from the user. The flexibility of Fetch.ai’s framework enabled us to continuously refine and improve our agents with ease.
## Deepgram
Deepgram gave us powerful, easy-to-use functionality for both text-to-speech and speech-to-text conversion. Their API was extremely user-friendly, and we were even able to integrate the speech-to-text feature directly into our Unity application. It was a smooth and efficient experience, allowing us to incorporate new, cutting-edge speech technologies that enhanced user interaction and made the process more intuitive.
## Challenges we ran into
One major challenge was the limitation on accessing AR video streams from Meta devices due to privacy restrictions. To work around this, we used an external phone camera attached to the headset to capture the field of view. We also encountered microphone rendering issues, where data could be picked up in sandbox modes but not in the actual Virtual Development Environment, leading us to scale back our Meta integration. Additionally, managing REST API endpoints within Fetch.ai posed difficulties that we overcame through testing, and configuring SingleStore's firewall settings was tricky but eventually resolved. Despite these obstacles, we showcased our solutions as proof of concept.
## Accomplishments that we're proud of
We’re proud of integrating multiple technologies into a cohesive solution that can genuinely assist first responders in life-or-death situations. Our use of cutting-edge AR, AI, and speech technologies allows MedKnight to provide real-time support while maintaining accuracy and safety. Successfully creating a prototype despite the hardware and API challenges was a significant achievement for the team, and was a grind till the last minute. We are also proud of developing an AR product as our team has never worked with AR/VR.
## What we learned
Throughout this project, we learned how to efficiently combine multiple AI and AR technologies into a single, scalable solution. We also gained valuable insights into handling privacy restrictions and hardware limitations. Additionally, we learned about the importance of testing and refining agent-based systems using Fetch.ai to create robust and responsive automation. Our greatest learning take away however was how to manage such a robust backend with a lot of internal API calls.
## What's next for MedKnight
Our next step is to expand MedKnight’s VR environment to include detailed 3D renderings of procedures, allowing users to actively visualize each step. We also plan to extend MedKnight’s capabilities to cover more medical applications and eventually explore other domains, such as cooking or automotive repair, where real-time procedural guidance can be similarly impactful. | ## Inspiration
Our inspiration came from the frustration of managing medical data from multiple doctors. One of our teammates has a Primary Care Physician at MIT and another back home. When she attempted to compare lab results and diagnoses from both doctors, she found herself logging into separate portals and sifting through reports to locate the necessary information. We aimed to simplify this process and provide individuals with a unified solution for better healthcare decisions.
## What it does
Our solution streamlines medical data management, unifies advice from different healthcare providers, and empowers individuals to make informed decisions about their health.
## How we built it
We built our solution using a combination of technologies, including ReactJS, ChartJS, ChakraUI for the frontend, and Python (and Python packages) for data parsing. We were also in the process of training an NLP model to analyze multiple doctor reports for comprehensive insights.
## Challenges we ran into
On the technical side, creating a user-friendly dashboard and handling diverse data formats required careful consideration. Additionally, in working on the NLP models, we noticed it was quite intricate and not easily achievable. On the personal side, we were combatting against health issues that developed during the day which affected the productivity of the team members.
## Accomplishments that we're proud of
We're proud of designing a user-friendly, comprehensive healthcare solution that simplifies medical data management. We also implemented a payment feature. Our team's dedication to addressing complex challenges and working on valuable product is a significant accomplishment.
## What we learned
Throughout the development process, we gained insights into the importance of personalizing healthcare solutions. We understood better how to use React.js and others effectively. We also improved our technical skills in data parsing and NLP.
## What's next for Fusion
In the future, we plan to provide more extensive analytics, including vital signs and predictive health modeling. Our goal is to continue enhancing our solution to empower individuals with even more comprehensive healthcare insights. | partial |
## Inspiration
We've all experienced long wait times for drinks at the bar/club. There always seems to be a bottleneck due to the disproportionate ratio of understaffed bartenders to thirsty bar-goers. We wanted to create a practical but fun way to tackle this problem.
## What it does
McBrews is a bar concept that lets users browse the drink menu and place their orders directly from their mobile devices. Bartenders receive the order, prepare the drinks, and place completed orders on a designated trolley in a trolley system mounted on the bar top. Users receive a text notification when their drink is ready and tap their phone on an NFC-enabled terminal by the bar to release their order. Their drinks are then instantly delivered by the trolley to a pickup point beside the terminal. This system reduces payment friction, creates an interactive experience for users, and lets bartenders focus on drink quality and service by taking ordering and payment off their plate.
## How we built it
We created a mockup of the mobile app interface and plotted the customer journey with Figma. We then used React Native for our front end.
We created the backend with Express and Firebase. We used the Twilio API for text notifications and the Interac API for payment.
We used an Arduino Uno, a SparkFun Inventors Kit for RedBot and an RFID/NFC chip to create the terminal and trolley system. | ## Inspiration:
Fake news has been a growing issue in the past few years. Many companies get targeted by trolls who post fake news which can have damaging impact on their financing.
## What it does:
FibStock is an educational attribution platform designed to help users understand the impact of fake news on company stock prices that often leads to investors to be concerned about direction of the business. User can search up a company and see the impact of fake news on said company stock price and related fake news articles that lead to that impact.
## Accomplishments that I'm proud of:
It's no surprise this was a challenging task as we had to collect data for fake news from different sources and differentiate the real news from it. We had to change our models to integrate the platform fully. In the end we felt proud as we were able to complete this challenge.
## How it works:
We got inspired by open source project jasminevasandani/NLP\_Classification\_Model\_FakeNews to train our model from actual news data. News Title is used as vectors to determine if the news is fake. We used 25000 news records to train our model and ran the model over 1400000 records. Given the tested records we were able to determine a 67~91% accuracy of whether the news is fake or not. Using reddit API and the subreddit news we collect related news about the company and analyze it with our model. Once the results are in, News titles are sent to AWS Sentiment Analysis to find out the nuance of the news and show it as a donut chart. At the same time, We call stock price api to show the price change of the moment that news was published and show it as graph. | # The Ultimate Water Heater
February 2018
## Authors
This is the TreeHacks 2018 project created by Amarinder Chahal and Matthew Chan.
## About
Drawing inspiration from a diverse set of real-world information, we designed a system with the goal of efficiently utilizing only electricity to heat and pre-heat water as a means to drastically save energy, eliminate the use of natural gases, enhance the standard of living, and preserve water as a vital natural resource.
Through the accruement of numerous API's and the help of countless wonderful people, we successfully created a functional prototype of a more optimal water heater, giving a low-cost, easy-to-install device that works in many different situations. We also empower the user to control their device and reap benefits from their otherwise annoying electricity bill. But most importantly, our water heater will prove essential to saving many regions of the world from unpredictable water and energy crises, pushing humanity to an inevitably greener future.
Some key features we have:
* 90% energy efficiency
* An average rate of roughly 10 kW/hr of energy consumption
* Analysis of real-time and predictive ISO data of California power grids for optimal energy expenditure
* Clean and easily understood UI for typical household users
* Incorporation of the Internet of Things for convenience of use and versatility of application
* Saving, on average, 5 gallons per shower, or over ****100 million gallons of water daily****, in CA alone. \*\*\*
* Cheap cost of installation and immediate returns on investment
## Inspiration
By observing the RhoAI data dump of 2015 Californian home appliance uses through the use of R scripts, it becomes clear that water-heating is not only inefficient but also performed in an outdated manner. Analyzing several prominent trends drew important conclusions: many water heaters become large consumers of gasses and yet are frequently neglected, most likely due to the trouble in attaining successful installations and repairs.
So we set our eyes on a safe, cheap, and easily accessed water heater with the goal of efficiency and environmental friendliness. In examining the inductive heating process replacing old stovetops with modern ones, we found the answer. It accounted for every flaw the data decried regarding water-heaters, and would eventually prove to be even better.
## How It Works
Our project essentially operates in several core parts running simulataneously:
* Arduino (101)
* Heating Mechanism
* Mobile Device Bluetooth User Interface
* Servers connecting to the IoT (and servicing via Alexa)
Repeat all processes simultaneously
The Arduino 101 is the controller of the system. It relays information to and from the heating system and the mobile device over Bluetooth. It responds to fluctuations in the system. It guides the power to the heating system. It receives inputs via the Internet of Things and Alexa to handle voice commands (through the "shower" application). It acts as the peripheral in the Bluetooth connection with the mobile device. Note that neither the Bluetooth connection nor the online servers and webhooks are necessary for the heating system to operate at full capacity.
The heating mechanism consists of a device capable of heating an internal metal through electromagnetic waves. It is controlled by the current (which, in turn, is manipulated by the Arduino) directed through the breadboard and a series of resistors and capacitors. Designing the heating device involed heavy use of applied mathematics and a deeper understanding of the physics behind inductor interference and eddy currents. The calculations were quite messy but mandatorily accurate for performance reasons--Wolfram Mathematica provided inhumane assistance here. ;)
The mobile device grants the average consumer a means of making the most out of our water heater and allows the user to make informed decisions at an abstract level, taking away from the complexity of energy analysis and power grid supply and demand. It acts as the central connection for Bluetooth to the Arduino 101. The device harbors a vast range of information condensed in an effective and aesthetically pleasing UI. It also analyzes the current and future projections of energy consumption via the data provided by California ISO to most optimally time the heating process at the swipe of a finger.
The Internet of Things provides even more versatility to the convenience of the application in Smart Homes and with other smart devices. The implementation of Alexa encourages the water heater as a front-leader in an evolutionary revolution for the modern age.
## Built With:
(In no particular order of importance...)
* RhoAI
* R
* Balsamiq
* C++ (Arduino 101)
* Node.js
* Tears
* HTML
* Alexa API
* Swift, Xcode
* BLE
* Buckets and Water
* Java
* RXTX (Serial Communication Library)
* Mathematica
* MatLab (assistance)
* Red Bull, Soylent
* Tetrix (for support)
* Home Depot
* Electronics Express
* Breadboard, resistors, capacitors, jumper cables
* Arduino Digital Temperature Sensor (DS18B20)
* Electric Tape, Duct Tape
* Funnel, for testing
* Excel
* Javascript
* jQuery
* Intense Sleep Deprivation
* The wonderful support of the people around us, and TreeHacks as a whole. Thank you all!
\*\*\* According to the Washington Post: <https://www.washingtonpost.com/news/energy-environment/wp/2015/03/04/your-shower-is-wasting-huge-amounts-of-energy-and-water-heres-what-to-do-about-it/?utm_term=.03b3f2a8b8a2>
Special thanks to our awesome friends Michelle and Darren for providing moral support in person! | losing |
## Inspiration
Virtually every classroom has a projector, whiteboard, and sticky notes. With OpenCV and Python being more accessible than ever, we wanted to create an augmented reality entertainment platform that any enthusiast could learn from and bring to their own place of learning. StickyAR is just that, with a super simple interface that can anyone can use to produce any tile-based Numpy game. Our first offering is *StickyJump* , a 2D platformer whose layout can be changed on the fly by placement of sticky notes. We want to demystify computer science in the classroom, and letting students come face to face with what's possible is a task we were happy to take on.
## What it does
StickyAR works by using OpenCV's Contour Recognition software to recognize the borders of a projector image and the position of human placed sticky notes. We then use a matrix transformation scheme to ensure that the positioning of the sticky notes align with the projector image so that our character can appear as if he is standing on top of the sticky notes. We then have code for a simple platformer that uses the sticky notes as the platforms our character runs, jumps, and interacts with!
## How we built it
We split our team of four into two sections, one half that works on developing the OpenCV/Data Transfer part of the project and the other half who work on the game side of the project. It was truly a team effort.
## Challenges we ran into
The biggest challenges we ran into were that a lot of our group members are not programmers by major. We also had a major disaster with Git that almost killed half of our project. Luckily we had some very gracious mentors come out and help us get things sorted out! We also first attempted to the game half of the project in unity which ended up being too much of a beast to handle.
## Accomplishments that we're proud of
That we got it done! It was pretty amazing to see the little square pop up on the screen for the first time on top of the spawning block. As we think more deeply about the project, we're also excited about how extensible the platform is for future games and types of computer vision features.
## What we learned
A whole ton about python, OpenCV, and how much we regret spending half our time working with Unity. Python's general inheritance structure came very much in handy, and its networking abilities were key for us when Unity was still on the table. Our decision to switch over completely to Python for both OpenCV and the game engine felt like a loss of a lot of our work at the time, but we're very happy with the end-product.
## What's next for StickyAR
StickyAR was designed to be as extensible as possible, so any future game that has colored tiles as elements can take advantage of the computer vision interface we produced. We've already thought through the next game we want to make - *StickyJam*. It will be a music creation app that sends a line across the screen and produces notes when it strikes the sticky notes, allowing the player to vary their rhythm by placement and color. | ## Inspiration for Creating sketch-it
Art is fundamentally about the process of creation, and seeing as many of us have forgotten this, we are inspired to bring this reminder to everyone. In this world of incredibly sophisticated artificial intelligence models (many of which can already generate an endless supply of art), now more so than ever, we must remind ourselves that our place in this world is not only to create but also to experience our uniquely human lives.
## What it does
Sketch-it accepts any image and breaks down how you can sketch that image into 15 easy-to-follow steps so that you can follow along one line at a time.
## How we built it
On the front end, we used Flask as a web development framework and an HTML form that allows users to upload images to the server.
On the backend, we used Python libraries ski-kit image and Matplotlib to create visualizations of the lines that make up that image. We broke down the process into frames and adjusted the features of the image to progressively create a more detailed image.
## Challenges we ran into
We initially had some issues with scikit-image, as it was our first time using it, but we soon found our way around fixing any importing errors and were able to utilize it effectively.
## Accomplishments that we're proud of
Challenging ourselves to use frameworks and libraries we haven't used earlier and grinding the project through until the end!😎
## What we learned
We learned a lot about personal working styles, the integration of different components on the front and back end side, as well as some new possible projects we would want to try out in the future!
## What's next for sketch-it
Adding a feature that converts the step-by-step guideline into a video for an even more seamless user experience! | ## Inspiration
The other day, I heard my mom, a math tutor, tell her students "I wish you were here so I could give you some chocolate prizes!" We wanted to bring this incentive program back, even among COVID, so that students can have a more engaging learning experience.
## What it does
The student will complete a math worksheet and use the Raspberry Pi to take a picture of their completed work. The program then sends it to Google Cloud Vision API to extract equations. Our algorithms will then automatically mark the worksheet, annotate the jpg with Pure Image, and upload it to our website. The student then gains money based on the score that they received. For example, if they received a 80% on the worksheet, they will get 80 cents. Once the student has earned enough money, they can choose to buy a chocolate, where the program will check to ensure they have enough funds, and if so, will dispense it for them.
## How we built it
We used a Raspberry Pi to take pictures of worksheets, Google Cloud Vision API to extract text, and Pure Image to annotate the worksheet. The dispenser uses the Raspberry Pi and Lego to dispense the Mars Bars.
## Challenges we ran into
We ran into the problem that if the writing in the image was crooked, it would not detect the numbers on the same line. To fix this, we opted for line paper instead of blank paper which helped us to write straight.
## Accomplishments that we're proud of
We are proud of getting the Raspberry Pi and motor working as this was the first time using one. We are also proud of the gear ratio where we connected small gears to big gears ensuring high torque to enable us to move candy. We also had a lot of fun building the lego.
## What we learned
We learned how to use the Raspberry Pi, the Pi camera, and the stepper motor. We also learned how to integrate backend functions with Google Cloud Vision API
## What's next for Sugar Marker
We are hoping to build an app to allow students to take pictures, view their work, and purchase candy all from their phone. | winning |
## Inspiration
The Roomba's a great product for cleaning your house, but there's a lot more you can do with it - and a lot of potential to make it smarter. We realized that a Roomba serves as a precise and convenient motorized based for a moving platform. One of us owned a similar product - a knockoff Eufy - so we wanted to look into re-purposing it to do something cool.
When we arrived at HackMIT, we found that the drinks would often be located on the other side of the room, far from our table! We wanted to build a bot to carry them over to us and other folks.
## What it does
Troomba is a transporting Roomba. It has a small container at its top to hold a few drinks, and autonomously navigates around the room. It can also be remotely controlled over a Wi-Fi connection. Additionally, Troomba has emotive light strips and rings to communicate with its users. Finally, Troomba streams a webcam feed over a Wi-Fi connection so that you can monitor it!
## How we built it
The model of Eufy we had used an IR remote to manually control it. We first reverse engineered the protocol that the IR remote uses, by using an IR receiver to record the data the remote sent for different button presses. We then connected up an IR LED and used an Arduino to play back those codes and control the Roomba.
We then hooked up a Raspberry Pi to the Arduino over Serial. The Pi handled higher level computation and planning, and relayed lower level commands over to the Arduino.
We built a frame on top of the Roomba out of some empty Soylent boxes, and added LEDs rings and strips, a webcam, and an ultrasonic sensor.
We set up a webserver with a camera to receive a live feed from Troomba as well as have Troomba be controllable over Serial commands, and developed a simple autonomous obstacle avoiding algorithm.
## Challenges we ran into
Reverse engineering the IR protocol was a bit tricky.
Setting up a Raspberry Pi headless (we didn't have a monitor or a keyboard) was very, very painful. Since the HackMIT Wi-Fi blocked SSHing between devices on the same network, we had to use our own, which was a pain.
## Accomplishments that we're proud of
We were able to build a nice, friendly-looking robot in a pretty short amount of time!
## What we learned
We became a lot more familiar with how IR works, how Arduino components work, and how to interface with a Raspberry Pi and use systemd.
## What's next for Troomba
The navigation algorithm could be greatly improved. | ## Inspiration
We were really excited to hear about the self-driving bus Olli using IBM's Watson. However, one of our grandfather's is rather forgetful due to his dementia, and because of this would often forget things on a bus if he went alone. Memory issues like this would prevent him, and many people like him, from taking advantage of the latest advancements in public transportation, and prevent him from freely traveling even within his own community.
To solve this, we thought that Olli and Watson could work to take pictures of luggage storage areas on the bus, and if it detected unattended items, alert passengers, so that no one would forget their stuff! This way, individuals with memory issues like our grandparents can gain mobility and be able to freely travel.
## What it does
When the bus stops, we use a light sensitive resistor on the seat to see if someone is no longer sitting there, and then use a camera to take a picture of the luggage storage area underneath the seat.
We send the picture to IBM's Watson, which checks to see if the space is empty, or if an object is there.
If Watson finds something, it identifies the type of object, and the color of the object, and vocally alerts passengers of the type of item that was left behind.
## How we built it
**Hardware**
Arduino - Senses whether there is someone sitting based on a light sensitive resistor.
Raspberry Pi - Processes whether it should take a picture, takes the picture, and sends it to our online database.
**Software**
IBM's IoT Platform - Connects our local BlueMix on Raspberry Pi to our BlueMix on the Server
IBM's Watson - to analyze the images
Node-RED - The editor we used to build our analytics and code
## Challenges we ran into
Learning IBM's Bluemix and Node-Red were challenges all members of our team faced. The software that ran in the cloud and that ran on the Raspberry Pi were both coded using these systems. It was exciting to learn these languages, even though it was often challenging.
Getting information to properly reformat between a number of different systems was challenging. From the 8-bit Arduino, to the 32-bit Raspberry Pi, to our 64-bit computers, to the ultra powerful Watson cloud, each needed a way to communicate with the rest and lots of creative reformatting was required.
## Accomplishments that we're proud of
We were able to build a useful internet of things application using IBM's APIs and Node-RED. It solves a real world problem and is applicable to many modes of public transportation.
## What we learned
Across our whole team, we learned:
* Utilizing APIs
* Node-RED
* BlueMix
* Watson Analytics
* Web Development (html/ css/ js)
* Command Line in Linux | ## Inspiration
We are a group of students passionate about automation and pet companions. However, it is not always feasible to own a live animal as a busy engineer. The benefits of personal companionship are plentiful, including decreased blood pressure. Automation is the way of the future. We developed routines using a iRobot Create 2 robot which can dance to music, follow its owner like a dog, and bring items from another room on its top.
## What it does
Spot uses visual processing and image recognition to follow its owner all over their home. He is a helpful companion capable of carrying packages, providing lighting and cleaning for his owner. Furthermore, his warm and friendly appearance is always welcome in any home. The robot platform used also has the capability for autonomous floor cleaning. Finally, Spot's movements can be controlled through a web application which also displays graphs from all the Roomba's sensors.
## How we built it
Spot was built using a variety of different software. The webpage used to control spot was coded in HTML and CSS with Django/Python running in the backend. To control the roomba and display the sensor graphs we used matlab. To do the image processing and get the roomba to follow specific colours the openCV library with python bindings was used.
## Challenges we ran into
One major challenge was being able to display the all the graphs/data on the website in real time. Having different APIs for Python and Matlab was a struggle which we overcame.
## Accomplishments that we're proud of
As a group of relatively new hackers who met at YHacks, we are extremely proud of being able to use our different engineering disciplines to implement both hardware and software into our hack. We are proud of the fact that we were able to learn about image processing, Django/Python and use them to control the movements of the Roomba. In addition to completing all 3 iRobot challenges, we were still able to accomplish 2 tasks of our own and learned plenty of things along the way!
## What we learned
Throughout the creation of spot our group learned many new technologies. As a group we learned how to run Django in the back end of a webpage and be able to control a roomba through a webpage. In addition, we were able to learn about the openCv library and how to control the roomba through image processing. We were also able to learn how to do various things with the roomba, such as making it sing, manipulating the sensor data to produce graphs and track its movements.
## What's next for Spot
Spot has many real world applications. As a mobile camera-enabled robot, it can perform semi-autonomous security tasks e.g. patrolling. Teleoperation is ideal for high-risk situations, including bomb disposal, fire rescue, and SWAT. This device also has therapeutic applications such as those performed by the PARO seal robot---PTSD treatment and personal companionship. As a generic service robot, the robot can include a platform for carrying personal items. A larger robot could assist with construction, or a stainless steel robot could follow a surgeon with operating tools. | partial |
## We wanted to build a smart TrashCan
## The trashcan's lid opens when an object reaches the ultrasonic sensor, assuming a quiz question has been answered correctly
## It was built using Coffee cups, and other recycled items. Arduino and other electronic components
## Challenges we ran into was finding a mechanical system that is able to be controlled with the software
## We are proud of having been able to make the project functional
## To test different prototypes before committing to a final design
## A more enhanced final design | The idea started with the desire to implement a way to throw trash away efficiently and in an environmentally friendly way. Sometimes, it is hard to know what bin trash might go to due to time or carelessness. Even though there are designated spots to throw different type of garbage, it is still not 100% reliable as you are relying on the human to make the decision correctly. We thought of making it easier for the human to throw trash by putting it on a platform and having an AI make the decision for them. Basically, a weight sensor will activate a camera that will take a picture of the object and run through its database to see which category the trash belongs in. There is already a database containing a lot of pictures of trash, but that database can constantly grow as more pictures are taken.
We think this will be a good way to reduce the difficulty in separating trash after it's taken to the dump sites, which should definitely make a positive impact on the environment. The device can be small enough and inexpensive enough at one point that it can be implemented everywhere.
We used azure custom vision to do the image analysis and image data storage, the telus iot starter kit for giving sensor data to the azure iot hub, and an arduino to control the motor that switches between plastic trash and tin can trash. | ## Inspiration
Every year roughly 25% of recyclable material is not able to be recycled due to contamination. We set out to reduce the amount of things that are needlessly sent to the landfill by reducing how much people put the wrong things into recycling bins (i.e. no coffee cups).
## What it does
This project is a lid for a recycling bin that uses sensors, microcontrollers, servos, and ML/AI to determine if something should be recycled or not and physically does it.
To do this it follows the following process:
1. Waits for object to be placed on lid
2. Take picture of object using webcam
3. Does image processing to normalize image
4. Sends image to Tensorflow model
5. Model predicts material type and confidence ratings
6. If material isn't recyclable, it sends a *YEET* signal and if it is it sends a *drop* signal to the Arduino
7. Arduino performs the motion sent to it it (aka. slaps it *Happy Gilmore* style or drops it)
8. System resets and waits to run again
## How we built it
We used an Arduino Uno with an Ultrasonic sensor to detect the proximity of an object, and once it meets the threshold, the Arduino sends information to the pre-trained TensorFlow ML Model to detect whether the object is recyclable or not. Once the processing is complete, information is sent from the Python script to the Arduino to determine whether to yeet or drop the object in the recycling bin.
## Challenges we ran into
A main challenge we ran into was integrating both the individual hardware and software components together, as it was difficult to send information from the Arduino to the Python scripts we wanted to run. Additionally, we debugged a lot in terms of the servo not working and many issues when working with the ML model.
## Accomplishments that we're proud of
We are proud of successfully integrating both software and hardware components together to create a whole project. Additionally, it was all of our first times experimenting with new technology such as TensorFlow/Machine Learning, and working with an Arduino.
## What we learned
* TensorFlow
* Arduino Development
* Jupyter
* Debugging
## What's next for Happy RecycleMore
Currently the model tries to predict everything in the picture which leads to inaccuracies since it detects things in the backgrounds like people's clothes which aren't recyclable causing it to yeet the object when it should drop it. To fix this we'd like to only use the object in the centre of the image in the prediction model or reorient the camera to not be able to see anything else. | losing |
## Inspiration
While working with a Stanford PhD student to do work in Natural Language Processing, I was pointed to a paper that outlines a lightweight model which very effectively translates between languages. The average size for the saved weights of TensorFlow models are only about 5KB. Suddenly, it hit me--**what if someone could download translation capability on their phone, so that they can translate offline?** This is crucial for when you're in another country and don't have access to WiFi or a cellular network, but need to communicate with a local. *This is the most common use case for translators, yet there's no solution available.* **Thus, Offline Translate was born!**
## What it does
The app allows you to use it online just like any other translating app, using Google Cloud ML to serve the desired TensorFlow model. Then, when you know you're going to a country where you don't have internet access, **you can download the specific language-to-language translator onto your device for translation anytime, anywhere!**
## How I built it
The Neural Net is a **Python-implemented TensorFlow encoder-decoder RNN** which takes a variable-length input sequence, computes a representation of the phrase, and finally decodes that representation into a different language. Its architecture is based on cutting-edge, specifically this paper: <https://arxiv.org/pdf/1406.1078.pdf>. I **extended the TensorFlow Sequence-to-Sequence (seq2seq) library** to work effectively for this specific whitepaper. I also started **building a custom Long Short-Term Memory Cell** which computes the representation in the RNN, in order to adhere to the mathematics proposed in the whitepaper.
The app mockup was made using a prototyper, in order to communicate the concept effectively.
## Challenges I ran into
The biggest problems I ran into were ML-related issues. TensorFlow is still a bit tricky when it comes to working with variable-length sequences, so it was difficult wrangling the seq2seq library to work for me (specifically by having to extend it).
## Accomplishments that I'm proud of
I'm very proud that I was able to actually **extend the seq2seq librar**y and get a very difficult concept working in just 36 hours. I'm also proud that I have a **clear path of development next-steps** in order to get this fully function. I'm happy that I got this whole project figured out in such a short amount of time!
## What I learned
The big thing: **it is possible to make rigid TensorFlow libraries to work for you.** The source code is flexible and portable, so it's not difficult to mixed pre-packaged functionality with personal implementations. I also learned that I can function much better than I thought I could on just 2 hrs of sleep.
## What's next for tf-rnn-encoder-decoder
The immediate next step is to get inference work 100% correctly. Once that is done, the technology itself will be solid. Next will be to turn the mockup of the app into a working app, allowing the users to download their preferred models onto their phone and translate anytime, anywhere! | ## Inspiration
It has always been very time consuming to dedicate time to learning a new language. Many of the traditional methods often involve studying words in a dull static environment. Our experience has shown that this is not always the best or most fun way to learn new languages in a way that makes it "stick". This was why we wanted to develop an on the go AR app that anyone could take with you and live translate in real time words that a user sees more often to personalize the learning experience for the individual.
## What it does
Using iOS ARKit, we created an iOS app which works with augmented headset. We use the live video feed from the camera and leverage a tensor flow object recognizing model to scan the field of view and when the user focuses in on the object of choice, we use the model to identify the object in English. The user then must speak the translated word for the object and the app will determine if they are correct.
## How we built it
We leveraged ARKit from the iOS library for the app. Integrated a tensor flow model for object recognition. Created an api server for translated words using stdlib hosted on azure. Word translation was done using microsoft bing translation service. Our score system is a db we created in firebase.
## Challenges we ran into
Hacking out field of view for the ARKit. Getting translation to work on the app. Speech to text. Learning js and integrating stdlib. Voice commands
## Accomplishments that we're proud of
It worked!!
## What we learned
How to work with ARKit scenes. How to spin up an api quickly through stdlib. A bit of mandarin in the process of testing the app!
## What's next for Visualingo
Voice commands for app settings (e.g. language), motion commands (head movements). Gamification of the app. | **Made by Ella Smith (ella#4637) & Akram Hannoufa (ak\_hannou#7596) -- Team #15**
*Domain: <https://www.birtha.online/>*
## Inspiration
Conversations with friends and family about the difficulty of finding the right birth control pill on the first try.
## What it does
Determines the brand of hormonal contraceptive pill most likely to work for you using data gathered from drugs.com. Data includes: User Reviews, Drug Interactions, and Drug Effectiveness.
## How we built it
The front-end was built using HTML, CSS, JS, and Bootstrap. The data was scraped from drugs.com using Beautiful Soup web-scraper.
## Challenges we ran into
Having no experience in web-dev made this a particularly interesting learning experience. Determining how we would connect the scraped data to the front-end was challenging, as well as building a fully functional multi-page form proved to be difficult.
## Accomplishments that we're proud of
We are proud of the UI design, given it is our first attempt at web development. We are also proud of setting up a logic system that provides variability in the generated results. Additionally, figuring out how to web scrape was very rewarding.
## What we learned
We learned how to use version control software, specifically Git and GitHub. We also learned the basics of Bootstrap and developing a functional front-end using HTML, CSS, and JS.
## What's next for birtha
Giving more detailed and accurate results to the user by further parsing and analyzing the written user reviews. We would also like to add some more data sources to give even more complete results to the user. | losing |
## Inspiration
Learning never ends. It's the cornerstone of societal progress and personal growth. It helps us make better decisions, fosters further critical thinking, and facilitates our contribution to the collective wisdom of humanity. Learning transcends the purpose of solely acquiring knowledge.
## What it does
Understanding the importance of learning, we wanted to build something that can make learning more convenient for anyone and everyone. Being students in college, we often find ourselves meticulously surfing the internet in hopes of relearning lectures/content that was difficult. Although we can do this, spending half an hour to sometimes multiple hours is simply not the most efficient use of time, and we often leave our computers more confused than how we were when we started.
## How we built it
A typical scenario goes something like this: you begin a Google search for something you want to learn about or were confused by. As soon as you press search, you are confronted with hundreds of links to different websites, videos, articles, news, images, you name it! But having such a vast quantity of information thrown at you isn’t ideal for learning. What ends up happening is that you spend hours surfing through different articles and watching different videos, all while trying to piece together bits and pieces of what you understood from each source into one cohesive generalization of knowledge. What if learning could be made easier by optimizing search? What if you could get a guided learning experience to help you self-learn?
That was the motivation behind Bloom. We wanted to leverage generative AI to optimize search specifically for learning purposes. We asked ourselves and others, what helps them learn? By using feedback and integrating it into our idea, we were able to create a platform that can teach you a new concept in a concise, understandable manner, with a test for knowledge as well as access to the most relevant articles and videos, thus enabling us to cover all types of learners. Bloom is helping make education more accessible to anyone who is looking to learn about anything.
## Challenges we ran into
We faced many challenges when it came to merging our frontend and backend code successfully. At first, there were many merge conflicts in the editor but we were able to find a workaround/solution. This was also our first time experimenting with LangChain.js so we had problems with the initial setup and had to learn their wide array of use cases.
## Accomplishments that we're proud of/What's next for Bloom
We are proud of Bloom as a service. We see just how valuable it can be in the real world. It is important that society understands that learning transcends the classroom. It is a continuous, evolving process that we must keep up with. With Bloom, our service to humanity is to make the process of learning more streamlined and convenient for our users. After all, learning is what allows humanity to progress. We hope to continue to optimize our search results, maximizing the convenience we bring to our users. | ## Inspiration
The inspiration for the project was our desire to make studying and learning more efficient and accessible for students and educators. Utilizing advancements in technology, like the increased availability and lower cost of text embeddings, to make the process of finding answers within educational materials more seamless and convenient.
## What it does
Wise Up is a website that takes many different types of file format, as well as plain text, and separates the information into "pages". Using text embeddings, it can then quickly search through all the pages in a text and figure out which ones are most likely to contain the answer to a question that the user sends. It can also recursively summarize the file at different levels of compression.
## How we built it
With blood, sweat and tears! We used many tools offered to us throughout the challenge to simplify our life. We used Javascript, HTML and CSS for the website, and used it to communicate to a Flask backend that can run our python scripts involving API calls and such. We have API calls to openAI text embeddings, to cohere's xlarge model, to GPT-3's API, OpenAI's Whisper Speech-to-Text model, and several modules for getting an mp4 from a youtube link, a text from a pdf, and so on.
## Challenges we ran into
We had problems getting the backend on Flask to run on a Ubuntu server, and later had to instead run it on a Windows machine. Moreover, getting the backend to communicate effectively with the frontend in real time was a real challenge. Extracting text and page data from files and links ended up taking more time than expected, and finally, since the latency of sending information back and forth from the front end to the backend would lead to a worse user experience, we attempted to implement some features of our semantic search algorithm in the frontend, which led to a lot of difficulties in transferring code from Python to Javascript.
## Accomplishments that we're proud of
Since OpenAI's text embeddings are very good and very new, and we use GPT-3.5 based on extracted information to formulate the answer, we believe we likely equal the state of the art in the task of quickly analyzing text and answer complex questions on it, and the ease of use for many different file formats makes us proud that this project and website can be useful for so many people so often. To understand a textbook and answer questions about its content, or to find specific information without knowing any relevant keywords, this product is simply incredibly good, and costs pennies to run. Moreover, we have added an identification system (users signing up with a username and password) to ensure that a specific account is capped at a certain usage of the API, which is at our own costs (pennies, but we wish to avoid it becoming many dollars without our awareness of it.
## What we learned
As time goes on, not only do LLMs get better, but new methods are developed to use them more efficiently and for greater results. Web development is quite unintuitive for beginners, especially when different programming languages need to interact. One tool that has saved us a few different times is using json for data transfer, and aws services to store Mbs of data very cheaply. Another thing we learned is that, unfortunately, as time goes on, LLMs get bigger and so sometimes much, much slower; api calls to GPT3 and to Whisper are often slow, taking minutes for 1000+ page textbooks.
## What's next for Wise Up
What's next for Wise Up is to make our product faster and more user-friendly. A feature we could add is to summarize text with a fine-tuned model rather than zero-shot learning with GPT-3. Additionally, a next step is to explore partnerships with educational institutions and companies to bring Wise Up to a wider audience and help even more students and educators in their learning journey, or attempt for the website to go viral on social media by advertising its usefulness. Moreover, adding a financial component to the account system could let our users cover the low costs of the APIs, aws and CPU running Whisper. | **Made by Ella Smith (ella#4637) & Akram Hannoufa (ak\_hannou#7596) -- Team #15**
*Domain: <https://www.birtha.online/>*
## Inspiration
Conversations with friends and family about the difficulty of finding the right birth control pill on the first try.
## What it does
Determines the brand of hormonal contraceptive pill most likely to work for you using data gathered from drugs.com. Data includes: User Reviews, Drug Interactions, and Drug Effectiveness.
## How we built it
The front-end was built using HTML, CSS, JS, and Bootstrap. The data was scraped from drugs.com using Beautiful Soup web-scraper.
## Challenges we ran into
Having no experience in web-dev made this a particularly interesting learning experience. Determining how we would connect the scraped data to the front-end was challenging, as well as building a fully functional multi-page form proved to be difficult.
## Accomplishments that we're proud of
We are proud of the UI design, given it is our first attempt at web development. We are also proud of setting up a logic system that provides variability in the generated results. Additionally, figuring out how to web scrape was very rewarding.
## What we learned
We learned how to use version control software, specifically Git and GitHub. We also learned the basics of Bootstrap and developing a functional front-end using HTML, CSS, and JS.
## What's next for birtha
Giving more detailed and accurate results to the user by further parsing and analyzing the written user reviews. We would also like to add some more data sources to give even more complete results to the user. | partial |
## Inspiration
When first introduced to financial strategies, many people are skeptical simply because they can't picture a meaningful reward for smarter spending.
## What it does
* Gives you financial advice based on your financial standing (how many credit cards you have, what the limits are, whether you're married or single etc.)
* Shows you a rundown of your spending separated by category (gas, cigarettes, lottery, food, etc.)
* Identifies transactions as reasonable or unnecessary
## How I built it
Built primarily with React in combination with Material UI. The charting library is Carbon Charts, which I also develop: <https://github.com/carbon-design-system/carbon-charts>
## Challenges I ran into
* AI
* Identification of reasonable or unnecessary transactions
* Automated advising
## Accomplishments that I'm proud of
* Vibrant UI
## What I learned
* Learned a lot about React router transitions
* Aggregating data
## What's next for SpendWise
To find a home right inside your banking application. | ## Inspiration
Our inspiration came from seeing how overwhelming managing finances can be, especially for students and young professionals. Many struggle to track spending, stick to budgets, and plan for the future, often due to a lack of accessible tools or financial literacy.
So, we decided to build a solution that isn't just another financial app, but a tool that empowers individuals, especially students, to take control of their finances with simplicity, clarity, and efficiency. We believe that managing finances should not be a luxury or a skill learned through trial and error, but something that is accessible and intuitive for everyone.
## What it does
Sera simplifies financial management by providing users with an intuitive dashboard where they can track their recent transactions, bills, budgets, and overall balances - all in one place. What truly sets it apart is its personalized, AI-powered guidance that goes beyond simple tracking. Users receive actionable recommendations like "manage your budget" or "plan for retirement" based on their financial activity.
With features like scanning receipts via QR code and automatic budget updates, we ensure users never miss a detail. The AI chatbot, SeraAI, offers tailored financial advice and can even handle tasks like adding transactions or adjusting budgets - making complex financial decisions easy and stress-free. With a focus on accessibility, Sera makes financial literacy approachable and actionable for everyone.
## How we built it
We used Next.js with TailwindCSS for a responsive, dynamic UI, leveraging server-side rendering for performance. The backend is powered by Express and Node.js, with MongoDB Atlas for scalable, secure data storage.
For advanced functionality, we integrated Roboflow for OCR, enabling users to scan receipts via QR codes and have their transactions updated automatically. Cerebras handles AI processing, powering SeraAI, our chatbot that offers personalized financial advice and automates various tasks on our platform. In addition, we used Tune to provide users with customized financial insights, ensuring a proactive and intuitive financial management experience.
## Challenges we ran into
Integrating OCR with our app posed several challenges, especially when using Cerebras for real-time processing. Achieving high accuracy was tricky due to the varying layouts and qualities of receipts, which often led to misrecognized data.
Preprocessing images was essential; we had to adjust brightness and contrast to help the OCR perform better, which took considerable experimentation. Handling edge cases, like crumpled or poorly printed receipts, also required robust error-checking mechanisms to ensure accuracy.
While Cerebras provided the speed we needed for real-time data extraction, we had to ensure seamless integration with our user interface. Overall, combining OCR with Cerebras added complexity but ultimately enhanced our app’s functionality and user experience.
## Accomplishments that we're proud of
We’re especially proud of developing our QR OCR system, which showcases our resilience and capabilities despite challenges. Integrating OCR for real-time receipt scanning was tough, as we faced issues with accuracy and image preprocessing.
By leveraging Cerebras for fast processing, we overcame initial speed limitations while ensuring a responsive user experience. This accomplishment is a testament to our problem-solving skills and teamwork, demonstrating our ability to turn obstacles into opportunities. Ultimately, it enhances our app’s functionality and empowers users to manage their finances effectively.
## What we learned
We learned that financial education isn’t enough, people need ongoing support to make lasting changes. It’s not just about telling users how to budget; it’s about providing the tools, guidance, and nudges to help them stick to their goals. We also learned the value of making technology feel human and approachable, particularly when dealing with sensitive topics like money.
## What's next for Sera
The next steps for Sera include expanding its capabilities to integrate with more financial platforms and further personalizing the user experience to provide everyone with guidance and support that fits their needs. Ultimately, we want Sera to be a trusted financial companion for everyone, from those just starting their financial journey to experienced users looking for better insights. | ## Inspiration
In the hustle and bustle of student life, maintaining financial clarity can be a demanding task. We found that many of us were losing sight of where our hard-earned money was actually going, leading to missed opportunities for savings and smarter spending. Inspired by the need for a seamless, intuitive way to gain control over our financial future, we set out to create an AI-driven solution that not only simplifies but revolutionizes the money management process for students and beyond.
## What it does
WalletWise functions as a comprehensive financial hub. It begins by collecting essential user metrics such as income streams, expenditures, and predefined savings goals. Expenses are then meticulously categorized into sectors like groceries, entertainment, travel, and more. That's where our AI financial advisor kicks in: powered by OpenAI's GPT-3, it provides dynamic, data-driven insights on optimizing spending. It identifies areas for potential savings and offers actionable advice tailored specifically to the user's financial fingerprint.
But we didn't stop there. Leveraging the income data, the platform also cross-references tax brackets to offer strategic advice on tax write-offs. Whether it's maximizing charitable donations or finding hidden tax-saving opportunities, our app ensures that every user is as tax-efficient as possible.
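As a toy illustration of the bracket-lookup idea, the sketch below walks a list of (upper limit, marginal rate) pairs to find which bracket an income falls into. The thresholds and rates are made-up placeholders, not real tax figures, and this is not the exact logic used in WalletWise.

```python
# Illustrative only -- bracket thresholds and rates below are made-up placeholders.
BRACKETS = [(10_000, 0.10), (40_000, 0.15), (90_000, 0.22), (float("inf"), 0.30)]

def marginal_rate(income: float) -> float:
    """Return the marginal rate of the first bracket whose upper limit covers the income."""
    for upper, rate in BRACKETS:
        if income <= upper:
            return rate
    return BRACKETS[-1][1]  # unreachable given the +inf sentinel, kept for safety
```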
## How we built it
We employed a robust stack of technologies to bring our vision to life. The front-end was elegantly crafted using React.js, featuring an intuitive UI that promotes ease of use and seamless navigation. On the back-end, we utilized Python, harmoniously integrated with Flask to enable real-time data processing and analytics. For the AI financial advisor, we tapped into OpenAI's API. This empowers WalletWise to not only classify financial data but also to generate personalized, human-like responses to user queries. This combination of cutting-edge technologies ensures that our platform is not just another budgeting tool, but a holistic financial assistant that adapts and grows with the user.
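To make the categorization flow above concrete, here is a minimal sketch of a Flask endpoint that forwards a transaction description to OpenAI's chat completions API and returns a single category. The route name, category list, prompt, and model name are assumptions for illustration rather than the exact code behind WalletWise, and the snippet assumes an `OPENAI_API_KEY` is set in the environment.

```python
# Minimal sketch -- route name, prompt, categories, and model are illustrative assumptions.
from flask import Flask, jsonify, request
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # reads OPENAI_API_KEY from the environment

CATEGORIES = ["groceries", "entertainment", "travel", "bills", "other"]

@app.route("/categorize", methods=["POST"])
def categorize():
    payload = request.get_json(silent=True) or {}
    description = payload.get("description", "")
    prompt = (
        f"Classify this transaction into one of {CATEGORIES} "
        f"and answer with the category only: '{description}'"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    category = resp.choices[0].message.content.strip().lower()
    return jsonify({"description": description, "category": category})
```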
In summary, our application is not merely a financial tracking system but a full-fledged financial mentor, designed to empower users with unparalleled insights and tools for financial freedom.
## Challenges we ran into
One of the most formidable challenges we encountered was the integration of our front-end, developed in React.js, with our Python-based backend. The crux of the issue was the transfer of arrays in JSON format between these two disparate environments. After hours of relentless debugging, we realized that our initial web-based architecture wasn't feasible for seamless data exchange. Taking a bold pivot, we shifted our focus from web development to crafting a desktop application—a territory that was unexplored for many team members. While the complete integration of the front-end and back-end remains a work in progress, our audacious collective decision to essentially 'reboot' the project became a defining moment in our journey.
## Accomplishments that we're proud of
One of our most rewarding achievements is the development of WalletWise's AI Financial Analysis Bot. This AI bot has the capability to categorize expenditures, offer user-specific spending advice, and function as a responsive chatbot. It's more than just a piece of software; it's a financial mentor that uses machine learning to provide personalized, timely guidance.
The game-changing aspect of having an AI financial mentor is its ability to democratize financial literacy. Traditional financial advising services are often costly and unscalable, excluding those who may need advice the most. Our AI mentor seeks to fill this void by making sophisticated financial counsel accessible to all, empowering users to make smarter decisions and achieve their financial goals more efficiently. Moreover, the supportive and cooperative team environment we managed to create, even after coming together post-hackathon start, really added value to the entire experience.
## What we learned
Our collective journey has been an expansive tapestry of learning experiences, both technical and personal. From specialized workshops to intense debugging marathons, we immersed ourselves in situations that stretched our boundaries. In the process, each of us stepped courageously into unfamiliar technological terrains that we had previously avoided. By embracing these challenges, we not only expanded our skill sets but also discovered how to synergize our individual strengths, culminating in a harmonious yet dynamic team workflow.
## What's next for WalletWise
Although WalletWise remains a work-in-progress with its front-end and back-end yet to be fully integrated, we have grand plans for its evolution. One of our most ambitious features under consideration involves automating tax efficiency. By juxtaposing user income against applicable tax brackets, we aim to provide precise tax write-off strategies that minimize one's fiscal burden. Further down the line, we envisage launching a mobile application equipped with Optical Character Recognition (OCR) capabilities. This would revolutionize data input by automatically extracting transaction details from user-uploaded receipts or bills, eliminating the need for manual entries. Additionally, we are looking to establish a robust database architecture to further enhance the scalability and performance of WalletWise.
By pioneering in this realm, we are not merely offering another financial management tool. WalletWise is a roadmap to financial literacy and autonomy, replete with innovations that adapt to you. Join us in shaping a more financially secure and savvy future. | partial |
## Inspiration
We were interested in machine learning and data analytics and decided to pursue a real-world application that could prove to have practical use for society. Many themes of this project were inspired by hip-hop artist Cardi B.
## What it does
Money Moves analyzes data about financial advisors and their attributes and uses unsupervised deep learning to predict whether a given financial advisor is likely to be beneficial or detrimental to an investor's financial standing.
## How we built it
We partially created a custom deep-learning library in which we built a Self-Organizing Map. The Self-Organizing Map is a neural network that takes data and creates a layer of abstraction, essentially reducing the dimensionality of the data. To make this happen, we had to parse several datasets using the Beautiful Soup library, pandas, and NumPy. Once the data was parsed, we pre-processed it to feed into our neural network (the Self-Organizing Map). After successfully analyzing the data with the deep-learning algorithm, we uploaded the neural network and dataset to our Google Cloud server, where we host a Django website. The website shows investors the best possible advisors within their region.
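For readers unfamiliar with the technique, the following is a minimal, generic Self-Organizing Map training loop in plain NumPy: find the best-matching unit for each sample, then pull nearby map weights toward that sample. It is a textbook-style sketch rather than our custom library; in our pipeline, `data` would be the pre-processed advisor features scaled to a common range.

```python
# Generic SOM training loop in NumPy -- a textbook sketch, not our custom library.
import numpy as np

def train_som(data, grid=(10, 10), epochs=100, lr=0.5, sigma=3.0, seed=0):
    rng = np.random.default_rng(seed)
    rows, cols = grid
    weights = rng.random((rows, cols, data.shape[1]))  # randomly initialized map
    coords = np.stack(np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij"), axis=-1)
    for epoch in range(epochs):
        decay = np.exp(-epoch / epochs)  # shrink learning rate and radius over time
        for x in data:
            dists = np.linalg.norm(weights - x, axis=-1)                 # distance to every node
            bmu = np.unravel_index(np.argmin(dists), (rows, cols))       # best-matching unit
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)  # distance on the map grid
            influence = np.exp(-(grid_dist ** 2) / (2 * (sigma * decay) ** 2))
            weights += (lr * decay) * influence[..., None] * (x - weights)
    return weights
```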
## Challenges we ran into
Due to the nature of this project, we struggled with moving large amounts of data over the internet, with cloud computing, and with designing a website to display the analyzed data, largely because of the Wi-Fi connectivity problems many hackers faced at this competition. We mostly overcame this through working late nights and lots of frustration.
We also struggled to find an optimal data structure for storing both raw and output data. We ended up using .csv files organized in a logical manner so that the data is easily accessible through a simple parser.
## Accomplishments that we're proud of
Successfully parsing the datasets needed for preprocessing and analysis with deep learning.
Being able to analyze our data with the Self-Organizing Map neural network.
Side Note: Our team member Mikhail Sorokin placed 3rd in the Yhack Rap Battle
## What we learned
We learned how to implement a Self-Organizing Map and how to build a good file system and code base with Django. This led us to learn about Google's cloud service, where we host our Django-based website. In order to analyze the data, we had to parse several files and format the data we sent over the network.
## What's next for Money Moves
We are looking to expand our Self-Organizing Map to accept data from other financial datasets beyond stock advisors, so that several models can work together. One idea is to combine unsupervised and supervised deep-learning systems: the unsupervised model surfaces patterns that would otherwise be hard to find, while the supervised model steers the analysis toward a concrete goal, helping investors make the best possible decisions about their financial options.
Ever felt a shock by thinking where did your monthly salary or pocket money go by the end of a month? When Did you spend it? Where did you spend all of it? and Why did you spend it? How to save and not make that same mistake again?
There has been endless progress and technical advancement in how we handle day-to-day financial dealings, be it through Apple Pay, PayPal, or now cryptocurrencies, as well as in the financial instruments that are crucial to building one's wealth, such as stocks and bonds. But all of these amazing tools cater to a very small demographic. 68% of the world's population is still financially illiterate, and most schools do not cover personal finance in their curriculum. To enable these high-end technologies to reach a larger audience, we need to work at the ground level and tackle the fundamental blocks around finance in people's mindset.
We want to use technology to elevate the world's consciousness around their personal finance.
## What it does
Where's my money? is an app that takes in financial jargon and simplifies it for you, giving you a taste of managing your money without risking real losses, so that you can make wiser decisions in real life.
It is a financial literacy app that teaches you the A-Z of managing and creating wealth in a layman-friendly, gamified manner. You start as a person who earns $1000 a month; as you complete each module, you are hit with a set of questions that make you ponder how you would deal with different situations. After completing each module you are rewarded with some bonus money, which can then be used in our stock exchange simulator. You complete courses, earn money, and build virtual wealth.
Each quiz captures data on your overall attitude toward finance: does it lean more toward saving or more toward spending?
## How we built it
The project was not simple at all. Keeping in mind the various components of the app, we started by creating a fundamental architecture for how the app would function - shorturl.at/cdlxE
Then we took it to Figma where we brainstormed and completed design flows for our prototype -
Then we started working on the App-
**Frontend**
* React.
**Backend**
* Authentication: Auth0
* Storing user-data (courses completed by user, info of stocks purchased etc.): Firebase
* Stock Price Changes: Based on real-time prices using a free-tier API (Alpha Vantage/Polygon); see the sketch below.
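For illustration, a real-time quote lookup against Alpha Vantage's free GLOBAL_QUOTE endpoint could look like the sketch below (shown in Python for brevity; the API key is a placeholder):

```python
# Hypothetical sketch of fetching a real-time quote for the in-app stock
# simulator using Alpha Vantage's free tier (API key is a placeholder).
import requests

API_KEY = "YOUR_ALPHA_VANTAGE_KEY"

def get_latest_price(symbol: str) -> float:
    resp = requests.get(
        "https://www.alphavantage.co/query",
        params={"function": "GLOBAL_QUOTE", "symbol": symbol, "apikey": API_KEY},
        timeout=10,
    )
    quote = resp.json().get("Global Quote", {})
    return float(quote.get("05. price", 0.0))

print(get_latest_price("AAPL"))
```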
## Challenges we ran into
The time constraint was our biggest challenge. The project was very backend-heavy and it was a big challenge to incorporate all the backend logic.
## What we learned
We researched about the condition of financial literacy in people, which helped us to make a better product. We also learnt about APIs like Alpha Vantage that provide real-time stock data.
## What's next for Where’s my money?
We are looking to complete the backend of the app to make it fully functional. Also looking forward to adding more course modules for more topics like crypto, taxes, insurance, mutual funds etc.
Domain Name: learnfinancewitheaseusing.tech (Learn-finance-with-ease-using-tech) | ## Inspiration
The inspiration for Med2Meals sprang from the universal truth that food is more than just sustenance; it's medicine, comfort, and a catalyst for connection. In today's fast-paced world, we've seen an increasing reliance on pharmaceutical solutions to health issues, often overlooking the holistic benefits of natural remedies and the healing power of human connection. This realization was compounded by the global pandemic, which highlighted the detrimental effects of isolation on mental and physical health. Med2Meals was born out of a desire to revive the ancient wisdom that food can heal and to harness the digital age's potential to bring people together over the healing power of meals. We envisioned a platform that not only encourages a natural approach to healing through diet but also fosters a sense of community and support among individuals facing health challenges.
## What it does
Med2Meals connects individuals seeking natural dietary remedies for their health conditions with local chefs and home cooks who prepare and deliver home-cooked, healing meals. Users can input their specific health concerns or the type of medication they're aiming to supplement or avoid. The platform then suggests a variety of home-cooked recipes, each tailored to address those health issues with natural ingredients known for their healing properties.
Beyond just providing recipes, Med2Meals offers a service where users can request these meals to be cooked and delivered by someone in their community. This feature aims to provide comfort through nourishing food while also opening the door to new friendships and a support network. The platform caters to a range of dietary preferences and health needs, ensuring that each user receives personalized care and nutrition.
In essence, Med2Meals is more than a meal delivery service; it's a community-building tool that leverages the nurturing power of food to heal bodies and connect souls.
## How we built it
* **Frontend Development**: We utilized Next.js for the frontend to leverage its server-side rendering capabilities, ensuring a fast, and responsive user interface.
* **Backend Infrastructure**: Our backend is powered by Node.js and Express.js, forming a robust and scalable foundation.
* **Integration of AI Technologies**: We partnered with Together.ai to fine-tune and deploy Large Language Models (LLM) and Diffusion models tailored to our specific needs. These models are crucial for generating personalized meal recommendations and understanding user queries in natural language.
* **AI Agents for Data Exchange**: The seamless execution of LLM and diffusion models, along with data exchange between them, is facilitated by AI agents using fetch.ai. This innovative approach allows for real-time, intelligent processing and a highly personalized user experience.
* **Blockchain Technology for Transactions**: We employed Crossmint for blockchain ledger transactions between the user and chef. This ensures that every cooking opportunity a chef receives is cryptographically signed, providing a transparent and secure method to verify a chef's credibility through the NFTs they have minted.
* **Database Management**: MongoDB serves as our database management system, offering a flexible, scalable solution for storing and managing our data.
* **API Documentation**: The entire API documentation was meticulously maintained in a Postman Workspace.
* **Development Tools**: We adopted Bun as our package manager and JavaScript runtime. Bun's high performance and efficiency in package management and execution of JavaScript code significantly enhanced our development workflow, allowing us to build and deploy features rapidly.
## Challenges we ran into
* **Customized Recipe Generation**: We encountered difficulties in crafting the ideal few-shot prompts for Large Language Models (LLMs) to generate customized recipes, which was crucial for tailoring dietary solutions.
* **Building AI Agents with Fetch.ai**: The challenge of navigating Fetch.ai's occasionally misleading documentation was significant. However, the assistance from the Fetch.ai team was instrumental in overcoming these hurdles.
* **Idea Pivot**: Initially, we faced a setback with our original concept, which seemed too cliché after our initial pitch to judges. This led us to pivot to a more unique and impactful problem statement, which ultimately defined our project's direction.
## Accomplishments that we're proud of
* **Successful Pivot**: The decision to pivot our idea proved to be valuable. Moving away from a generic concept to tackle a unique problem statement has positioned us distinctively in the space.
* **Diverse Technological Learning**: Each team member embraced the challenge of learning new technologies, from AI agents and prompt engineering for LLMs to crypto signing. This diversity in learning has been one of our project's most enriching experiences.
## What we learned
* **AI Agents**: The project deepened our understanding of AI agents, enhancing our ability to deploy intelligent solutions.
* **Fine-tuning LLMs**: We gained valuable insights into the process of fine-tuning Large Language Models to meet specific project needs.
* **Crypto Signing**: The importance and application of crypto signing were key learnings, opening new avenues for secure data handling.
* **Teamwork**: The project underscored the indispensable value of teamwork in overcoming challenges and achieving collective goals.
## What's next for Med2Meals
* **Speed Optimization**: Currently, the process of fetching custom recipes using LLMs is slower than desired. Our immediate focus will be on improving the speed of this feature to enhance user experience.
* **User Interface Improvements**: We plan to refine the user interface to make it more intuitive and user-friendly, ensuring that our platform is accessible to everyone, regardless of their tech-savviness.
* **Integration with Health Platforms**: We are looking into integrating Med2Meals with existing health platforms and medical databases, to provide users with a seamless experience that bridges the gap between medical advice and dietary solutions. | winning |
## Inspiration
Our main goal was to pursue our combined passion for learning lyrics to our favorite songs! We wanted to create a way to test our knowledge of our favorite lyrics in a fun way.
## What it does
Lyric Master is a game that allows you to choose what type of songs you want to be tested on, whether based on your favorite genre, artist, or decade of music. It then presents a random lyric from a song matching the selected theme, obscured with asterisks. You can guess the lyric one letter at a time, or guess the title of the song itself. One point is awarded for each asterisk still hidden when the title is finally guessed.
## How I built it
We coded this using Python, working together and sharing code using a Google Colab notebook.
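A simplified sketch of the core guessing loop (masking a lyric with asterisks and awarding points for letters still hidden) might look like this; the CSV layout is hypothetical:

```python
# Simplified sketch of the masking and scoring logic (CSV layout is hypothetical).
import csv
import random

def mask(lyric: str, guessed: set) -> str:
    # Hide every letter that has not been guessed yet
    return "".join(c if (not c.isalpha() or c.lower() in guessed) else "*" for c in lyric)

with open("songs.csv", newline="") as f:
    songs = list(csv.DictReader(f))  # assumed columns: title, lyric, genre

song = random.choice(songs)
guessed = set()
while True:
    print(mask(song["lyric"], guessed))
    guess = input("Guess a letter or the song title: ").strip().lower()
    if guess == song["title"].lower():
        score = mask(song["lyric"], guessed).count("*")  # 1 point per hidden letter
        print(f"Correct! You scored {score} points.")
        break
    elif len(guess) == 1:
        guessed.add(guess)
```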
## Challenges I ran into
We had some trouble getting the Python code to work on all of our teammates' computers. We also tried to create a proper GUI with tkinter, but time constraints as well as complications translating the code made it difficult.
## Accomplishments that I'm proud of
We were able to successfully have the program read the CSV file and establish a point system.
## What I learned
Application of Python, using CSV files...
## What's next for Lyric Master
Creating a more interactive interface, create a more heavy database of songs. | ## Inspiration
During the early stages of the hackathon, we were listening to music as a group, and were trying to select some songs. However, we were all in slightly different moods and so were feeling different kinds of music. We realized that our current moods played a significant impact in the kind of music we liked, and from there, we brainstormed ways to deal with this concept/problem.
## What it does
Our project is a web app that allows users to input their mood on a sliding scale, and get a list of 10 curated songs that best match their mood.
## How we built it
We found a dataset of songs on Kaggle that included each song's lyrics as a string. We then applied a machine learning model based on the Natural Language Toolkit (NLTK) to the dataset; this formed the trained model.
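For example, scoring lyrics with NLTK's VADER analyzer (one plausible way to build the kind of model described above) could look like this; the CSV columns and the mood mapping are assumptions:

```python
# Sketch of scoring song lyrics with NLTK's VADER analyzer and matching
# the user's mood slider to the closest songs (columns are hypothetical).
import nltk
import pandas as pd
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

songs = pd.read_csv("kaggle_lyrics.csv")  # hypothetical columns: title, lyrics
songs["mood"] = songs["lyrics"].apply(lambda t: sia.polarity_scores(t)["compound"])

user_mood = 0.4  # slider value mapped to [-1, 1]
top10 = songs.iloc[(songs["mood"] - user_mood).abs().argsort()[:10]]
print(top10[["title", "mood"]])
```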
## Challenges we ran into
As we are all beginners with full stack development, we ran into numerous errors while constructing the backend of our webpage. Many of our errors were not descriptive and it was difficult to figure out if the errors were coming from the front end, the backend or the database.
## Accomplishments that we're proud of
We are most proud of getting over the challenges we faced given the strained circumstances of our work. Many of the challenges were entirely new to us and so interpreting, finding and solving these errors was a difficult and stressful process. We are very proud to have a MVP to submit.
## What we learned
Working collaboratively in a high stress environment is something we are not super experienced with and it was an important lesson to learn.
Given our limited full stack experience, we also learned a tremendous amount about backend web development and about using technologies like React.
## What's next for Fortress
There are numerous additions we hope to make to improve the quality and functionalities of our project. Some of these include using tempo and key data to provide a stronger analysis of songs. Getting more songs in our database will help improve the quality of outputs. In addition, it would be helpful for the user to embed snippets of each song so users can listen to a small portion. Finally, it convenient feature would be exporting the song list as a Spotify playlist. | ## Inspiration
As computer science students, we are always looking for opportunities to leverage our passion for technology to help others, and that is what inspired us to create this project.
## What it does
Think In Sync is a platform that uses groundbreaking AI to make learning more accessible. It has features that enable it to generate images and audio with a selected text description. It works with a given audio as well, by generating the equivalent text or image. This is done so that children have an easier time learning according to their primary learning language.
## How we built it
We built an interface and a medium-fidelity prototype using Figma. We used Python on the back end to integrate OpenAI's API.
## Challenges we ran into
None of us had worked with API keys and authentication previously, so that was new for all of us.
## Accomplishments that we're proud of
We are proud of what we have accomplished given the short amount of time.
## What we learned
We have extended our computer science knowledge beyond the syllabus, and we have learned more about collaboration and teamwork.
## What's next for Think In Sync
Creating a high-fidelity prototype along with integrating the front end to the back end. | losing |
## Inspiration
Over the summer, one of us was reading about climate change and realised that most of the news articles he came across were very negative, affecting his mental health to the point that it was hard to think about the world as a happy place. However, one day he watched a YouTube video about the hope that exists in that sphere and realised the impact of this "goodNews" on his mental health. Our idea is fully inspired by the consumption of negative media and tries to combat it.
## What it does
We want to bring more positive news into people's lives, given the tendency of people to read only negative news. Psychological studies have also shown that bringing positive news into our lives makes us happier and significantly increases dopamine levels.
The idea is to maintain a score of how much negative content a user reads (detected using co:here), and once it passes a certain threshold (we store the scores in CockroachDB), we show them a positive news article in the same topic area they were reading about.
We do this by performing text analysis with a Chrome extension front end and a Flask/CockroachDB backend that uses co:here for natural language processing.
Since a lot of people also listen to news via video, we also created a part of our chrome extension to transcribe audio to text - so we included that into the start of our pipeline as well! At the end, if the “negativity threshold” is passed, the chrome extension tells the user that it’s time for some good news and suggests a relevant article.
## How we built it
**Frontend**
We used a Chrome extension for the front end, which involved designing the user experience and making sure that our application actually gets the user's attention while remaining useful. We used React.js, HTML and CSS to handle this. There were also a lot of API calls, because we needed to transcribe the audio from Chrome tabs and provide that information to the backend.
**Backend**
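The backend is a Flask server that receives page text (or transcribed audio) from the extension, classifies it with our fine-tuned co:here models, and updates the user's running negativity score in CockroachDB. As a rough illustration, the classification step might look like the sketch below (the model ID is a placeholder, and the exact co:here SDK call may differ from our actual setup):

```python
# Rough sketch of the backend classification step (model ID, labels, and
# SDK details are placeholders and may differ from our actual setup).
import cohere

co = cohere.Client("YOUR_COHERE_API_KEY")

def score_article(text: str) -> str:
    """Return 'negative' or 'positive' for a chunk of article text."""
    response = co.classify(
        model="finetuned-news-sentiment",  # hypothetical fine-tuned model ID
        inputs=[text],
    )
    return response.classifications[0].prediction

if score_article("Markets tumble as wildfires spread across the region") == "negative":
    print("Increment this user's negativity score in CockroachDB")
```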
## Challenges we ran into
It was really hard to make the Chrome extension work because of the many security constraints that websites have. We thought that making the basic Chrome extension would be the easiest part, but it turned out to be the hardest. Figuring out the overall structure and flow of the program was also a challenging task, but we were able to achieve it.
## Accomplishments that we're proud of
1) (co:here) Finetuned a co:here model to semantically classify news articles based on emotional sentiment
2) (co:here) Developed a high-performing classification model to classify news articles by topic
3) Spun up a cockroach db node and client and used it to store all of our classification data
4) Added support for multiple users of the extension that can leverage the use of cockroach DB's relational schema.
5) Frontend: Implemented support for multimedia streaming and transcription from the browser, and used script injection into websites to scrape content.
6) Infrastructure: Deploying server code to the cloud and serving it using Nginx and port-forwarding.
## What we learned
1) We learned a lot about how to use cockroach DB in order to create a database of news articles and topics that also have multiple users
2) Script injection, cross-origin and cross-frame calls to handle multiple frontend elements. This was especially challenging for us as none of us had any frontend engineering experience.
3) Creating a data ingestion and machine learning inference pipeline that runs on the cloud, and finetuning the model using ensembles to get optimal results for our use case.
## What's next for goodNews
1) Currently, we push a notification to the user about negative pages viewed/a link to a positive article every time the user visits a negative page after the threshold has been crossed. The intended way to fix this would be to add a column to one of our existing cockroach db tables as a 'dirty bit' of sorts, which tracks whether a notification has been pushed to a user or not, since we don't want to notify them multiple times a day. After doing this, we can query the table to determine if we should push a notification to the user or not.
2) We also would like to finetune our machine learning more. For example, right now we classify articles by topic broadly (such as War, COVID, Sports etc) and show a related positive article in the same category. Given more time, we would want to provide more semantically similar positive article suggestions to those that the author is reading. We could use cohere or other large language models to potentially explore that. | ## Inspiration
(<http://televisedrevolution.com/wp-content/uploads/2015/08/mr_robot.jpg>)
If you watch Mr. Robot, then you know that the main character, Elliot, deals with some pretty serious mental health issues. One of his therapeutic techniques is to write his thoughts in a private journal. Journals are great: they get your feelings out and act as a point of reference to look back on in the future.
We took the best parts of what makes a diary/journal great and made it just a little bit better - with Indico. In short, we help track your mental health similar to how a FitBit tracks your physical health. As you write journal entries in our app, we automagically parse them, record your emotional state at that point in time, and keep an archive of each post to aggregate a clear mental profile.
## What it does
This is a FitBit for your brain. As you record entries about your life in the private journal, the app anonymously sends the data to Indico and parses it for personality, emotional state, keywords, and overall sentiment. It requires zero effort on the user's part, and over time, we can generate an accurate picture of your overall mental state.
The posts automatically embeds the strongest emotional state from each post so you can easily find / read posts that evoke a certain feeling (joy, sadness, anger, fear, surprise). We also have a analytics dashboard that further analyzes the person's longterm emotional state.
We believe being cognizant of one's own mental health is much harder than, and just as important as, staying on top of one's physical health. A long-term view of their emotional state can help users detect sudden changes in the baseline, or seek out help and support long before the situation becomes dire.
## How we built it
The backend is built on a simple Express server on top of Node.js. We chose React and Redux for the client, due to its strong unidirectional data flow capabilities, as well as the component based architecture (we're big fans of css-modules). Additionally, the strong suite of redux middlewares such as sagas (for side-effects), ImmutableJS, and reselect, helped us scaffold out a solid, stable application in just one day.
## Challenges we ran into
Functional programming is hard. It doesn't have any of the magic that two-way data-binding frameworks come with, such as MeteorJS or AngularJS. Of course, we made the decision to use React/Redux being aware of this. When you're hacking away - code can become messy. Functional programming can at least prevent some common mistakes that often make a hackathon project completely unusable post-hackathon.
Another challenge was the persistence layer for our application. Originally, we wanted to use MongoDB due to our familiarity with the setup process. However, to speed things up, we decided to use Firebase. In hindsight, it may have caused us more trouble, since none of us had ever used Firebase before. However, learning is always part of the process and we're very glad to have learned even the prototyping basics of Firebase.
## Accomplishments that we're proud of
* Fully Persistant Data with Firebase
* A REAL, WORKING app (not a mockup, or just the UI build), we were able to have CRUD fully working, as well as the logic for processing the various data charts in analytics.
* A sweet UI with some snazzy animations
* Being able to do all this while having a TON of fun.
## What we learned
* Indico is actually really cool and easy to use (not just trying to win points here). It's not always 100% accurate, but building something like this without Indico would be extremely difficult, and the similar APIs I've tried are not close to being as easy to integrate.
* React, Redux, Node. A few members of the team learned the expansive stack in just a few days. They're not experts by any means, but they definitely were able to grasp concepts very fast due to the fact that we didn't stop pushing code to Github.
## What's next for Reflect: Journal + Indico to track your Mental Health
Our goal is to make the backend algorithms a bit more rigorous, add a simple authentication algorithm, and to launch this app, consumer facing. We think there's a lot of potential in this app, and there's very little (actually, no one that we could find) competition in this space. | ## Inspiration
We felt the world could always use more penguins, so we decided to bring more penguins to the world.
## What it does
It spawns penguins.
## How we built it
We compiled over 40 images of penguins from the internet.
## Challenges we ran into
We fell asleep :(
## Accomplishments that we're proud of
We are proud of our vast collection of penguins. Thank u Isaac for finding them and Bradley for writing out their parabolic pathways by hand.
## What we learned
Penguins are a gift from God.
## What's next for Penguin Minglin'
It's perfect as is | winning |
## Inspiration
Informa's customers want to understand which new technologies will be most relevant to their businesses. There is also more and more "hype" around technologies. Therefore, it is increasingly important for companies to stay informed about emerging technologies.
## What it does
Marble Grapes displays the most relevant technologies for each of Informa's 6 industry-specific clients.
## How we built it
We developed a neural network to algorithmically predict the estimated “noise” of a technology.
This information is then displayed in a dynamic dashboard for Informa’s market analysts.
## Challenges we ran into
Time constraints were a significant problem. There was limited accessibility of meaningful data. There were also minor syntax issues with Javascript ES6.
## Accomplishments that we're proud of
Interviewing Informa, and understanding the problem in a deep way. We're also proud of developing a website that is intuitive to use.
## What we learned
We learned that it is hard to access meaningful data, despite having a good solution in mind. We also learned that 4 young adults can eat a surprising amount of grapes in a short period of time.
## What's next for Maple Grapes
We'd like to improve the accuracy of the algorithm by increasing the body of historical data on technological successes and failures. We'd also like to account for a social media impact score by doing sentiment analysis.
## Team
Faith Dennis (UC Berkeley), Shekhar Kumar (University of Toronto), Peter Zheng (City University of New York), Avkash Mukhi (University of Toronto) | ## Inspiration
Students often do not have a financial background and want to begin learning about finance, but the sheer number of resources that exist online makes it difficult to know which articles are worth reading. Thus we thought the best way to tackle this problem was to use a machine learning technique known as sentiment analysis to determine the tone of articles, allowing us to recommend more neutral options to users and provide a visual view of the different articles available so that users can make more informed decisions about the articles they read.
## What it does
This product is a web-based application that performs sentiment analysis on a large body of articles to help users find biased or unbiased articles. We also offer three data visualizations for each topic: an interactive graph that shows the distribution of sentiment scores across articles, a heatmap of the sentiment scores, and a word cloud showing common keywords among the articles.
## How we built it
Around 80 unique articles from 10 different domains were scraped from the web using Scrapy. This data was then processed with the help of Indico's machine learning API, which gave us the tools to perform sentiment analysis on all of our articles, the main feature of our product. We then used Indico's summarization feature to create shorter descriptions of each article for our users. The Indico API also powers the other two data visualizations we provide. The first is the heatmap, created in Tableau, which takes the sentiment HQ scores to better visualize and compare articles and the differences between their sentiment scores. The second is powered by wordcloud, which is built on top of Pillow and matplotlib; it takes the keywords generated by the Indico API and displays the most frequent keywords across all articles. The web application is powered by Django with a SQLite database in the backend and Bootstrap on the frontend, and is all hosted on Google Cloud Platform App Engine.
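A minimal sketch of the Indico processing step, assuming the indicoio Python client, might look like this (the article list is illustrative):

```python
# Minimal sketch of the Indico processing step (assumes the indicoio client;
# the article texts are illustrative placeholders).
import indicoio

indicoio.config.api_key = "YOUR_INDICO_API_KEY"

articles = [
    "Full text of scraped article one...",
    "Full text of scraped article two...",
]

sentiments = indicoio.sentiment_hq(articles)  # one score in [0, 1] per article
keywords = indicoio.keywords(articles)        # keyword -> relevance dicts for the word cloud

for score, kw in zip(sentiments, keywords):
    print(round(score, 3), sorted(kw, key=kw.get, reverse=True)[:5])
```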
## Challenges we ran into
The project itself was a challenge since it was our first time building a web application with Django and hosting on a cloud platform. Another challenge arose in data scraping: when finding the titles of the articles, different domains placed their article titles in different locations and tags, making it difficult to build one scraper that could generalize to many websites. Not only this, but the data returned by the scraper was not in the correct format for us to easily manipulate, so unpacking dictionaries and similar small tasks had to be done in order to solve these problems. On the data visualization side, there was no graphics library that would fit our vision for the interactive graph, so we had to build that on our own!
## Accomplishments that we're proud of
Being able to accomplish the goals that we set out for the project and actually generating useful information in our web application based on the data that we ran through Indico API.
## What we learned
We learned how to build websites using Django, generate word clouds using matplotlib and pandas, host websites on google cloud platform, how to utilize the Indico api and researched various types of data visualization techniques.
## What's next for DataFeels
Lots of improvements could still be made to this project and here are just some of the different things that could be done. The scraper created for the data required us to manually run the script for every new link but creating an automated scraper that built the correct data structures for us to directly pipeline to our website would be much more ideal. Next we would expand our website to have not just financial categories but any topic that has articles about it. | Though technology has certainly had an impact in "leveling the playing field" between novices and experts in stock trading, there still exist a number of market inefficiencies for the savvy trader to exploit. Figuring that stock prices in the short term tend to some extent to reflect traders' emotional reactions to news articles published that day, we set out to create a machine learning application that could predict the general emotional response to the day's news and issue an informed buy, sell, or hold recommendation for each stock based on that information.
After entering the ticker symbol of a stock, our application allows the user to easily compare the actual stock price over a period of time against our algorithm's emotional reaction.
We built our web application using the Flask python framework and front-end using React and Bootstrap. To scrape news articles in order to analyze trends, we utilized the google-news API. This allowed us to search for articles pertaining to certain companies, such as Google and Disney. Afterwards, we performed ML and sentiment analysis through the textblob Python API.
We had some difficulty finding news articles; it was quite a challenge to find a free and accessible API that allowed us to gather our data. In fact, we stumbled upon one API that, without our knowledge, redirected us to a different web page the moment we attempted any sort of data extraction. Additionally, we had some problems trying to optimize our ML algorithm in order to produce as accurate results as possible.
We are proud of the fact that Newsstock is up and running and able to predict certain trends in the stock market with some accuracy. It was cool not only to see how certain companies fared in the stock market, but also to see how positivity or negativity in media influenced how people bought or sold certain stocks.
First and foremost, we learned how difficult it could be at times to scrape news articles, especially while avoiding any sort of payment or fee. Additionally, we learned that machine learning can be fairly inaccurate. Overall, we had a great experience learning new frameworks and technologies as we built Newsstock. | partial |
## Inspiration
The inspiration for GithubGuide came from our own experiences working with open-source projects and navigating through complex codebases on GitHub. We realized that understanding the purpose of each file and folder in a repository can be a daunting task, especially for beginners. Thus, we aimed to create a tool that simplifies this process and makes it easier for developers to explore and contribute to GitHub projects.
## What it does
GithubGuide is a Google Chrome extension that takes any GitHub repository as input and explains the purpose of each file and folder in the repository. It uses the GitHub API to fetch repository contents and metadata, which are then processed and presented in an easily understandable format. This enables developers to quickly navigate and comprehend the structure of a repository, allowing them to save time and work more efficiently.
## How we built it
We built GithubGuide as a team of four. Here's how we split the work among teammates 1, 2, 3, and 4:
1. Build a Chrome extension using JavaScript, which serves as the user interface for interacting with the tool.
2. Develop a comprehensive algorithm and data structures to efficiently manage and process the repository data and LLM-generated inferences.
3. Configure a workflow to read repository contents into our chosen LLM ChatGPT model using a reader built on LLaMa - a connector between LLMs and external data sources.
4. Build a server with Python Flask to communicate data between the Chrome extension and LLaMa, the LLM data connector.
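For reference, the repository-fetching part of the Flask server (step 4) might be sketched as follows; the endpoint name and the downstream LLM hand-off are placeholders:

```python
# Sketch of the Flask endpoint that fetches repository contents from the
# GitHub API before handing them to the LLM reader (names are placeholders).
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/repo-tree")
def repo_tree():
    owner = request.args.get("owner")
    repo = request.args.get("repo")
    resp = requests.get(f"https://api.github.com/repos/{owner}/{repo}/contents/", timeout=10)
    resp.raise_for_status()
    entries = [{"path": item["path"], "type": item["type"]} for item in resp.json()]
    # In the real pipeline these entries are passed to the LlamaIndex/ChatGPT step
    # that generates a plain-language explanation for each file and folder.
    return jsonify(entries)

if __name__ == "__main__":
    app.run(port=5000)
```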
## Challenges we ran into
Throughout the development process, we encountered several challenges:
1. Integrating the LLM data connector with the Chrome extension and the Flask server.
2. Parsing and processing the repository data correctly.
3. Engineering our ChatGPT prompts to get optimal results.
## Accomplishments that we're proud of
We are proud of:
1. Successfully developing a fully functional Chrome extension that simplifies the process of understanding GitHub repositories.
2. Overcoming the technical challenges in integrating various components and technologies.
3. Creating a tool that has the potential to assist developers, especially beginners, in their journey to contribute to open-source projects.
## What we learned
Throughout this project, we learned:
1. How to work with LLMs and external data connectors.
2. The intricacies of building a Chrome extension, and how developers have very little freedom when developing browser extensions.
3. The importance of collaboration, effective communication, and making sure everyone is on the same page within our team, especially when merging critically related modules.
## What's next for GithubGuide
We envision the following improvements and features for GithubGuide:
1. Expanding support for other browsers and platforms.
2. Enhancing the accuracy and quality of the explanations provided by ChatGPT.
3. Speeding up the pipeline.
4. Collaborating with the open-source community to further refine and expand the project. | ## Inspiration
In a world full of information but short on time, where we are all busy and occupied, we wanted to create a tool that helps students make the most of their learning quickly and effectively. We imagined a platform that could turn complex material into digestible, engaging content tailored for a fast-paced generation.
## What it does
Lemme Learn More (LLM) transforms study materials into bite-sized, TikTok-style videos or reels, flashcards, and podcasts. Whether you're preparing for exams or trying to stay informed on the go, LLM breaks down information into formats that match how today's students consume content. If you're an avid podcast listener during your commute to work, this is the platform for you.
## How we built it
We built LLM using a combination of AI-powered tools: OpenAI for summaries, Google TTS for podcasts, and PyPDF2 to pull text from PDFs. The backend runs on Flask, while the frontend is React.js, making the platform both interactive and scalable. We also used Fetch.ai AI agents to deploy on the blockchain testnet.
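A stripped-down sketch of that pipeline (pulling text from a PDF with PyPDF2 and producing podcast audio) might look like this; the gTTS package stands in for Google TTS, and the summarizer is a placeholder for the OpenAI call:

```python
# Stripped-down sketch of the PDF -> summary -> podcast pipeline (gTTS stands
# in for Google TTS; summarize() is a placeholder for the OpenAI call).
from PyPDF2 import PdfReader
from gtts import gTTS

def extract_text(pdf_path: str) -> str:
    reader = PdfReader(pdf_path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

def summarize(text: str) -> str:
    # Placeholder: in the real app this would call the OpenAI API for a summary.
    return text[:1000]

notes = extract_text("lecture_notes.pdf")
script = summarize(notes)

tts = gTTS(script, lang="en")
tts.save("podcast_episode.mp3")
print("Saved podcast_episode.mp3")
```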
## Challenges we ran into
Given the very limited time, we ran into a deployment challenge setting up on the Heroku cloud platform. It was an internal configuration issue where we were supposed to change config files - I personally spent 5 hours on it, and my team spent some time on it as well. We could not figure it out by the time the hackathon ended, so we decided not to deploy today.
In the brainrot generator module, the audio timing could not be matched with the captions. This is something for future work.
One of the other biggest challenges was integrating the sponsor Fetch.ai's Agentverse AI agents, which we did locally and are proud of!
## Accomplishments that we're proud of
Biggest accomplishment would be that we were able to run and integrate all of our 3 modules and a working front-end too!!
## What we learned
We learned that we cannot know everything - and cannot fix every bug in a limited time frame. It is okay to fail and it is more than okay to accept it and move on - work on the next thing in the project.
## What's next for Lemme Learn More (LLM)
Coming next:
1. realistic podcast with next gen TTS technology
2. shorts/reels videos adjusted to the trends of today
3. Mobile app if MVP flies well! | ## Inspiration
Research suggests that wearable aids can be used to alleviate adverse effects of impaired recognition of facial expressions. In particular, [The Austim Glass Project](http://autismglass.stanford.edu/) at Stanford is using Google Glass to help children with Autism Spectrum Disorder. Researchers use the Glass in combination with an Android app to help train autistic patients to identify emotions accurately. However, with a starting price of $1500, the Glass is a pricy tool. The goal then was to develop an embedded system with similar functionality, but at a lower price point. In combination with the Picamera, the Raspberry Pi costs less than a $100, making it less risky to insure than the Glass.
## What it is
A glasses attachment that takes photos of what the user is seeing, and if a subject's face is detected tells the user what the subject is feeling.
## How we built it
Raspberry Pi and Picamera for hardware, AWS SDK and Rekognition API (Python) for processing, Flask for UI
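A condensed sketch of the capture-and-recognize loop might look like this (it assumes picamera and boto3 are configured; the confidence threshold is illustrative):

```python
# Condensed sketch of the capture-and-recognize loop (assumes picamera and
# boto3 are configured; the confidence threshold is illustrative).
import io
import time
import boto3
from picamera import PiCamera

rekognition = boto3.client("rekognition")
camera = PiCamera()

while True:
    stream = io.BytesIO()
    camera.capture(stream, format="jpeg")
    result = rekognition.detect_faces(Image={"Bytes": stream.getvalue()}, Attributes=["ALL"])
    for face in result["FaceDetails"]:
        top = max(face["Emotions"], key=lambda e: e["Confidence"])
        if top["Confidence"] > 50:
            print(f"Subject appears {top['Type'].lower()}")  # e.g. "Subject appears happy"
    time.sleep(2)
```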
## Challenges we ran into
Setting up the Raspberry Pi was initially faulty. Getting Flask to stream photos taken from the Picamera was similarly difficult. | partial |
## 💡Inspiration
‘Despacito’, ‘Gangnam Style’, ‘Shape of you’, sound like a melody? Yup, these are a few of the most iconic music of our century (According to Gen Z). What better way to help the upcoming generation of music artists than creating an algorithm to predict the percentage of success your music can reach! We want to give back to our community of artists by helping them achieve their dreams.
## 💿 What it does
Like Shazam, ULabel listens to a song, but instead of only telling you what the song is, it tells you the chances of success **your** music has! It uses machine learning techniques including neural networks, k-nearest neighbors, the Spotify API and more. Aimed at an audience of music lovers, producers, and upcoming artists, ULabel can be used to determine the potential popularity of unreleased music.
## 💻 How we built it
To start off, we gathered data on the top-charting songs across different genres and time periods. We then used Spotify's API to get each song's metadata and integrated it into a pandas DataFrame. We cleaned and preprocessed the data and created a machine learning model to predict the popularity of a song based on 13 key acoustic characteristics. Then we used SwiftUI to create an iOS mobile application with a simplistic design made in Figma.
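As a sketch, pulling that acoustic metadata with the spotipy client could look like this (credentials and the playlist ID are placeholders):

```python
# Sketch of pulling acoustic metadata with spotipy (credentials and the
# playlist ID are placeholders).
import pandas as pd
import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

sp = spotipy.Spotify(
    auth_manager=SpotifyClientCredentials(client_id="CLIENT_ID", client_secret="CLIENT_SECRET")
)

playlist_id = "TOP_CHARTS_PLAYLIST_ID"  # e.g. a top-charts playlist
tracks = sp.playlist_items(playlist_id, limit=100)["items"]
ids = [t["track"]["id"] for t in tracks if t["track"]]

rows = []
for tid, feat in zip(ids, sp.audio_features(ids)):
    popularity = sp.track(tid)["popularity"]
    rows.append({**feat, "popularity": popularity})

df = pd.DataFrame(rows)
print(df[["danceability", "energy", "acousticness", "popularity"]].head())
```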
## ⏳ Challenges we ran into
One of the biggest challenges we faced was generating the data set using Spotify as well as creating an interactive UI using swift. To create our dataset we needed to aggregate top songs across multiple playlists and albums. The Spotify API sends the information in nested dictionaries so it took a while to decipher, clean, preprocess, and train the data. We also took this opportunity to learn a new skill with Swift IOS development. Learning a new skill had its fair share of challenges, one of which was integrating our ML model using swift.
## ✅ Accomplishments that we're proud of
- Method that converts Spotify playlists into a data frame containing each song's acoustic features and metadata
- Clean, user-friendly UI/UX
- Highly accurate machine learning model (measured using mean squared error)
- Additional collaboration and acoustics features to accommodate client needs
## 🧠 What we learned
We also learned a lot about how music is classified and the variables that come from it, such as danceability, acousticness, and energy. Additionally, we learned how these variables affected the popularity of the songs. We learned all about Swift development fundamentals and implementing systems design within iOS applications.
## 👷♂️ What's next for ULabel
Our next step is making the IOS app fully functional. This entails adding a collaboration feature where artists can find other producers and artists in their vicinity, a simplified streamlined way to promote their music across multiple social media platforms, and connecting artists with venues & shoes that match their target audience. | ## Inspiration
I was inspired by my own difficulty to make a budget and stick to it.
## What it does
The application helps you estimate your budget. You answer a couple of basic questions about your life style and we provide you with a suggested monthly budget. You can then create your final budget and manage it.
## How I built it
Java and XML. The estimation is based on data found on university websites. I used the average monthly budget for college students in Canada. Based on it and on your answers, the application automatically adjusts and estimates.
## Challenges I ran into
I wasn't quite sure how to represent my data and manipulate it most efficiently. I ended up choosing HashTables and ArrayLists. I also wanted to do too much for only 36 hours, so I had to restrict my ambition. I am still working on saving the activity state and restoring it for the budget management part. I was also interested in taking the estimation result and outputting it in the budget management activity, but I got stuck on how to transfer this data from one activity to another. One solution would be to make the destination activity extend the initial one.
## Accomplishments that I'm proud of
I am very proud of the design. I think it looks good and is fairly intuitive to use. I am also proud that my estimation activity works well.
## What I learned
A ton of Java and XML.
## What's next for College Budget
Improve the overall user experience and maybe use a learning algorithm to improve estimations. I would also like to make my Budget Manager activity way more complete. (I was limited by time) | ## Inspiration
The main focus of our project is creating opportunities for people to interact virtually and pursue their interests while remaining active. We hoped to accomplish this through a medium that people are already interested in and providing them with a tool to take that interest to the next level. From these intentions came our project- TikTok Dance Trainer.
In our previous hackathon, we gained experience with computer vision using OpenCV2 in python, and we wanted to look further in this field. Gaining inspiration from other projects that we saw, we wanted to create a project that could not only recognize hand movements but full body motion as well.
## What it does
TikTok Dance Trainer is a new Web App that enables its users to learn and replicate popular dances from TikTok. While using the app, users will receive a score in real time that gives them feedback on how well they are dancing compared to the original video. This web app is an encouraging way for beginners to hone dance skills and improve their TikTok content as well as a fun way for advanced users to compete against one another in perfecting dances.
## How we built it
To create this project, we split into teams. One team experimented with comparison metrics to compare body poses while the other built up the UI with HTML, CSS and Javascript.
The pose estimation is implemented with an open source pre-trained neural network in tensorflow called posenet. This model can pinpoint the key points on the human body such as wrists, elbows, hips, knees, and joints on the head. The two dancers each have a set of 17 joints, which are then compared to each other, frame by frame. In order to compare these arrays of coordinates, we researched various distance metrics to use such as the Euclidean Metric, Cosine Similarity, the weighted Manhattan distance, and Procrustes Analysis (Affine Transformation). Through data collection and trial and error, the cosine distance gave the best results in the end. The resulting distances were then fed into a function to map the values to viable player scores.
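Although the comparison runs in JavaScript in the browser, the math reduces to roughly the following sketch (shown in Python; the (17, 2) keypoint arrays come from the pose estimator, and the values here are illustrative):

```python
# Sketch of the frame-by-frame pose comparison using cosine similarity
# (keypoint arrays come from posenet; the values here are illustrative).
import numpy as np

def normalize(keypoints: np.ndarray) -> np.ndarray:
    """Center and scale a (17, 2) array of joint coordinates."""
    pts = keypoints - keypoints.mean(axis=0)
    return pts / (np.linalg.norm(pts) + 1e-9)

def pose_score(reference: np.ndarray, dancer: np.ndarray) -> float:
    a = normalize(reference).flatten()
    b = normalize(dancer).flatten()
    cosine_similarity = float(np.dot(a, b))  # in [-1, 1]
    return max(0.0, cosine_similarity) * 100  # map to a 0-100 player score

ref = np.random.rand(17, 2)                        # reference video keypoints for one frame
live = ref + np.random.normal(0, 0.02, (17, 2))    # slightly offset live pose
print(round(pose_score(ref, live), 1))
```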
The UI is built up in HTML with CSS styling and Javascript to run its functions. It has a hand-drawn background and an easy-to-use design packed with function. The menu bar has a file selector for choosing and uploading a dance video to compare to. The three main cards of the UI have the reference video and live cam side by side, with pose-estimated skeletons of each in the middle to aid in matching the reference dance. The whole UI is built up in general keeping in mind ease of use, simplicity, visual appeal and functionality.
## Challenges we ran into
As a result of splitting into two teams for different parts of the project, one challenge we faced was merging the two parts. It was difficult both to combine the code and to connect its different parts, returning outputs from one part as acceptable inputs for another. Through perseverance and a lot of communication we managed to effectively merge the two parts.
## Accomplishments that we're proud of
We managed to create a clean looking app that performs the algorithm well despite the time pressure and complexity of the project. In addition, we were able to allocate time into making a presentation with a skit to tie everything together.
## What we learned
Coming into this hackathon, only one of our members was experienced in web development, but coming out, all four of us felt that we gained valuable experience and insight into the ins and outs of webpages. We learned how to effectively use Node.js to create a backend and connect it with our frontend. Along with this, we gained experience using npm and many of JavaScript's potpourri of packages, such as browserify.
## What's next for TikTok Dance Trainer
We also looked into using Dynamic Time Warping to help with the comparison. This would help primarily when the videos were different lengths or if the dancers were slightly mismatched. However, we realized that this would not be needed if the user is dancing against the TikTok video in their own live feed. In the future, we would like to add a functionality that allows two pre-recorded videos to be compared that would then use Dynamic Time Warping.
All open source repositories/packages that were used:
[link](https://github.com/tensorflow/tfjs-models/tree/master/posenet)
[link](https://github.com/compute-io/cosine-similarity)
[link](https://github.com/GordonLesti/dynamic-time-warping)
[link](https://github.com/browserify/browserify)
[link](https://github.com/ml5js/ml5-library) | losing |
## Inspiration
Social media platforms such as Facebook and Twitter have been extremely crucial in helping establish important political protests in nations such as Egypt during the Arab Spring, and mesh network based platforms hold importance in countries like China where national censorship prevents open communication. In addition to this, people in countries like India have easy access to smart phones because of how cheap android phones have become, but wifi/cellular access still remains expensive.
## What it does
Our project is called Wildfire. Wildfire is a peer to peer social media where users can follow other specific users to receive updates, posts, news articles, and other important information such as weather. The app works completely offline, by transmitting data through a network of other devices. In addition to this, we created a protocol for hubs, which centralizes the ad hoc network to specific locations, allowing mass storage of data on the hubs rather than users phones.
## How we built it
The peer to peer component of the hack uses Android Nearby, which is a protocol that uses hotspots, bluetooth, and sound to transmit messages to phones and hubs that are close to you. Using this SDK, we created a protocol to establish mesh networks between all the devices, and created an algorithm that efficiently transfers information across the network. Also, the hubs were created using android things devices, which can be built using cheap raspberry pis. This provides an advantage over other mesh networking/ad hoc network applications, because our hack uses a combination of persistent storage on the hubs, and ephemeral storage on the actual users' device, to ensure that you can use even devices that do not have a ton of storage capability to connect to a Wildfire network.
## Challenges we ran into
There were a couple of major problems we had to tackle. First of all, our social media is peer to peer, meaning all the data on the network is stored on the phones of users, which would be a problem if we stored ALL the data on EVERY single phone. To solve this problem, we came up with the idea for hubs, which provide centralized storage for data that does not move, lessening the storage load on each user's phone, a concern in regions where people might be purchasing phones that are cheaper and have less total storage. In addition to this, we have an intelligent algorithm that tries to predict a given user's movement to allow them to act as a messenger between two separate users. This algorithm took a lot of thinking to actually transmit data efficiently, and is something we are extremely proud of.
## What we learned
We learned about a lot of the problems and solutions to the problems that come with working on a distributed system. We tried a bunch of different solutions, such as using distributed elections to decide a leader to establish ground truth, using a hub as a centralized location for data, creating an intelligent content delivery system using messengers, etc.
## What's next for Wildfire
We plan on making the wildfire app, the hub system, and our general p2p protocol open source, and our hope is that other developers can build p2p applications using our system. Also, potential applications for our hack include being able to create Wildfire networks across rural areas to allow quick communication, if end to end encryption is integrated, being able to use it to transmit sensitive information in social movements, and more. Wildfires are fires you cannot control, and so the probabilities for our system are endless. | ## What Inspired Us
A good customer experience leaves a lasting impression across every stage of their journey. This is exemplified in the airline and travel industry. To give credit and show appreciation to the hardworking employees of JetBlue, we chose to scrape and analyze customer feedback on review and social media sites to both highlight their impact on customers and provide currently untracked, valuable data to build a more personalized brand that outshines its market competitors.
## What Our Project does
Our customer feedback analytics dashboard, BlueVisuals, provides JetBlue with highly visual presentations, summaries, and highlights of customers' thoughts and opinions on social media and review sites. Visuals such as word clouds and word-frequency charts highlight critical areas of focus where the customers reported having either positive or negative experiences, suggesting either areas of improvement or strengths. The users can read individual comments to review the exact situation of the customers or skim through to get a general sense of their social media interactions with their customers. Through this dashboard, we hope that the users are able to draw solid conclusions and pursue action based on those said conclusions.
Humans of JetBlue is a side product resulting from such conclusions users (such as ourselves) may draw from the dashboard that showcases the efforts and dedication of individuals working at JetBlue and their positive impacts on customers. This product highlights our inspiration for building the main dashboard and is a tool we would recommend to JetBlue.
## How we designed and built BlueVisuals and Humans of JetBlue
After establishing the goals of our project, we focused on data collection via web scraping and building the data processing pipeline using Python and Google Cloud's NLP API. After understanding our data, we drew up a website and corresponding visualizations. Then, we implemented the front end using React.
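A rough sketch of the sentiment step in that pipeline, using the Google Cloud Natural Language client, is shown below (the review text is illustrative and credentials are assumed to be configured):

```python
# Rough sketch of the sentiment step in the feedback pipeline (review text is
# a placeholder; assumes Google Cloud credentials are configured).
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

def score_review(text: str):
    document = language_v1.Document(content=text, type_=language_v1.Document.Type.PLAIN_TEXT)
    sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment
    return sentiment.score, sentiment.magnitude  # score in [-1, 1], magnitude >= 0

score, magnitude = score_review("The JetBlue crew went out of their way to help my family board early.")
print(f"score={score:.2f}, magnitude={magnitude:.2f}")
```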
Finally, we drew conclusions from our dashboard and designed 'Humans of JetBlue' as an example usage of BlueVisuals.
## What's next for BlueVisuals and Humans of JetBlue
* collecting more data to get a more representative survey of consumer sentiment online
* building a back-end database to support data processing, storage, and organization
* expanding employee-centric
## Challenges we ran into
* Polishing scraped data and extracting important information.
* Finalizing direction and purpose of the project
* Sleeping on the floor.
## Accomplishments that we're proud of
* effectively processed, organized, and built visualizations for text data
* picking up new skills (JS, matplotlib, GCloud NLP API)
* working as a team to manage loads of work under time constraints
## What we learned
* value of teamwork in a coding environment
* technical skills | ## Off The Grid
Super awesome offline, peer-to-peer, real-time canvas collaboration iOS app
# Inspiration
Most people around the world will experience limited or no Internet access at times during their daily lives. We could be underground (on the subway), flying on an airplane, or simply be living in areas where Internet access is scarce and expensive. However, so much of our work and regular lives depend on being connected to the Internet. I believe that working with others should not be affected by Internet access, especially knowing that most of our smart devices are peer-to-peer Wifi and Bluetooth capable. This inspired me to come up with Off The Grid, which allows up to 7 people to collaborate on a single canvas in real-time to share work and discuss ideas without needing to connect to Internet. I believe that it would inspire more future innovations to help the vast offline population and make their lives better.
# Technology Used
Off The Grid is a Swift-based iOS application that uses Apple's Multi-Peer Connectivity Framework to allow nearby iOS devices to communicate securely with each other without requiring Internet access.
# Challenges
Integrating the Multi-Peer Connectivity Framework into our application was definitely challenging, along with managing memory of the bit-maps and designing an easy-to-use and beautiful canvas
# Team Members
Thanks to Sharon Lee, ShuangShuang Zhao and David Rusu for helping out with the project! | partial |
## Inspiration
At **Happy Muscles**, we're driven by a simple yet profound idea – *everyone deserves to experience the joy of a pain-free, active life*. The inspiration behind our app stems from the desire to empower individuals to take control of their muscle health and well-being. We believe that AI technology can be a game-changer in this domain, and our journey to create *"Happy Muscles"* began with the vision of making wellness accessible, personalized, and delightful.
## What it does
Happy Muscles is not just another fitness app. It's your **personal wellness companion**. Our app caters to two distinct needs:
*For those experiencing muscle pain*, especially seniors, it offers a virtual pain map. Users can identify the specific muscle causing discomfort, and our advanced AI suggests personalized stretches and exercises to find relief.
*For fitness enthusiasts hitting the gym*, we provide guidance on effective warm-up and cooldown stretches for targeted muscle groups, reducing the risk of injuries and enhancing performance.
## How we built it
We harnessed the capabilities of modern technology to create **Happy Muscles**. Our tech stack includes React JS for the frontend, FastAPI for the backend, and we hosted it all on Replit.
**React JS**: We used React JS to design a user-friendly and responsive front-end interface, making it easy for users to navigate and interact with the app.
**FastAPI**: Our back-end was powered by FastAPI, which allowed us to efficiently handle user data, connect with our AI models, and provide real-time recommendations.
**Replit**: We chose Replit for its ease of collaboration and deployment capabilities, enabling us to work together seamlessly and bring **Happy Muscles** to life.
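As a rough illustration of the FastAPI back end, an endpoint that returns stretches for a selected muscle might look like the sketch below; the route, the stretch data, and the AI call are placeholders:

```python
# Rough illustration of the FastAPI back end (route names, the stretch data,
# and the AI recommendation call are placeholders).
from fastapi import FastAPI, HTTPException

app = FastAPI(title="Happy Muscles API")

STRETCHES = {
    "hamstring": ["Standing toe touch", "Seated forward fold"],
    "lower_back": ["Child's pose", "Cat-cow"],
}

@app.get("/stretches/{muscle}")
def get_stretches(muscle: str):
    if muscle not in STRETCHES:
        raise HTTPException(status_code=404, detail="Unknown muscle group")
    # In the full app this is where the AI layer personalizes the suggestions.
    return {"muscle": muscle, "stretches": STRETCHES[muscle]}
```

In development, a server like this would typically be run with Uvicorn (e.g. `uvicorn main:app --reload`).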
## Challenges we ran into
Creating an app that provides personalized wellness recommendations comes with its set of challenges. One of the key hurdles we faced was fine-tuning the AI to offer accurate and effective advice for various muscle-related issues. Ensuring user privacy and data security was another significant challenge, which we addressed with robust measures.
Additionally, real-time collaboration on a hackathon project can be demanding. Synchronizing our efforts and maintaining a smooth workflow was a challenge we had to overcome.
## Accomplishments that we're proud of
We're incredibly proud of what we've achieved in such a short time. Happy Muscles is not just an idea anymore; it's a reality. We successfully created an app that has the potential to positively impact people's lives by promoting better muscle health and fitness.
Our AI-powered muscle recognition and recommendation system is a major achievement. It's the result of hard work, dedication, and a shared vision to make wellness accessible to all.
## What we learned
In the course of building Happy Muscles, we learned that the fusion of technology and wellness can be a powerful force for good. It's amazing how a hackathon can bring out the best in a team, fostering creativity, collaboration, and innovation.
We also realized the importance of user-centered design. Making the app accessible and user-friendly was as critical as its functionality. (Still Working to make it responsive on all devices)
## What's next for Happy Muscles
Our journey doesn't end here. Happy Muscles is just the beginning. In the future, we aim to expand our AI's capabilities, add more exercises and stretches, and provide even more personalized guidance to users.
We are looking forward to make the app responsive to all devices.
We will be adding images for more visual instructions on the stretching
We will be looking more deeper into the muscles and providing more accurate stretching results to that specific model
We're also looking forward to gathering user feedback and refining the app based on their experiences and needs. Our goal is to continuously improve and innovate, all while keeping people's happiness and health at the core of our mission.
Join us in our mission to make muscles happy, one stretch at a time. Together, we can build a healthier, happier world. | ## Inspiration
Living a healthy and balanced life style comes with many challenges. There were three primary challenges we sought to resolve with this hack.
* the knowledge barrier | *“I want to work out, but I don’t exactly know what to do”*
* the schedule barrier | *“I don’t know when to workout or for how long. I don't even know how long the workout I created is going to take.”*
* the motivation barrier | *“I don’t feel like working out because I’m tired.”*
Furthermore, sometimes you feel awful and don’t wish to work out. Sometimes you work out anyways and feel better. Sometimes you work out anyways and feel worse and suffer the next day. How can we optimize this sort of flow to get people consistently feeling good and wanting to workout?
## What it does
That's where Smart Fit comes in. This AI based web application takes input from the user such as time availability, focus, and mood to intelligently generate workouts and coach the user to health & happiness. The AI applies **sentimental analysis** using **AWS API**. Using keyword analysis and inputs from the user this AI predicts the perfectly desired workout, which fits into their availability. Once the AI generates the workout for the user, the user can either (1) schedule the workout for later or (2) workout now. **Twillio** is used to send the workout to the users phone and schedules the workouts. The application uses **facial emotional detection** through AWS to analyze the users' facial expression and provides the user with real-time feedback while they're exercising.
## How we built it
The website and front-end was built using HTML5, and styled using CSS, Adobe Photoshop, and Figma. Javascript (both vanilla and jQuery) was used to connect most HTML elements to our backend.
The backend was built as a Python Flask application. While responsible for serving up static assets, the backend was also in charge of crucial backend processes such as the AI engine utilized to intelligently generate workouts and give real-time feedback as well as the workout scheduler. We utilized technology such as AWS AI Services (Comprehend and Rekognition) and the Twilio API.
## Challenges we ran into
We found that the most difficult portion of our project were the less technical aspects: defining the exact problem we wanted to solve, deciding on features of our app, and narrowing scope enough to produce a minimum viable product. We resolved this be communicating extensively; in fact, we argued numerous times over the best design. Because of discussions like this, we were able to create a better product.
## Accomplishments that we're proud of
* engineering a workout scheduler and real-time feedback engine. It was amazing to be able to make an AI application that uses real-time data to give real-time feedback. This was a fun challenge to solve because of all of the processes communicating concurrently.
* becoming an extremely effective team and great friends, despite not knowing each other beforehand and having diverse backgrounds (we're a team of a chemical engineer, an 11th grader in high school, and a senior in college).
## What we learned
We learned many new technical skills like how to integrate APIs into a complex application, how to structure a multi-purpose server (web, AI engine, workout scheduler), and how to develop a full-stack application. We also learned how to effectively collaborate as a group and how to rapidly iterate and prototype.
## What's next for Smart Fit
The development of a mobile responsive app for more convenient/accessible use. We created a mockup of the user interface; check it out [here](https://www.youtube.com/watch?v=asirXH3Hxw4&feature=youtu.be)! Using google calendar API to allow for direct scheduling to your Google account and the use of google reminders. Bootstrap will be used in the future to allow for a better visual and user experience of the web application. Finally, deploying on a cloud platform like GCP and linking the app to a domain | ## Inspiration
• Saw a need for mental health service provision in Amazon Alexa
## What it does
• Created Amazon Alexa skill in Node.js to enable Alexa to empathize with and help a user who is feeling low
• Capabilities include: probing user for the cause of low mood, playing soothing music, reciting inspirational quote
## How we built it
• Created Amazon Alexa skill in Node.js using Amazon Web Services (AWS) and Lambda Function
## Challenges we ran into
• Accessing the web via Alexa, making sample utterances all-encompassing, how to work with Node.js
## Accomplishments that we're proud of
• Made a stable Alexa skill that is useful and extendable
## What we learned
• Node.js, How to use Amazon Web Services
## What's next for Alexa Baymax
• Add resources to Alexa Baymax (if the user has academic issues, can provide links to helpful websites), and emergency contact information, tailor playlist to user's taste and needs, may commercialize by adding an option for the user to book therapy/massage/counseling session | losing |
## Inspiration
Most of us have probably donated to a cause before — be it $1 or $1000. Resultantly, most of us here have probably also had the same doubts:
* who is my money really going to?
* what is my money providing for them...if it’s providing for them at all?
* how much of my money actually goes use by the individuals I’m trying to help?
* is my money really making a difference?
Carepak was founded to break down those barriers and connect more humans to other humans. We were motivated to create an application that could create a meaningful social impact. By creating a more transparent and personalized platform, we hope that more people can be inspired to donate in more meaningful ways.
As an avid donor, CarePak is a long-time dream of Aran’s to make.
## What it does
CarePak is a web application that seeks to simplify and personalize the charity donation process. In our original designs, CarePak was a mobile app. We decided to make it into a web app after a bit of deliberation, because we thought that we’d be able to get more coverage and serve more people.
Users are given options of packages made up of predetermined items created by charities for various causes, and they may pick and choose which of these items to donate towards at a variety of price levels. Instead of simply donating money to organizations,
CarePak's platform appeals to donators since they know exactly what their money is going towards. Once each item in a care package has been purchased, the charity now has a complete package to send to those in need. Through donating, the user will build up a history, which will be used by CarePak to recommend similar packages and charities based on the user's preferences. Users have the option to see popular donation packages in their area, as well as popular packages worldwide.
## How I built it
We used React with the Material UI framework, and NodeJS and Express on the backend. The database is SQLite.
## Challenges I ran into
We initially planned on using MongoDB but discovered that our database design did not seem to suit MongoDB too well and this led to some lengthy delays. On Saturday evening, we made the decision to switch to a SQLite database to simplify the development process and were able to entirely restructure the backend in a matter of hours. Thanks to carefully discussed designs and good teamwork, we were able to make the switch without any major issues.
## Accomplishments that I'm proud of
We made an elegant and simple application with ideas that could be applied in the real world. Both the front-end and back-end were designed to be modular and could easily support some of the enhancements that we had planned for CarePak but were unfortunately unable to implement within the deadline.
## What I learned
Have a more careful selection process of tools and languages at the beginning of the hackathon development process, reviewing their suitability in helping build an application that achieves our planned goals. Any extra time we could have spent on the planning process would definitely have been more than saved by not having to make major backend changes near the end of the Hackathon.
## What's next for CarePak
* We would love to integrate Machine Learning features from AWS in order to gather data and create improved suggestions and recommendations towards users.
* We would like to add a view for charities, as well, so that they may be able to sign up and create care packages for the individuals they serve. Hopefully, we would be able to create a more attractive option for them as well through a simple and streamlined process that brings them closer to donors. | ## Inspiration
We were inspired by how there were many instances of fraud with regards to how donations were handled. It is heartbreaking to see how mismanagement can lead to victims of disasters not receiving the help that they desperately need.
## What it does
TrustTrace introduces unparalleled levels of transparency and accountability to charities and fundraisers. Donors will now feel more comfortable donating to causes such as helping earthquake victims since they will now know how their money will be spent, and where every dollar raised is going to.
## How we built it
We created a smart contract that allowed organisations to specify how much money they want to raise and how they want to segment their spending into specific categories. This is then enforced by the smart contract, which puts donors at ease as they know that their donations will not be misused, and will go to those who are truly in need.
## Challenges we ran into
The integration of smart contracts and the web development frameworks were more difficult than we expected, and we overcame them with lots of trial and error.
## Accomplishments that we're proud of
We are proud of being able to create this in under 36 hours for a noble cause. We hope that this creation can eventually evolve into a product that can raise interest, trust, and participation in donating to humanitarian causes across the world.
## What we learned
We learnt a lot about how smart contract works, how they are deployed, and how they can be used to enforce trust.
## What's next for TrustTrace
Expanding towards serving a wider range of fundraisers such as GoFundMes, Kickstarters, etc. | # [eagleEye](https://aliu139.github.io/eagleEye/index.html)
Think Google Analytics... but for the real world

## Team
* [Austin Liu](https://github.com/aliu139)
* Benjamin Jiang
* Mahima Shah
* Namit Juneja
## Al
EagleEye is watching out for you. Our program links into already existing CCTV feeds + webcam streams but provides a new wealth of data, simply by pairing the power of machine learning algorithms with the wealth of data already being captured. All of this is then output to a simple-to-use front-end website, where you can not only view updates in real-time but also filter your existing data to derive actionable insights about your customer base. The outputs are incredibly intuitive and no technical knowledge is required to create beautiful dashboards that replicate the effects of complicated SQL queries. All data automatically syncs through Firebase and the site is mobile compatible.
## Technologies Used
* AngularJS
* Charts.js
* OpenCV
* Python
* SciKit
* Pandas
* FireBase
## Technical Writeup
Image processing is done using OpenCV with Python bindings. This data is posted to Firebase which continuously syncs with the local server. The dashboard pulls this information and displays it using Angular. Every several minutes, the front-end pulls the data that has not already been clustered and lumps it together in a Pandas dataframe for quicker calculations. The data is clustered using a modified SciKit library and reinserted into a separate Firebase. The chart with filters pulls from this database because it is not as essential for these queries to operate in real time.
All of the front-end is dynamically generated using AngularJS. All of the data is fetched using API calls to the Firebase. By watching $scope variables, we can have the charts update in real time. In addition, by using charts.js, we also get smoother transitions and a more aesthetic UI. All of the processing that occurs on the filters page is also calculated by services and linked into the controller. All calculations are done on the fly with minimal processing time. | winning |
## Inspiration
## What it does
The leap motion controller tracks hand gestures and movements like what an actual DJ would do (raise/lower volume, cross-fade, increase/decrease BPM, etc.) which translate into the equivalent in the VirtualDJ software. Allowing the user to mix and be a DJ without touching a mouse or keyboard. Added to this is a synth pad for the DJ to use.
## How we built it
We used python to interpret gestures using Leap Motion and translating them into how a user in VirtualDJ would do that action using the keyboard and mouse. The synth pad we made using an Arduino and wiring it to 6 aluminum "pads" that make sounds when touched.
## Challenges we ran into
Creating all of the motions and make sure they do not overlap was a big challenge. The synth pad was challenging to create also because of lag problems that we had to fix by optimizing the C program.
## Accomplishments that we're proud of
Actually changing the volume in the VirtualDJ using leap motion. That was the first one we made work.
## What we learned
Using the Leap Motion, learned how to wire an arduino to create a MIDI synthesizer.
## What's next for Tracktive
Sell to DJ Khaled! Another one. | ## Inspiration
We like Pong. Pong is life. Pong is love.
We like Leap. Leap is cool. Leap is awesome.
## What it does
Classical game of Pong that can be controlled with the Leap Controller Motion sensor.
Supports different playing mode such as:
* Single player
+ Control with Mouse
+ Control with Keyboard
+ Control with Leap Controller
* Multi player
+ Control with Keyboard and Keyboard
+ Control with Keyboard and Mouse
+ Control with Leap Controller and Keyboard
+ Control with Leap Controller and Mouse
+ Control with 2 hands on one Leap Controller
## How we built it
* Unity
* Leap Controller Motion Sensor
* Tears, sweat, and Awake chocolate (#Awoke)
## Challenges we ran into
Too many. That's what happens when you work with things you've never touched before (and Git).
## Accomplishments that we're proud of
We created a functional game of Pong. And that beautiful opening screen.
## What we learned
Everything. We had to learn EVERYTHING.
## What's next for Pong2.0
Make it into a mobile app and/or turn your mobile phone into one of the controller! Not everyone has a Leap sensor but most people own a smartphone (monetize). | ## Inspiration
I was inspired by the idea of having computer generated music interact with human expression, and working with Leap Motion gave me the idea for a Soundfont-powered web app that controls some aspects of MIDI playback using only the user's hand movements.
## What it does
This project plays a number of MIDI channels which interact.A general chord progression guides a chord backdrop, an arpeggiator, a bass, and other instruments which play a number of different melodies (which can be changed using the keyboard). However, hand motions provided to the Leap control the complexity of the chords and melodies, and the intensity of their playback. For instance, a level, closed fist will silence all instruments. Extending an index finger controls the arpeggiated melody, extending a pinky controls the bass, and tilting the hand controls the chords. Raising the hand makes the notes more legato, and staccato if the hand is lowered. Generally, the user can control the intensity of the playing music using only their hand motion.
Change chord progression, top row of keyboard: (q through t)
Change arpeggio melody, middle row of keyboard (a through f)
Change bass melody, bottom row of keyboard (z through v)
## How I built it
I built this almost completely using JavaScript, jQuery, and the Leap Motion JavaScript API. I also used MIDI.js and some instrument soundfonts I had on my computer to make the music.
## Challenges I ran into
Interactively creating music is challenging. In this project, most of the instruments don't algorithmically play, but play pre-programmed melodies. Getting all instruments in sync, and to follow one chord progression were challenges I faced.
## Accomplishments that I'm proud of
I successfully read hand gestures from the Leap Motion API, to great effect. You can really control the intensity of the music and come up with some cool songs, if you get the hang of it.
## What I learned
I learned that algorthmic song creation (which I originally intended to implement) is a deeply challenging problem. Music is so subjective that it pretty much cannot be left up to a computer.
## What's next for Conductor
It could be expanded to have more melodies, or some way of inputting custom melodies or chord progressions. It could better implement gesture control to | winning |
## Inspiration
Much to our chagrin, the current industry-leading water fountain model, the Elkay EZWSR Bottle Filling Station, is lacking in effective features and displays an inconsequential "bottles saved" counter that contributes little to the fountain's practical use. Additionally, the fountain is incapable of distinguishing different objects placed in front of the sensor—whether it be a water bottle or an alligator—and will indiscriminately dispense water. Another flaw we wanted to tackle was that the fountain continues to dispense water as long as there is an object in front of the sensor, even if the bottle is overflowing. This humiliation has incited a passion within us to create a more technologically impressive and environmentally friendly alternative to the Elkay EZWSR Bottle Filling Station.
## What it does
Where's My Water dispenses water into a variety of water bottles and other common liquid containers, such as cups and mugs, and automatically stops when the container is reasonably filled.
## How we built it
We trained a MobileNetV2 SSD model to identify and dimension water bottles and other containers, then loaded it onto an HDK8450 Development Kit with a camera card. We created an Android application that uses the AI image recognition and relays the necessary information to a Raspberry Pi 3 Model B with an attached ultrasonic sensor, which determines the water level and dimensions of a water bottle from empty to full.
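As a rough illustration of the water-level check (the pin numbers, timing, and bottle height below are assumptions rather than our actual wiring), an HC-SR04-style ultrasonic reading on the Pi can be converted into a fill fraction like this:

```python
import time
import RPi.GPIO as GPIO

TRIG, ECHO = 23, 24          # assumed BCM pin numbers
BOTTLE_HEIGHT_CM = 20.0      # assumed distance from sensor to the bottle's bottom

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)

def water_level_fraction():
    """Return roughly how full the bottle is (0.0 empty to 1.0 full)."""
    # Send a 10 microsecond trigger pulse
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)

    # Time the echo pulse to get the distance to the water surface
    start = stop = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        stop = time.time()

    distance_cm = (stop - start) * 34300 / 2   # speed of sound in cm/s
    fill = 1.0 - min(distance_cm / BOTTLE_HEIGHT_CM, 1.0)
    return max(fill, 0.0)

if water_level_fraction() > 0.9:
    print("Bottle nearly full - stop dispensing")
```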
## Challenges we ran into
The main challenge we faced was writing an application from scratch that could receive the output generated by the camera card, extract the necessary information, and transmit it to our Raspberry Pi. The biggest difficulty in our design was securing smooth communication between each component so that information could be processed efficiently.
## Accomplishments that we're proud of
Our idea innovatively uses technology such as a Qualcomm HDK8450 and a Raspberry Pi 3 Model B to contribute to user accessibility and sustainable living.
## What we learned
We learned how to train an AI model using Edge Impulse and how to make an Android application using Android Studio.
## What's next for Where's My Water?
The next step would be to implement our design into a proper form factor and have it undergo extensive testing to improve its performance. | ## Inspiration
We had a mixture of talent this time around and decided we could tackle the creative and technical challenge of making a new, intuitive way to play a text adventure.
## What it does
Uses synonyms and natural language processing to figure out actions you can do that are the most similar to the typed sentence.
## How we built it
We used the GCP NLP API and the Oxford Dictionaries API to analyze inputted text and compare it to the nearest "action" points in the feasible action space.
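As a simplified sketch of the matching idea (not our actual GCP/Oxford integration; `get_synonyms` is a stand-in for the dictionary API call), each feasible action can be expanded with synonyms and scored against the typed sentence by word overlap:

```python
from typing import Dict, List, Set

def get_synonyms(word: str) -> Set[str]:
    # Placeholder for the Oxford Dictionaries / thesaurus lookup
    canned = {"grab": {"take", "pick", "seize"}, "look": {"examine", "inspect", "see"}}
    return canned.get(word, set()) | {word}

def score(sentence: str, action_words: List[str]) -> int:
    """Count how many of the action's keywords (or their synonyms) appear in the sentence."""
    words = set(sentence.lower().split())
    return sum(1 for w in action_words if words & get_synonyms(w))

def best_action(sentence: str, actions: Dict[str, List[str]]) -> str:
    """Pick the feasible action whose keywords best match the typed sentence."""
    return max(actions, key=lambda name: score(sentence, actions[name]))

actions = {"take_key": ["grab", "key"], "open_door": ["open", "door"], "look_around": ["look", "room"]}
print(best_action("I seize the rusty key", actions))   # -> take_key
```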
## Challenges we ran into
The first challenge we ran into was finding a feasible project for such a diverse team, as we had a CS student, an econ student, and an English student all on one team. Then, heavy use of internet APIs slowed the game down significantly (and took a very long time to figure out). Building a text adventure from scratch involves many bugs, basically none of which were fixed.
## Accomplishments that we're proud of
We managed to get a basic proof of concept for a game that intelligently recognizes English sentence inputs and converts them to in game actions.
## What we learned
Don't code a text adventure from scratch, and making 12 API calls sure takes a while.
## What's next for Silescia
Fix the bugs, add the features we didn't have time to add, and add the rest of the story, which we have planned but not at all implemented. Other potential features in the future would include some ML-based intelligent procedural generation of the world, and intelligently interactable NPCs also using the NLP framework.
ColoVR is a virtual experience allowing users to modify their environment with colors and models. It's a relaxing, low poly experience that is the first of its kind to bring an accessible 3D experience to the Google Cardboard platform. It's a great way to become a creative in 3D without dropping a ton of money on VR hardware.
## How we built it
We used Google Daydream and extended off of the demo scene to our liking. We used Unity and C# to create the scene and handle all the user interactions. We did painting and dragging objects around the scene using a common graphics technique called ray tracing. We colored the scene by changing the vertex colors. We also created a palette that allowed you to change tools, change colors, and insert meshes. We hand-modeled the low-poly meshes in Maya.
## Challenges we ran into
The main challenge was finding a proper mechanism for coloring the geometry in the scene. We first tried to paint the mesh face by face, but we had no way of accessing the entire face from the way the Unity mesh was set up. We then looked into an asset that would help us paint directly on top of the texture, but it ended up being too slow; textures tend to render too slowly, so we ended up doing it with a vertex shader that interpolates between vertex colors, allowing real-time painting of meshes. We implemented it so that we changed all the vertices belonging to a fragment of a face. The vertex shader was the fastest way we could render real-time painting and emulate the painting of a triangulated face. Our second-biggest challenge was using ray tracing to find the object we were interacting with. We had to navigate the Unity API and get acquainted with its Physics raycaster and mesh colliders to properly implement all interaction with the cursor. Our third challenge was making use of the controller that Google Daydream supplied us with - it was great but very sandboxed, and we were quite limited in terms of functionality. We had to find a way to change all colors, insert different meshes, and interact with the objects using only two available buttons.
## Accomplishments that we're proud of
We're proud of the fact that we were able to get a clean, working project that was exactly what we pictured. People seemed to really enjoy the concept and interaction.
## What we learned
How to optimize for a certain platform - in terms of UI, geometry, textures and interaction.
## What's next for ColoVR
Hats, and an interactive collaborative space for multiple users. We want to be able to host the scene and make it accessible to multiple users at once, and store the state of the scene (maybe even turn it into an infinite world) so users can explore what past users have done and see the changes other users make in real time.
## Inspiration
As university students, emergency funds may not be at the top of our priority list; however, when the unexpected happens, we are often left wishing that we had saved for an emergency when we had the chance. When we thought about this as a team, we realized that the feeling of putting a set amount of money away every time income rolls through may create feelings of dread rather than positivity. We then brainstormed ways to make saving money in an emergency fund more fun and rewarding. This is how Spend2Save was born.
## What it does
Spend2Save allows the user to set up an emergency fund. The user inputs their employment status, baseline amount, and goal for the emergency fund, and the app will create a plan for them to achieve their goal! Users create custom in-game avatars that they can take care of. The user can unlock avatar skins, accessories, pets, etc. by "buying" them with funds they deposit into their emergency fund. The user earns milestones or achievements for reaching certain sub-goals, while also getting extra motivation if their emergency fund falls below the baseline amount they set up. Users will also be able to change their employment status after creating an account, in the case of a new job or career change, and the app will adjust their deposit plan accordingly.
## How we built it
We used Flutter to build the interactive prototype of our Android Application.
## Challenges we ran into
None of us had prior experience with Flutter, or with mobile app development at all. Learning to use Flutter in such a short period of time was easily the greatest challenge we faced.
We originally had more features planned, with an implementation of data being stored using Firebase, so having to compromise our initial goals and focus our efforts on what is achievable in this time period proved to be challenging.
## Accomplishments that we're proud of
This was the first mobile app we developed (as well as our first hackathon).
## What we learned
This being our first Hackathon, almost everything we did provided a learning experience. The skills needed to quickly plan and execute a project were put into practice and given opportunities to grow. Ways to improve efficiency and team efficacy can only be learned through experience in a fast-paced environment such as this one.
As mentioned before, with all of us using Flutter for the first time, anything we did involving it was something new.
## What's next for Spend2Save
There is still a long way for us to grow as developers, so the full implementation of Spend2Save will rely on our progress.
We believe there is potential for such an application to appeal to its target audience, and so we have made projections for the future of Spend2Save. These projections include, but are not limited to, integration with actual bank accounts at RBC.
Everybody struggles with their personal finances. Financial inequality in the workplace is particularly prevalent among young females. On average, women make 88 cents per every dollar a male makes in Ontario. This is why it is important to encourage women to become more cautious of spending habits. Even though budgeting apps such as Honeydue or Mint exist, they depend heavily on self-motivation from users.
## What it does
Our app is a budgeting tool that targets young females with useful incentives to boost self-motivation for their financial well-being. The app features simple scale graphics visualizing the financial balancing act of the user. By balancing the scale and achieving their monthly financial goals, users will be provided with various rewards, such as discount coupons or small cash vouchers based on their interests. Users are free to set their goals on their own terms and follow through with them. The app reinforces good financial behaviour by providing gamified experiences with small incentives.
The app will be provided to users free of charge. As with any free service, anonymized user data will be shared with marketing and retail partners for analytics. Discount offers and other incentives could lead to better brand awareness and more spending from our users for participating partners. The customized rewards are an opportunity for targeted advertising.
## Persona
Twenty-year-old Ellie Smith works two jobs to make ends meet. The rising cost of living makes it difficult for her to maintain her budget. She heard about this new app called Re:skale that provides personalized rewards simply for achieving budget goals. She signed up after answering a few questions and linking her financial accounts to the app. The app provided a simple balancing-scale animation for immediate visual feedback on her financial well-being. The app frequently provided words of encouragement and useful tips to maximize her chance of success. She especially loves how she can set goals and follow through on her own terms. The personalized rewards were sweet, and she managed to save on a number of essentials such as groceries. She is now on a 3-month streak with a chance to get better rewards.
## How we built it
We used: React, Node.js, Firebase, HTML & Figma.
## Challenges we ran into
* We had a number of ideas but struggled to define the scope and topic for the project.
* Different design philosophies made it difficult to maintain a consistent and cohesive design.
* Sharing resources was another difficulty due to the digital nature of this hackathon.
* On the development side, there were technologies that were unfamiliar to over half of the team, such as Firebase and React Hooks. It took a lot of time to understand the documentation and implement it into our app.
* Additionally, resolving merge conflicts proved to be more difficult than expected. The time constraint was also a challenge.
## Accomplishments that we're proud of
* The use of harder technologies, including Firebase and React Hooks
* On the design side, it was great to create a complete prototype of our vision for the app
* This being some members' first hackathon, the time constraint was a stressor, but with the support of the team they were able to feel more comfortable despite the lack of time
## What we learned
* We learned how to meet each other’s needs in a virtual space
* The designers learned how to merge design philosophies
* How to manage time and work with others who are on different schedules
## What's next for Re:skale
Re:skale can be rescaled to include people of all genders and ages.
* More close integration with other financial institutions and credit card providers for better automation and prediction
* Physical receipt scanner feature for non-debt and credit payments
## Try our product
This is the link to a prototype app
<https://www.figma.com/proto/nTb2IgOcW2EdewIdSp8Sa4/hack-the-6ix-team-library?page-id=312%3A3&node-id=375%3A1838&viewport=241%2C48%2C0.39&scaling=min-zoom&starting-point-node-id=375%3A1838&show-proto-sidebar=1>
This is a link for a prototype website
<https://www.figma.com/proto/nTb2IgOcW2EdewIdSp8Sa4/hack-the-6ix-team-library?page-id=0%3A1&node-id=360%3A1855&viewport=241%2C48%2C0.18&scaling=min-zoom&starting-point-node-id=360%3A1855&show-proto-sidebar=1> | ## Inspiration
Most expense apps track personal expenses and do weekly reviews. We think this creates a lack of motivation to save more, since the period being reviewed has already passed. As a result, we decided to create an expense app where the user can see their spending in the current budgeting period.
We observed that students, in particular, are experiencing multiple sources of stress for the first time, whether from academic work or a change of environment. However, we realized that one of the most fundamental stress factors that all students experience relates to their financial situation. With heavy tuition and expensive rent in downtown Toronto, we hope to create an app that will let students manage their expenses better. With an emphasis on budget control, we believe that students will have a clearer picture of their spending behavior and thus make adjustments based on their past data. This will not only alleviate the stress of their financial situation but also encourage them to save more and plan their monthly expenses more often.
## What it does
The app is an expense app that shows how much of the budget is being used. Users can add and view transactions as in a standard expense app. An emphasis has been placed on the budget system: users get a clear and simple graph that tells them about their current budget and expenses.
## How we built it
We built the app using Flutter and Dart. We used multiple stateful widgets that pass data around by routing between different pages.
## Challenges we ran into
Learning app development from scratch. Keeping track of multiple pieces of data at once was also a key factor we needed to keep an eye on.
## Accomplishments that we're proud of
It's a working app!!!!!!
It has a clear and simple UI that our users can easily follow, thereby encouraging them to keep track of their expenses.
## What we learned
As Flutter is a framework built around state, routing between pages, and passing data, we learned a lot about keeping data types consistent, maintaining consistent language usage, and building a strong and durable framework.
## What's next for BudgetIt
We hope that we can expand it further, directing its use towards students to help them budget. Adding small investment categories and offering suggestions along with personalized advice would be an ideal addition to our project.
## Inspiration
The current system for tracking and finding patient information in medical offices consists of outdated databases and finicky paperwork that is not only difficult to manage but also bad for the environment. We wanted to create something easy to use that could replace this system. After attending a chatbot workshop, we thought that a chatbot would be an effective solution to the problem.
## What it does
The chatbot interacts with the doctor by asking questions. The doctor responds to these questions, and the responses are logged in the database. The doctor can either request information about a patient or store new information about a patient from a visit. This includes recording prescriptions, diagnoses, and patients' names/numbers. The doctor can ask for information about a patient by providing the patient number.
## How we built it
We used the fulfillment functionality of DialogFlow combined with Firebase Cloud Functions written in Node.js.
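Our fulfillment itself is written in Node.js on Firebase Cloud Functions; purely as an illustration of the webhook shape, a minimal Dialogflow ES fulfillment endpoint could look like the following Python/Flask sketch, where the intent names and the in-memory `patients` store are simplified stand-ins:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
patients = {}  # stand-in for the real database

@app.route("/webhook", methods=["POST"])
def webhook():
    body = request.get_json()
    intent = body["queryResult"]["intent"]["displayName"]
    params = body["queryResult"]["parameters"]

    if intent == "record_visit":
        patients[params["patient_number"]] = {
            "diagnosis": params.get("diagnosis"),
            "prescription": params.get("prescription"),
        }
        reply = f"Visit recorded for patient {params['patient_number']}."
    elif intent == "lookup_patient":
        record = patients.get(params["patient_number"])
        reply = f"On file: {record}" if record else "No records found for that patient number."
    else:
        reply = "Sorry, I didn't catch that."

    # Dialogflow reads this field and speaks/displays it back to the user
    return jsonify({"fulfillmentText": reply})
```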
## Challenges we ran into
We ran into an issue when we tried to implement slot filling in DialogFlow. When we first approached the project, we didn't have the database backend set up. Instead, we only had the chatbot that could respond depending on certain contexts and intents. After some research and trial and error, we were able to get the chatbot to record patient data to the database.
## Accomplishments that we're proud of
We were able to learn and implement a server using node.js from scratch with no prior experience. We also learned DialogFlow and created an entire project.
## What we learned
We learned how to use node.js and DialogFlow. In terms of personal skills, we learned how to collaborate and utilize the strengths of certain members to overcome our own shortcomings.
## What's next for CareBot
We hope to implement a patient-side version for patients to check up on their own health. We also thought about implementing an organization-side version for organizations to look up patient info. Eventually, we would hope to automate all paperwork in the healthcare industry. | ## Inspiration
Some of us have family members in healthcare and see the overwhelming hardships they experience trying to provide healthcare to members of society. We also witness how hard it can be for the average person to receive basic healthcare without losing a lot of money.
## What it does
This product uses AI to develop solutions to the personal health problems you are encountering and want help with. There is also the opportunity to connect with a real doctor if the AI is not helping.
## How we built it
We used Next.js, React, Firebase, and OpenAI to create this.
## Challenges we ran into
A lot of the challenges centered around developing the AI chat experience as well as the doctor-to-patient experience.
## Accomplishments that we're proud of
We managed to get the full product and the expected functionalities within 24 hours. There's also a full 24/7 backend database storing user and doctor credentials.
## What we learned
We learned how to leverage Firebase for user authentication and complex databases, as well as how to build a real-time chat experience between doctors and patients.
## What's next for Telemedicine Chatbot | ## Inspiration
Lots of applications require you to visit their website or app for initial tasks such as signing up on a waitlist to be seen. What if these initial tasks could be performed at the convenience of the user on whatever platform they want to use (text, Slack, Facebook Messenger, Twitter, web app)?
## What it does
In a medical setting, allows patients to sign up using platforms such as SMS or Slack to be enrolled on the waitlist. The medical advisor can go through this list one by one and have a video conference with the patient. When the medical advisor is ready to chat, a notification is sent out to the respective platform the patient signed up on.
## How I built it
I set up this whole app by running microservices on StdLib. There are multiple microservices responsible for different activities such as sms interaction, database interaction, and slack interaction. The two frontend Vue websites also run as microservices on StdLib. The endpoints on the database microservice connect to a MongoDB instance running on mLab. The endpoints on the sms microservice connect to the MessageBird microservice. The video chat was implemented using TokBox. Each microservice was developed one by one and then also connected one by one like building blocks.
## Challenges I ran into
Initially, getting the microservices to connect to each other, and then debugging them remotely.
## Accomplishments that I'm proud of
Connecting multiple endpoints to create a complex system more easily using microservice architecture.
## What's next for Please Health Me
Developing more features such as showing the patient's position in the queue and integrating with more communication channels such as Facebook Messenger. This idea can also be expanded into different scenarios, such as business partners signing up for advice from a busy advisor, or fans signing up to connect with a social media influencer based on their message.
## Inspiration
As a patient in the United States, you do not know what costs you are facing when you receive treatment at a hospital, or whether your insurance plan covers the expenses. Patients are faced with unexpected bills and left with expensive copayments. In some instances, patients would pay less if they covered the expenses out of pocket instead of using their insurance plan.
## What it does
Healthiator provides patients with a comprehensive overview of the medical procedures they will need to undergo for their health condition, and sums up the total cost of that treatment depending on which hospital they go to and whether they pay out of pocket or through their insurance.
This allows patients to choose the most cost-effective treatment and understand the medical expenses they are facing. A second feature healthiator provides is that once patients receive their actual hospital bill they can claim inaccuracies. Healthiator helps patients with billing disputes by leveraging AI to handle the process of negotiating fair pricing.
## How we built it
We used a combination of Together.AI and Fetch.AI. We have several smart agents running on Fetch.AI, each responsible for one of the features. For instance, one agent fetches live data from hospitals (publicly available under the Good Faith act/law) about prices and cash discounts, and we then use Together.ai's API to integrate that information into the negotiation step.
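As a minimal sketch of that split (not our production code; the price-file URL, model name, and payload schema are placeholders), one agent can periodically pull a hospital's published charges while a helper calls Together.ai's OpenAI-compatible chat endpoint for the negotiation step:

```python
import os, requests
from uagents import Agent, Context

price_agent = Agent(name="hospital_prices", seed="hospital-prices-demo-seed")

@price_agent.on_interval(period=24 * 60 * 60.0)  # refresh once a day
async def fetch_prices(ctx: Context):
    # Placeholder URL for a hospital's published price-transparency file
    charges = requests.get("https://example-hospital.org/standard-charges.json").json()
    ctx.storage.set("charges", charges)

def draft_negotiation_letter(bill_summary: str) -> str:
    """Ask a Together-hosted model to draft a fair-pricing dispute letter."""
    resp = requests.post(
        "https://api.together.xyz/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"},
        json={
            "model": "mistralai/Mixtral-8x7B-Instruct-v0.1",  # placeholder model choice
            "messages": [
                {"role": "system", "content": "You negotiate fair hospital pricing for patients."},
                {"role": "user", "content": bill_summary},
            ],
        },
    )
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    price_agent.run()
```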
## Ethics
Although our end purpose is to help people get medical treatment by reducing the fear of surprise bills and actually making healthcare more affordable, we are aware that any wrong suggestion or violation of the user's privacy has significant consequences. Giving the user as much information as possible, while staying away from making clinical suggestions and avoiding false or hallucinated information, was the most challenging part of our work.
## Challenges we ran into
Finding actionable data from the hospitals was one of the most challenging parts, as each hospital has its own format and assumptions, and it was not at all straightforward to integrate them into a single database. Another challenge was making the various APIs and third parties work together in time.
## Accomplishments that we're proud of
Solving a relevant social issue. Everyone we talked to has experienced the problem of not knowing the costs they're facing for different procedures at hospitals and whether their insurance covers them. Besides being an anxiety-inducing process for everyone, this might prevent or delay a number of people from going to hospitals and getting the care that they urgently need, which can result in health conditions that could have had a better outcome if treated earlier.
## What we learned
How to work with Convex and the Fetch.ai and Together.ai APIs.
## What's next for Healthiator
As a next step, we want to set up a database and take the medical costs directly from the files published by hospitals.
During the fall 2021 semester, the friend group made a fun contest to participate in: Finding the chonkiest squirrel on campus.
Now that we are back in quarantine, stuck inside all day with no motivation to do exercise, we wondered if we could make a timer like in the app [Forest](https://www.forestapp.cc/) to motivate us to work out.
Combine the two ideas, and...
## Welcome to Stronk Chonk!
In this game, the user has a mission of the utmost importance: taking care of Mr Chonky, the neighbourhood squirrel!
Time spent working out in real life is converted, in the game, into time spent gathering acorns. Therefore, the more time spent working out, the more acorns are gathered, and the chonkier the squirrel will become, providing it excellent protection for the harsh Canadian winter ahead.
So work out like your life depends on it!

## How we built it
* We made the app using Android Studio
* Images were drawn in Krita
* Communications on Discord
## Challenges we ran into
36 hours is not a lot of time. Originally, the app was supposed to be a game involving a carnival high striker bell. Suffice to say: *we did not have time for this*.
And so, we implemented a basic stopwatch app on Android Studio... Which 3 of us had never used before. There were many headaches, many laughs.
The most challenging bits:
* Pausing the stopwatch: Android's Chronometer does not have a pre-existing pause function
* Layout: We wanted to make it look pretty, but *we did not have time to make every page pretty* (the home page does look very neat, though)
* Syncing: The buttons were a mess and a half, and data across different pages of the app is not synced yet
## Accomplishments that we're proud of
* Making the stopwatch work (thanks Niels!)
* Animating the squirrel
* The splash art
* The art in general (huge props to Angela and Aigiarn)
## What we learned
* Most team members used Android Studio for the first time
* This was half of the team's first hackathon
* Niels and Ojetta are now *annoyingly* familiar with Android's Chronometer function
* Niels and Angela can now navigate the Android Studio Layout functions like pros!
* All team members are now aware they might be somewhat squirrel-obsessed
## What's next for Stronk Chonk
* Syncing data across all pages
* Adding the game element: High Striker Squirrel | ## Inspiration
The intricate nature of diagnosing and treating diseases, combined with the burdensome process of managing patient data, drove us to develop a solution that harnesses the power of AI. Our goal was to simplify and expedite healthcare decision-making while maintaining the highest standards of patient privacy.
## What it does
Percival automates data entry by seamlessly accepting inputs from various sources, including text, speech-to-text transcripts, and PDFs. It anonymizes patient information, organizes it into medical forms, and compares it against a secure vector database of similar cases. This allows us to provide doctors with potential diagnoses and tailored treatment recommendations for various diseases.
## How we use K-means clustering
To enhance the effectiveness of our recommendation system, we implemented a K-means clustering model using Databricks Open Source within our vector database. This model analyzes the symptoms and medical histories of patients to identify clusters of similar cases. By grouping patients with similar profiles, we can quickly retrieve relevant data that reflects shared symptoms and outcomes.
When a new patient record is entered, our system evaluates their symptoms and matches them against existing clusters in the database. This process allows us to provide doctors with recommendations that are not only data-driven but also highly relevant to the patient's unique situation. By leveraging the power of K-means clustering, we ensure that our recommendations are grounded in real-world patient data, improving the accuracy of diagnoses and treatment plans.
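Conceptually, the clustering and matching step looks like the sketch below, shown here with scikit-learn-style K-means rather than our actual Databricks pipeline; the symptom encodings and number of clusters are illustrative only:

```python
import numpy as np
from sklearn.cluster import KMeans

# Each row is an anonymized patient encoded as a fixed-length symptom/history vector
patient_vectors = np.array([
    [1, 0, 1, 0, 3],  # illustrative encodings only
    [1, 0, 1, 1, 2],
    [0, 1, 0, 0, 7],
    [0, 1, 0, 1, 6],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(patient_vectors)

# A new patient's symptoms are encoded the same way and matched to a cluster
new_patient = np.array([[1, 0, 1, 1, 3]])
cluster_id = kmeans.predict(new_patient)[0]

# The stored cases in that cluster ground the diagnosis/treatment recommendation
similar_cases = patient_vectors[kmeans.labels_ == cluster_id]
print(f"Cluster {cluster_id} contains {len(similar_cases)} similar cases")
```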
## How we built it
We employed a combination of technologies to bring Percival to life: Flask for server endpoint management, Cloudflare D1 for secure backend storage of user data and authentication, OpenAI Whisper for converting speech to text, the OpenAI API for populating PDF forms, Next.js for crafting a dynamic frontend experience, and finally Databricks Open-source for the K-means clustering to identify similar patients.
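For the speech-to-text piece, a stripped-down version of the idea is shown below, assuming the open-source `whisper` package behind a Flask route; the route name and model size are assumptions:

```python
import tempfile
import whisper
from flask import Flask, request, jsonify

app = Flask(__name__)
model = whisper.load_model("base")  # assumed model size

@app.route("/transcribe", methods=["POST"])
def transcribe():
    """Accept an uploaded audio note and return the raw transcript."""
    audio = request.files["audio"]
    with tempfile.NamedTemporaryFile(suffix=".wav") as tmp:
        audio.save(tmp.name)
        result = model.transcribe(tmp.name)
    # Downstream, the transcript is anonymized and mapped onto the medical form
    return jsonify({"transcript": result["text"]})
```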
## Challenges we ran into
While integrating speech-to-text capabilities, we faced numerous hurdles, particularly in ensuring the accurate conversion of doctors' verbal notes into structured data for medical forms. The task required overcoming technical challenges in merging Next.js with speech input and effectively parsing the output from the Whisper model.
## Accomplishments that we're proud of
We successfully integrated diverse technologies to create a cohesive and user-friendly platform. We take pride in Percival's ability to transform doctors' verbal notes into structured medical forms while ensuring complete data anonymization. Our achievement in combining Whisper’s speech-to-text capabilities with OpenAI's language models to automate diagnosis recommendations represents a significant advancement. Additionally, establishing a secure vector database for comparing anonymized patient data to provide treatment suggestions marks a crucial milestone in enhancing the efficiency and accuracy of healthcare tools.
## What we learned
The development journey taught us invaluable lessons about securely and efficiently handling sensitive healthcare data. We gained insights into the challenges of working with speech-to-text models in a medical context, especially when managing diverse and large inputs. Furthermore, we recognized the importance of balancing automation with human oversight, particularly in making critical healthcare diagnoses and treatment decisions.
## What's next for Percival
Looking ahead, we plan to broaden Percival's capabilities to diagnose a wider range of diseases beyond AIDS. Our focus will be on enhancing AI models to address more complex cases, incorporating multiple languages into our speech-to-text feature for global accessibility, and introducing real-time data processing from wearable devices and medical equipment. We also aim to refine our vector database to improve the speed and accuracy of patient-to-case comparisons, empowering doctors to make more informed and timely decisions. | partial |
## Inspiration
We often found ourselves stuck at the start of the design process, not knowing where to begin or how to turn our ideas into something real. In large organisations these issues are not only inconvenient and costly, but also slow down development. That is why we created ConArt AI to make it easier. It helps teams get their ideas out quickly and turn them into something real without all the confusion.
## What it does
ConArt AI is a gamified design application that helps artists and teams brainstorm ideas faster in the early stages of a project. Teams come together in a shared space where each person has to create a quick sketch and provide a prompt before the timer runs out. The sketches are then turned into images and everyone votes on their team's design where points are given from 1 to 5. This process encourages fast and fun brainstorming while helping teams quickly move from ideas to concepts. It makes collaboration more engaging and helps speed up the creative process.
## How we built it
We built ConArt AI using React for the frontend to create a smooth and responsive interface that allows for real-time collaboration. On the backend, we used Convex to handle game logic and state management, ensuring seamless communication between players during the sketching, voting, and scoring phases.
For the image generation, we integrated the Replicate API, which utilises AI models like ControlNet with Stable Diffusion to transform the sketches and prompts into full-fledged concept images. These API calls are managed through Convex actions, allowing for real-time updates and feedback loops.
The entire project is hosted on Vercel, which is officially supported by Convex, ensuring fast deployment and scaling. Convex especially enabled us to have a serverless experience which allowed us to not worry about extra infrastructure and focus more on the functions of our app. The combination of these technologies allows ConArt AI to deliver a gamified, collaborative experience.
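Our generation calls actually run inside Convex actions in TypeScript; purely as an illustration of the underlying request, the equivalent call with Replicate's Python client might look like the sketch below, with the model identifier left as a placeholder for a ControlNet scribble-to-image model:

```python
import replicate

def sketch_to_concept(sketch_path: str, prompt: str):
    """Send a player's sketch plus prompt to a hosted ControlNet / Stable Diffusion model."""
    output = replicate.run(
        "owner/controlnet-scribble:VERSION_HASH",  # placeholder model identifier
        input={"image": open(sketch_path, "rb"), "prompt": prompt},
    )
    # Depending on the model and client version, output is typically a list of image URLs or file objects
    return output[0]

concept = sketch_to_concept("castle_sketch.png", "a floating castle at sunset, concept art")
```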
## Challenges we ran into
We faced several challenges while building ConArt AI. One of the key issues was with routing in production, where we had to troubleshoot differences between development and live environments. We also encountered challenges in managing server vs. client-side actions, particularly ensuring smooth, real-time updates. Additionally, we had some difficulties with responsive design, ensuring the app looked and worked well across different devices and screen sizes. These challenges pushed us to refine our approach and improve the overall performance of the application.
## Accomplishments that we're proud of
We’re incredibly proud of several key accomplishments from this hackathon.
Nikhil: Learned how to use a new service like Convex during the hackathon, adapting quickly to integrate it into our project.
Ben: Instead of just showcasing a local demo, he managed to finish and fully deploy the project by the end of the hackathon, which is a huge achievement.
Shireen: Completed the UI/UX design of a website in under 36 hours for the first time, while also planning our pitch and brand identity, all during her first hackathon.
Ryushen: He worked on building React components and the frontend, ensuring the UI/UX looked pretty, while also helping to craft an awesome pitch.
Overall, we’re most proud of how well we worked as a team. Every person filled their role and brought the project to completion, and we’re happy to have made new friends along the way!
## What we learned
We learned how to effectively use Convex by studying its documentation, which helped us manage real-time state and game logic for features like live sketching, voting, and scoring. We also learned how to trigger external API calls, like image generation with Replicate, through Convex actions, making the integration of AI seamless. On top of that, we improved our collaboration as a team, dividing tasks efficiently and troubleshooting together, which was key to building ConArt AI successfully.
## What's next for ConArt AI
We plan to incorporate user profiles in order to let users personalise their experience and track their creative contributions over time. We will also be adding a feature to save concept art, allowing teams to store and revisit their designs for future reference or iteration. These updates will enhance collaboration and creativity, making ConArt AI even more valuable for artists and teams working on long-term projects. | ## Inspiration
A chatbot is often described as one of the most advanced and promising expressions of interaction between humans and machines. For this reason, we wanted to create one in order to become familiar with Natural Language Processing and deep learning through neural networks.
Due to the current pandemic, we are truly living in an unprecedented time. As the virus continues to spread, it is important for all citizens to stay educated and informed about the pandemic. So, we decided to give back to communities by designing a chatbot named Rona whom users can talk to and get the latest information regarding COVID-19.
(This bot is designed to function similarly to the ones used on websites of companies such as Amazon or Microsoft, where users can interact with the bot to ask questions they would normally ask a customer service representative, except that, through the power of AI and deep learning, the bot can answer these questions for the customer on its own.)
## What it does
Rona answers questions the user has regarding COVID-19.
More specifically, the training data we fed into our feed-forward neural network to train Rona falls under 5 categories:
* Deaths from COVID-19
* Symptoms of COVID-19
* Current Cases of COVID-19
* Medicines/Vaccines
* New Technology/Start-up Companies working to fight coronavirus
We also added three more categories of data for Rona to learn, those being greetings, thanks and goodbyes, so the user can have a conversation with Rona which is more human-like.
## How we built it
First, we had to create our training data. Commonly referred to as 'intents', the data we used to train Rona consisted of different phrases that a user could potentially ask. We split up all of our intents into 7 categories, which we listed above, and these were called 'tags'. Under each tag, we provided Rona with several phrases the user could ask about that tag, and also gave it responses to choose from when answering questions related to that tag. Once the intents were made, we put this data in a JSON file for easy access in the rest of the project.
Second, we had to apply three natural language processing techniques to the data before it was fed into our training model: bag-of-words, tokenization, and stemming. Bag-of-words takes a phrase (each of which was listed under a tag) and creates an array of all the words in that phrase, making sure there are no repeated words. This array was assigned to an x-variable, while a second y-variable delineated which tag the bag-of-words belonged to. After these bags-of-words were created, tokenization was applied to each one, splitting it up even further into individual words, special characters (like @, #, $, etc.), and punctuation. Finally, stemming applied a crude heuristic: it chopped off the ending suffixes of words ("organize" and "organizes" both become "organ"), and the array was replaced again with these new elements. These three steps were necessary because the training model is much more effective when the data is pre-processed in this way, down to its most fundamental form.
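A condensed version of that preprocessing, assuming NLTK's tokenizer and Porter stemmer, looks like this:

```python
import numpy as np
import nltk
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()  # nltk.download("punkt") is needed once for the tokenizer

def tokenize(sentence):
    return nltk.word_tokenize(sentence)

def stem(word):
    return stemmer.stem(word.lower())

def bag_of_words(tokenized_sentence, all_words):
    """1.0 for every known word that appears (stemmed) in the sentence, 0.0 otherwise."""
    stems = {stem(w) for w in tokenized_sentence}
    return np.array([1.0 if w in stems else 0.0 for w in all_words], dtype=np.float32)

all_words = [stem(w) for w in ["how", "many", "deaths", "symptoms", "vaccine"]]
print(bag_of_words(tokenize("What are the symptoms?"), all_words))  # -> [0. 0. 0. 1. 0.]
```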
Next, we made the actual training model: a feed-forward neural network with 2 hidden layers. The first step was to set the hyper-parameters, which is standard procedure for all neural networks; these are variables the user can adjust to control how the model trains and how accurate it becomes. The network starts with 3 linear layers, which take in the pre-processed data, and their outputs are passed through activation functions. An activation function outputs a small value for small inputs and a larger value if its inputs exceed a threshold; if the inputs are large enough, the activation function "fires", otherwise it does nothing. In other words, an activation function is like a gate that checks whether an incoming value is greater than a critical number.
Once training was complete, the final model was saved to a 'data.pth' file using PyTorch's save method.
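A minimal sketch of this architecture in PyTorch is shown below; the layer sizes are placeholders, not the exact hyper-parameter values we used.

```python
# Feed-forward network with 2 hidden layers, saved the same way as described above.
import torch
import torch.nn as nn

class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super().__init__()
        # three linear layers with activation functions between them (2 hidden layers)
        self.l1 = nn.Linear(input_size, hidden_size)
        self.l2 = nn.Linear(hidden_size, hidden_size)
        self.l3 = nn.Linear(hidden_size, num_classes)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.l1(x))
        out = self.relu(self.l2(out))
        return self.l3(out)  # raw scores per tag; cross-entropy loss is applied during training

# illustrative sizes: vocabulary size in, one score per tag out
model = NeuralNet(input_size=54, hidden_size=8, num_classes=8)
torch.save({"model_state": model.state_dict()}, "data.pth")
```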
## Challenges we ran into
The most obvious challenge was simply the time constraint. We spent most of our time trying to make sure the training model was efficient, and had to look up several different articles and tutorials on the correct methodology and APIs to use. NumPy and PyTorch turned out to be the best ones.
## Accomplishments that we're proud of
This was our first deep-learning project, so we are very proud of completing at least a basic prototype. Although we were aware of NLP techniques such as stemming and tokenization, this was our first time actually putting them into practice. We have created basic neural nets in the past, but never a feed-forward network that produces a complete trained model as its output.
## What we learned
We learned a lot about deep learning, neural nets, and how AI is trained for communication in general. This was a big step up for us in Machine Learning.
## What's next for Rona: Deep Learning Chatbot for COVID-19
We will definitely improve on this in the future by updating the model, providing a lot more types of questions/data related to COVID-19 for Rona to be trained on, and potentially creating a complete service or platform for users to interact with Rona easily. | ## Inspiration
Inspired by the learning incentives offered by Duolingo, and an idea from a real customer (Shray's 9 year old cousin), we wanted to **elevate the learning experience by integrating modern technologies**, incentivizing students to learn better and teaching them about different school subjects, AI, and NFTs simultaneously.
## What it does
It is an educational app, offering two views, Student and Teacher. On Student view, compete with others in your class through a leaderboard by solving questions correctly and earning points. If you get questions wrong, you have the chance to get feedback from Together.ai's Mistral model. Use your points to redeem cool NFT characters and show them off to your peers/classmates in your profile collection!
For Teachers, manage students and classes and see how each student is doing.
## How we built it
Built using TypeScript, React Native, and Expo, it is a quickly deployable mobile app. We also used Together.ai for our AI-generated hints and feedback, and CrossMint for verifiable credentials and for managing transactions with Stable Diffusion-generated NFTs.
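To give a feel for the hint-generation step, here is a rough Python sketch of the kind of request involved (the app itself is TypeScript); the endpoint path follows Together AI's OpenAI-compatible chat API, and the model name, prompt wording, and `TOGETHER_API_KEY` are assumptions rather than our exact configuration.

```python
# Illustrative sketch: ask a Mistral-family model for a hint about a wrong answer.
import os
import requests

def get_hint(question: str, wrong_answer: str) -> str:
    resp = requests.post(
        "https://api.together.xyz/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"},
        json={
            "model": "mistralai/Mistral-7B-Instruct-v0.2",  # assumed model identifier
            "messages": [
                {"role": "system", "content": "You are a friendly tutor. Give a short hint, not the answer."},
                {"role": "user", "content": f"Question: {question}\nMy answer: {wrong_answer}\nWhy is it wrong?"},
            ],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```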
## Challenges we ran into
We had some trouble deciding which AI models to use, but settled on Together.ai's API calls for its ease of use and flexibility. Initially, we wanted to do AI generated questions but understandably, these had some errors so we decided to use AI to provide hints and feedback when a student gets a question wrong. Using CrossMint and creating our stable diffusion NFT marketplace was also challenging, but we are proud of how we successfully incorporated it and allowed each student to manage their wallets and collections in a fun and engaging way.
## Accomplishments that we're proud of
Using Together.ai and CrossMint for the first time, and implementing numerous features, such as a robust AI helper to help with any missed questions, and allowing users to buy and collect NFTs directly on the app.
## What we learned
Learned a lot about NFTs, stable diffusion, how to efficiently prompt AIs, and how to incorporate all of this into an Expo React Native app.
Also met a lot of cool people and sponsors at this event and loved our time at TreeHacks!
## What's next for MindMint: Empowering Education with AI & NFTs
Our priority is to incorporate a spaced repetition-styled learning algorithm, similar to what Anki does, to tailor the learning curves of various students and help them understand difficult and challenging concepts efficiently.
In the future, we would want to have more subjects and grade levels, and allow teachers to input questions for students to solve. Another interesting idea we had was to create a mini real-time interactive game for students to play among themselves, so they can encourage and motivate each other.
## Inspiration
This year's theme of Nostalgia reminded us of our childhoods, reading stories and being so immersed in them. As a result, we created Mememto as a way for us to collectively look back on the past from the retelling of it through thrilling and exciting stories.
## What it does
We created a web application that asks users to input an image, date, and brief description of the memory associated with the provided image. Doing so, users are then given a generated story full of emotions, allowing them to relive the past in a unique and comforting way. Users are also able to connect with others on the platform and even create groups with each other.
## How we built it
Thanks to Taipy and Cohere, we were able to bring this application to life. Taipy supplied both the necessary front-end and back-end components. Additionally, Cohere enabled story generation through natural language processing (NLP) via their POST chat endpoint (<https://api.cohere.ai/v1/chat>).
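A minimal Python sketch of the story-generation call is shown below, using the v1/chat endpoint mentioned above; the prompt wording, `COHERE_API_KEY`, and response parsing are simplified assumptions rather than our exact code.

```python
# Illustrative sketch: turn a memory's date and description into a short story via Cohere chat.
import os
import requests

def generate_story(date: str, description: str) -> str:
    resp = requests.post(
        "https://api.cohere.ai/v1/chat",
        headers={"Authorization": f"Bearer {os.environ['COHERE_API_KEY']}"},
        json={
            "message": (
                f"Write a short, emotional story retelling this memory from {date}: {description}"
            )
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["text"]
```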
## Challenges we ran into
Mastering Taipy presented a significant challenge. Due to its novelty, we had difficulty styling freely and were constrained by its syntax. Setting up virtual environments also posed challenges initially, but ultimately we learned the proper setup.
## Accomplishments that we're proud of
* We were able to build a web application that functions
* We were able to use Taipy and Cohere to build a functional application
## What we learned
* We were able to learn a lot about the Taipy library, Cohere, and Figma
## What's next for Memento
* Adding login and sign-up
* Improving front-end design
* Adding image processing that can identify entities within the user-provided image and use that information, along with the brief description of the photo, to produce a more accurate story that resonates with the user
* Saving and storing data | ## Inspiration 💡
Our inspiration for this project was to leverage new AI technologies such as text to image, text generation and natural language processing to enhance the education space. We wanted to harness the power of machine learning to inspire creativity and improve the way students learn and interact with educational content. We believe that these cutting-edge technologies have the potential to revolutionize education and make learning more engaging, interactive, and personalized.
## What it does 🎮
Our project is a text and image generation tool that uses machine learning to create stories from prompts given by the user. The user can input a prompt, and the tool will generate a story with corresponding text and images. The user can also specify certain attributes such as characters, settings, and emotions to influence the story's outcome. Additionally, the tool allows users to export the generated story as a downloadable book in the PDF format. The goal of this project is to make story-telling interactive and fun for users.
## How we built it 🔨
We built our project using a combination of front-end and back-end technologies. For the front-end, we used React, which allows us to create interactive user interfaces. On the back-end side, we chose Go as our main programming language and used the Gin framework to handle concurrency and scalability. To handle communication between the resource-intensive back-end tasks, we used a combination of RabbitMQ as the message broker and Celery as the work queue. These technologies allowed us to efficiently handle the flow of data and messages between the different components of our project.
To generate the text and images for the stories, we leveraged the power of OpenAI's DALL-E-2 and GPT-3 models. These models are state-of-the-art in their respective fields and allow us to generate high-quality text and images for our stories. To improve the performance of our system, we used MongoDB to cache images and prompts. This allows us to quickly retrieve data without having to re-process it every time it is requested. To minimize the load on the server, we used socket.io for real-time communication: it allows us to keep the HTTP connection open, and once the work queue is done processing data, it sends a notification to the React client.
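Since Celery workers are defined in Python, here is a minimal sketch of how a story-generation job can be queued through RabbitMQ; the broker URL, task name, and the `generate_story_text` / `generate_story_image` helpers are placeholders standing in for the GPT-3 and DALL-E calls.

```python
# Illustrative Celery worker: heavy generation runs off the web server's request path.
from celery import Celery

app = Celery("dream", broker="amqp://guest:guest@localhost//", backend="rpc://")

def generate_story_text(prompt: str) -> str:
    # placeholder for the GPT-3 call
    return f"Once upon a time: {prompt}"

def generate_story_image(prompt: str) -> str:
    # placeholder for the DALL-E call; the real system returns an image URL
    return "https://example.com/generated.png"

@app.task(name="dream.generate_story")
def generate_story(prompt: str) -> dict:
    # the worker does the slow work; the client is notified over socket.io when it finishes
    return {"text": generate_story_text(prompt), "image_url": generate_story_image(prompt)}

# enqueued from the API layer, e.g.:
# generate_story.delay("a fox who learns to paint")
```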
## Challenges we ran into 🚩
One of the challenges we ran into during the development of this project was converting the generated text and images into a PDF format within the React front-end. There were several libraries available for this task, but many of them did not work well with the specific version of React we were using. Additionally, some of the libraries required additional configuration and setup, which added complexity to the project. We had to spend a significant amount of time researching and testing different solutions before we were able to find a library that worked well with our project and was easy to integrate into our codebase. This challenge highlighted the importance of thorough testing and research when working with new technologies and libraries.
## Accomplishments that we're proud of ⭐
One of the accomplishments we are most proud of in this project is our ability to leverage the latest technologies, particularly machine learning, to enhance the user experience. By incorporating natural language processing and image generation, we were able to create a tool that can generate high-quality stories with corresponding text and images. This not only makes the process of story-telling more interactive and fun, but also allows users to create unique and personalized stories.
## What we learned 📚
Throughout the development of this project, we learned a lot about building highly scalable data pipelines and infrastructure. We discovered the importance of choosing the right technology stack and tools to handle large amounts of data and ensure efficient communication between different components of the system. We also learned the importance of thorough testing and research when working with new technologies and libraries.
We also learned about the importance of using message brokers and work queues to handle data flow and communication between different components of the system, which allowed us to create a more robust and scalable infrastructure. We also learned about the use of NoSQL databases, such as MongoDB to cache data and improve performance. Additionally, we learned about the importance of using socket.io for real-time communication, which can minimize the load on the server.
Overall, we learned about the importance of using the right tools and technologies to build a highly scalable and efficient data pipeline and infrastructure, which is a critical component of any large-scale project.
## What's next for Dream.ai 🚀
There are several exciting features and improvements that we plan to implement in the future for Dream.ai. One of the main focuses will be on allowing users to export their generated stories to YouTube. This will allow users to easily share their stories with a wider audience and potentially reach a larger audience.
Another feature we plan to implement is user history. This will allow users to save and revisit old prompts and stories they have created, making it easier for them to pick up where they left off. We also plan to allow users to share their prompts on the site with other users, which will allow them to collaborate and create stories together.
Finally, we are planning to improve the overall user experience by incorporating more customization options, such as the ability to select different themes, characters and settings. We believe these features will further enhance the interactive and fun nature of the tool, making it even more engaging for users. | ## What it does
We made two front ends to demonstrate the capabilities of alexaMD, which extrapolates input from users to determine the likelihood of various diseases with confidence scores.
## How we built it
By scraping Mayo Clinic, a comprehensive medical database, we were able to compile information associated with illnesses and their characteristics. Using Watson's Natural Language Classifier suite, we integrated its natural language processing capabilities with Alexa's clear voice input to provide a seamless way to deliver medical diagnosis.
## Challenges we ran into
Extracting data from Mayo Clinic's 400+ articles and integrating it with IBM Watson and Amazon AWS Lambda.
## What we learned
Various techniques of efficiently processing large amounts of data and learning all the APIs needed.
## What's next for alexaMD
Scaling to extrapolate information from new research papers and expanding to suggest cures/remedies for possible illnesses.
## Inspiration
In today's always-on world, we are more connected than ever. The internet is an amazing way to connect to those close to us, however it is also used to spread hateful messages to others. Our inspiration was taken from a surprisingly common issue among YouTubers and other people prominent on social media: That negative comments (even from anonymous strangers) hurts more than people realise. There have been cases of YouTubers developing mental illnesses like depression as a result of consistently receiving negative (and hateful) comments on the internet. We decided that this overlooked issue deserved to be brought to attention, and that we could develop a solution not only for these individuals, but the rest of us as well.
## What it does
Blok.it is a Google Chrome extension that analyzes web content for any hateful messages or content and renders it unreadable to the user. Rather than just censoring a particular word or words, the entire phrase or web element is censored. The HTML and CSS formatting remains, so nothing funky happens to the layout and design of the website.
## How we built it
The majority of the app is built in JavaScript and jQuery, with some HTML and CSS for interaction with the user.
## Challenges we ran into
Working with Chrome extensions was something very new to us and we had to learn some new JS in order to tackle this challenge. We also ran into the issue of spending too much time deciding on an idea and how to implement it.
## Accomplishments that we're proud of
Managing to create something after starting and scrapping multiple different projects (this was our third or fourth project and we started pretty late).
## What we learned
* Learned how to make Chrome extensions
* Improved our JS ability
* Learned how to work with a new group of people (all of us are first-time hackathon-ers and none of us had extensive software experience)
## What's next for Blok.it
Improving the censoring algorithms. Most hateful messages are censored, but some non-hateful messages are being inadvertently marked as hateful and being censored as well. Getting rid of these false positives is first on our list of future goals. | ## Inspiration
We wanted to allow financial investors and people of political backgrounds to save valuable time reading financial and political articles by showing them what truly matters in the article, while highlighting the author's personal sentimental/political biases.
We also wanted to promote objectivity and news literacy in the general public by making them aware of syntax and vocabulary manipulation. We hope that others are inspired to be more critical of wording and truly see the real news behind the sentiment -- especially considering today's current events.
## What it does
Using Indico's machine learning textual analysis API, we created a Google Chrome extension and web application that allows users to **analyze financial/news articles for political bias, sentiment, positivity, and significant keywords.** Based on a short glance on our visualized data, users can immediately gauge if the article is worth further reading in their valuable time based on their own views.
The Google Chrome extension allows users to analyze their articles in real-time, with a single button press, popping up a minimalistic window with visualized data. The web application allows users to more thoroughly analyze their articles, adding highlights to keywords in the article on top of the previous functions so users can get to reading the most important parts.
Though there is a possibility of opening this to the general public, we see tremendous opportunity in the financial and political sector in optimizing time and wording.
## How we built it
We used Indico's machine learning textual analysis API, React, NodeJS, JavaScript, MongoDB, HTML5, and CSS3 to create the Google Chrome Extension, web application, back-end server, and database.
## Challenges we ran into
Surprisingly, one of the more challenging parts was implementing a performant Chrome extension. Design patterns we knew had to be put aside to follow a specific one, which we gradually aligned with. It was overall a good experience using Google's APIs.
## Accomplishments that we're proud of
We are especially proud of being able to launch a minimalist Google Chrome extension in tandem with a web application, allowing users to either analyze news articles at their leisure or in a more professional setting. We reached several of our stretch goals, and couldn't have done it without the amazing team dynamic we had.
## What we learned
Trusting your teammates to tackle goals they had never tackled before, understanding compromise, and putting the team ahead of personal views were what made this hackathon one of the most memorable for everyone. Emotional intelligence played just as important a role as technical intelligence, and we learned all the better how rewarding and exciting it can be when everyone's rowing in the same direction.
## What's next for Need 2 Know
We would like to consider what we have now as a proof of concept. There is so much growing potential, and we hope to further work together in making a more professional product capable of automatically parsing entire sites, detecting new articles in real-time, working with big data to visualize news sites differences/biases, topic-centric analysis, and more. Working on this product has been a real eye-opener, and we're excited for the future. | ## Inspiration
Studies have shown that social media thrives on emotional and moral content, particularly content that is angry in nature. Similar studies have shown that these types of posts can affect people's well-being, mental health, and view of the world. We wanted to let people take control of their feed and gain insight into the potentially toxic accounts on their social media feed, so they can ultimately decide what to keep and what to remove while putting their mental well-being first. We want to make social media a place for knowledge and positivity, without the anger and hate that it can fuel.
## What it does
The app performs an analysis on all the Twitter accounts the user follows and reads the tweets, checking for negative language and tone. Using machine learning algorithms, the app can detect negative and potentially toxic tweets and accounts to warn users of the potential impact, while giving them the option to act how they see fit with this new information. In creating this app and its purpose, the goal is to **put the user first** and empower them with data.
## How We Built It
We wanted to make this application as accessible as possible, and in doing so, we made it with React Native so both iOS and Android users can use it. We used Twitter OAuth to be able to access who they follow and their tweets while **never storing their token** for privacy and security reasons.
The app sends the request to our web server, written in Kotlin and hosted on Google App Engine, where it uses Twitter's API and Google's Machine Learning APIs to perform the analysis and send the data back to the client. By using a multi-threaded approach for the tasks, we streamlined the workflow and cut response time roughly **sevenfold**, now being able to manage an order of magnitude more data. On top of that, we integrated GitHub Actions into our project, and, for a hackathon mind you, we have a *full continuous deployment* setup from our IDE to Google Cloud Platform.
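Our server is written in Kotlin, but the batching idea is easiest to show in Python: many tweets are packed into one document so a single Natural Language API call returns per-sentence sentiment. The newline separator is our own convention, and the sentence-to-tweet mapping is approximate.

```python
# Illustrative sketch of batched sentiment scoring with the google-cloud-language client.
from google.cloud import language_v1

def batch_sentiment(tweets: list[str]) -> list[float]:
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content="\n".join(tweets),  # one request instead of one request per tweet
        type_=language_v1.Document.Type.PLAIN_TEXT,
    )
    response = client.analyze_sentiment(request={"document": document})
    # each sentence gets a score from -1 (negative) to +1 (positive)
    return [sentence.sentiment.score for sentence in response.sentences]
```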
## Challenges we ran into
* While library and API integration was no problem in Kotlin, we had to find workarounds for issues regarding GCP deployment and local testing with Google's APIs
* Since being cross-platform was our priority, we had issues integrating OAuth with its requirement for platform access (specifically for callbacks).
* If each tweet were sent individually to Google's ML API, each user could easily have required over 1,000 requests, overreaching our limit. Using our technique of packaging the tweets together, even though it is unsupported, we were able to reduce those requests to a maximum of 200, well below our limits.
## What's next for pHeed
pHeed has a long journey ahead: from integrating with more social media platforms to new features such as account toxicity tracking and account suggestions. The social media space is one that is rapidly growing and needs a user-first approach to keep it sustainable; ultimately, pHeed can play a strong role in user empowerment and social good. | winning |
## Check it out on GitHub!
The machine learning and web app segments are split into 2 different branches. Make sure to switch to these branches to see the source code! You can view the repository [here](https://github.com/SuddenlyBananas/be-right-back/).
## Inspiration
Inspired in part by the Black Mirror episode of the same title (though we had similar thoughts before we made the connection).
## What it does
The goal of the project is to be able to talk to a neural net simulation of your Facebook friends you've had conversations with. It uses a standard base model and customizes it based on message upload input. However, we ran into some struggles that prevented the full achievement of this goal.
The user downloads their message history data and uploads it to the site. Then, they can theoretically ask the bot to emulate one of their friends and the bot customizes the neural net model to fit the friend in question.
## How we built it
TensorFlow for the machine learning aspect, Node.js and HTML5 for the data-managing website, and Python for data scraping. Users can interact with the data through a Facebook Messenger chat bot.
## Challenges we ran into
AWS wouldn't let us rent a GPU-based EC2 instance, and Azure didn't show anything for us either. Thus, training took much longer than expected.
In fact, we had to run back to an apartment at 5 AM to try to run it on a desktop with a GPU... which didn't end up working (as we found out when we got back half an hour after starting the training set).
The Facebook API proved to be more complex than expected, especially negotiating the 2 different user IDs assigned to Facebook and Messenger user accounts.
## Accomplishments that we're proud of
Getting a mostly functional machine learning model that can be interacted with live via a Facebook Messenger Chat Bot.
## What we learned
Communication between many different components of the app; specifically the machine learning server, data parsing script, web server, and Facebook app.
## What's next for Be Right Back
We would like to fully realize the goals of this project by training the model on a bigger data set and allowing more customization to specific users. | This project was developed with the RBC challenge in mind of developing the Help Desk of the future.
## What inspired us
We were inspired by our motivation to improve the world of work.
## Background
If we want technical support, we usually contact companies by phone, which is slow and painful for both users and technical support agents, especially when questions are obvious. Our solution is an online chat that responds to people immediately using our own bank of answers. This is a portable and a scalable solution.
## Try it!
<http://www.rbcH.tech>
## What we learned
Using NLP, dealing with devilish CORS, implementing docker successfully, struggling with Kubernetes.
## How we built
* Node.js for our servers (one server for our webapp, one for BotFront)
* React for our front-end
* Rasa-based Botfront, which is the REST API we are calling for each user interaction
* We wrote our own Botfront database during the last day and night
## Our philosophy for delivery
Hack, hack, hack until it works. Léonard was our webapp and backend expert, François built the DevOps side of things, Han solidified the front end, and Antoine wrote and tested the data for NLP.
## Challenges we faced
Learning brand new technologies is sometimes difficult! Kubernetes (and CORS) brought us some pain... and new skills and confidence.
## Our code
<https://github.com/ntnco/mchacks/>
## Our training data
<https://github.com/lool01/mchack-training-data> | ## Inspiration
We wanted to create something fun and innovative with futuristic technologies using an old classic game.
## What it does
Club Penguin VR brings you the classic experience of selecting your penguin, walking through a charming town, and dancing in the nightclub! We used computer vision so that your penguin actually does your dance moves, and there's a giant mirror in the club room for you to watch yourself jam out on the colorful dance floor while surrounded with fellow penguins.
## How we built it
We used Unity as the game engine, C# to code, Google Daydream as the VR device, Blender to model the buildings and rig the penguins, and TensorFlow and OpenCV and Python to build the computer vision model.
## Challenges we ran into
We were unable to find a teammate with prior experience with computer vision, so we had to learn and create it, which was a long process. We also had to build many of our own 3D models for the game. Linking a deployed VR android app with Firebase while recording your body movements for the dancing was also something we had to design around.
## Accomplishments that we're proud of
We were able to do some justice to the old classic game, now including the ability to dance your own dances. The penguin had a slightly different rigging structure than a humanoid, but we were able to map our movements and joints to those of the penguin to the best of our abilities during the hackathon. It's quite fun to watch your penguin dance your dance!
## What's next for Club Penguin VR
Uh, yea, make all of Club Penguin of course. Snowball fights and the minecart race! We originally wanted to do the minecart race with computer vision as well, watching for body tilting. However, we ran out of time to model the entire track.
## Inspiration
Having experienced a language barrier firsthand, witnessing its effects on family, and reflecting on equity in services inspired our team to create a resource to help Canadian newcomers navigate their new home.
Newt aims to reduce one of the most stressful aspects of the immigrant experience by promoting more equitable access to services.
## What it does
We believe that everyone deserves equal access to health, financial, legal, and other services. Newt displays ratings on how well businesses can accommodate a user's first language, allowing newcomers to make more informed choices based on their needs.
When searching for a particular service, we use a map to display several options and their ratings for the user's first language. Users can then contact businesses by writing a message in their language of choice. Newt automatically translates the message and sends a text to the business provider containing the original and translated message as well as the user's contact information and preferred language of correspondence.
## How we built it
Frontend: React, Typescript
Backend: Python, Flask, PostgreSQL, Infobip API, Yelp API, Google Translate, Docker
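A simplified Python sketch of the contact flow described above (translate the visitor's message, then text the business through Infobip) is shown below. The Infobip base URL and API key environment variables are placeholders, the endpoint path follows Infobip's standard SMS API, and the message formatting is an assumption rather than our exact implementation.

```python
# Illustrative back-end helper: translate a message, then notify the business by SMS.
import os
import requests
from google.cloud import translate_v2 as translate

def translate_text(message: str, target_language: str) -> str:
    client = translate.Client()
    return client.translate(message, target_language=target_language)["translatedText"]

def contact_business(message: str, business_phone: str, business_language: str, user_contact: str):
    translated = translate_text(message, business_language)
    body = (
        f"New inquiry via Newt\nOriginal: {message}\nTranslated: {translated}\n"
        f"Contact: {user_contact}"
    )
    resp = requests.post(
        f"https://{os.environ['INFOBIP_BASE_URL']}/sms/2/text/advanced",
        headers={"Authorization": f"App {os.environ['INFOBIP_API_KEY']}"},
        json={"messages": [{"destinations": [{"to": business_phone}], "text": body}]},
        timeout=30,
    )
    resp.raise_for_status()
```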
## Challenges we ran into
Representing location data within our relational database was challenging. It would not be feasible to store every possible location that users might search for within the database. We needed to find a balance between sourcing data from the Yelp API and updating the database using the results without creating unnecessary duplicates.
## What we learned
We learned to display location data through an interactive map. To do so, we learned about react-leaflet to embed maps on React webpages. In the backend, we learned to use Infobip by reviewing related documentation, experimenting with test data, and with the help of Hack Western's sponsors. Lastly, we challenged ourselves to write unit tests for our backend functions and integrate testing within GitHub Actions to ensure every code contribution was safe.
## What's next for Newts
* Further support for translating the frontend display in each user's first language.
* Expanding backend data sources beyond the Yelp API and including other data sources more specific to user queries | ## Inspiration
Inspired by the millions of students around the world who swear that if they were just in another country travelling their lives would be so much better. We assure you that the grass is not always greener on the other side.
## What it does
The website collects reviews on the worst restaurants, hotels, and attractions and creates the worst possible itinerary for each city around the world. Based on the user’s deepest desires, it outputs the worst possible places to be in their dream city. The site also hosts a chat, comment, and like component where you can discuss itineraries and particularly distasteful sites. In the spirit of social distancing but still bringing people together, the website also utilises a matching algorithm to match similar itineraries together, allowing you to find a travel buddy for the worst trip ever.
## How We built it
This project was built on sheer determination and iMovie.
In all seriousness, this project was built with a variety of different tools as we all brought our unique perspective to the table.
* Front-end: React, TypeScript, JavaScript
* Back-end: Node.js, Express.js, Firebase, Twilio API, Places API
* Design: Figma
For greater detail, please check out our video!
## Challenges we ran into
A challenge we encountered was finding the review data. It seems like no one wants to report on terrible establishments, but we were able to find a way to get the data we needed (shout out to the Places API). Another issue was connecting the front-end and the back-end, but we did it, woo-hoo.
## Accomplishments that we're proud of
It works for the most part!
We worked together to create a fully functioning front end and had very smooth design to developer handoff, implemented machine learning algorithms for the second time ever, and created a web app in multiple languages!
## What we learned
Some restaurants are really disgusting. Oh, and we all learned a lot about full-stack development and how to integrate APIs and databases.
## What's next for Trinogo
Increase functionality and create new features. First, we get reviews on all the restaurants in the world, next, world domination. | ## Inspiration
Most of us have had to visit loved ones in the hospital before, and we know that it is often not a great experience. The process can be overwhelming between knowing where they are, which doctor is treating them, what medications they need to take, and worrying about their status overall. We decided that there had to be a better way and that we would create that better way, which brings us to EZ-Med!
## What it does
This web app changes the logistics involved when visiting people in the hospital. Some of our primary features include a home page with a variety of patient updates that give the patient's current status based on recent logs from the doctors and nurses. The next page is the resources page, which is meant to connect the family with the medical team assigned to your loved one and provide resources for any grief or hardships associated with having a loved one in the hospital. The map page shows the user how to navigate to the patient they are trying to visit and then back to the parking lot after their visit since we know that these small details can be frustrating during a hospital visit. Lastly, we have a patient update and login screen, where they both run on a database we set and populate the information to the patient updates screen and validate the login data.
## How we built it
We built this web app using a variety of technologies. We used React to create the web app and MongoDB for the backend/database component of the app. We then decided to utilize the MappedIn SDK to integrate our map service seamlessly into the project.
## Challenges we ran into
We ran into many challenges during this hackathon, but we learned a lot through the struggle. Our first challenge was trying to use React Native. We tried this for hours, but after much confusion around redirect issues, we had to completely change course around halfway in :( In the end, we learnt a lot about the React development process, came out of this event much more experienced, and built the best product possible in the time we had left.
## Accomplishments that we're proud of
We're proud that we could pivot after much struggle with React. We are also proud that we decided to explore the area of healthcare, which none of us had ever interacted with in a programming setting.
## What we learned
We learned a lot about React and MongoDB, two frameworks we had minimal experience with before this hackathon. We also learned about the value of playing around with different frameworks before committing to using them for an entire hackathon, haha!
## What's next for EZ-Med
The next step for EZ-Med is to iron out all the bugs and have it fully functioning. | partial |
## Inspiration
Patreon!
## What it does
We empower creators to directly upload their content to the blockchain and then sell their personal "creator coins" to their audiences. The more creator coins a fan owns, the more content from their creator they are able to see. The purpose of our application in the modern world is to create a place of equality where everyone is able to express their own personal beliefs on social media. Through blockchain, there will be no censorship or restrictions on users posts, while increasing security and allowing users to monetize their posts.
## How we built it
We leveraged the DeSo API to handle posting to the blockchain, fetching data, and authentication. We constructed our frontend with the ReactJS framework and served a Node.js/Express server that communicated to a MongoDB backend for data storage that the DeSo API couldn't handle. We used CSS to style all of our components and pages written with React framework, to make them all tie together to a visual theme. After completing the prototype we hosted it on a server with a custom domain name.
## Challenges we ran into
We ran into some difficulty using the DeSo API, as some of the documentation was still under construction. Due to the complexity of the API, it also requires fundamental knowledge about blockchain, such as transactions and validation, so we struggled a little bit to get it running. We spent a lot of time making API calls without any crypto in our wallets, not realizing that some of the calls required gas fees.
## Accomplishments that we're proud of
We were able to come up with a creative idea and get the DeSo API up and running to deploy a working prototype of our project. We are also extremely proud of our UI, as it really does give the feel of a real social media platform.
## What we learned
How to read and use large scale API documentation. | ## Inspiration
The inspiration for this project was both personal experience and the presentation from Ample Labs during the opening ceremony. Last summer, Ryan was preparing to run a summer computer camp and taking registrations and payment on a website. A mother reached out to ask if we had any discounts available for low-income families. We have offered some in the past, but don't advertise for fear of misuse of the discounts by average or high-income families. We also wanted a way to verify this person's income. If we had WeProsper, verification would have been easy. In addition to the issues associated with income verification, it is likely that there are many programs out there (like the computer camps discounts) that low-income families aren't aware of. Ample Labs' presentation inspired us with the power of connecting people with services they should be able to access but aren't aware of. WeProsper would help low-income families be aware of the services available to them at a discount (transit passes, for another example) and verify their income easily in one place so they can access the services that they need without bundles of income verification paperwork. As such, WeProsper gives low-income families a chance to prosper and improve financial stability. By doing this, WeProsper would increase social mobility in our communities long-term.
## What it does
WeProsper provides a login system which allows users to verify their income by uploading a PDF of their notice of assessment or income proof documents from the CRA and visit service providers posted on the service with a unique code the service provider can verify with us to purchase the service. Unfortunately, not all of this functionality is implemented just yet. The login system works with Auth0, but the app mainly runs with dummy data otherwise.
## How We built it
We used Auth0, React, and UiPath to read the PDF during our on-site demo. UiPath would need to be replaced in the future with a file upload on the site. The site is made with the standard web technologies HTML, CSS, and JavaScript.
## Challenges We ran into
The team was working with technologies that are new to us, so a lot of the hackathon was spent learning these technologies. These technologies include UiPath and React.
## Accomplishments that we're proud of
We believe WeProsper has a great value proposition for both organizations and low-income families and isn't easily replicated by other solutions. We are excited about the ability to share a proof-of-concept that could have massive social impact. Personally, we are also excited that every team member improved skills (technical and non-technical) that will be useful to them in the future.
## What we learned
The team learned a lot about React, and even just HTML/CSS. The team also learned a lot about how to share knowledge between team members with different backgrounds and experiences in order to develop the project.
## What's next for WeProsper
WeProsper would like to use AI to detect anomalies in the future when verifying income. | ## Inspiration
With NFTs, crypto, and blockchain becoming more mainstream, online marketplaces like digital art platforms have never been more relevant. However, it has long been difficult for artists to display and sell their art in galleries. Our team saw this as an opportunity to not only introduce artists to the crypto space and enable them to sell digital art online, but also to donate to modern social causes.
## What it does
1. Through our platform artists can upload their digital art and create an NFT for the art.
2. Anyone on the platform can buy the artists art (NFT) using crypto.
3. When a consumer buys the NFT, the ownership will transfer from seller to buyer.
4. Ten percent or more (if the artist chooses) of the proceeds will go to a charitable organization of the artists choice.
5. The amount of money donated towards a cause determines the artist's position on our leaderboard.
## How we built it
1. Blockchain, Vue.js, Hedera Token Service, NFT.
2. React, Node.js, Docker.
3. The front end uses HTML, CSS, and JavaScript.
4. We also used Figma.
## Challenges we ran into
Most of our team members barely knew anything about cryptocurrencies and blockchains. We took on this project knowing full well that we would have to do a lot of work (and suffer), and we are proud that we were able to build this project. We ran into errors on both the front-end and back-end parts, but by searching on Google and reading through documentation, we figured them out.
## Accomplishments that we're proud of
1. Creating an interface that creates NFT's
2. Creating a frontend that properly represents our platform
## What we learned
A lot about digital collectibles, NFTs, cryptocurrency, blockchain, front-end development, Figma, React, and many more technologies.
## What's next for Influenza
In the future, we hope to expand the website, adding more functionality to the components, ensuring the security of transactions, and streamlining the NFT creation process.
## Inspiration
Finding teammates can be incredibly stressful. The Hack the North Slack had **1,710** members in the "looking for teammates" channel, and taking the step to message someone when you're a solo hacker can be terrifying. Everyone on our team had to take that leap, but some can't.
That's why we created **Hackd**—an easier way to find the right hacker for the job.
## What it does
**Hackd** analyzes your GitHub profile to understand what kind of coder you are and the complexity of the projects you've worked on. It also conducts sentiment analysis to identify people who can help meet your goals.
But we didn't want to stop there. Recruiters come to hackathons to meet thousands of people. With **Hackd**, we can help them find the best hackers to join their teams or contribute to exciting projects.
## How we built it
**BACKEND**
* **Convex** to run our database and store user information; this includes **authentication** for GitHub and all the information we gather from a user's account, the sentiment analysis linked to the GitHub account, as well as the AI algorithm's match results
* **Cohere** to run the analysis of the GitHub results and the sentiment scores to determine who the matches were
* **Python** to scrape GitHub users' public repos, tech stacks, and profile pictures (see the sketch after this list)
**FRONTEND**
* **React** for our front-end development ...
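A minimal sketch of the GitHub scraping step mentioned under BACKEND is shown below, using the public REST API; the derived "tech stack" here is simply each repo's primary language, which is a simplification of what we actually store in Convex.

```python
# Illustrative scraper: public repos, languages, and avatar for one GitHub user.
import requests

def fetch_github_profile(username: str) -> dict:
    user = requests.get(f"https://api.github.com/users/{username}", timeout=30).json()
    repos = requests.get(
        f"https://api.github.com/users/{username}/repos", params={"per_page": 100}, timeout=30
    ).json()
    languages = sorted({repo["language"] for repo in repos if repo.get("language")})
    return {
        "avatar_url": user.get("avatar_url"),
        "public_repos": [repo["name"] for repo in repos],
        "tech_stack": languages,
    }
```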
## Challenges we ran into
We had to learn new technology very fast here; Convex's dashboard and auth system gave us a lot of issues when figuring out the linking.
Getting Cohere to read through such heavy files as a git repo was also quite a challenge; we had to make an insanely fine-tuned model to ensure that it skipped all unnecessary information to speed up processing.
## Accomplishments that we're proud of
We completed the project!! **Hackd** is a polished and well-made web app that connects users attending hackathons with each other and offers a cleaner way to find your next collaborator.
We are also proud to offer a solution that is *DEI*-friendly: our app connects users without the usual walls. Until you find the person for the job, you don't even know their name, and the lovely part about GitHub is that the icon you see usually isn't a photo of them. All you know is their work, their work ethic, and how driven they are.
## What we learned
We should start with collaboration: this hackathon was a physically demanding challenge, and we pulled through together as a team. We all have each other to thank for it, and it brought us closer.
Looking solely at the tech stack, we learned a lot about Convex and the different database companies out there, how they work, and how we can fit them into our product.
Parsing GitHub was a huge learning curve, as was discovering how easy it is to get information out of it: the company wants us to have it, and it's up to us to make it as beneficial for others as possible.
## What's next for Hackd
Continuing to build out the recruiting side. We believe there is a lot of room to grow in connecting top talent at hackathons with recruiters, including the ability to invite someone to a coffee chat and to access information like a resume at the click of a button.
As this is a proof of concept, our AI is not perfect; even with our fine-tuning it is still slow, especially on larger repos. We want to find a more efficient way to go through and grade them.
We have also noticed at hackathons that there is no way to give feedback to teammates afterwards. We think that kind of knowledge is powerful and something we should enable. A post-hackathon or co-working review could allow our algorithms to sort even better.
You're a hacker at a virtual Hackathon. You dig through hundreds of messages on Slack, hoping to find potential partners (who aren't taken already!). Or you post your bio in numerous Slack channels, wishing that your post doesn't drown among hundreds of others and that someone notices you. Hopping around video chats to introduce yourself and meet other hackers is nice, but there might be someone whom you didn't get the chance to meet, someone who could have been your perfect partner!
The scenario above is such a common struggle among us, and our team hopes to solve this problem using the connectivity app HackMeet.
## What it does
In hackathons, HackMeet allows hackers to interact with others' profiles and project ideas in an intuitive way to facilitate team formation. After creating an account and entering their info, users can view the profiles of other hackers. If they are searching for a team and see a profile with a cool idea and open group availability, they can direct message them and ask to form a team! Our filter feature allows users to view hackers with specific interests, skills, availability, etc. For hackers who are searching for additional members with specific skill sets, they can use the filter to view and reach out to those who might be a great fit for their team!
## How we built it
We first thought about design and project scope focusing on what key features we should include in our app. We brainstormed what were the best ways that hackers would be able to connect with each other and find the right groups.
At first, we tried building a website with React as the front end and Firebase as the backend. None of us had much experience with these tools. User authentication was done with Firebase, and the database was built on Firebase's NoSQL database. We were running into many problems and progress was very slow, as most of our time was spent trying to learn how to do things. Midway through, we decided to switch over to creating an app with Android Studio so we could deliver a finished product.
## Challenges we ran into
The most difficult part of our project was trying to learn from the tutorials and taking bits and pieces to integrate them into our project. We really thought it was possible for us to learn web development within a limited time but it turned out that it was more difficult than we estimated. But I think it was a good decision that we switched it over to App development at the end. Some of us had at least some familiarity with it so we were able to get the job done.
## Accomplishments that we're proud of
This is our first hackathon and we are proud to have an MVP and something to showcase. It was very difficult for us to pull it through since we were completely new to web dev and we were trying to understand as much as possible within those hours. In the end, we were able to cover most of the MVP using Android Studio.
## What we learned
Among many things: how to use Figma to create wireframes, the basics of React web development, how to use Firebase as a backend and database, and Android development.
## What's next for HackMeet
There is a whole lot we can still add to our app given our limited time and knowledge going into this hackathon. We can expand on the idea and create a matching algorithm to better refine the user experience, refine the UI/UX, and create a website for the HackMeet platform.
As students, we have found that there are very few high-quality investing resources for those who are interested but lack the means to get started. Furthermore, we have found that investing and saving money can be a stressful experience. We hope to change this for those who want to save better with the help of our app, hopefully making it fun in the process!
## What it does
Our app first asks a new client a brief questionnaire about themselves. Then, using their banking history, it generates 3 "demons", aka bad spending habits, to kill. Then, after the client chooses a habit to work on, it brings them to a dashboard where they can monitor their weekly progress on a task. Once the week is over, the app declares whether the client successfully beat the mission - if they did, they get rewarded with points which they can exchange for RBC Loyalty points!
## How we built it
We built the frontend using React + Tailwind, using Routes to display our different pages. We used Cohere for our AI services, both for generating personalized weekly goals and creating a more in-depth report. We used Firebase for authentication + cloud database to keep track of users. For our data of users and transactions, as well as making/managing loyalty points, we used the RBC API.
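Here is a rough sketch of how a personalized weekly goal could be generated, assuming the v4-style Cohere Python SDK; the prompt wording, `COHERE_API_KEY`, and the habit/transaction fields are placeholders rather than our exact implementation.

```python
# Illustrative helper: turn a spending "demon" plus recent transactions into one weekly goal.
import os
import cohere

co = cohere.Client(os.environ["COHERE_API_KEY"])

def weekly_goal(habit: str, recent_transactions: list[str]) -> str:
    prompt = (
        f"The user wants to fix this spending habit: {habit}.\n"
        f"Recent transactions: {'; '.join(recent_transactions)}.\n"
        "Suggest one specific, measurable goal for the coming week."
    )
    response = co.chat(message=prompt)
    return response.text
```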
## Challenges we ran into
Piecing the APIs together was probably our most difficult challenge. Besides learning the different APIs in general, integrating the different technologies got quite tricky when we were trying to do multiple things at the same time!
Besides API integration, definitely working without any sleep though was the hardest part!
## Accomplishments that we're proud of
Definitely our biggest accomplishment was working so well together as a team. Despite only meeting each other the day before, we got along extremely well and were able to come up with some great ideas and execute under a lot of pressure (and sleep deprivation!) The biggest reward from this hackathon are the new friends we've found in each other :)
## What we learned
I think each of us learned very different things: this was Homey and Alex's first hackathon, where they learned how to work under a small time constraint (and did extremely well!). Paige learned tons about React, frontend development, and working in a team. Vassily learned lots about his own strengths and weaknesses (surprisingly reliable at git, apparently, although he might have too much of a sweet tooth).
## What's next for Savvy Saver
Demos! After that, we'll just have to see :) | losing |
## Inspiration
Our inspiration comes from people who require immediate medical assistance when they are located in remote areas. The project aims to reinvent the way people in rural or remote settings, especially seniors who are unable to travel frequently, obtain medical assistance by remotely connecting them to medical resources available in their nearby cities.
## What it does
Tango is a tool to help people in remote areas (e.g. villagers, people on camping/hiking trips, etc.) have access to direct medical assistance in case of an emergency. The user would have the device on them while hiking, along with a smart watch. If the device senses a sudden fall, the user's vital signs, provided by the watch, would be sent to the nearest doctor/hospital in the area. The doctor could then assist the user in the most appropriate way, now that the user's vital signs are relayed directly to the doctor. In the case of no response from the user, medical assistance can be sent using their location.
## How we built it
The sensor is built around the Particle Electron Kit, which, based on input from an accelerometer and a sound sensor, assesses whether the user has fallen down or not. If the user has fallen, signals from this sensor are sent to the doctor along with data from the smart watch about the patient's health.
## Challenges we ran into
One of our biggest challenges we ran into was taking the data from the cloud and loading it on the web page to display it.
## Accomplishments that we are proud of
It is our first experience with the Particle Electron and, for some of us, our first experience with a hardware project.
## What we learned
We learned how to use the Particle Electron.
## What's next for Tango
Integration of the Pebble watch to send the vital signs to the doctors. | ## Inspiration
In response to recent tragic events in Turkey, where the rescue efforts for the Earthquake have been very difficult. We decided to use Qualcomm’s hardware development kit to create an application for survivors of natural disasters like earthquakes to send out distress signals to local authorities.
## What it does
Our app aids disaster survivors by sending a distress signal with location & photo, chatbot updates on rescue efforts & triggering actuators on an Arduino, which helps rescuers find survivors.
## How we built it
We built it using Qualcomm’s hardware development kit and Arduino Due as well as many APIs to help assist our project goals.
## Challenges we ran into
We faced many challenges as we programmed the android application. Kotlin is a new language to us, so we had to spend a lot of time reading documentation and understanding the implementations. Debugging was also challenging as we faced errors that we were not familiar with. Ultimately, we used online forums like Stack Overflow to guide us through the project.
## Accomplishments that we're proud of
The ability to develop a Kotlin App without any previous experience in Kotlin. Using APIs such as OPENAI-GPT3 to provide a useful and working chatbox.
## What we learned
How to work as a team and work in separate subteams to integrate software and hardware together. Incorporating iterative workflow.
## What's next for ShakeSafe
Continuing to add more sensors, developing better search and rescue algorithms (i.e. travelling salesman problem, maybe using Dijkstra's Algorithm) | ## Inspiration
This project was inspired by a team member’s family, his grandparents always have to take medicine but often forget about it. Not only his grandparents forget the medicine also his mom. Although, his mom is very young but in a very fast paced society nowadays people always forget to do small things like taking their pills. Due to this inspiration, we decided to develop a pill reminder, but then we got inspired by a Tik Tok video about a person who has Parkinson’s disease and he couldn’t pick up an individual pill from the container. In end, we decide to create this project that will resolve the problem of people forgetting to take their pills as well as helping people to easily take individual pills.
## What it does
Our project the Delta Dispenser uses an app to communicate with the database to set up a specific time to alert users to take their pills as well as tracking their pills information in the app. The hardware of Delta Dispenser will alert the user when the time is reached and automatically dispense the correct amount of the pills into the container.
## How we built it
The frontend of the app is made with **Flutter**, the app communicates with a **firebase real-time database** to store medicinal and scheduling information for the user. The physical component uses an **embedded microcontroller** called an ESP-32 which we chose for its ability to connect to WiFi and be able to sync with the firebase database to know when to dispense the pills.
## Challenges we ran into
The time constraint was definitely a big challenge and we accounted for that by deciding which features were most important in emphasizing our main idea for this project. These parts include the mechanical indexer of the pills, the interface the user would interact with, and how the database would look for communication with the app and the embedded device.
## Accomplishments that we're proud of
We are most proud of how this project utilized many different aspects of engineering, from mechanical to electrical and software. Our team did a really good job at communicating throughout the design process which made integration at the end much easier.
## What we learned
During this project, we had learned how to flutter to create a mobile app as well as learning how firebase works. Throughout this project, although we only learned a few skills that will be very useful in the future. The most important part was that we were able to develop upon the skills we already had. For example, now we are able to develop hardware that could communicate through firebase.
## What's next for Delta Dispenser
The next steps for the Delta Dispenser include building a fully 3D-printed prototype, along with the control box and hopper shown in the CAD renders. On the software side, we would also like to support more complicated drug scheduling while keeping the UI easy enough for anyone to set up. A separate portal that lets a doctor input prescription information directly is another feature we are interested in.
## Inspiration
We wanted to develop something fun for our first hackathon project, so we built this lightweight diary app.
## What it does
You can record your voice and your stories on it.
## How we built it
We built it with React.js, json.js, and Java: React.js for the front end, json.js as a mock database, and Java for the back end.
## Challenges we ran into
Our front end could not send audio directly to the JSON server, and our back end could not connect to the database.
## Accomplishments that we're proud of
We are really proud of developing an app as a team and of getting the separate pieces of functionality working correctly: the web page renders properly, we can send and retrieve text data from the server, and our back end can transcribe audio to text and analyze its sentiment.
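To illustrate the audio-to-text and sentiment step, here is a rough sketch of the AssemblyAI upload → transcribe → poll flow in Python. The file name and API key are placeholders, and our actual back end performs this work in Java.

```python
# Sketch of the AssemblyAI flow with sentiment analysis enabled.
# File name and API key are placeholders; our real back end is in Java.
import time
import requests

API_KEY = "YOUR_ASSEMBLYAI_KEY"
HEADERS = {"authorization": API_KEY}
BASE = "https://api.assemblyai.com/v2"

def transcribe_with_sentiment(path: str) -> dict:
    # 1. Upload the raw audio file.
    with open(path, "rb") as f:
        upload = requests.post(f"{BASE}/upload", headers=HEADERS, data=f).json()
    # 2. Request a transcript with sentiment analysis enabled.
    job = requests.post(
        f"{BASE}/transcript",
        headers=HEADERS,
        json={"audio_url": upload["upload_url"], "sentiment_analysis": True},
    ).json()
    # 3. Poll until the transcript finishes.
    while True:
        result = requests.get(f"{BASE}/transcript/{job['id']}", headers=HEADERS).json()
        if result["status"] in ("completed", "error"):
            return result
        time.sleep(3)

if __name__ == "__main__":
    result = transcribe_with_sentiment("diary_entry.mp3")  # placeholder file
    print(result["text"])
    for s in result.get("sentiment_analysis_results", []):
        print(s["sentiment"], "-", s["text"])
```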
## What we learned
We learned how to use React to develop an app, how to create a mock database using json.js, and how to use the AssemblyAI API, among other things.
## What's next for Fantastic Diary
We will work out how to transfer MP3 audio from the web app to the database and how to connect the server to the database. We also intend to add more functionality to the diary, such as a music player and a weather report.
As we all know, the world has come to a halt over the last couple of years. Our motivation for this project was to help people come out of their shells and express themselves. Our main objective was to connect people around the world and make them feel that they are not the only ones fighting this battle.
## What it does
Users talk through what is on their mind, and our application identifies the problem the speaker is facing and connects them to a specialist in that particular domain so the problem can be addressed. There is also a **Group Chat** option where people facing similar issues can discuss their problems among themselves.
For example, if our application identifies that the speaker's topics are related to mental health, it connects them to a mental-health specialist, and the user can also join a group discussion with other people who are talking about mental health.
## How we built it
The front end of the project was built with HTML, CSS, JavaScript, and Bootstrap. The back end was written exclusively in Python using the Django framework. We integrated the **Assembly AI** speech-processing code into our back end and were successful in creating a fully functional web application within 36 hours.
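As a sketch of how the topic-identification step might work using Assembly AI's topic detection feature: the category-to-specialist mapping and relevance threshold below are made-up assumptions, and the production code lives inside Django views rather than a standalone script.

```python
# Sketch: request AssemblyAI topic detection and map detected categories
# to a specialist domain. Mapping and threshold are illustrative only.
import requests

API_KEY = "YOUR_ASSEMBLYAI_KEY"
HEADERS = {"authorization": API_KEY}

SPECIALIST_FOR = {            # hypothetical mapping
    "MentalHealth": "mental_health",
    "MedicalHealth": "general_health",
    "PersonalFinance": "financial_advice",
}

def request_topic_detection(audio_url: str) -> str:
    """Start a transcript job with IAB topic detection enabled."""
    job = requests.post(
        "https://api.assemblyai.com/v2/transcript",
        headers=HEADERS,
        json={"audio_url": audio_url, "iab_categories": True},
    ).json()
    return job["id"]

def pick_specialist(finished_transcript: dict) -> str:
    """Choose a specialist domain from the transcript's category summary."""
    summary = finished_transcript.get("iab_categories_result", {}).get("summary", {})
    for category, relevance in sorted(summary.items(), key=lambda kv: -kv[1]):
        leaf = category.split(">")[-1]  # e.g. "...>MentalHealth"
        if leaf in SPECIALIST_FOR and relevance > 0.5:
            return SPECIALIST_FOR[leaf]
    return "general_support"
```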
## Challenges we ran into
The first challenge was understanding how Assembly AI works: none of us had used it before, and it took time to learn. Integrating the audio capture into our application was also a major challenge. Apart from Assembly AI, we faced issues connecting our front end to the back end. Thanks to the internet and the mentors at **HackHarvard**, especially the **Assembly AI mentors**, who were very supportive and helped us resolve our errors.
## Accomplishments that we're proud of
Firstly, we are proud of creating a fully functional application within 36 hours despite all the setbacks we had. We are also proud of building an application that can benefit society. Finally, and above all, we are proud of exploring and learning new things, which is the very reason for hackathons.
## What we learned
We learned how working as a team can do wonders. Working under a time constraint is a really challenging task; time management, handling pressure, keeping a never-give-up attitude, and solving errors we had never come across before are just a few of the very important things we learned.
We wanted to watch videos together as a social activity during the pandemic but were unable as many platforms were hard to use or unstable. Therefore our team decided to create a platform that met our needs in terms of usability and functionality.
## What it does
Bubbles lets people create viewing rooms and invite others into them to watch synchronized videos. Viewers no longer have to count down before hitting play or keep a separate chat open while watching!
## How I built it
Our web app uses React for the frontend and interfaces with our Node.js REST API in the backend. The core of our app uses a Solace PubSub+ event broker and the MQTT protocol to coordinate events such as video play/pause, a new member joining your bubble, and instant text messaging. Users broadcast events with their bubble's room code as the topic, so only other people in the same room receive those events.
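To make the event flow concrete, here is a rough Python sketch of the room-scoped publish/subscribe pattern using an MQTT client. The broker host, topic layout, and payload fields are assumptions, and the real app does this from the browser and Node.js rather than Python.

```python
# Sketch of room-scoped MQTT events (play/pause/chat) using paho-mqtt.
# Broker host, topic layout, and payload shape are assumptions.
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "your-solace-broker.example.com"  # assumed
ROOM_CODE = "ABCD42"                            # the bubble's room code

def on_message(client, userdata, msg):
    event = json.loads(msg.payload)
    print(f"[{msg.topic}] {event['type']} at {event.get('position', 0)}s")

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.subscribe(f"bubbles/{ROOM_CODE}")        # only this room's events

# Broadcast a synchronized "play" event to everyone in the room.
client.publish(
    f"bubbles/{ROOM_CODE}",
    json.dumps({"type": "play", "position": 12.5}),
)
client.loop_forever()
```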
## Challenges I ran into
* Collaborating in a remote environment: we were in a Discord call for the whole weekend, used Miro for whiteboarding, etc.
* Our teammates came from various technical backgrounds, and we all had a chance to learn a lot about the technologies we used and how to build a web app
* The project was so much fun we forgot to sleep and hacking was more difficult the next day
## Accomplishments that I'm proud of
The majority of our team was inexperienced with the technologies used, so we are very proud that we completed such a challenging product by the end of the hackathon.
## What I learned
We learned how to build a web app from the ground up, from the initial design in Figma to the final deployment via GitHub and Heroku. We learned about the React component lifecycle, asynchronous operations, React routers with a Node.js backend, and how to interface all of this with a Solace PubSub+ event broker. We also learned how to collaborate and be productive together while still having a blast.
## What's next for Bubbles
We will develop additional features for Bubbles, including seek syncing, user-join messages, and custom privileges for individual users.