anchor (string, lengths 1-23.8k) | positive (string, lengths 1-23.8k) | negative (string, lengths 1-31k) | anchor_status (3 classes) |
---|---|---|---|
Inspiration
Our journey began from a place of personal struggle, often encountering the bottleneck of validating the numerous business ideas brewing in our minds. The scenario of scribbling down a groundbreaking startup idea in Apple Notes is all too familiar, followed by the tedious chore of toggling between tabs to research its potential. This routine not only hindered our momentum but also clouded our judgment with information overload. It was this friction that birthed the idea of Manat.ai - a tool envisaged to streamline the initial validation of startup ideas using Large Language Models (LLMs), serving as a preliminary sieve to filter our potential startup ideas for feasibility.
What We Learned
The project was a rich learning ground. We delved into the art of analyzing business ideas, identifying key data points for an initial assessment. The event also brought us together as a team, teaching us the essence of collaboration. We explored various cool technologies including Langchain and OpenAI, and harnessed LLMs to fuel our application, marking our first stride into integrating such advanced AI into a project.
How We Built Manat.ai
Utilizing the Flowise platform, we orchestrated LLM flows comprising nodes and chains from OpenAI and LangChain. This structure allowed us to incorporate conversational agents and AutoGPT for smarter workflow designs. The backend was engineered using Redis, Flask, and Express to interface with APIs, handle Python analytics, and manage data storage. On the frontend, we employed Next.js and JavaScript, crafting a user-friendly interface to interact with our system.
Challenges Faced
Our first time building an LLM application was a blend of excitement and hurdles. Designing chains and creating intelligent agents capable of web access and task execution posed a steep learning curve. The modular nature of our work - with different teams focusing on LLM workflows, Python scripts for analytics, and integrating these into a cohesive application - brought the challenge of assembling these disparate pieces into a unified, functioning system. Each challenge, however, was a stepping stone, propelling us closer to our vision of simplifying startup ideation through Manat.ai. | ## Inspiration
There is a lot of E-waste in the world. So instead of buying a new Bluetooth speaker for $50, we spent $250+ on parts to repurpose an old pair of earbuds to build our own.
## How we built it
We used the Bluetooth module from the earbuds and amplified the signal using a breadboard circuit.
## Challenges we ran into/What's next
Our parts did not arrive on time, even with priority shipping. When the subwoofer and tweeters arrive, we will build high-pass and low-pass filters using a resistor circuit to split out the bass and highs, and then wire them to their respective speakers. We'll then build a housing and solder our circuit to make it a permanent solution. | ## Inspiration
When learning new languages, it can be hard to find people and situations to properly practice, let alone finding the motivation to continue. As a group of second-generation immigrants who have taken both English and French classes, language is an embedded part of our identity. With Harbour, we sought to make the practice partner that we would want to use ourselves: speaking, listening, reading and writing under fun objective-based scenarios.
## What it does
Harbour places you in potential real-life situations where natural conversations could arise, using a chatbot powered by a large language model to communicate in real time. It will generate scenarios where you can decide what language and vocabulary difficulty you're aiming for. Adding a fun twist, you'll receive objectives for what to do: figure out what time it is, where the washroom is, or why your conversation partner is feeling upset.
If you are ever confused about a word or phrase the chatbot uses, simply highlight the text in question and a friendly prompt offering a live translation will appear. Additional features include text-to-speech and speech-to-text, serving the dual purposes of practicing oral conversations and providing maximum accessibility.
## How we built it
The web application for Harbour was created using Next.js as a framework for React. Next.js enabled us to create our backend, which returns the data the frontend requires through API requests. These were powered by LangChain with OpenAI models for the LLM conversation and the Google Cloud API for translations. Using the Vercel AI SDK also allowed us to stream the responses from the LangChain model over to the frontend as parts of the response were received. Text-to-speech and speech-to-text conversions were handled through the Web Speech API and the node module react-speech-recognition. Axios and fetch were used to make HTTP requests to our backend as well as the external APIs. Across our development process, GitHub was used for version control so that our four team members could each work on separate portions of the project on various branches and merge them as the pieces fell together.
## Challenges we ran into
Initially, our major blocker was that the OpenAI LLM was slow to respond to our prompts. This was because we were attempting to receive the API response as one JSON payload containing the full response. Due to the nature of LLMs, they typically generate content in portions, appearing similar to a person typing a response. Generating the whole response takes longer, but since it is produced portion by portion, readers can follow along as it is generated. However, we were waiting until the full response was processed, making the LLM feel slow and the user experience of the product poorer. To handle this, we decided to receive the data from the API as a stream instead, which sends the information portion by portion so it can be displayed to users while it is being generated.
However, processing the data as a stream meant it was no longer compatible with the API structure we had originally built, which required us to restructure our API. We successfully restructured it to handle the streamed data and send it to the frontend, which displays the response in the same progressively generating format as other LLM-powered applications, and also processes the data for audio output.
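Harbour itself implements this in TypeScript with the Vercel AI SDK, but the underlying idea is language-agnostic. Below is a minimal, illustrative Python sketch of consuming an OpenAI chat completion as a stream rather than waiting for the full response; the model name and prompt are placeholders, not the ones Harbour uses.

```python
# Minimal streaming sketch (assumptions: model name and prompt are placeholders;
# Harbour's real implementation streams through a Next.js route with the Vercel AI SDK).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

stream = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "You are a waiter in Paris. Greet the learner in French."}],
    stream=True,  # receive the response portion by portion instead of one JSON payload
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        # Each delta can be forwarded to the client immediately, so the user
        # sees the reply appear progressively instead of waiting for the whole response.
        print(delta, end="", flush=True)
print()
```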
## Accomplishments that we're proud of
We’re proud of our text-to-speech and speech-to-text features that operate alongside the messaging function of our application. These features enable learners to develop their reading, writing, listening, and speaking skills simultaneously. Additionally, they help those who may not be easily able to communicate with one skill still develop their language abilities.
We're also proud of how quickly our team learned to develop an LLM application, despite it being our first time working with this type of technology. We were able to effectively learn how to create LLM applications, structure ours, and deliver a seamless user experience and product within the short timeframe of two days.
## What we learned
This was all of our team’s first time creating an LLM application. Harbour is a language service centered around dialogue. With the goal of fostering a realistic connection with users, we dove into technologies centered on receiving, processing, and delivering data, as well as a fluid user experience. Specifically, we learned how to efficiently deliver required prompts to the OpenAI API in order to refine and receive the intended answer, and also how to effectively create prompts the LLM can easily understand and follow through with, to prevent unintended features or mistakes.
We also gained experience with Next.js, using the React framework for full-stack development and structuring the backend to process frontend requests. On the front end, we learned how to integrate React design and components with Next.js.
## What's next for Harbour
Now that we've completed the MVP and some nice-to-have features, we would like to further improve Harbour by including more unique features and functionality.
One main feature of Harbour currently is the unique, almost life-like situations and experiences that the user can go through to practice their language abilities. We would like to further improve our prompt engineering to continue creating more depth in our scenarios for the users to practice with. For example, we’d like to include more levels of difficulty in our scenarios, for different levels of learners to effectively learn. This could mean including more levels of vocabulary, grammar, and sentence structure difficulty. Additionally, we would like to improve the situations to include more variety, such as creating different personalities. For example, different situations could be creating personalities that can have avid political debates, or speak to the user in a professional workplace manner.
One fun idea we’d like to implement in future versions is integrating famous personalities into Harbour. From strolling Hogwarts while chatting with Harry Potter to heating it up in the kitchen with Gordon Ramsay, who could resist talking with their favourite characters, all whilst developing their ability to speak in another language? | losing |
## Inspiration
We were inspired by the fact that **diversity in disability is often overlooked** - individuals who are hard-of-hearing or deaf and use **American Sign Language** do not have many tools that support them in learning their language. Because of the visual nature of ASL, it's difficult to translate between it and written languages, so many forms of language software, whether it is for education or translation, do not support ASL. We wanted to provide a way for ASL-speakers to be supported in learning and speaking their language.
Additionally, we were inspired by recent news stories about fake ASL interpreters - individuals who defrauded companies and even government agencies to be hired as ASL interpreters, only to be later revealed as frauds. Rather than accurately translate spoken English, they 'signed' random symbols that prevented the hard-of-hearing community from being able to access crucial information. We realized that it was too easy for individuals to claim their competence in ASL without actually being verified.
All of this inspired the idea of EasyASL - a web app that helps you learn ASL vocabulary, translate between spoken English and ASL, and get certified in ASL.
## What it does
EasyASL provides three key functionalities: learning, certifying, and translating.
**Learning:** We created an ASL library - individuals who are learning ASL can type in the vocabulary word they want to learn to see a series of images or a GIF demonstrating the motions required to sign the word. Current ASL dictionaries lack this dynamic ability, so our platform lowers the barriers in learning ASL, allowing more members from both the hard-of-hearing community and the general population to improve their skills.
**Certifying:** Individuals can get their mastery of ASL certified by taking a test on EasyASL. Once they start the test, a random word will appear on the screen and the individual must sign the word in ASL within 5 seconds. Their movements are captured by their webcam, and these images are run through OpenAI's API to check what they signed. If the user is able to sign a majority of the words correctly, they will be issued a unique certificate ID that can certify their mastery of ASL. This certificate can be verified by prospective employers, helping them choose trustworthy candidates.
**Translating:** EasyASL supports three forms of translation: translating from spoken English to text, translating from ASL to spoken English, and translating in both directions. EasyASL aims to make conversations between ASL-speakers and English-speakers more fluid and natural.
## How we built it
EasyASL was built primarily with **TypeScript and Next.js**. We captured images using the user's webcam, then processed the images to reduce the file size while maintaining quality. Then, we ran the images through **Picsart's API** to filter background clutter for easier image recognition and to host the images in temporary storage. These were formatted to be accessible to **OpenAI's API**, which we used to recognize the ASL signs and identify the word being signed. This was used in both our certification stream, where the user's ASL sign was compared against the prompt they were given, and in the translation stream, where ASL phrases were written as a transcript and then read aloud in real time. We also used **Google's Web Speech API** in the translation stream, which converted spoken English to written text. Finally, the education stream's dictionary was built using TypeScript and a directory of open-source web images.
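EasyASL is written in TypeScript/Next.js, so the following is only a language-agnostic Python illustration of the recognition step: grab a webcam frame, shrink it, and ask a vision-capable OpenAI model what is being signed. The model name and the inline data URL are assumptions for the sketch; the real app cleans images with Picsart and passes hosted image URLs instead (see the challenges below).

```python
# Illustrative sketch only (assumptions: model name, inline data URL; the real app
# cleans backgrounds with Picsart and passes temporary hosted image URLs instead).
import base64

import cv2
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def capture_small_frame(max_width: int = 640) -> bytes:
    """Grab one webcam frame and downscale it to keep the payload small."""
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("Could not read from webcam")
    scale = max_width / frame.shape[1]
    frame = cv2.resize(frame, None, fx=scale, fy=scale)
    ok, buf = cv2.imencode(".jpg", frame)
    return buf.tobytes()

def check_sign(prompt_word: str) -> str:
    """Ask a vision model whether the captured frame shows `prompt_word` in ASL."""
    image_b64 = base64.b64encode(capture_small_frame()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any vision-capable model works here
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"Does this image show the ASL sign for '{prompt_word}'? Answer yes or no."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content.strip()

print(check_sign("hello"))
```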
## Challenges we ran into
We faced many challenges while working on EasyASL, but we were able to persist through them to come to our finished product. One of our biggest challenges was working with OpenAI's API: we only had a set number of tokens, which were used each time we ran the program, meaning we couldn't test the program too many times. Also, many of our team members were using TypeScript and Next.js for the first time - though there was a bit of a learning curve, we found that its similarities with JavaScript helped us adapt to the new language. Finally, we were originally converting our images to a UTF-8 string, but got strings that were over 500,000 characters long, making them difficult to store. We were able to find a workaround by keeping the images as URLs and passing these URLs directly into our functions instead.
## Accomplishments that we're proud of
We were very proud to be able to integrate APIs into our project. We learned how to use them in different languages, including TypeScript. By integrating various APIs, we were able to streamline processes, improve functionality, and deliver a more dynamic user experience. Additionally, we were able to see how tools like AI and text-to-speech could have real-world applications.
## What we learned
We learned a lot about using Git to work collaboratively, manage separate branches, and resolve merge conflicts. We also learned to use Next.js to expand what we could do beyond JavaScript and HTML/CSS. Finally, we learned to use APIs like the OpenAI API and the Google Web Speech API.
## What's next for EasyASL
We'd like to continue developing EasyASL and potentially replace the OpenAI framework with a neural network model that we would train ourselves. Currently, processing inputs via the API hits token limits quickly due to the character count of the Base64-converted images, which results in a noticeable delay between image capture and model output. By implementing our own model, we hope to speed this process up and recreate natural language flow more readily. We'd also like to continue improving the UI/UX experience by updating our web app interface. | ## Inspiration
In school, we were given the offer to take a dual enrollment class called Sign Language. A whole class for the subject can be quite time consuming for most children, and for adults too. If people are interested in learning ASL, they either watch YouTube videos, which are not interactive, or spend HUNDREDS of dollars on classes (<https://deafchildren.org> requires $70-100). Our product provides a cost-effective, time-efficient, and fun experience for learning this unique language.
## What it does
Of course, you first have to learn the ASL alphabet: A, B, C, D ... Z. Each letter has a unique hand gesture. You also have the option to learn phrases like "Yes", "No", "Bored", etc. The app makes sure you have signed a letter correctly by displaying a circular progress view showing how long you have to hold the gesture. We provide many images to make the learning experience accessible. After learning all the letters and practicing a few words, it's GAME time :). Test your ability to show a gesture and see how long you can go until you give up. The gamified experience leads to more learning and engagement for children.
## How we built it
The product was built using the language Swift. The hand-tracking was done using CoreML components. We used hand landmarks and found the distances between all points of the hand. Comparing the distances a gesture SHOULD have with what is measured at a specific time frame helps us figure out whether the hand pose is occurring. For the UI, we planned it out using Figma and later wrote the code in Swift, using SwiftUI components to save time. For data storage we used UIData, which syncs across devices with the same iCloud account.
## Challenges we ran into
There are 26 letters. That's a lot of arrays, comparison statements, and repetitive work. Testing would sometimes become difficult because the iPhone would eventually overheat and show temperature notifications. We only had one phone to test with, so it was mostly reserved for testing hand landmarks. The project was extremely lengthy, and putting so much content into 36 hours is difficult, so we had to sacrifice sleep. Also: a cockroach in the room.
## Accomplishments that we're proud of
The hand landmark detection for an alphabet actually works much better than expected. Moving your hand super fast does not glitch the system. A fully functional vision app with clean UI makes the experience fun and open for all people.
## What we learned
Quantity < Quality. We created more than 6 functioning pages with different levels of UI quality. It's very noticeable which views were created quickly because of the time crunch. Instead of having so many pages, decreasing the number of pages and adding more content into each view would make the app appear flawless. Comparing the goal array against the current time-frame array is TEDIOUS. So much time is wasted on testing. We could not figure out the action classifier in Swift, as there was no basic open-source code. Explaining problems to ChatGPT becomes difficult because the LLM never seems to understand basic tasks, yet performs perfectly on complex ones. Stack Overflow will still be around (for now) if we face problems.
## What's next for Hands-On
The app fits well on my iPhone 11, but on an iPad? I do not think so. The next step to take the project further is to scale the UI so it works for iPads and iPhones of any size. Once we fix that problem, we could release the app to the App Store. Since we do not use any API, we would have no hosting expenses. Making the app public could help people of all ages learn a new language in an interactive manner. | ## Inspiration
Over 70 million people around the world use sign language as their native form of communication - 70 million voices who are unable to be fully recognized in today's society. This disparity inspired our team to develop a program that allows for real-time translation of sign language onto a text display monitor, enabling more inclusivity by letting those who do not know sign language communicate with a new community and breaking down language barriers.
## What it does
It translates sign language into text in real time.
## How we built it
We set up the environment by installing different packages (OpenCV, MediaPipe, scikit-learn) and set up a webcam.
- Data preparation: We collected data for our ML model by capturing a few sign language letters through the webcam, saving whole image frames into different categories to classify the letters.
- Data processing: We used MediaPipe's computer vision inference to capture the hand gestures and localize the landmarks of the fingers.
- Train/test model: We trained our model to detect matches between the trained images and the hand landmarks captured in real time. A minimal sketch of this pipeline appears below.
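As a rough illustration of that pipeline, here is a minimal Python sketch combining MediaPipe hand landmarks with a scikit-learn classifier. The dataset file names and the choice of a random forest are assumptions for the sketch, not necessarily what the team used.

```python
# Minimal landmark -> classifier sketch (assumptions: a pre-collected dataset of
# landmark vectors per letter; file names and classifier choice are illustrative).
import cv2
import mediapipe as mp
import numpy as np
from sklearn.ensemble import RandomForestClassifier

mp_hands = mp.solutions.hands

def landmarks_from_frame(hands, frame):
    """Return a flat (x, y) landmark vector for the first detected hand, or None."""
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    hand = results.multi_hand_landmarks[0]
    return np.array([[lm.x, lm.y] for lm in hand.landmark]).flatten()

# Train on previously collected landmark vectors (X) and their letter labels (y).
X_train, y_train = np.load("landmarks.npy"), np.load("labels.npy")  # assumed files
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        vec = landmarks_from_frame(hands, frame)
        if vec is not None:
            letter = clf.predict([vec])[0]
            cv2.putText(frame, str(letter), (30, 60),
                        cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
        cv2.imshow("ASL translator", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
```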
## Challenges we ran into
The first challenge we ran into was that our team struggled to come up with a topic to develop. We then ran into issues integrating our sign language detection code with the hardware, because our laptop lacked the ability to effectively process the demands of our code.
## Accomplishments that we're proud of
The accomplishment that we are most proud of is that we were able to implement hardware in our project as well as Machine Learning with a focus on computer vision.
## What we learned
At the beginning of our project, our team was inexperienced with developing machine learning coding. However, through our extensive research on machine learning, we were able to expand our knowledge in under 36 hrs to develop a fully working program. | partial |
## Inspiration
**Problem**: It's a HUGE HASSLE to keep rewinding a video (using your hands) when learning. e.g. learning songs on the guitar, dance moves, cooking, etc. :(
**The Dream**: There should be a hands-free way to rewind videos, someone save us!
## What it does
Achieves the dream of **using your voice to navigate around a video** such as going forward, backward, starting, pausing, yay! ^\_^
## How I built it
Google Products:
* **Google Assistant** (Integration with Google Homes or Google Assistant on your phone)
* **DialogFlow** (Framework to handle speech intent, slot filling, fulfilment, and all the conversational flow)
* **Cloud Functions** (Fulfilment endpoint for DialogFlow to push to)
* **Pubsub** (Event bus for handling streaming commands at scale; a minimal publisher sketch appears after the lists below)
+ Cloud Functions are publishers
+ Mobile Clients are subscribers
* **Flutter** (Framework for creating apps on iOS and Android phones and tablets)
+ **Text-to-Speech** (text processor for input from microphone)
+ **Google Cloud** (publisher for Google Pubsub)
+ **Youtube API** (video player on mobile app)
Other Products:
* ❤️ from Nick!
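As a rough illustration of the Cloud Functions -> Pub/Sub -> mobile client path described above, here is a minimal Python publisher sketch. The project and topic names are placeholders; in the real setup this logic runs inside the Cloud Function that DialogFlow fulfilment calls, and the Flutter client subscribes to the same topic.

```python
# Minimal publisher sketch (assumptions: project and topic names are placeholders;
# in the real setup this runs inside the Cloud Function triggered by DialogFlow).
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "video-commands")  # placeholders

def publish_command(command: str, seconds: int = 0) -> None:
    """Push a voice command (e.g. 'rewind', 'pause') onto the event bus.

    The mobile client subscribes to the same topic and drives the YouTube player.
    """
    future = publisher.publish(
        topic_path,
        data=command.encode("utf-8"),
        seconds=str(seconds),  # Pub/Sub attributes must be strings
    )
    future.result()  # block until the message is accepted

publish_command("rewind", seconds=10)
```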
## Challenges I ran into
* Terrible Latency for Google Assistant to Pubsub (over 4 seconds for a simple word)
+ Unacceptable for real world use case UX wise :( ...
* No streaming text-to-speech interfaces for mobile
* Google Home Mini struggling to connect to nwhacks WiFi network T\_T
## Accomplishments that I'm proud of
## What I learned
## What's next for Videogator | ## Inspiration
In a world in which we all have the ability to put on a VR headset and see places we've never seen, search for questions in the back of our mind on Google and see knowledge we have never seen before, and send and receive photos we've never seen before, we wanted to provide a way for the visually impaired to also see as they have never seen before. We take for granted our ability to move around freely in the world. This inspired us to enable others more freedom to do the same. We called it "GuideCam" because, like a guide dog, our application is meant to be a companion and a guide to the visually impaired.
## What it does
GuideCam provides an easy-to-use interface for the visually impaired to ask questions, either through a braille keyboard on their iPhone or through speaking out loud into a microphone. They can ask questions like "Is there a bottle in front of me?", "How far away is it?", and "Notify me if there is a bottle in front of me", and our application will talk back to them and answer their questions, or notify them when certain objects appear in front of them.
## How we built it
We have Python scripts that continuously take webcam pictures from a laptop every 2 seconds and put them into a bucket. Upon user input like "Is there a bottle in front of me?" - either from Braille keyboard input on the iPhone or through speech (which is processed into text using Google's Speech API) - we take the last picture uploaded to the bucket and use Google's Vision API to determine if there is a bottle in the picture. Distance calculation is done using the following formula: distance = ( (known width of standard object) x (focal length of camera) ) / (width in pixels of object in picture).
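As a small worked example of that formula, here is a minimal Python sketch. The focal length and bottle width below are illustrative assumptions; in practice the focal length is calibrated for the specific webcam, and the pixel width comes from the Vision API's bounding box for the detected object.

```python
# Distance estimate from apparent size (pinhole-camera approximation).
# Assumptions: the focal length in pixels and the real bottle width are illustrative;
# the pixel width would come from the detected object's bounding box.

KNOWN_WIDTH_CM = 7.0      # typical water bottle diameter (assumed)
FOCAL_LENGTH_PX = 800.0   # camera focal length in pixels (assumed / pre-calibrated)

def estimate_distance_cm(object_width_px: float) -> float:
    """distance = (known width * focal length) / (width in pixels)."""
    return (KNOWN_WIDTH_CM * FOCAL_LENGTH_PX) / object_width_px

# e.g. a bottle that spans 160 px in the frame is roughly 35 cm away
print(f"{estimate_distance_cm(160):.1f} cm")
```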
## Challenges we ran into
Trying to find a way to get the iPhone and a separate laptop to communicate was difficult, as well as getting all the separate parts of this working together. We also had to change our ideas on what this app should do many times based on constraints.
## Accomplishments that we're proud of
We are proud that we were able to learn to use Google's ML APIs, and that we were able to get both keyboard Braille and voice input from the user working, as well as both providing image detection AND image distance (for our demo object). We are also proud that we were able to come up with an idea that can help people, and that we were able to work on a project that is important to us because we know that it will help people.
## What we learned
We learned to use Google's ML APIs, how to create iPhone applications, how to get an iPhone and laptop to communicate information, and how to collaborate on a big project and split up the work.
## What's next for GuideCam
We intend to improve the Braille keyboard to include a backspace, as well as supporting simultaneous key presses to record a single letter. | ## Inspiration
Aviation and global shipping account for roughly 1.14 billion tons of CO2 emissions, with over 4.56 billion global passengers in 2018 alone [1]. With many experts believing that the number could triple by 2050, there is a clear need for a more climate-friendly mode of aviation. While much work is being done on electric aviation, much of it is restricted to short-distance domestic service as opposed to the long-distance flights that make up the backbone of our modern aviation and transportation industry.
Another possible alternative is using existing aviation technology while flying in the ground effect. The ground effect is a set of flight characteristics encountered while flying near the ground which allow for more efficient flight with less drag. This creates aircraft which require less thrust to reach similar velocities, allowing for fuel savings and more efficient flight.
After learning about the potential of ground effect, our team endeavored to build a concept vehicle which would demonstrate these benefits on a small scale. GREG (Ground Effect Glider) was the resulting vehicle we built over the course of TreeHacks 2024, and we were successfully able to demonstrate its flight capabilities while laying out a future path for improvements and demonstrating the effectiveness of the ground effect.
## Capabilities
The GREG plane does everything an RC plane does. It’s not just a bunch of foam and wood. It can fly, and it has flown!
* Along with a DC motor and propeller to provide thrust, we also have control surfaces actuated by servos. These servos and the DC motor are hooked up to a radio receiver, so all parts of the plane can be controlled by a radio transmitter in the hands of the pilot.
The best part: the special design of the wings helps it utilize the ground effect when it's close to the ground. The additional winglets at the end of the wing provide yaw stability.
* We also have electronics in the payload bay that measure the distance from the ground and rotation (via an IMU), and store the data on an SD card.
## How we built it
Before the hackathon, we planned out what our build process would look like. We made a checklist of all the things we thought we needed to do to finish the plane in 36 hours. We knew this was critical to getting this complicated design built in time.
We also looked into ground effect vehicles (GEVs) and the common designs for planes of this style. We consulted with Stanford Aeronautics and Astronautics Professor Ilan Kroo, who is an expert in the field of sustainable aviation. We educated ourselves on aircraft electronics and how we could control the plane with a pilot transmitter, so that we had the appropriate parts by the start of TreeHacks.
The wing and tail of the aircraft were constructed primarily using foam board. The wing shape was designed in a way to maximize the ground effect on GREG, and thus required anhedral connections accomplished using hot glue. Carbon fiber rods were added as structural members for the wing. Additionally, a curved sheet of balsa wood was added on top of the foam board to create an airfoil shape garnering improved aerodynamic effects. Foam rudders and an elevator were added to give control over the flight. The avionics bay was constructed as a rectangular fuselage made of plywood (for motor mounting) and insulation foam. It was covered with a balsa wood cover and connections were secured with tape and hot glue.
We used an RC plane control rod kit that provides control horns to attach to wing elements so that they can be controlled with micro-servos. The control horns were assembled such that the servo motion would move the rudder and elevator. These were then wired to a receiver that received radio signals from the transmitter. The motor was controlled by an electronic speed controller that was also attached to the receiver.
A sensor suite was prepared for data acquisition onboard the vehicle. This included an IMU and time of flight sensor operated by a Teensy microcontroller. When integrated into the aircraft, autopilot capabilities can be achieved.
## Challenges we ran into
As you would expect, we faced every challenge that a hardware project would face. Here’s a best-of-the-best list of all the best issues we faced:
* The wing was too long to stay rigid when we mounted the fuselage to it. It sagged so much that our propeller was starting to hit the ground. We had to add spars, carbon fiber rods and more to make it more aerodynamic and rigid.
* The weather! All the rain on Saturday made it really hard to do flight tests. Indoor flights (like flying in an empty car garage) led to some pretty tough crashes that resulted in a broken propeller.
* Getting mass balance correct is pretty essential to steady flights. With hot glue everywhere and imperfect cuts, it was hard to estimate the centers of gravity and pressure very accurately. We had to be smart and move the battery accordingly.
* One of the biggest challenges, none of us had built or manually flown an aircraft before! It’s not easy to configure all the electronics, or pilot a plane that’s been built from scratch and has a lot of imperfections.
* The propeller broke on landing every time, and then we ran out of propellers!
## Accomplishments that we're proud of
During this project, we could have modified an existing RC plane to be better suited for ground effect. However, we decided to build the plane from scratch, because what’s a better way to learn how to make something than to build it from scratch?
Without a doubt, the accomplishment we are most proud of is our first successful flight of the aircraft. Going into the weekend, we were not sure if it was even possible to build an RC plane of this size in 36 hours. However, when the plane took off for the first time those worries were gone. In fact, we had the plane ready for glide testing in 24 hours! The videos show off our excitement.
## What we learned
The biggest lesson we learned was without a doubt the value of iterating through designs and rapid prototyping. In a 36 hour period, there is not a lot of time to meditate on decisions. Running a unit test on hardware requires a lot of moving parts to come together, and a crash in the real world isn’t as easy to fix as a missing semicolon. To combat this, we learned to trust our own decisions and to rely on each other to make decisions on the fly that would be critical to the success of the final product. Despite our best efforts, things do break. We learned how to take every failure and turn it into a chance to make our design better.
Many technical lessons were also learned over the course of this weekend. We learned to manufacture low mass and high rigidity structures, and how to couple our theoretical knowledge of mechanics and aerodynamics with our physical ability to cut, shape, bend, and glue. Additionally, we learned to interface with a radio controller in order to remotely send command inputs to the glider in flight.
## What's next for Ground Effect
There’s a lot more that can be done with this idea, and what you’re seeing here is only the most basic of prototypes.
To utilize the ground effect, the plane needs to be close to the ground and maintain its altitude at that level. This is extremely difficult with a manually piloted plane with no intelligent control systems. Our goal is to utilize the time of flight sensor and the IMU that have already been incorporated on-board, and develop an active control system that maintains altitude and the desired pitch angle. The team has prior experience in building these systems, and is confident in its ability to make this a reality. Then, we can incorporate cameras and GPS to help the plane fly to specific waypoints, further improving the autonomy stack.
We also want to get further data analysis working, by measuring pressure at the wing-tips or the current draw at various flight altitudes. With some of that aerospace-grade aerodynamic analysis to quantify the ground effect, the team could optimize the wing geometry and improve performance.
Going beyond the plane we have, the possibilities for commercializing ground effect planes are endless: aerial cargo transport and commercial travel that is more efficient than current air transport but much faster than sea transport.
## References
[1] <https://ourworldindata.org/co2-emissions-from-aviation> | partial |

## Why Make CTFort?
Recently I had the chance to participate in the [BSides Ottawa CTF](https://shorten.ninja/bsides). While I had a great time, I noticed a few issues, mainly the fact that the Facebook CTF platform doesn't offer the ability to keep track of which flags have already been found. While the system does show the compromised countries, it does not show which flag was actually captured. This led to a few issues where people wasted time hunting the same flag because they did not know each other's progress.
## How does it work?
CTFort aims to act as a centralized progress tracker towards flag captures. All you need to do to get started is log in/register and then create a team. Once a team has been created, you can invite your friends using the provided invitation code. They just need to enter this code when joining the team and they will be in. Once a part of the team, each of the members will have access to the various modules to enhance their CTF experience. | ## Inspiration
Picture this, I was all ready to go to Yale and just hack away. I wanted to hack without any worry. I wanted to come home after hacking and feel that I had accomplished a functional app that I could see people using. I was not sure if I wanted to team up with anyone, but I signed up to be placed on a UMD team. Unfortunately none of the team members wanted to group up, so I developed this application alone.
I volunteered at Technica last week and saw the chaos that is team formation, and I saw the team formation at YHack. I think there is a lot of room for team formation to be more fleshed out, so that is what I set out to fix. I wanted to build an app that could make team building at hackathons efficient.
## What it does
Easily set up "Event rooms" for hackathons, allowing users to join the room, specify their interests, and message other participants that are LFT (looking for team). Once they have formed a team, they can easily get out of the chatroom simply by holding their name down and POOF, no longer LFT!
## How I built it
I built a Firebase server that stores a database of events. Events hold information regarding the event as well as a collection of all members that are LFT. After getting acquainted with Firebase, I just took the application piece by piece. The first order of business was adding an event, then displaying all events, then getting an individual event, adding a user, deleting a user, viewing all users, and then beautifying with lots of animations and proper Material Design.
## Challenges I ran into
Android Animation still seems to be much more complex than it needs to be, so that was a challenge and a half. I want my applications to flow, but the meticulousness of Android and its fragmentation problems can cause a brain ache.
## Accomplishments that I'm proud of
In general, I am very proud of the state of the app. I think it serves a very nifty core purpose that can be utilized in the future. Additionally, I am proud of the application's simplicity. I think I set out on a really solid and feasible goal for this weekend that I was able to accomplish.
I really enjoyed the fact that I was able to think about my project, plan ahead, implement piece by piece, go back and rewrite, etc until I had a fully functional app. This project helped me realize that I am a strong developer.
## What I learned
Do not be afraid to erase code that I already wrote. If it's broken and I have lots of spaghetti code, it's much better to try and take a step back, rethink the problem, and then attempt to fix the issues and move forward.
## What's next for Squad Up
I hope to continue to update the project as well as provide more functionality. I'm currently trying to get a published .apk on my Namecheap domain (squadupwith.me), but I'm not sure how long the DNS propagation will take. Also, the .apk currently only works on Android 7.1.1, so I will have to go back and add backwards compatibility for the Android users who are not on the glorious Nexus experience. | ## What it does
flarg.io is an Augmented Reality platform that allows you to play games and physical activities with your friends from across the world. The relative positions of each person will be recorded and displayed on a single augmented reality plane, so that you can interact with your friends as if they were in your own backyard.
The primary application is a capture the flag game, where your group will be split into two teams. Each team's goal is to capture the opposing flag and bring it back to the home base. Tagging opposing players in non-safe-zones puts them on a temporary time out, forcing them to go back to their own home base. May the best team win!
## What's next for flarg.io
Capture the flag is just the first of our suite of possible mini-games. Building off of the AR framework that we have built, the team foresees making other games like "floor is lava" and "sharks and minnows" with the same technology. | losing |
# cloud-copy
Cloud Copy is a novel solution to clipboard management. It allows you to copy something on one device, and paste it directly into another, seamlessly.
Cloud Copy differentiates itself from other clipboard applications by not depending on Chrome, or any browser you use. Versatile across all platforms, connect all your devices with a single account.
Everyone in our group except for one was inexperienced in front-end and back-end development. However, through his guidance and mentorship we were able to attain many new skills and have a functional product at the end of this hackathon.
A challenge that we faced was not being able to deploy the mobile version. This was largely due to authentication and encryption issues, but we hope to fix that in the coming future. | ## Inspiration
We wanted to make investing more community based, accessible, and beneficial to society.
## What it does
Allows people to trade stocks by democratic voting, and donate their capital gains to charity.
## How I built it
We used React Native to build our app, and we used Expo to manage how we viewed, tested, and ran our app on our mobile devices. Low-fidelity prototypes were built on scratch paper. The back-end was built in Python with Flask and hosted on Heroku; we also went serverless with Google Cloud Functions and CockroachDB.
## Challenges I ran into
We weren't able to implement all the extra functionality we envisioned, such as donations and profile tabs. Our development team took on the challenge of working with new tools and languages such as JavaScript, React Native, Expo, and API technologies. Other challenges included managing version control history, adding navigation in React Native, and installation and initial setup errors. The team had debugging issues that took up a substantial amount of our development time. Another struggle was working remotely from home and the technology limitations this presented.
## Accomplishments that we're proud of
Our development team had limited knowledge of mobile development prior to the hackathon. Given this, we're proud of what we were able to accomplish in a short period of time, as well as the teamwork we exhibited throughout the process. We were able to develop a minimum viable product and create app functionality like forms, cards, flatlists, touchable components, and styling, use the Alpaca API, build mobile navigation and a splash screen, learn how to use gradient colour in apps, and send debugging requests to the service. We're also proud of our idea which we believe, if implemented, has the potential to make a positive impact.
## What we learned
Our development team learned new technologies by using tools like React Native, Expo, and APIs. In particular, we saw how Expo has made development easier and more intuitive (e.g. publishing apps, automatic multi-platform compatibility, and React making app building a little more intuitive). During the opening ceremonies, we learned about different companies' APIs, and we learned how to incorporate CockroachDB's API into our project. We also learned a little about investing and how stocks work!
## What's next for RobinGood
Donations Feature = allow lending money to those with less buying power and allow donations to charitable causes.
Profile Feature = complete the profile page (users and management already created on back-end, but did not link everything)
Overall better styling = this wasn’t the minimum viable product of our project. That is why we concentrated much less on the styling, and much more on the actual functionality of our app. However, it would be beneficial for us to invest time in styling our app for better engagement with | ## Inspiration
Save Plate is an app that focuses on narrowing equity differences in society. It is made with the passion to address SDG goals such as Zero Hunger, Life on Land, Sustainable Cities and Communities, and Responsible Consumption and Production.
## What it does
It gives food facilities a platform to distribute their untouched meals to shelters via the Plate Saver app. It asks the restaurant to provide the number of meals that are available and could be picked up by shelters. It also gives the flexibility to specify any kind of food restriction, to respect cultural and health restrictions around food.
## How we built it
* Jav
## Challenges we ran into
Many of the challenges that my teammates and I ran into involved learning new skills, teamwork, and brainstorming.
## Accomplishments that we're proud of
Creating maps, working with
## What we learned
We believe our app is needed not only in one region but across the entire world. We are all taking steps towards building a safe community for everyone; therefore, we see our app's potential to run in collaboration with the UN so that together we can fight world hunger. | losing |
# Sharemuters
Web application built on NodeJS that uses the Mojio api to get user data. Finds other people that have similar trips as you so you can carpool with them and share the gas money!
Uses CockroachDB for storing user data.
## Built at nwhacks2017
### Sustainable Future
Reaching a sustainable future requires intermediate steps. Electric car adoption is currently under 1% in both the US and Canada. Most cities do not have an amazing transportation system like Vancouver's; they rely on cars. Carpooling can reduce the environmental impact of driving, and benefit users by saving money, meeting new people, and making commutes more enjoyable.
### Barriers
Transport Canada states the biggest barriers for carpooling are personal safety, flexibility, and effort to organize. Sharemuters targets all three of these inconveniences. Users are vetted through Mojio accounts, and the web app pairs drivers with similar commutes.
### Benefits
Carpooling is a large market; currently just under 10% of drivers in the US and Canada carpool, and 75% of those drivers carpool with family. In addition to reducing pollution, carpooling saves money on gas and parking, time spent commuting, and offers health benefits from reduced stress. Many companies also offer perks to employees that carpool. | ## Inspiration
We were inspired to create our detector by a desire to gain understanding of red team tactics and testing methodologies.
## What it does
The detector looks for running processes that match the names of known sniffing software, captures the information, and creates a hit on the security dashboard provided by Howler. The same process is repeated if a suspicious-looking command is executed. The solution adheres to the detection recommendations outlined in the MITRE analysis of the network sniffing technique.
<https://attack.mitre.org/techniques/T1040/>
## How we built it
The detector is built as a continually running Python script. The script uses subprocesses and system utilities to access data on running processes and executed commands, and checks these against a list of known potentially malicious events. The demo and testing were done through Atomic Red Team test cases:
<https://atomicredteam.io/discovery/T1040/>
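As a rough sketch of the process-matching idea, the loop below lists running process names with `ps` and compares them against a small, illustrative set of known sniffing tools; the tool list and the alerting call that would create a hit in Howler are placeholders, not the detector's actual values.

```python
# Illustrative sketch (assumptions: the tool list and alert hook are placeholders;
# the real detector also inspects executed commands and reports hits to Howler).
import subprocess
import time

KNOWN_SNIFFERS = {"tcpdump", "tshark", "wireshark", "dumpcap", "ettercap"}

def running_process_names() -> set:
    """Return the set of command names currently running (macOS/Linux `ps`)."""
    out = subprocess.run(["ps", "-axo", "comm="], capture_output=True, text=True)
    return {line.strip().rsplit("/", 1)[-1].lower() for line in out.stdout.splitlines()}

def check_once() -> None:
    for name in running_process_names() & KNOWN_SNIFFERS:
        # Placeholder for the call that creates a hit on the Howler dashboard.
        print(f"[ALERT] possible network sniffing tool running: {name}")

if __name__ == "__main__":
    while True:  # continually running, as described above
        check_once()
        time.sleep(10)
```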
## Challenges we ran into
The Atomic Red Team testing framework required modification to run on the host system, and getting logs of executed commands is difficult.
## Accomplishments that we're proud of
We are pleased that the script runs as expected and was able to detect both command execution and process creation.
## What we learned
* Containerized red team testing using atomic red team and Docker
* Analysis of system status using logging and process monitoring.
## What's next for Network Sniffing Detector - MacOS
Increasing the robustness of the program with more research into possible malicious processes.
* Using regex to allow for better classification of processes and commands. | ## Inspiration
The issue of traffic is one that affects us all. We found ourselves frustrated with how the problem of traffic keeps getting worse with urbanization and how there is no solution in sight. We wanted to tackle this issue in a way that benefits not just the individual but society as a whole.
## What it does
Commute uses ML to suggest alternative routes to users on their morning commute that reduce traffic congestion, in an effort to reduce urban smog.
## How we built it
This was all accomplished using Android Studio (Java) along with the Google Maps API and JSON files, plus sweat, tears, and a lot of hard work and collaboration.
## Challenges we ran into
We ran into multiple challenges whilst completing this challenging yet rewarding project. Our first challenge was figuring out how to extract the traffic data from our API and use it to determine which route would have the least congestion. Our other major challenge was generating the paths for our users to take and creating an algorithm that fairly rewarded points for each less congested route.
## Accomplishments that we're proud of
We were so proud to accomplish this in our limited time with emphasis on our amazing collaboration and relentless effort to produce something not only beneficial to drivers, but also the environment and many other social factors.
## What we learned
From this we learned how to collaborate with the goal of developing projects that benefit a range of stakeholders and in no way negatively impact the public, which reaffirmed our belief in the power of using software design to help society.
## What's next for Commute
In the future we hope to further improve Commute by rewarding users for taking public transportation, biking, or using any other more eco-friendly route to their destination, allowing us to create not only a cleaner environment but clearer roadways as well. | partial |
## Inspiration
According to the United States Department of Health and Human Services, 55% of the elderly are non-compliant with their prescription drug orders, meaning they don't take their medication according to the doctor's instructions, and 30% of these cases lead to hospital readmissions. Although there are many reasons why seniors don't take their medications as prescribed, memory loss is one of the most common causes. Elders with Alzheimer's or other related forms of dementia are prone to medication management problems. They may simply forget to take their medications, causing them to skip doses. Or, they may forget that they have already taken their medication and end up taking multiple doses, risking an overdose. Therefore, we decided to solve this issue with Pill Drop, which helps people remember to take their medication.
## What it does
The Pill Drop dispenses pills at scheduled times throughout the day. It helps people, primarily seniors, take their medication on time. It also saves users the trouble of remembering which pills to take, by automatically dispensing the appropriate medication. It tracks whether a user has taken said dispensed pills by starting an internal timer. If the patient takes the pills and presses a button before the time limit, Pill Drop will record this instance as "Pill Taken".
## How we built it
Pill Drop was built using a Raspberry Pi and an Arduino, which controlled servo motors, a button, and a touch sensor. It was coded in Python.
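A minimal Raspberry Pi sketch of the dispense-and-confirm loop is below. The GPIO pin numbers, servo duty cycles, and the length of the confirmation window are illustrative assumptions, not the project's actual values.

```python
# Minimal sketch of dispensing a pill and waiting for the "Pill Taken" button press
# (assumptions: pin numbers, servo angles, and the 60-second window are illustrative).
import time

import RPi.GPIO as GPIO

SERVO_PIN, BUTTON_PIN = 18, 23          # assumed wiring
CONFIRM_WINDOW_S = 60                   # how long the patient has to press the button

GPIO.setmode(GPIO.BCM)
GPIO.setup(SERVO_PIN, GPIO.OUT)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
servo = GPIO.PWM(SERVO_PIN, 50)         # standard 50 Hz hobby-servo signal
servo.start(0)

def dispense_pill() -> None:
    """Rotate the servo to drop one pill, then return to the rest position."""
    servo.ChangeDutyCycle(7.5)          # ~90 degrees (illustrative)
    time.sleep(1)
    servo.ChangeDutyCycle(2.5)          # back to the rest position
    time.sleep(1)
    servo.ChangeDutyCycle(0)            # stop sending pulses

def wait_for_confirmation() -> bool:
    """Start the internal timer; return True if the button is pressed in time."""
    deadline = time.time() + CONFIRM_WINDOW_S
    while time.time() < deadline:
        if GPIO.input(BUTTON_PIN) == GPIO.LOW:   # button pressed (active low)
            return True
        time.sleep(0.05)
    return False

try:
    dispense_pill()
    print("Pill Taken" if wait_for_confirmation() else "Missed dose")
finally:
    servo.stop()
    GPIO.cleanup()
```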
## Challenges we ran into
The first challenge we ran into was communicating between the Raspberry Pi and the Arduino, since none of us knew how to do that. Another challenge was structurally holding all the components needed in our project, making sure that all the "physics" aligned so that our product was structurally stable. In addition, having the Pi send an SMS text message was also new to all of us, so by incorporating a user interface - taking inspiration from HyperCare's user interface - we were finally able to send one too! Lastly, bringing our theoretical ideas to fruition was harder than expected, as we ran into multiple roadblocks in our code within the given time frame.
## Accomplishments that we're proud of
We are proud that we were able to create a functional final product that is able to incorporate both hardware (Arduino and Raspberry Pi) and software! We were able to incorporate skills we learnt in-class plus learn new ones during our time in this hackathon.
## What we learned
We learned how to connect and use a Raspberry Pi and an Arduino together, as well as how to incorporate a user interface across the two, with text messages sent to the user. We also learned that we can consolidate code at the end when we persevere and build each other's morale throughout the long hours of the hackathon - knowing that each of us can be trusted to work individually while staying continuously engaged with the team. (While, obviously, having fun along the way!)
## What's next for Pill Drop
Pill Drop's next steps include creating a high-level prototype, testing out the device over a long period of time, creating a user-friendly interface so users can adjust pill-dropping time, and incorporating patients and doctors into the system.
## UPDATE!
We are now working with MedX Insight to create a high-level prototype to pitch to investors! | ## Inspiration
Hi! I am a creator of Snapill. Recently, I had to undergo wisdom tooth surgery. The surgery wasn't that bad at all, however, I was prescribed around 5 different medications, from Tylenol to NSAIDs to Steroids. It was hard to keep track of all these medications, and as a result, I ended up going through more pain during my recovery time.
For other people, however, the consequences of medication error are far more dangerous. Around half of drug-related deaths in the US are related to prescription drug use and error (22,000 people, NCBI). Furthermore, it is much harder for older people or those going through complex diseases or surgeries to keep track of their medication requirements. My personal experience, along with these pressing issues, motivated me to design **Snapill**, a computer-vision-based medication app, in order to improve patient safety and awareness of the power a single pill has.
## What it does
You may notice Snapill is very similar to the popular messaging app Snapchat! This was intentional, as our goal is to ... . Aside from the UI, Snapill is very powerful and differs from a common medication app.
Specifically, Snapill is the first app that leverages advanced Computer Vision techniques like homography with your common LLM. The combination allows for robustness and covers several user failure points (mainly: bad lighting). Snapill allows a patient to scan their medication vial, and automatically generates important metadata like prescription name, expiration date, dosage, and possible times to take the pill. Furthermore, we believe awareness is a strong factor for increased patient safety, so we incorporated the Cerebras AI Inference models to allow users to "chat" with their medications (we weren't joking when we alluded to SnapChat). Users are able to request more information about the drug through the chat, along with a personal vanguard that scans for incompatibility issues between medications. Finally, Snapill provides reminders when it is time for users to take their medications (we couldn't forget that part, could we?)
## How we built it
Knowing we needed data, Shishir and I worked with Scriptpro, a pharmaceutical company in Kansas. After pitching our ideas to a senior software engineer (a warm reception), we were able to acquire around 20 vials with different medication names. They were used both to train a Roboflow model to detect labels and to test a custom OpenCV unwarping pipeline.
Snapill is a combination of a React Native app running on the user's phone and a Flask backend hosted on Heroku. This allows us to use homemade OpenCV functions while calling APIs such as Roboflow, Cerebras, and Firebase. User data, auth, and storage are all handled through Firebase, allowing for simple public downloads from other APIs. We tested the app using both an Android simulator and a physical iPhone. See below for more information about our backend process!
## Challenges we ran into
Our first challenge was that UPenn's network did not allow direct connections: we weren't able to run a local backend on a computer and forward its port so a phone on the same WiFi could send requests. This led us to deploy the backend on Heroku, which unfortunately slowed down build times but taught us a lot about deploying on production lines!
Another challenge we ran into was integrating our custom OpenCV code with Roboflow's Workflow design. However, it made us glad we chose to use a Python backend to process Computer Vision! We solved this issue by sandwiching our custom unwarp code between two different Roboflow Workflows.
## Accomplishments that we're proud of
The main accomplishment we're proud of is our CV pipeline. Regularly, OCR (Optical Character Recognition) is used for parsing data from images; however, we noticed it has a very high error rate when trying to detect text on curved surfaces. To solve this, Shishir and I researched and utilized a pipeline that uses concepts from linear algebra to essentially map the text on the curved label to a flat rectangle. This wasn't easy because there were very few resources for using OCR in edge cases like this. But we curved around (pun not intended) and explored the depths of OpenCV to achieve our goal. OCR was much more accurate afterwards and was able to safely parse label data.
Even cooler is that it could be scaled to pretty much any text on the curved part of a cylinder! All the method needs are certain extreme points, and the rest is up to interpolation. After PennApps, I plan to test the pipeline more and see how to automatically detect the points using edge/contour recognition.
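Here is a simplified Python/OpenCV sketch of that idea: given a handful of points sampled along the top and bottom edges of the label (the "extreme points"), interpolate a dense grid between them and remap it onto a flat rectangle. The edge-point sampling and output size are assumptions; the real pipeline is more involved.

```python
# Simplified flattening sketch (assumptions: edge points are already known, e.g.
# from the label detection step; the real pipeline has more robust interpolation).
import cv2
import numpy as np

def unwarp_label(image, top_pts, bottom_pts, out_w=600, out_h=200):
    """Map the region between two curved edges onto a flat out_w x out_h rectangle.

    top_pts / bottom_pts: (x, y) points sampled left-to-right along the top and
    bottom edges of the label as seen in the photo.
    """
    top_pts = np.asarray(top_pts, dtype=np.float32)
    bottom_pts = np.asarray(bottom_pts, dtype=np.float32)

    # Interpolate both edges to out_w samples across the label width.
    u = np.linspace(0, 1, out_w)
    t_src = np.linspace(0, 1, len(top_pts))
    b_src = np.linspace(0, 1, len(bottom_pts))
    top = np.stack([np.interp(u, t_src, top_pts[:, i]) for i in range(2)], axis=1)
    bot = np.stack([np.interp(u, b_src, bottom_pts[:, i]) for i in range(2)], axis=1)

    # For each output row, linearly interpolate between the top and bottom edges.
    v = np.linspace(0, 1, out_h)[:, None, None]              # (out_h, 1, 1)
    grid = (1 - v) * top[None, :, :] + v * bot[None, :, :]   # (out_h, out_w, 2)

    map_x = grid[:, :, 0].astype(np.float32)
    map_y = grid[:, :, 1].astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)

# flat = unwarp_label(photo, detected_top_points, detected_bottom_points)
# The flattened strip is then passed to OCR.
```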
## What we learned
For one of us, this was his first time both attending a hackathon AND actually creating a mobile app. While he knew React from previous projects, he still had to learn how to use expo along with several other libraries, and was able to pull it off! We all also learnt about how important reliability is when making patient technology solutions. Handling important data like dosage should be done so carefully.
## What's next for Snapill
The next update for Snapill would be finding a way to make the refilling process easier. Usually, there are phone numbers on vial labels, which would be a good next step to detect. | ## Inspiration
Our journey with CareLink started at a healthcare workshop, coupled with a personal story close to our hearts. The workshop highlighted the fragmented nature of continuity in care, often leaving patients under-supported. Additionally, a team member's personal experience with their grandmother's terminal illness underlined the challenges faced by seniors and others who can sometimes feel sidelined by the healthcare system. These narratives drove us to develop a solution that ensures continuous care and support.
## What it does
CareLink is a cutting-edge app crafted to enhance healthcare communication. Built with React, JavaScript, OpenAI, and EmailJS, it ensures fluid interaction between patients and healthcare professionals. Patients can effortlessly log their health metrics, and the app, using AI, smartly analyzes this data, directing it to the appropriate caregivers like nurses, PSWs, or family doctors. This not only guarantees prompt medical care but also significantly eases the workload of healthcare workers, streamlining the entire care process.
## How we built it
We built CareLink with a tech stack comprising React for the front end, JavaScript for functionality, OpenAI for intelligent data processing, and EmailJS for effective communication.
## Challenges we ran into
Integrating EmailJS and OpenAI API presented significant challenges, particularly in tailoring these technologies to fit our specific needs. We navigated through complexities to ensure smooth functioning and effective implementation of these technologies.
## Accomplishments that we're proud of
We are immensely proud of transforming an idea into a tangible product that can make a real difference. Implementing AI to enhance healthcare communication and successfully integrating various technologies are significant achievements for our team.
## What we learned
This journey has been a great learning curve. We've gained insights into AI applications in healthcare, honed our skills in React and JavaScript, and learned the intricacies of integrating third-party services like EmailJS for practical solutions.
## What's next for CareLink
Moving forward, we plan to expand CareLink's functionality, including more personalized AI-driven recommendations and broader integration with healthcare systems. Our goal is to continuously improve patient engagement and support, making CareLink an indispensable tool in healthcare communication. | partial |
# The Ultimate Water Heater
February 2018
## Authors
This is the TreeHacks 2018 project created by Amarinder Chahal and Matthew Chan.
## About
Drawing inspiration from a diverse set of real-world information, we designed a system with the goal of efficiently utilizing only electricity to heat and pre-heat water as a means to drastically save energy, eliminate the use of natural gases, enhance the standard of living, and preserve water as a vital natural resource.
Through the accruement of numerous API's and the help of countless wonderful people, we successfully created a functional prototype of a more optimal water heater, giving a low-cost, easy-to-install device that works in many different situations. We also empower the user to control their device and reap benefits from their otherwise annoying electricity bill. But most importantly, our water heater will prove essential to saving many regions of the world from unpredictable water and energy crises, pushing humanity to an inevitably greener future.
Some key features we have:
* 90% energy efficiency
* An average rate of roughly 10 kW/hr of energy consumption
* Analysis of real-time and predictive ISO data of California power grids for optimal energy expenditure
* Clean and easily understood UI for typical household users
* Incorporation of the Internet of Things for convenience of use and versatility of application
* Saving, on average, 5 gallons per shower, or over ****100 million gallons of water daily****, in CA alone. \*\*\*
* Cheap cost of installation and immediate returns on investment
## Inspiration
By observing the RhoAI data dump of 2015 Californian home appliance uses through the use of R scripts, it becomes clear that water-heating is not only inefficient but also performed in an outdated manner. Analyzing several prominent trends drew important conclusions: many water heaters become large consumers of gasses and yet are frequently neglected, most likely due to the trouble in attaining successful installations and repairs.
So we set our eyes on a safe, cheap, and easily accessed water heater with the goal of efficiency and environmental friendliness. In examining the inductive heating process replacing old stovetops with modern ones, we found the answer. It accounted for every flaw the data decried regarding water-heaters, and would eventually prove to be even better.
## How It Works
Our project essentially operates in several core parts running simulataneously:
* Arduino (101)
* Heating Mechanism
* Mobile Device Bluetooth User Interface
* Servers connecting to the IoT (and servicing via Alexa)
Repeat all processes simultaneously
The Arduino 101 is the controller of the system. It relays information to and from the heating system and the mobile device over Bluetooth. It responds to fluctuations in the system. It guides the power to the heating system. It receives inputs via the Internet of Things and Alexa to handle voice commands (through the "shower" application). It acts as the peripheral in the Bluetooth connection with the mobile device. Note that neither the Bluetooth connection nor the online servers and webhooks are necessary for the heating system to operate at full capacity.
The heating mechanism consists of a device capable of heating an internal metal through electromagnetic waves. It is controlled by the current (which, in turn, is manipulated by the Arduino) directed through the breadboard and a series of resistors and capacitors. Designing the heating device involed heavy use of applied mathematics and a deeper understanding of the physics behind inductor interference and eddy currents. The calculations were quite messy but mandatorily accurate for performance reasons--Wolfram Mathematica provided inhumane assistance here. ;)
The mobile device grants the average consumer a means of making the most out of our water heater and allows the user to make informed decisions at an abstract level, taking away from the complexity of energy analysis and power grid supply and demand. It acts as the central connection for Bluetooth to the Arduino 101. The device harbors a vast range of information condensed in an effective and aesthetically pleasing UI. It also analyzes the current and future projections of energy consumption via the data provided by California ISO to most optimally time the heating process at the swipe of a finger.
The Internet of Things provides even more versatility to the convenience of the application in Smart Homes and with other smart devices. The implementation of Alexa encourages the water heater as a front-leader in an evolutionary revolution for the modern age.
## Built With:
(In no particular order of importance...)
* RhoAI
* R
* Balsamiq
* C++ (Arduino 101)
* Node.js
* Tears
* HTML
* Alexa API
* Swift, Xcode
* BLE
* Buckets and Water
* Java
* RXTX (Serial Communication Library)
* Mathematica
* MatLab (assistance)
* Red Bull, Soylent
* Tetrix (for support)
* Home Depot
* Electronics Express
* Breadboard, resistors, capacitors, jumper cables
* Arduino Digital Temperature Sensor (DS18B20)
* Electric Tape, Duct Tape
* Funnel, for testing
* Excel
* Javascript
* jQuery
* Intense Sleep Deprivation
* The wonderful support of the people around us, and TreeHacks as a whole. Thank you all!
\*\*\* According to the Washington Post: <https://www.washingtonpost.com/news/energy-environment/wp/2015/03/04/your-shower-is-wasting-huge-amounts-of-energy-and-water-heres-what-to-do-about-it/?utm_term=.03b3f2a8b8a2>
Special thanks to our awesome friends Michelle and Darren for providing moral support in person! | ## Inspiration 🔥
While on the way to CalHacks, we drove past a fire in Oakland Hills that had started just a few hours prior, meters away from I-580. Over the weekend, the fire quickly spread and ended up burning an area of 15 acres, damaging 2 homes and prompting 500 households to evacuate. This served as a harsh reminder that wildfires can and will start anywhere as long as few environmental conditions are met, and can have devastating effects on lives, property, and the environment.
*The following statistics are from the year 2020[1].*
**People:** Wildfires killed over 30 people in our home state of California. The pollution is set to shave off a year of life expectancy of CA residents in our most polluted counties if the trend continues.
**Property:** We sustained $19b in economic losses due to property damage.
**Environment:** Wildfires have made a significant impact on climate change. It was estimated that the smoke from CA wildfires made up 30% of the state’s greenhouse gas emissions. UChicago also found that “a single year of wildfire emissions is close to double emissions reductions achieved over 16 years.”
Right now (as of 10/20, 9:00AM): According to Cal Fire, there are 7 active wildfires that have scorched a total of approx. 120,000 acres.
[[1] - news.chicago.edu](https://news.uchicago.edu/story/wildfires-are-erasing-californias-climate-gains-research-shows)
## Our Solution: Canary 🐦🚨
Canary is an early wildfire detection system powered by an extensible, low-power, low-cost, low-maintenance sensor network solution. Each sensor in the network is placed in strategic locations in remote forest areas and records environmental data such as temperature and air quality, both of which can be used to detect fires. This data is forwarded through a WiFi link to a centrally-located satellite gateway computer. The gateway computer leverages a Monogoto Satellite NTN (graciously provided by Skylo) and receives all of the incoming sensor data from its local network, which is then relayed to a geostationary satellite. Back on Earth, we have a ground station dashboard that would be used by forest rangers and fire departments that receives the real-time sensor feed. Based on the locations and density of the sensors, we can effectively detect and localize a fire before it gets out of control.
## What Sets Canary Apart 💡
Current satellite-based solutions include Google’s FireSat and NASA’s GOES satellite network. These systems rely on high-quality **imagery** to localize the fires, quite literally a ‘top-down’ approach. Google claims it can detect a fire the size of a classroom and notify emergency services in 20 minutes on average, while GOES reports a latency of 3 hours or more. We believe these existing solutions are not effective enough to prevent the disasters that constantly disrupt the lives of California residents as the fires get too big or the latency is too high before we are able to do anything about it. To address these concerns, we propose our ‘bottom-up’ approach, where we can deploy sensor networks on a single forest or area level and then extend them with more sensors and gateway computers as needed.
## Technology Details 🖥️
Each node in the network is equipped with an Arduino 101 that reads from a Grove temperature sensor. This is wired to an ESP8266 that has a WiFi module to forward the sensor data to the central gateway computer wirelessly. The gateway computer, using the Monogoto board, relays all of the sensor data to the geostationary satellite. On the ground, we have a UDP server running in Google Cloud that receives packets from the satellite and is hooked up to a Streamlit dashboard for data visualization.
## Challenges and Lessons 🗻
There were two main challenges to this project.
**Hardware limitations:** Our team as a whole is not very experienced with hardware, and setting everything up and getting the different components to talk to each other was difficult. We went through 3 Raspberry Pis, a couple Arduinos, different types of sensors, and even had to fashion our own voltage divider before arriving at the final product. Although it was disheartening at times to deal with these constant failures, knowing that we persevered and stepped out of our comfort zones is fulfilling.
**Satellite communications:** The communication proved to be tricky due to inconsistent timing between sending and receiving the packages. We went through various socket ids and ports to see if there were any patterns to the delays. Through our thorough documentation of steps taken, we were eventually able to recognize a pattern in when the packages were being sent and modify our code accordingly.
## What’s Next for Canary 🛰️
As we get access to better sensors and gain more experience working with hardware components (especially PCB design), the reliability of our systems will improve. We ran into a fair amount of obstacles with the Monogoto board in particular, but as it was announced as a development kit only a week ago, we have full faith that it will only get better in the future. Our vision is to see Canary used by park services and fire departments in the most remote areas of our beautiful forest landscapes in which our satellite-powered sensor network can overcome the limitations of cellular communication and existing fire detection solutions. | ## Inspiration
Our inspiration for this project is that we, like many people, enjoy taking long showers. However, this is bad for the environment and it wastes water as well. This app is a way for us and other people to monitor their water usage effectively and take steps to conserve water, especially in the face of worsening droughts.
## What it does
Solution for effective day-to-day monitoring of water usage
Most people only know how much water they would use monthly from their water bill
This project is meant to help residents effectively locate which areas of their home are using too much water, potentially helping them assess leaks.
It also helps them save money by locating areas that are conserving water well so they can continue to save water in that area.
## How we built it
We chose to approach this project by using machine learning to help a user conserve their water by having an algorithm predict what sources of water (such as a toilet or a faucet) are using the most amount of water. We came up with a concept for the app and created a wireframe to outline it as well. The app would be paired with the use of sensors, which can be installed in the pipes of the various water sources in the house. We built this project using Google Colab, Canva for the wireframe, and Excel for the data formatting.
## Challenges we ran into
We spent a bit of time trying to look for datasets to match our goals, and we ended up having to format some of them differently.
We also had to change how our graphs looked because we wanted to display the data in the clearest way possible.
## Accomplishments that we're proud of
We formatted and used datasets pretty well. We are glad that we created a basic wireframe to model our app and we are proud we created a finished product, even if its not in a functional form yet.
## What we learned
We learned how to deal with data sets and how to put lines into graphs and we also learned how to analyze graphs at a deeper level and to make sure they matched the goals of our project.
## What's next for Hydrosaver
In the future we hope to create the sensor for this project and find a way to pair it with an accompanying app. We also hope to gain access to or collect more comprehensive data about water consumption in homes. | winning |
## Inspiration
According to WebAim, in 2021, the top 1 million websites had an average of 51.4 accessibility errors on the homepage alone (<https://webaim.org/projects/million/>). After learning about the lack of website accessibility, we wanted to find a solution for improving the user experience for disabled individuals. Creating accessible websites ensures equal access for all, aligns with social responsibility and innovation, and benefits both users and businesses.
## What it does
The user first inputs the URL of a website and after the click of a button, our web application identifies and displays various accessibility issues it finds on the web page through visual graphical elements (i.e. pie charts and bar graphs).
## How we built it
We split up our project into frontend and backend tasks. The frontend was responsible for collecting a user-inputted website URL and displaying the accessibility data as graphical elements using Taipy’s chart GUI functionality. The backend consisted of a GET request to the WAVE API using the URL passed in from the data. Then, it returned the accessibility issues from the WAVE API, capturing the necessary data in JSON format. We captured the JSON data into arrays and fed the information to the Taipy chart functions to display the relevant fields to the user. Both the frontend and the backend leveraged Taipy’s full-stack development capabilities.
## Challenges we ran into
As a relatively new library, we didn’t have many resources to consult when building our Taipy application. The biggest challenge was navigating the functionalities present in the library and making sure that we could return the right data to the user. One challenge we had was taking the user-inputted URL and using that in our GET request to the WAVE API. This was difficult as we could not get the data to update in real-time when a new URL was inputted. In the end, we resolved this by adding the JSON parsing functionality in an `on_button_action` function and realized that we had to update the `state` variable in order to have the data refreshed to accurately reflect the user-inputted URL field.
## Accomplishments that we're proud of
We’re proud that we were able to debug our issues particularly since there was not a lot of support for using Taipy. We were also a new team and were quick to come up with an idea and start collaborating on the project. While we didn’t get as much sleep this weekend, we are proud that we were able to get some rest and participate in workshops alongside working on this project!
## What we learned
Here are a couple of things we learned:
* How to develop a multi-page web application from scratch using Python
* How to handle HTTP requests using the WAVE API in Python
* How to parse and collect necessary data from a JSON file
* How to display information and datasets through Taipy bar graphs and pie charts
* How to break down a project into manageable tasks
* How to create PR templates on Github
## What's next for AccChecky
As next steps, we would like to highlight areas of the website with colour contrasting issues through heatmaps so the user has more actionable items to work on to improve the accessibility. We would also look into storing previous accessibility checks to display gradual improvement over time through other graphical charts (i.e. line graphs). Lastly, to scale our app, we could compare inputted websites against large datasets to show severity of accessibility issues from the particular user-inputted URL to other websites that exist on the internet as a form of calibration. | ## Inspiration
Ever join a project only to be overwhelmed by all of the open tickets? Not sure which tasks you should take on to increase the teams overall productivity? As students, we know the struggle. We also know that this does not end when school ends and in many different work environments you may encounter the same situations.
## What it does
tAIket allows project managers to invite collaborators to a project. Once the user joins the project, tAIket will analyze their resume for both soft skills and hard skills. Once the users resume has been analyzed, tAIket will provide the user a list of tickets sorted in order of what it has determined that user would be the best at. From here, the user can accept the task, work on it, and mark it as complete! This helps increase productivity as it will match users with tasks that they should be able to complete with relative ease.
## How we built it
Our initial prototype of the UI was designed using Figma. The frontend was then developed using the Vue framework. The backend was done in Python via the Flask framework. The database we used to store users, projects, and tickets was Redis.
## Challenges we ran into
We ran into a few challenges throughout the course of the project. Figuring out how to parse a PDF, using fuzzy searching and cosine similarity analysis to help identify the users skills were a few of our main challenges. Additionally working out how to use Redis was another challenge we faced. Thanks to the help from the wonderful mentors and some online resources (documentation, etc.), we were able to work through these problems. We also had some difficulty working out how to make our site look nice and clean. We ended up looking at many different sites to help us identify some key ideas in overall web design.
## Accomplishments that we're proud of
Overall, we have much that we can be proud of from this project. For one, implementing fuzzy searching and cosine similarity analysis is something we are happy to have achieved. Additionally, knowing how long the process to create a UI should normally take, especially when considering user centered design, we are proud of the UI that we were able to create in the time that we did have.
## What we learned
Each team member has a different skillset and knowledge level. For some of us, this was a great opportunity to learn a new framework while for others this was a great opportunity to challenge and expand our existing knowledge. This was the first time that we have used Redis and we found it was fairly easy to understand how to use it. We also had the chance to explore natural language processing models with fuzzy search and our cosine similarity analysis.
## What's next for tAIket
In the future, we would like to add the ability to assign a task to all members of a project. Some tasks in projects *must* be completed by all members so we believe that this functionality would be useful. Additionally, the ability for "regular" users to "suggest" a task. We believe that this functionality would be useful as sometimes a user may notice something that is broken or needs to be completed but the project manager has not noticed it. Finally, something else that we would work on in the future would be the implementation of the features located in the sidebar of the screen where the tasks are displayed. | ## Inspiration
As we all know the world has come to a halt in the last couple of years. Our motive behind this project was to help people come out of their shells and express themselves. Connecting various people around the world and making them feel that they are not the only ones fighting this battle was our main objective.
## What it does
People share their thought processes through speaking and our application identifies the problem the speaker is facing and connects him to a specialist in that particular domain so that his/her problem could be resolved. There is also a **Group Chat** option available where people facing similar issues can discuss their problems among themselves.
For example, if our application identifies that the topics spoken by the speaker are related to mental health, then it connects them to a specialist in the mental health field and also the user has an option to get into a group discussion which contains people who are also discussing mental health.
## How we built it
The front-end of the project was built by using HTML, CSS, Javascript, and Bootstrap. The back-end part was exclusively written in python and developed using the Django framework. We integrated the **Assembly AI** file created by using assembly ai functions to our back-end and were successful in creating a fully functional web application within 36 hours.
## Challenges we ran into
The first challenge was to understand the working of Assembly AI. None of us had used it before and it took us time to first understand it's working. Integrating the audio part into our application was also a major challenge. Apart from Assembly AI, we also faced issues while connecting our front-end to the back-end. Thanks to the internet and the mentors of **HackHarvard**, especially **Assembly AI Mentors** who were very supportive and helped us resolve our errors.
## Accomplishments that we're proud of
Firstly, we are proud of creating a fully functional application within 36 hours taking into consideration all the setbacks we had. We are also proud of building an application from which society can be benefitted. Finally and mainly we are proud of exploring and learning new things which is the very reason for hackathons.
## What we learned
We learned how working as a team can do wonders. Working under a time constraint can be a really challenging task, aspects such as time management, working under pressure, the never give up attitude and finally solving errors which we never came across are some of the few but very important things which we were successful in learning. | partial |
## Inspiration
We came up with the idea of Budget Bites when we faced the challenge of affording groceries and restaurants in the city, as well as the issue of food waste and environmental impact. We wanted to make a web app that would help people find the best deals on food and groceries that are close to expiry or on the date of expiry, and incentivize them to buy them before they go to waste by offering discounts on the items. With our app, you can easily find quick groceries for a fast meal, and see the cheapest and freshest options available in your area, along with the distance and directions to help you reduce your carbon footprint. Budget Bites is our pocket-friendly food guide that helps you save money, enjoy good food, and protect the planet every day of the week.
## What it does
Budget Bites is a website that gives you a list of the prices, freshness, and distance of food and groceries near you. You can also get discounts and coupons on items that are close to sale, and enjoy good food every day of the week without breaking the bank or harming the planet. Budget Bites is your pocket-friendly food guide that makes eating well easy and affordable.
## How we built it
We built the web app with React.js, Typescript, Vite, and React Router, as well as React Context Hooks. For styling and design, we utilized SASS and TailwindCSS. We used various React components/libraries such as Fuse.js and React-toast, as well as a payment system using Stripe.
## Challenges we ran into
One of our biggest challenges was trying to integrate the JavaScript SDK for Google Maps into our program. This was very new to us as none of us had any any experience with the Maps API. Getting the Google Maps API to function took hours of intense problems solving. It didn't stop there though as right after that we had to try adding landmarks which was another problem however we overcame it by looking through documentation and through the consultation of mentors
## Accomplishments that we're proud of
An accomplishment that we are proud of is that we figured out how to use google maps jdk and successfully implemented the google maps features to our web app. We also figured out how to calculate distances from where the user is to the store as well.
## What we learned
One of the key skills that we gained over the duration of this hackathon is that we learnt how to implement Google Maps to our system. This involved us learning how the Maps SDKs worked to integrate them into our application to provide proper location services. We also delved into the realm of distance and time calculations using two points. Using Google Maps, we learned how to determine the distance between where the user starts and where his end point is supposed to be as well as an estimate for the time it will take to reach the given point.
## What's next for Budget Bites
Our next steps for Budget Bites is to make it a more frequently used app as well as a global app that helps people around the world enjoy good food without wasting money or resources. To do this, we will make sure our app is tailored to the languages, cultures, preferences, and regulations of each market we enter. We will use Google Translate to make our app accessible and understandable in different languages. We will also work with local experts, partners, or customers to learn about their specific needs and expectations. We will also follow the local laws and standards that apply to our app and its features. We will design and execute a marketing strategy and a distribution plan for our app in each market we target. We will use Facebook Ads and Google Ads to create and run online campaigns that highlight the benefits and features of our app, such as saving money, reducing food waste, and enjoying good food. We will also use social media platforms such as Instagram and Twitter to share deals, testimonials, and reviews that showcase the experience that people have with budget bites to allow a greater influx of customers. | ## Inspiration
Let's be honest here, who **doesn't** go to hackathons for the free snacks and swag. Obviously everyone does. Snack should be accessible, in fact, Google makes sure that **no one** should be more than 200 feet away from food. Then why do hackers need to climb multiple floors and wait for snack trolleys to come around?
Here's the thing about hackathons. The first day is all hype and slowly the scope of the project sets in. By the close of the second day, everyone is tired and in desperate need of a change of scenery and some fresh ideas. Luckily, hackathons are home to some of the most big brain people we've ever met, and those very people will be able to provide their expertise.
What if...just what if, we could combine both of those ideas into one platform?
## What it does
The project is a platform for various hackers at hackathons to trade help for snacks. Say someone doesn't know how to implement something in React.js, but since there are so many talented and knowledgable hackers at a hackathon, they can send them a solution right away. The solution is held in a temporary space in the database until the user who request help, delivers the snack to the solution giver.
This brokers a Win-Win interaction between them both! One gets snacks (much needed fuel for a hackathon), and one gets a great solution! (also much needed for a hackathon)
## How we built it
Since both of us were new to hackathons and web-development in general, we decided against going for a full fledged full stack application with REST API and JWT authentication. We wanted to make sure we could finish this project and get it out there because it's just such a fun little concept! We used [Vue.js](https://vuejs.org) for the frontend framework, [tailwind css](https://tailwindcss.com) for all of our styling needs, and [Firebase](https://firebase.google.com) as our backend. We used Firestore for our database and seamlessly provided updates to the frontend. Additionally, to make the problem statements easier to read for others, we included a markdown library to render the input text, and any additional code they might have included.
## Challenges we ran into
Some of the challenges we ran into were managing the states of the user, the solutions, the snacks, and the delivery objects. In our code, we have a lot of actions in the vuex store, which made it hard to keep track of certain things so it complicated our code and made our development time longer.
## Accomplishments that we're proud of
As for our first hackathon, well one of us, we did surprisingly well and really enjoyed the process! It was really amazing learning the different APIs from sponsors and trying to figure out how it'll integrate into our app. What it exceptionally awesome was that we were able to finish all of this in just 48 hours! And whats more is it works as well (most of the time).
Using Vue.js was a great learning experience and we're really proud of our frontend skills now and how much we've learned about Vuex storage and state management patterns in vue.
## What we learned
Do not drink 6 cans of red bull in under 33 hours 😭
In all seriousness though, we learned so much about front-end development, making API calls, state and session management, and also Vue.js. Typescript was a pain sometimes, but it helped keep everything a lot more clean and tidy than if it was written in javascript. Additionally there were so many resources, so it was really a journey of exploring and learning, rather than building.
## What's next for SnhackDash
Pitching it to MLH of course! 🤭
Definitely polishing up every part of this app, as well as try to integrate some live messaging capabilities either through Firebase or through web sockets. We're also planning on breaking this app down into multiple micro-services in the future, so we can scale this up. Additionally, we're looking to integrate Twilio or a similar service for push notifications, so users can see real-time when they've received a solution proposal.
One major feature that we couldn't add in time was the ability for live navigation within the actual hackathon venue, and creating a dashboard to allow organizers to add in what kind of snacks there are for the snhackdashers to deliver. | # imHungry
## Inspiration
As Berkeley students we are always prioritizing our work over our health! Students often don't have as much time to go buy food. Why not pick a food for other students while buying your own food and make a quick buck? Or perhaps, place an order with a student who was already planning to head your way after buying some food for themselves? This revolutionary business model enables students to participate in the food delivery business while avoiding the hassles that are associated with the typical food delivery app. Our service does not require students to do anything differently than what they already do!
## What it does
This application allows students to be able to purchase or help purchase food from/for their fellow students. The idea is that students already purchase food often before heading out to the library or another place on campus. This app allows these students to list their plans in advance and allow other students to put in their orders as well. These buyers will then meet the purchaser at wherever they are expected to meet. That way, the purchaser doesn't need to make any adjustments to their plan besides buy a few extra orders! The buyers will also have the convenience of picking up their order near campus to avoid walking. This app enables students to get involved in the food delivery business while doing nearly nothing additional!
## How we built it
We used Flask, JavaScript, HTML/CSS, and Python. Some technologies we used include Mapbox API, Google Firebase, and Google Firestore. We built this as a team at CalHacks!
## Challenges we ran into
We had some trouble getting starting with using Google Cloud for user authentication. One of our team members went to the Google Cloud sponsor stand and was able to help fix part of the documentation!
## Accomplishments that we're proud of
We're proud of our use of the Mapbox API because it enabled us to use some beautiful maps in our application! As a food delivery app, we found it quite important that we are able to displays restaurants and locations on campus (for delivery) that we support and Mapbox made that quite easy. We are also quite proud of our use of Firebase and Firestore because we were able to use these technologies to authenticate users as Berkeley students while also quickly storing and retrieving data from the cloud.
## What we learned
We learned how to work with some great APIs provided by Mapbox and Google Cloud!
## What's next for imHungry
We hope to complete our implementation of the user and deliverer interface and integrate a payments API to enable users to fully use the service! Additional future plans are to add time estimates, improve page content, improve our back-end algorithms, and to improve user authentication. | losing |
## Inspiration
At Carb0, we're committed to empowering individuals to take control of their carbon footprint and contribute to a more sustainable future. Our inspiration comes from the fact that 72% of CO2 emissions could be reduced by changes in consumer behavior, yet many companies lack the motivation to conduct ESG reports if not required by investors or the government. We believe that establishing consumer-driven ESG can drive companies to be accountable and take action to provide more sustainable products and services.
## What it does
We created **a personal carbon tracker** that **incentivizes** customers to adopt low-carbon lifestyles and **democratizes carbon footprint data**, making it easier for everyone to contribute to a sustainable future. Our platform provides information to influence consumers' purchase decisions and provides alternatives to help them make sustainable decisions. This way, we can encourage companies, investors, and the government to take responsibility and be more sustainable.
## How we built it
We began by identifying the problem and then went through an intense ideation process to converge on our consumer-driven ESG idea. We defined the user journey and pain points to create a convenient, incentivizing, and user-centric platform. Our reward system easily links to digital payment details and helps track CO2 emissions with data visualization and cashback based on monthly summaries. We also make product carbon footprint data easily accessible and searchable.
## Challenges we ran into
Our biggest challenge was integrating front-end and back-end and defining scope. We faced technical assumptions since the accurate database was not available due to time constraints.
## Accomplishments that we're proud of
Despite these challenges, we are proud of our self-sustaining system to establish consumer-driven ESG, successful integration of front-end and back-end with a user-friendly interface, and the intense ideation process we went through.
## What we learned
During this project, we learned how to rapidly prototype a digital app in limited time and resources, gained a deeper understanding of ESG, its current challenges, and potential solutions.
## What's next for Carb0 - Empower your carbon journey
Our next steps are to conduct user testing and iterations for a higher-fidelity prototype, enrich carbon footprint database coverage and accuracy. We also plan to potentially add Carb0 as an add-on for digital wallets to reach a broader audience and engage more people in a more sustainable lifestyle.
Our vision is that **consumer-driven ESG** will incentivize governments, investors, and companies to take more initiatives in creating a more sustainable world. Join us on our journey to a sustainable future with Carb0! | ```
var bae = require('love.js')
```
## Inspiration
It's a hackathon. It's Valentine's Day. Why not.
## What it does
Find the compatibility of two users based on their Github handles and code composition. Simply type the two handles into the given text boxes and see the compatibility of your stacks.
## How I built it
The backend is built on Node.js and Javascript while the front-end consists of html, css, and javascript.
## Challenges I ran into
Being able to integrate the Github API in our code and representing the data visually.
## What's next for lovedotjs
Adding more data from Github like frameworks and starred repositories, creating accounts that are saved to databases and recommending other users at the same hackathon, using Devpost's hackathon data for future hackathons, matching frontend users to backend users, and integrating other forms of social media and slack to get more data about users and making access easier. | ## Inspiration
As a team, we had a collective interest in sustainability and knew that if we could focus on online consumerism, we would have much greater potential to influence sustainable purchases. We also were inspired by Honey -- we wanted to create something that is easily accessible across many websites, with readily available information for people to compare.
Lots of people we know don’t take the time to look for sustainable items. People typically say if they had a choice between sustainable and non-sustainable products around the same price point, they would choose the sustainable option. But, consumers often don't make the deliberate effort themselves. We’re making it easier for people to buy sustainably -- placing the products right in front of consumers.
## What it does
greenbeans is a Chrome extension that pops up when users are shopping for items online, offering similar alternative products that are more eco-friendly. The extension also displays a message if the product meets our sustainability criteria.
## How we built it
Designs in Figma, Bubble for backend, React for frontend.
## Challenges we ran into
Three beginner hackers! First time at a hackathon for three of us, for two of those three it was our first time formally coding in a product experience. Ideation was also challenging to decide which broad issue to focus on (nutrition, mental health, environment, education, etc.) and in determining specifics of project (how to implement, what audience/products we wanted to focus on, etc.)
## Accomplishments that we're proud of
Navigating Bubble for the first time, multiple members coding in a product setting for the first time... Pretty much creating a solid MVP with a team of beginners!
## What we learned
In order to ensure that a project is feasible, at times it’s necessary to scale back features and implementation to consider constraints. Especially when working on a team with 3 first time hackathon-goers, we had to ensure we were working in spaces where we could balance learning with making progress on the project.
## What's next for greenbeans
Lots to add on in the future:
Systems to reward sustainable product purchases. Storing data over time and tracking sustainable purchases. Incorporating a community aspect, where small businesses can link their products or websites to certain searches.
Including information on best prices for the various sustainable alternatives, or indicators that a product is being sold by a small business. More tailored or specific product recommendations that recognize style, scent, or other niche qualities. | partial |
## Inspiration
In recent times, we have witnessed undescribable tragedy occur during the war in Ukraine. The way of life of many citizens has been forever changed. In particular, the attacks on civilian structures have left cities covered in debris and people searching for their missing loved ones. As a team of engineers, we believe we could use our skills and expertise to facilitate the process of search and rescue of Ukrainian citizens.
## What it does
Our solution integrates hardware and software development in order to locate, register, and share the exact position of people who may be stuck or lost under rubble/debris. The team developed a rover prototype that navigates through debris and detects humans using computer vision. A picture and the geographical coordinates of the person found are sent to a database and displayed on a web application. The team plans to use a fleet of these rovers to make the process of mapping the area out faster and more efficient.
## How we built it
On the frontend, the team used React and the Google Maps API to map out the markers where missing humans were found by our rover. On the backend, we had a python script that used computer vision to detect humans and capture an image. Furthermore,
For the rover, we 3d printed the top and bottom chassis specifically for this design. After 3d printing, we integrated the arduino and attached the sensors and motors. We then calibrated the sensors for the accurate values.
To control the rover autonomously, we used an obstacle-avoider algorithm coded in embedded C.
While the rover is moving and avoiding obstacles, the phone attached to the top is continuously taking pictures. A computer vision model performs face detection on the video stream, and then stores the result in the local directory. If a face was detected, the image is stored on the IPFS using Estuary's API, and the GPS coordinates and CID are stored in a Firestore Database. On the user side of the app, the database is monitored for any new markers on the map. If a new marker has been added, the corresponding image is fetched from IPFS and shown on a map using Google Maps API.
## Challenges we ran into
As the team attempted to use the CID from the Estuary database to retrieve the file by using the IPFS gateway, the marker that the file was attached to kept re-rendering too often on the DOM. However, we fixed this by removing a function prop that kept getting called when the marker was clicked. Instead of passing in the function, we simply passed in the CID string into the component attributes. By doing this, we were able to retrieve the file.
Moreover, our rover was initally designed to work with three 9V batteries (one to power the arduino, and two for two different motor drivers). Those batteries would allow us to keep our robot as light as possible, so that it could travel at faster speeds. However, we soon realized that the motor drivers actually ran on 12V, which caused them to run slowly and burn through the batteries too quickly. Therefore, after testing different options and researching solutions, we decided to use a lithium-poly battery, which supplied 12V. Since we only had one of those available, we connected both the motor drivers in parallel.
## Accomplishments that we're proud of
We are very proud of the integration of hardware and software in our Hackathon project. We believe that our hardware and software components would be complete projects on their own, but the integration of both makes us believe that we went above and beyond our capabilities. Moreover, we were delighted to have finished this extensive project under a short period of time and met all the milestones we set for ourselves at the beginning.
## What we learned
The main technical learning we took from this experience was implementing the Estuary API, considering that none of our teammembers had used it before. This was our first experience using blockchain technology to develop an app that could benefit from use of public, descentralized data.
## What's next for Rescue Ranger
Our team is passionate about this idea and we want to take it further. The ultimate goal of the team is to actually deploy these rovers to save human lives.
The team identified areas for improvement and possible next steps. Listed below are objectives we would have loved to achieve but were not possible due to the time constraint and the limited access to specialized equipment.
* Satellite Mapping -> This would be more accurate than GPS.
* LIDAR Sensors -> Can create a 3D render of the area where the person was found.
* Heat Sensors -> We could detect people stuck under debris.
* Better Cameras -> Would enhance our usage of computer vision technology.
* Drones -> Would navigate debris more efficiently than rovers. | ## Inspiration
We as a team shared the same interest in knowing more about Machine Learning and its applications. upon looking at the challenges available, we were immediately drawn to the innovation factory and their challenges, and thought of potential projects revolving around that category. We started brainstorming, and went through over a dozen design ideas as to how to implement a solution related to smart cities. By looking at the different information received from the camera data, we landed on the idea of requiring the raw footage itself and using it to look for what we would call a distress signal, in case anyone felt unsafe in their current area.
## What it does
We have set up a signal that if done in front of the camera, a machine learning algorithm would be able to detect the signal and notify authorities that maybe they should check out this location, for the possibility of catching a potentially suspicious suspect or even being present to keep civilians safe.
## How we built it
First, we collected data off the innovation factory API, and inspected the code carefully to get to know what each part does. After putting pieces together, we were able to extract a video footage of the nearest camera to us. A member of our team ventured off in search of the camera itself to collect different kinds of poses to later be used in training our machine learning module. Eventually, due to compiling issues, we had to scrap the training algorithm we made and went for a similarly pre-trained algorithm to accomplish the basics of our project.
## Challenges we ran into
Using the Innovation Factory API, the fact that the cameras are located very far away, the machine learning algorithms unfortunately being an older version and would not compile with our code, and finally the frame rate on the playback of the footage when running the algorithm through it.
## Accomplishments that we are proud of
Ari: Being able to go above and beyond what I learned in school to create a cool project
Donya: Getting to know the basics of how machine learning works
Alok: How to deal with unexpected challenges and look at it as a positive change
Sudhanshu: The interesting scenario of posing in front of a camera while being directed by people recording me from a mile away.
## What I learned
Machine learning basics, Postman, working on different ways to maximize playback time on the footage, and many more major and/or minor things we were able to accomplish this hackathon all with either none or incomplete information.
## What's next for Smart City SOS
hopefully working with innovation factory to grow our project as well as inspiring individuals with similar passion or desire to create a change. | Long-distance is challenging for any relationship, whether romantic, platonic, or familial.
* 32.5% of college relationships are long-distance relationships (LDRs)
* Nearly Three-Quarters (72%) of Americans Feel Lonely.
The lack of physical presence often leads to feelings of disconnect and loneliness. Current solutions, such as video calls and messaging apps, lack the depth and immersion needed to truly feel connected. The most crucial aspect of a relationship, shared activities and involvement of senses in addition to just sight and sound, are often the hardest to achieve from a distance.
This Valentine’s weekend, we present to you… VR-Tines! 💗VR-Tines is an innovative VR experience designed to enhance long-distance relationships through immersive, interactive, and emotionally fulfilling activities.
## ⭐Key Experience Points
* **Collaborative Scrapbooking**: Couples can work together on a scrapbook, flipping through pages and dragging in photos and various decorative elements. This scrapbook feels like you’re actually working on it in the real world due to its alignment with 3d surfaces and interactive elements of flipping pages and dragging in elements. By having your partner right next to you, it’s like you’re working on it together in this shared space. Love to reflect on the mems :)
* **Shared Taste**: Get a themed-meal at the same time! Here, we use the two users’ favorite boba orders. Using the Doordash API, we synchronize the delivery of identical beverages or snacks to both partners during their VR date, which we track in the Doordash delivery simulator.
* **Enhanced Realism with Live Webcam Feed**: People typically use passthrough with VR headsets to feed what’s “real” into their experiences. We take advantage of this idea to stream a live webcam feed into the VR-tines experience, so it feels like your partner is actually sitting right next to you (and you see them in passthrough) as you do activities together!
* **Tab Bar Navigation**: We support toggling through the three main options: scrapbooking, Doordash, and home.
* **Social Impact**: VR-tines is about bringing people together. Our project has the potential to significantly reduce the emotional distance in long-distance relationships, fostering stronger bonds and happier couples.
This problem is also important to us because all of us are in long-distance relationships! However, we notice many similar issues that arise in all kinds of relationships in life, such as family, friends, work, etc due to the difficulty of being far apart. Therefore, we sought to solve this real user problem faced every day and create a better solution than current methods of call communication to create more seamless, immersive experiences to feel closer to our loved ones.
As first-time hackers and Stanford freshmen, the concept of home stretches across oceans to Myanmar and Vietnam, where our families reside. The yearning for a deeper connection with our loved ones, despite the geographical miles that separate us, sparked not just a need but a personal quest. Facing this hackathon, our greatest challenge was not the complexity of the technology or the novelty of the concept, but the mental hurdle of believing we could make a significant impact. This project became more than a hack; it evolved into a journey of discovery, learning, and overcoming, driven by our shared experiences of longing and the universal desire to feel closer to those we hold dear. It’s a testament to our belief that distance shouldn't dim the bonds of love and friendship but rather, with the right innovation, can be bridged beautifully. | winning |
## Inspiration
Over one-fourth of Canadians will have to deal with water damage in their homes at some point in their lives. This issue causes many Canadians overwhelming stress because of its sheer economic and residential implications.
In an effort to address these core issues, we have designed a solution that makes future leaks avoidable. Our prototype system, composed of software and hardware, will ensure house leaks are a thing of the past!
## What is our planned solution?
To prevent leaks, we have designed a system of components that, when functioning together, allows the user to monitor the status of their plumbing system.
Our system comprises:
> Two types of leak detection hardware:
>
> * Acoustic leak detectors: monitor abnormal sounds from pipes.
> * Water detection probes: monitor the presence of water in unwanted areas.
Our hardware components will send their data over a local network to be stored in the cloud.
> Software components:
>
> * A secure cloud to store vital information regarding pipe leakages.
> * A planned app/website able to receive this information.
## Business Aspect of Leakio
On its own, this solution is profitable through selling the hardware to consumers. For insurance companies, however, it is a vital solution with the potential to save millions of dollars.
It is far more economical to prevent a leak than to fix it after it has already happened. Paying the average cost of $10,900 USD to repair water damage or a freezing claim is now avoidable!
In addition to the money saved, our planned system will be able to send information to insurance companies for data purposes, such as which houses or areas have the most leaks, or individual risk assessment. This would allow insurers to set more appropriate rates for the consumer, to the benefit of both the consumer and the insurance company.
### Software
Front End:
This includes our app design in Figma, which was crafted using knowledge of proper design principles and ratios. Specifically, we wanted an app design that looked simple yet professional while still offering all of the complex features. This is something we are proud of, as we feel this component was successful.
Back End:
PHP, MySQL, Python
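As a rough illustration of how these pieces fit together, below is a minimal Python sketch of the sensor-to-database path: an HTTP endpoint that accepts a JSON reading and stores it in MySQL. Our actual endpoint logic is split between PHP and Python, and the `readings` table, field names, and credentials here are placeholders.

```python
# Minimal sketch (placeholder table/credentials): store incoming sensor readings in MySQL.
from flask import Flask, request
import mysql.connector

app = Flask(__name__)

def get_db():
    return mysql.connector.connect(
        host="localhost", user="leakio", password="***", database="leakio"
    )

@app.route("/reading", methods=["POST"])
def store_reading():
    # e.g. {"sensor_id": 3, "water_detected": 1, "sound_level": 0.42}
    data = request.get_json()
    db = get_db()
    cursor = db.cursor()
    cursor.execute(
        "INSERT INTO readings (sensor_id, water_detected, sound_level) VALUES (%s, %s, %s)",
        (data["sensor_id"], data["water_detected"], data["sound_level"]),
    )
    db.commit()
    cursor.close()
    db.close()
    return {"status": "ok"}

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```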
### Hardware
Electrical
* A custom PCB was designed from scratch using EAGLE
* It consists of a USB-C charging port, a lithium battery charging circuit, an ESP32, a water sensor connector, and a microphone connector
* The water sensor and microphone are extended away from the PCB, which is why they need connectors
3D-model
* The hub contains all of the electronics and the sensors
* Its easy-to-install design places the microphone within the walls, close to the pipes
## Challenges we ran into
Front-End:
There were many challenges we ran into, especially regarding some technical aspects of Figma, but the most challenging part was implementing the design.
Back-End:
This is where we faced most of our challenges, including building the acoustic leak detector, proper sound recognition, cloud development, and data transfer.
It was the first time any of us had used MySQL, and we created our database on the Google Cloud SQL platform. We also had to use both Python and PHP to retrieve and send data, two languages we are not super familiar with.
We also had no idea how to set up a neural network with PyTorch, and finding proper data to train on was very difficult.
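For reference, a minimal sketch of the kind of PyTorch classifier described here, taking a fixed-length vector of audio features (for example, a flattened spectrogram slice) and predicting leak versus no-leak; the feature size, labels, and training data below are illustrative assumptions, not the team's actual code.

```python
import torch
from torch import nn

class LeakClassifier(nn.Module):
    """Tiny feed-forward net over a fixed-length audio feature vector."""
    def __init__(self, n_features: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 2),          # two classes: normal pipe sound / leak
        )

    def forward(self, x):
        return self.net(x)

model = LeakClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on random stand-in data.
features = torch.randn(32, 128)        # batch of audio feature vectors (assumed shape)
labels = torch.randint(0, 2, (32,))    # 0 = normal, 1 = leak
optimizer.zero_grad()
loss = loss_fn(model(features), labels)
loss.backward()
optimizer.step()
```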
## Accomplishments that we're proud of
Learning a lot of new things within a short period of time.
## What we learned
Google Cloud:
Creating a MySQL database and setting up a Deep Learning VM.
MySQL:
Using MySQL and its syntax, and learning PHP.
Machine Learning:
How to set up PyTorch.
PCB Design:
Learning how to use EAGLE to design PCBs.
Raspberry Pi:
Auto-running Python scripts and splitting .wav files.
Others:
Don't leave the recording to the last hour; it is hard to cut it to 3 minutes with an explanation and a demo.
## What's next for Leakio
* Properly implement audio classification using PyTorch
* Possibly create a network of devices to use in a single home
* Find more economical components
* Code for ESP32 to PHP to Web Server
* Test on an ESP32 | Track: Social Good
[Disaster Relief]
[Best Data Visualization Hack]
[Best Social Impact Hack]
[Best Hack for Building Community]
## Inspiration
Do you ever worry about what is in your water? While some of us have the luxury of taking clean water for granted, some of us do not. In Flint, MI, another main water pipeline broke. Again. Under a boil-water advisory, citizens are subject to government inaction and lack of communication. Our goal is to empower communities with knowledge of what is in their water, using data analytics and crowd-sourced reports of pollutants found in tap water in people's homes.
## What it does
Water Crusader is an app that uses two categories of data to give communities an informed assessment of their water quality: publicly available government data and crowd sourced tap water assessments taken from people's homes. Firstly, it takes available government data of blood lead levels tested in children, records of home age to indicate the age of utilities and materials used in pipes, and income levels as an indicator of maintenance in a home. With the programs we have built, governments can expand our models by giving us more data to use. Secondly, users are supplied with a cost-effective, open source IOT sensor that uploads water quality measurements to the app. This empowers citizens to participate in knowing the water quality of their communities. As a network of sensors is deployed, real-time, crowd-sourced data can greatly improve our risk assessment models. By fusing critical information from these two components, we use regression models to give our users a risk measurement of lead poisoning from their water pipelines. Armed with this knowledge, citizens are empowered with being able to make more informed health decisions and call their governments to act on the problem of lead in drinking water.
## How we built it
Hardware:
To simulate the sensor system with the hardware available at HackMIT, we used an ESP32 and a DHT11 temperature and humidity sensor. The ESP32 reads temperature and humidity data from the DHT11. The data is received as JSON in Node-RED by specifying an HTTP request and sending the actual POST from the Arduino IDE.
Data Analytics:
Using IBM's Watson Studio and AI development tools, data from government sources was cleaned and used to model lead-poisoning risk. Blood lead levels tested in children, from the Centers for Disease Control, were the target feature we wanted to predict, with house age and poverty levels taken from the U.S. Census used as predictors.
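As a rough sketch of this regression step (not the actual Watson Studio pipeline), assuming a cleaned table with one row per area and illustrative column names:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Illustrative cleaned dataset: one row per census area (placeholder file and columns).
df = pd.read_csv("lead_risk.csv")
X = df[["median_house_age", "poverty_rate"]]
y = df["child_blood_lead_level"]

model = LinearRegression().fit(X, y)
print("coefficients:", dict(zip(X.columns, model.coef_)))

# Risk estimate for a home a user looks up, before any sensor data arrives.
query = pd.DataFrame([[78, 0.34]], columns=X.columns)  # e.g. an old home in a high-poverty area
print("predicted blood lead level:", model.predict(query)[0])
```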
## Challenges we ran into
1. We were limited by the amount of hardware available. We tried our best to create the best simulation of the sensor system possible.
2. It was hard to retrieve and clean government data, especially with the need to make data requests.
3. Not all of us are familiar with Node-RED js and there was a lot to learn!
## Accomplishments that we're proud of
1. Learning new software!
2. Working in a super diverse team!!
## What's next for Water Crusader
We will continue to do research for different water sensors that can be added to our system. From our research, we learned that there is a lead sensor in development. | ## Inspiration
The whole idea was not only to devise a task management system for the maintenance of Fannie Mae's properties, but also to exercise creativity and devise an interface that will help Fannie Mae make loss-minimizing decisions.
## What it does
I learned from Fannie Mae's engineers that one of their biggest problems is maintaining the properties and preventing theft of 'copper pipes' and 'electrical appliances' from them. FannieBae maps Fannie Mae's properties and classifies them according to thefts and statuses. This helps predict future thefts, since organized robbers follow a pattern and tend to commit crimes nearby.
It also helps give hints about the robbers:
if a recently vacated property was robbed or vandalized, it strongly hints that the defaulter might be behind it.
## How I built it
Back-end: Node.js
Data: JSON
Front-end: JavaScript, HTML, CSS.
## Challenges I ran into
Getting the back end in sync with the front end. I had zero past experience with Node.js, and it works in a way completely opposite to the technologies I was used to working with.
## Accomplishments that I'm proud of
Learned Node.Js in 24 hours :)
## What I learned
Node.Js, Command Line, Solving a problem from top to bottom i.e. Product Design to Development.
## What's next for FannieBae
Include the Twilio API to help the desk analyst communicate directly with the on-site inspector over the internet.
Include Clarifai API that will parse through the images and help predict upcoming damages to the property such as Mould on walls, termite infection etc. | winning |
#### Inspiration
The Division of Sleep Medicine at Harvard Medical School has stated that 250,000 drivers fall asleep at the wheel every day in the United States. CDC.gov reports that the United States sees 6,000 fatal crashes per year caused by these drowsy drivers. We members of the LifeLine team understand the situation; there is no way we are going to stop these commuters from driving home after a long day's work. So let us help them stay alert and awake!
#### What is LifeLine?
You're probably thinking "lifeline", like those calls to dad or mom they give out on "Who Wants to Be a Millionaire?" Or maybe you're thinking more literally: "a rope or line used for life-saving". In both cases, you are correct! The wearable LifeLine system connects with an Android phone and keeps the user safe and awake on the road by connecting them to a friend.
#### Technologies
Our prototype consists of an Arduino with an accelerometer built into a headset, monitoring the driver for that well-known head dip of fatigue. This headset communicates with a Go server, which provides the user's Android application with the accelerometer data through an HTTP connection. The Android app then processes the x and y tilt data to monitor the driver.
#### What it does
The user sets an emergency contact upon entry. Then, once in "drive" mode, the app displays the x and y tilt of the driver's head, relating it to an animated head that tilts to match the driver's. Upon sensing the driver's first few head nods, the LifeLine app provides auditory feedback beeps to keep the driver alert. If the driver's condition does not improve, it then sends a text to a pre-entered contact suggesting that the user is drowsy driving and that they should reach out. If the driver's state gets worse, it summons the LifeLine and calls their emergency contact.
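The production app is Android, but the escalation behaviour described above can be sketched language-agnostically; the thresholds and the nod-detection rule below are illustrative assumptions, written in Python for brevity.

```python
NOD_TILT_DEG = 25   # head-dip angle that counts as a nod (assumed)
BEEP_AFTER = 3      # nods before auditory feedback
TEXT_AFTER = 6      # nods before texting the pre-entered contact
CALL_AFTER = 9      # nods before calling the emergency contact

def monitor(tilt_stream, beep, send_text, place_call):
    """tilt_stream yields (x_tilt, y_tilt) pairs polled from the server."""
    nods = 0
    for x, y in tilt_stream:
        if abs(y) > NOD_TILT_DEG:      # forward head dip detected
            nods += 1
            if nods >= CALL_AFTER:
                place_call()
                nods = 0               # reset after fully escalating
            elif nods >= TEXT_AFTER:
                send_text()
            elif nods >= BEEP_AFTER:
                beep()
```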
#### Why call a friend?
Studies find conversation to be a great stimulus of attentiveness. Given that a large number of drivers are alone on the road, the resulting phone connections could save lives.
#### Challenges we ran into
Hardware is never much fun for software engineers...
#### What's next for LifeLine
* Wireless Capabilities
* Stylish and more comfortable to wear
* Saving data for user review
* GPS feedback for where the driver is when he is dozing off (partly completed already)
**Thanks for reading. Hope to see you on the demo floor!** - Liam | ## Inspiration
One of the biggest roadblocks during disaster relief is reestablishing the first line of communication between community members and emergency response personnel. Whether it is the aftermath of a hurricane devastating a community or searching for individuals in the backcountry, communication is the key to speeding up these relief efforts and ensuring a successful rescue of those at risk.
In the event of a hurricane, blizzard, earthquake, or tsunami, cell towers and other communication nodes can be knocked out leaving millions stranded and without a way of communicating with others. In other instances where skiers, hikers, or travelers get lost in the backcountry, emergency personnel have no way of communicating with those who are lost and can only rely on sweeping large areas of land in a short amount of time to be successful in rescuing those in danger.
This is where Lifeline comes in. Our project is all about leveraging communication technologies in a novel way to establish communication quickly, without the need for pre-existing infrastructure such as cell towers, satellites, or wifi access points, thereby speeding up natural disaster relief efforts and search and rescue missions and providing real-time metrics for emergency personnel to leverage.
Lifeline uses LoRa and Wifi technologies to create an on-the-fly mesh network to allow individuals to communicate with each other across long distances even in the absence of cell towers, satellites, and wifi. Additionally, Lifeline uses an array of sensors to send vital information to emergency response personnel to assist with rescue efforts thereby creating a holistic emergency response system.
## What it does
Lifeline consists of two main portions. First is a homebrewed mesh network made up of IoT and LoRaWAN nodes built to extend communication between individuals in remote areas. The second is a control center dashboard to allow emergency personnel to view an abundance of key metrics of those at risk such as heart rate, blood oxygen levels, temperature, humidity, compass directions, acceleration, etc.
On the mesh network side, Lifeline has two main nodes. A control node and a network of secondary nodes. Each of the nodes contains a LoRa antenna capable of communication up to 3.5km. Additionally, each node consists of a wifi chip capable of acting as both a wifi access point as well as a wifi client. The intention of these nodes is to allow users to connect their cellular devices to the secondary nodes through the local wifi networks created by the wifi access point. They can then send emergency information to response personnel such as their location, their injuries, etc. Additionally, each secondary node contains an array of sensors that can be used both by those in danger in remote communities or by emergency personnel when they venture out into the field so members of the control center team can view their vitals. All of the data collected by the secondary nodes is then sent using the LoRa protocol to other secondary nodes in the area before finally reaching the control node where the data is processed and uploaded to a central server. Our dashboard then fetches the data from this central server and displays it in a beautiful and concise interface for the relevant personnel to read and utilize.
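As a rough sketch of the control-node side of this flow (not the actual firmware, which is written in C), here is how a received payload could be pushed into the Firebase Realtime Database with the firebase_admin SDK; the packet format, node IDs, and database paths are our own illustrative assumptions.

```python
import firebase_admin
from firebase_admin import credentials, db

# Placeholder credentials and database URL.
cred = credentials.Certificate("service-account.json")
firebase_admin.initialize_app(cred, {"databaseURL": "https://lifeline-demo.firebaseio.com"})

def handle_lora_packet(raw: str) -> None:
    """Assumed packet format: 'node_id,heart_rate,spo2,temp_c,lat,lon'."""
    node_id, hr, spo2, temp, lat, lon = raw.split(",")
    db.reference(f"nodes/{node_id}/readings").push({
        "heart_rate": float(hr),
        "spo2": float(spo2),
        "temp_c": float(temp),
        "lat": float(lat),
        "lon": float(lon),
    })

handle_lora_packet("node-07,82,97.5,36.4,45.4215,-75.6972")
```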
Lifeline has several main use cases:
1. Establishing communication in remote areas, especially after a natural disaster
2. Search and Rescue missions
3. Providing vitals for emergency response individuals to control center personnel when they are out in the field (such as firefighters)
## How we built it
* The hardware nodes used in Lifeline are all built on the ESP32 microcontroller platform along with a SX1276 LoRa module and IoT wifi module.
* The firmware is written in C.
* The database is a real-time Google Firebase.
* The dashboard is written in React and styled using Google's Material UI package.
## Challenges we ran into
One of the biggest challenges we ran into in this project was integrating so many different technologies together. Whether it was establishing communication between the individual modules, getting data into the right formats, working with new hardware protocols, or debugging the firmware, Lifeline provided our team with an abundance of challenges that we were proud to tackle.
## Accomplishments that we're proud of
We are most proud of having successfully integrated all of our different technologies and created a working proof of concept for this novel idea. We believe that combining LoRa and wifi in this way can pave the way for a new era of fast communication that doesn't rely on heavy infrastructure such as cell towers or satellites.
## What we learned
We learned a lot about new hardware protocols such as LoRa as well as working with communication technologies and all the challenges that came along with that such as race conditions and security.
## What's next for Lifeline
We plan on integrating more sensors in the future and working on new algorithms to process our sensor data to get even more important metrics out of our nodes. | ## Inspiration
**1/5 of traffic accidents are caused by drowsiness**. The National Highway Traffic Safety Administration estimates that between 56,000 and 100,000 crashes are the direct consequence of drowsiness, resulting in more than 1,500 fatalities and 71,000 injuries annually. All four of us are united by having experienced crashes due to sleepiness, as passengers or as drivers, and that's why we wanted to build a non-invasive human-computer interface that improves safety on the road.
## What it does
MiSense is an **earbud that measures brain activity** (one-channel electroencephalography, EEG) through a single electrode placed in the ear. Thanks to wavelet analysis, it differentiates **alertness and drowsiness stages** and triggers a music alarm when it detects that the driver is drowsy, to avoid an accident.
## How we built it
We used the Neurosky Mindwave mobile one-channel EEG headset.
We modified it so that the electrode is placed inside the ear rather than on the forehead, to make our device as unintrusive as possible. For the purpose of early development, we applied conductive paste to the electrode to improve the signal quality.
The signals sent by the headset via Bluetooth were captured and parsed to extract meaningful data (the raw EEG signal sampled every 2 ms, plus computed EEG powers every second).
We then conducted multiple experiments to capture the mental state signals associated with drowsiness (beginning of falling asleep - easy this weekend - and eyes closing) and alertness (playing video games, watching engaging media and eyes opened).
All this data was gathered in a database that we analyzed with machine learning: we separated a train set and a test set and ran a Perceptron algorithm to classify drowsy versus alert stages, mainly based on the EEG powers (delta, theta, alpha, beta, gamma).
We eventually wrote a piece of code that receives the data from the headset in real time, runs it through the trained classifier, and plays music if the drowsy state is detected for a few seconds.
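A minimal sketch of the classifier described above, assuming each sample is the five EEG band powers plus an alert/drowsy label; the file layout, column order, and the persistence rule in the real-time check are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.model_selection import train_test_split

# Illustrative: rows of [delta, theta, alpha, beta, gamma] band powers
# followed by a 0 (alert) / 1 (drowsy) label.
data = np.loadtxt("eeg_powers.csv", delimiter=",")
X, y = data[:, :5], data[:, 5]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = Perceptron(max_iter=1000, tol=1e-3).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

def is_drowsy(recent_power_vectors, threshold=0.8):
    """Flag drowsiness only if most of the last few seconds are classified drowsy."""
    preds = clf.predict(np.asarray(recent_power_vectors))
    return preds.mean() >= threshold
```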
## Challenges we ran into
* Connecting the headset: finding the correct ports and setting baud rates took us a lot of time.
* Signal noise: when doing the first experiments to collect data, the hundreds of PCs and thousands of people around us in the stadium caused a lot of noise and the signals were unreadable, so we had to move to a room with no one around to continue our experiments.
* In-ear EEG: this is a very new area of research, so we didn't find a lot of papers to help us filter and analyse the data. This means we had to run a lot of experiments to begin to understand it ourselves and build our database from scratch.
## Accomplishments that we're proud of
We ended up with a classifier of drowsiness/alertness stages from in-ear EEG data: we are only at 70% accuracy, but considering that this is an entirely new research area and that we based our algorithms only on data collected during the weekend, we are really proud of the result and hope it will have a tremendous impact on drivers' safety. We are confident we can improve it quickly with better data collection and optimisation of the algorithms. We are also aware of the challenges to come due to the shocks and noise in the car.
## What we learned
* Filtering in-ear EEG signals
* Conducting a machine learning project from scratch
* Spending a whole weekend awake with a passionate team :)
## What's next for MiSense
* Mechanical design of the earbud and iterations (3d-printing)
* PCB design
* Improving signal filtering and machine learning algorithm (regression to measure the level of drowsiness)
* Customer interviews with truck, bus and taxi driver + market analysis
* Road trip SF-LA hitchhiking to test our prototype ;) | winning |
## Inspiration ⛹️♂️
Courts regularly receive many cases, and it is currently challenging to prioritize them. There are about 73,000 cases pending before the Supreme Court and about 44 million across all the courts of India, including cases that, as of January 2021, have been in the courts for more than 30 years. A software/algorithm should be developed for prioritizing cases and allocating hearing dates based on the following parameters:
* Time of filing of chargesheet
* Severity of the crime and sections involved
* Last hearing date
* Degree of responsibility of the alleged perpetrators.
To provide a solution to this problem, we thought of a system to prioritize the cases, considering various real-life factors using an efficient machine learning algorithm.
That's how we came up with **"e-Adalat"** ("digital law court" in English), e-Court Management System.
## What it does?
e-Adalat is a platform (website) that prioritizes court cases and suggests to judges the order in which these cases should be heard, so that cases do not pile up and no case is left pending for long periods of time. Judges and lawyers can create profiles and manage complete information about all of their cases. A lawyer can file a case along with all of its information in the portal, while a judge can view the suggested priority of the cases, generated by the ML model; cases are automatically assigned to judges based on their location. Both the judge and the lawyer can view the status of all the cases and edit them.
## How we built it?
While some of the team members were working on the front end, the others started creating a dummy dataset and evaluating machine learning algorithms. After testing many algorithms, we reached the conclusion that random forest regression was the best fit for this scenario. After developing the frontend and creating the machine learning model, we started working on the backend functionality of the portal, using Node.js as the runtime environment with Express.js for the backend logic and routes. This mainly involved authorization of judges and lawyers, linking the machine learning model with the backend, and storing and fetching information from the database while the model is running.
Once the backend was linked with the machine learning model, we integrated the backend and ML model with the frontend, and that's how we created e-Adalat.
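A minimal sketch of the kind of random-forest priority model described above, trained on a dummy dataset; the file name, feature columns, and the priority target are illustrative assumptions rather than the team's actual code.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Illustrative dummy dataset: one row per case (placeholder file and columns).
cases = pd.read_csv("dummy_cases.csv")
features = ["days_since_chargesheet", "severity_score",
            "days_since_last_hearing", "responsibility_degree"]
X, y = cases[features], cases["priority_score"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestRegressor(n_estimators=200, random_state=42).fit(X_train, y_train)
print("R^2 on held-out cases:", model.score(X_test, y_test))

# A higher predicted score means an earlier suggested hearing date in the judge's queue.
cases["predicted_priority"] = model.predict(X)
print(cases.sort_values("predicted_priority", ascending=False).head())
```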
## Challenges we ran into
We searched online for various datasets but were not able to find one that matched our requirements, and since we were not very familiar with creating datasets, we learned how to do that and then created our own.
Later on, we ran this data through various machine learning algorithms to get the best result. We also faced some problems while linking the ML model with the backend and building the web packs, but we were able to overcome them by searching the web and running various tests.
## Accomplishments that we're proud of
The fact that we managed to create a full-fledged portal from scratch on such a tight schedule, while our offline exams were going on, is a feat we are proud of. The portal can handle multiple judges and lawyers, prioritize cases, and handle all of the information securely. At the same time, we diversified our tech stacks and learned how to integrate machine learning algorithms neatly into our platform!
## What we learned
While building this project, we learned many new things; to name a few, we learned how to create datasets and test different machine learning algorithms. Apart from the technical aspects, we also learned a lot about law and legislation and how courts work in a professional environment. Since our project was primarily focused on law and order, we as a team needed to understand how cases are currently prioritized in courts and what gaps exist in this system.
Being in a team working under such a strict deadline, along with having exams at the same time, we learned time management while under pressure.
## What's next for e-Adalat
### We have a series of steps planned next for our platform :
* Improve UI/UX and make the website more intuitive and easy to use for judges and lawyers.
* Increase the scope of profile management to different judicial advisors.
* Case tracking for judges and lawyers.
* Filtering of cases and assignment on the basis of different types of judges.
* Increasing the accuracy of our existing Machine Learning model.
## Discord Usernames:
* Shivam Dargan - Shivam#4488
* Aman Kumar - 1Man#9977
* Nikhil Bakshi - nikhilbakshi#8994
* Bakul Gupta - Bakul Gupta#5727 | ## Inspiration
Data analytics can be **extremely** time-consuming. We strove to create a tool utilizing modern AI technology to generate analyses, such as trend recognition, on user-uploaded datasets. The inspiration behind our product stemmed from the growing complexity and volume of data in today's digital age. As businesses and organizations grapple with increasingly massive datasets, the need for efficient, accurate, and rapid data analysis became evident. We even saw this in the work of one of our sponsors, Capital One, which has volumes of financial transaction data that is very difficult to parse manually, or even programmatically.
We recognized the frustration many professionals faced when dealing with cumbersome manual data analysis processes. By combining **advanced machine learning algorithms** with **user-friendly design**, we aimed to empower users from various domains to effortlessly extract valuable insights from their data.
## What it does
On our website, a user can upload their data, generally in the form of a .csv file, which is then sent to our backend processes. These backend processes utilize Docker and MLBot to train an LLM which performs the proper data analyses.
## How we built it
The front end was very simple. We created the platform using Next.js and React.js and hosted it on Vercel.
The back end was created using Python, in which we employed technologies such as Docker and MLBot to perform data analyses and return charts, which were then processed on the front end using ApexCharts.js.
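As a simplified stand-in for the Docker/MLBot step (whose exact interface we are not reproducing here), a Flask endpoint like the following could accept the uploaded CSV and return basic pandas summaries for the front-end charts; the route name and response shape are assumptions.

```python
import json

import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/analyze", methods=["POST"])        # hypothetical route
def analyze():
    df = pd.read_csv(request.files["dataset"])  # the user-uploaded .csv
    return jsonify({
        "rows": int(len(df)),
        "columns": [str(c) for c in df.columns],
        "summary": json.loads(df.describe(include="all").to_json()),
        # numeric-only correlations: a simple input for trend/heatmap charts
        "correlations": json.loads(df.corr(numeric_only=True).to_json()),
    })

if __name__ == "__main__":
    app.run(port=5000)
```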
## Challenges we ran into
* It was one of our first times working in real time with multiple people on the same project. This advanced our understanding of how Git's features work.
* There was difficulty getting the Docker server to be publicly available to our front-end, since we had our server locally hosted on the back-end.
* Even once it was publicly available, it was difficult to figure out how to actually connect it to the front-end.
## Accomplishments that we're proud of
* We were able to create a full-fledged, functional product within the allotted time we were given.
* We utilized our knowledge of how APIs worked to incorporate multiple of them into our project.
* We worked positively as a team even though we had not met each other before.
## What we learned
* Learning how to incorporate multiple APIs into one product with Next.
* Learned a new tech-stack
* Learned how to work simultaneously on the same product with multiple people.
## What's next for DataDaddy
### Short Term
* Add a more diverse applicability to different types of datasets and statistical analyses.
* Add more compatibility with SQL/NoSQL commands from Natural Language.
* Attend more hackathons :)
### Long Term
* Minimize the amount of work workers need to do for their data analyses, almost creating a pipeline from data to results.
* Have the product be able to interpret what type of data it has (e.g. financial, physical, etc.) to perform the most appropriate analyses. | ## Our Inspiration
We all know how frustrating slow wifi is. When making a major decision such as buying a home, having accurate information about network speed can make all the difference. More subtle variations in network strength also have huge impacts in our increasingly internet driven world. Weak, inconsistent connections can wreak havoc on everything from office productivity to self driving cars.
## What it does
Our tools give you powerful insight into the network connection quality of an area so that you can make informed decisions when it really matters. With MapWorks, you simply enter an address that you are interested in, and we will show you the distance to the nearest cell tower. You can also visualize the differences in network strength across neighborhoods, providing a powerful tool for spotting dead zones in the network.
## How we built it
Front-end built with react, back-end with Flask. We used data on cell tower location from the Government of Canada in conjunction with data collected by Martello to estimate network strength.
## Challenges we ran into
Complex data transformations in Python. Integrating Google Maps with ReactJS.
## Accomplishments that we're proud of
Clean, user friendly UI.
## What we learned
How to find the distance to the nearest cell tower using the Haversine formula.
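For reference, a small sketch of the Haversine distance mentioned above; the tower list format is a placeholder for the government cell-tower dataset.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def nearest_tower(lat, lon, towers):
    """towers: iterable of (tower_id, lat, lon) rows from the open dataset."""
    return min(towers, key=lambda t: haversine_m(lat, lon, t[1], t[2]))
```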
## What's next for MapWorks
Dead zone prediction while planning trips and driving. We will use the Martello data on network strength and the data on cell tower location from the Government of Canada to analyze trends in network strength along the route we plan using the Google directions API. Our program will then reroute the car if it is approaching an area that we predict will have dangerously low network strength. As the car drives along the route, it can even contribute its own data, which will improve the predictions. | partial |
## Inspiration
As a team, we had a collective interest in sustainability and knew that if we could focus on online consumerism, we would have much greater potential to influence sustainable purchases. We also were inspired by Honey -- we wanted to create something that is easily accessible across many websites, with readily available information for people to compare.
Lots of people we know don’t take the time to look for sustainable items. People typically say if they had a choice between sustainable and non-sustainable products around the same price point, they would choose the sustainable option. But, consumers often don't make the deliberate effort themselves. We’re making it easier for people to buy sustainably -- placing the products right in front of consumers.
## What it does
greenbeans is a Chrome extension that pops up when users are shopping for items online, offering similar alternative products that are more eco-friendly. The extension also displays a message if the product meets our sustainability criteria.
## How we built it
Designs in Figma, Bubble for backend, React for frontend.
## Challenges we ran into
Three beginner hackers! It was the first time at a hackathon for three of us, and for two of those three it was our first time formally coding in a product experience. Ideation was also challenging: deciding which broad issue to focus on (nutrition, mental health, environment, education, etc.) and determining the specifics of the project (how to implement it, what audience/products we wanted to focus on, etc.).
## Accomplishments that we're proud of
Navigating Bubble for the first time, multiple members coding in a product setting for the first time... Pretty much creating a solid MVP with a team of beginners!
## What we learned
In order to ensure that a project is feasible, at times it’s necessary to scale back features and implementation to consider constraints. Especially when working on a team with 3 first time hackathon-goers, we had to ensure we were working in spaces where we could balance learning with making progress on the project.
## What's next for greenbeans
Lots to add on in the future:
Systems to reward sustainable product purchases. Storing data over time and tracking sustainable purchases. Incorporating a community aspect, where small businesses can link their products or websites to certain searches.
Including information on best prices for the various sustainable alternatives, or indicators that a product is being sold by a small business. More tailored or specific product recommendations that recognize style, scent, or other niche qualities. | ## Inspiration
We admired the convenience Honey provides for finding coupon codes. We wanted to apply the same concept except towards making more sustainable purchases online.
## What it does
Recommends sustainable and local business alternatives when shopping online.
## How we built it
Front-end was built with React.js and Bootstrap. The back-end was built with Python, Flask and CockroachDB.
## Challenges we ran into
Difficulties setting up the environment across the team, especially with cross-platform development in the back-end. Extracting the current URL from a webpage was also challenging.
## Accomplishments that we're proud of
Creating a working product!
Successful end-to-end data pipeline.
## What we learned
We learned how to implement a Chrome Extension. Also learned how to deploy to Heroku, and set up/use a database in CockroachDB.
## What's next for Conscious Consumer
First, it's important to expand to make it easier to add local businesses. We want to continue improving the relational algorithm that takes an item on a website, and relates it to a similar local business in the user's area. Finally, we want to replace the ESG rating scraping with a corporate account with rating agencies so we can query ESG data easier. | ## Inspiration
It's more important than ever for folks to inform themselves on how to reduce waste production, and the information isn't always as easy to find as it could be. With GreenShare, a variety of information on how to reduce, reuse and recycle any product is just a click away.
## What it does
Allows you to scan any product by barcode or object recognition, and then gives you insight into how you might proceed to reduce your waste.
## How we built it
Using a Flutter barcode scanner and image recognition package, and firebase for our database
## Challenges we ran into
The image recognition proved to be more finicky than anticipated
## Accomplishments that we're proud of
Having produced a nice UI that has all the basic functionality it needs to be the start of something new
## What we learned
To never underestimate the roadblocks you'll come across at any time! Also lots of Flutter
## What's next for GreenShare
Implementing some more communal aspects which would allow users to collaborate more in the green effort
## Ivey business model challenge
<https://docs.google.com/presentation/d/18f5gj-cJ79kL53FLt4npz_2CM2wjaN04F9OoPKwpPKs/edit#slide=id.g76c67f0fc7_6_66> | winning |
## Inspiration
Rizzmo was heavily inspired by our dissatisfaction for Gizmo's poor-quality science simulations.
## What it does
Rizzmo is a low-poly exploration game set in a thriving coral reef region. Learn about the coral reef ecosystem by exploring in a submarine and clicking on various marine life for descriptions and biological fun facts.
## How we built it
The project was built with Unity, and thus C#. Some of the 3D models were made using Blender.
## Challenges we ran into
Switching from Unreal to Unity / Setting up Unity Version Control / Pulling and Checking in with Unity Workspace / Underwater rendering (no tutorial)
## Accomplishments that we're proud of
Visually appealing / Completing the project / Quickly learning Unity & Blender
## What we learned
How to use Blender / How to use Unity 3D / How to do underwater rendering / How to animate in Unity / How to power nap
## What's next for Rizzmo
Implement more STEM related simulations and make it more accessible. | ## Inspiration
When attending crowded lectures or tutorials, it's fairly difficult to discern content from other ambient noise. What if a streamlined pipeline existed to isolate and amplify vocal audio while transcribing text from audio, and providing general context in the form of images? Is there any way to use this technology to reduce barriers to education? These were the questions we asked ourselves when deciding the scope of our project.
## The Stack
**Front-end :** react-native
**Back-end :** python, flask, sqlalchemy, sqlite
**AI + Pipelining Tech :** OpenAI, google-speech-recognition, scipy, asteroid, RNNs, NLP
## What it does
We built a mobile app which allows users to record video and applies an AI-powered audio processing pipeline.
**Primary use case:** Hard of hearing aid which:
1. Isolates + amplifies sound from a recorded video (pre-trained RNN model)
2. Transcribes text from isolated audio (google-speech-recognition)
3. Generates NLP context from transcription (NLP model)
4. Generates an image related to the topic being discussed (OpenAI API); a sketch of part of this pipeline follows the list.
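A minimal sketch of step 2 of this pipeline using the speech_recognition package named in the stack above; the vocal-isolation and image-generation steps are left as comments rather than guessed, since their exact APIs depend on the asteroid and OpenAI versions used.

```python
import speech_recognition as sr

def transcribe(wav_path: str) -> str:
    """Step 2: transcribe an already-isolated vocal track with Google's recognizer."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    try:
        return recognizer.recognize_google(audio)
    except sr.UnknownValueError:
        return ""

# Step 1 (vocal isolation with the pretrained RNN) and steps 3-4 (NLP context,
# OpenAI image generation) would wrap this call; their APIs are omitted here.
print(transcribe("isolated_lecture_audio.wav"))   # placeholder file name
```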
## How we built it
* Wireframed the UI in Figma
* Started building UI using react-native
* Researched models for implementation
* Implemented neural networks and APIs
* Testing, Testing, Testing
## Challenges we ran into
Choosing the optimal model for each processing step required careful planning. Algorithm design was also important, as responses had to be sent back to the mobile device as fast as possible to improve app usability.
## Accomplishments that we're proud of + What we learned
* Very high accuracy achieved for transcription, NLP context, and .wav isolation
* Efficient UI development
* Effective use of each team member's strengths
## What's next for murmr
* Improve AI pipeline processing, modifying algorithms to decrease computation time
* Include multi-processing to return content faster
* Integrate user-interviews to improve usability + generally focus more on usability | Innovation’s mindset is to build on existing technology, improving efficiency and fool-proofing everything around us. The problem is that this mindset believes in unlimited growth. This is the root of the environmental crisis. We are consuming earth’s resources at a rate that could make us the last generation to appreciate much of the planet’s natural beauties and assets that keep us alive. We need to think long term, looking ahead to guarantee that many further generations will live in a sustainable world that takes only what it needs and gives just as much back to sustain a natural balance, nurturing all life for a long, long time to come.
## Inspiration
We started our project by thinking of ideas that could combine different platforms for some sort of purpose. We first decided to do a project that would, somehow, consider ways to help the environment, address the effects of climate change, and think about natural disasters. We wanted to learn about augmented reality, and use a cloud system. Niantic inspired us to challenge ourselves to follow the path of AR, despite the fact that none of us had previous experience working on AR.
## What it does
RealFuture shows users the world around them as it will look in the not-so-distant future. We want to show the drastic effects that climate change is causing and will continue to cause to our planet. The idea of our project is to raise awareness of what we are doing to the planet and what will happen if we don't address the issues and find sustainable solutions.
## How we built it
RealFuture's AR features were developed with Unity, Vuforia, and C#. We used the Android development tools to create an app that connects to a phone's camera and looks up the user's location. The location data is then used to analyze which effects of climate change could potentially occur at that location. For the purposes of the hackathon, we limited our scope to the Boston area. The app then shows the effects of climate change using augmented reality.
## Challenges we ran into
-We had to learn how to use Unity to create the AR environment.
-We had difficulties creating an app that could be launched on our cellphones.
-Getting the camera to work, as we were overlapping images by accident.
## Accomplishments that we're proud of
-We learned how to design AR environments as a team!
-We have a functional AR app that connects to different APIs
-We are using technology to create awareness of an issue we care about
## What we learned
Technical skills to develop AR applications. This was perhaps the most exciting part of our project. We came to HackMIT with no idea of what our project would be but the past 24 hours became a journey to learn and develop our project using AR. As this hackathon is coming to an end, we can say that we are leaving with a better understanding of AR and many more projects to come. We already have ideas of what else we can design with similar interfaces.
## What's next for RealFuture
We would like to expand the locations where our app can work. Creating different simulations through AR for different locations around the world. Ideally, we want to show an honest representation of how climate change may affect our planet in the future if we don't take actions about it. For some of those scenarios, we already know it's too late, but we can still prevent others. | partial |
## Inspiration
To any financial institution, the most valuable asset to increase revenue, remain competitive and drive innovation, is aggregated **market** and **client** **data**. However, a lot of data and information is left behind due to lack of *structure*.
So we asked ourselves, *what is a source of unstructured data in the financial industry that would provide novel client insight and color to market research*? We chose to focus on phone call audio between a salesperson and a client at the investment banking level. This source of unstructured data is, more often than not, completely gone after a call ends, leaving valuable information completely underutilized.
## What it does
**Structerall** is a web application that translates phone call recordings to structured data for client querying, portfolio switching/management and novel client insight. **Structerall** displays text dialogue transcription from a phone call and sentiment analysis specific to each trade idea proposed in the call.
Instead of losing valuable client information, **Structerall** will aggregate this data, allowing the institution to leverage this underutilized information.
## How we built it
We worked with RevSpeech to transcribe call audio to text dialogue. From here, we connected to Microsoft Azure to conduct sentiment analysis on the trade ideas discussed, and displayed this analysis on our web app, deployed on Azure.
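A minimal sketch of the Azure sentiment step, using the azure-ai-textanalytics SDK on a list of trade-idea sentences pulled from the transcript; the endpoint, key, and example sentences are placeholders.

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<key>"),                           # placeholder
)

# Trade-idea snippets extracted from the call transcript.
trade_ideas = [
    "Client likes the idea of rotating into short-duration bonds.",
    "Client is hesitant about adding more tech exposure this quarter.",
]

for idea, result in zip(trade_ideas, client.analyze_sentiment(documents=trade_ideas)):
    print(idea, "->", result.sentiment, result.confidence_scores)
```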
## Challenges we ran into
We had some trouble deploying our application on Azure. This was definitely a slow point for getting a minimum viable product on the table. Another challenge we faced was learning the domain to fit our product to, and what format/structure of data may be useful to our proposed end users.
## Accomplishments that we're proud of
We created a proof of concept solution to an issue that occurs across a multitude of domains; structuring call audio for data aggregation.
## What we learned
We learnt a lot about deploying web apps, server configurations, natural language processing and how to effectively delegate tasks among a team with diverse skill sets.
## What's next for Structerall
We also developed some machine learning algorithms/predictive analytics to model credit ratings of financial instruments. We built a neural network to predict credit ratings of financial instruments and used clustering techniques to map credit ratings independently of S&P and Moody's. We unfortunately were not able to showcase this model, but we look forward to investigating this idea in the future. | ## Inspiration
My father put me in charge of his finances and in contact with his advisor, a young, enterprising financial consultant eager to make large returns. That might sound pretty good, but someone financially conservative like my father doesn't really want that kind of risk at this stage of his life. The opposite happened to my brother, who has time to spare and money to lose, but had a conservative advisor who didn't have the same fire. Both stopped their advisory services, but that came with its own problems. The issue is that most advisors have a preferred field but knowledge of everything, which makes the unknowing client susceptible to settling with someone who doesn't share their goals.
## What it does
Resonance analyses personal and investment traits to make the best matches between an individual and an advisor. We use basic information any financial institution has about their clients and financial assets as well as past interactions to create a deep and objective measure of interaction quality and maximize it through optimal matches.
## How we built it
The whole program is built in Python, using several libraries for gathering financial data, processing it, and building scalable models on AWS. The main differentiator of our model is its full utilization of past data during training to make analyses more holistic and accurate. Instead of going with a classification solution or a neural network, we combine several models to analyze specific user features and classify broad features before the main model, where we build a regression model for each category.
## Challenges we ran into
The group member crucial to building a front end could not make it, so our designs are not fully interactive. We also had much to code but not enough time to debug, which left the software unable to fully work. We spent a significant amount of time figuring out a logical way to measure the quality of interaction between clients and financial consultants. We came up with our own algorithm to quantify non-numerical data, as well as rating clients' investment habits on a numerical scale. We assigned a numerical bonus to clients who consistently invest at a certain rate. The mathematics behind Resonance was one of the biggest challenges we encountered, but it ended up being the foundation of the whole idea.
## Accomplishments that we're proud of
Learning a whole new machine learning framework using SageMaker and crafting custom, objective algorithms for measuring interaction quality and fully utilizing past interaction data during training by using an innovative approach to categorical model building.
## What we learned
Coding might not take that long, but making it fully work takes just as much time.
## What's next for Resonance
Finish building the model and possibly trying to incubate it. | >
> Domain.com domain: IDE-asy.com
>
>
>
## Inspiration
Software engineering and development have always been subject to change. With new tools, frameworks, and languages being announced every year, it can be challenging for new developers or students to keep up with the trends the technological industry has to offer. Creativity and project inspiration should not be limited by syntactic and programming knowledge. Quick Code allows ideas to come to life no matter the developer's experience, breaking the coding barrier to entry and allowing everyone equal access to express their ideas in code.
## What it does
Quick Code allows users to code simply with high-level voice commands. The user can speak in pseudocode and our platform will interpret the audio command and generate the corresponding JavaScript code snippet in the web-based IDE.
## How we built it
We used React for the frontend and the recorder.js API for the user's voice input, with RunKit for the in-browser IDE. For the backend we used Python and Microsoft Azure: the Azure cognitive speech services modules process the user input and provide the syntactic translation for the frontend's IDE.
## Challenges we ran into
>
> "Before this hackathon I would usually deal with the back-end, however, for this project I challenged myself to experience a different role. I worked on the front end using react, as I do not have much experience with either react or Javascript, and so I put myself through the learning curve. It didn't help that this hacakthon was only 24 hours, however, I did it. I did my part on the front-end and I now have another language to add on my resume.
> The main Challenge that I dealt with was the fact that many of the Voice reg" *-Iyad*
>
>
> "Working with blobs, and voice data in JavaScript was entirely new to me." *-Isaac*
>
>
> "Initial integration of the Speech to Text model was a challenge at first, and further recognition of user audio was an obstacle. However with the aid of recorder.js and Python Flask, we able to properly implement the Azure model." *-Amir*
>
>
> "I have never worked with Microsoft Azure before this hackathon, but decided to embrace challenge and change for this project. Utilizing python to hit API endpoints was unfamiliar to me at first, however with extended effort and exploration my team and I were able to implement the model into our hack. Now with a better understanding of Microsoft Azure, I feel much more confident working with these services and will continue to pursue further education beyond this project." *-Kris*
>
>
>
## Accomplishments that we're proud of
>
> "We had a few problems working with recorder.js as it used many outdated modules, as a result we had to ask many mentors to help us get the code running. Though they could not figure it out, after hours of research and trying, I was able to successfully implement recorder.js and have the output exactly as we needed. I am very proud of the fact that I was able to finish it and not have to compromise any data." *-Iyad*
>
>
> "Being able to use Node and recorder.js to send user audio files to our back-end and getting the formatted code from Microsoft Azure's speech recognition model was the biggest feat we accomplished." *-Isaac*
>
>
> "Generating and integrating the Microsoft Azure Speech to Text model in our back-end was a great accomplishment for our project. It allowed us to parse user's pseudo code into properly formatted code to provide to our website's IDE." *-Amir*
>
>
> "Being able to properly integrate and interact with the Microsoft Azure's Speech to Text model was a great accomplishment!" *-Kris*
>
>
>
## What we learned
>
> "I learned how to connect the backend to a react app, and how to work with the Voice recognition and recording modules in react. I also worked a bit with Python when trying to debug some problems in sending the voice recordings to Azure’s servers." *-Iyad*
>
>
> "I was introduced to Python and learned how to properly interact with Microsoft's cognitive service models." *-Isaac*
>
>
> "This hackathon introduced me to Microsoft Azure's Speech to Text model and Azure web app. It was a unique experience integrating a flask app with Azure cognitive services. The challenging part was to make the Speaker Recognition to work; which unfortunately, seems to be in preview/beta mode and not functioning properly. However, I'm quite happy with how the integration worked with the Speach2Text cognitive models and I ended up creating a neat api for our app." *-Amir*
>
>
> "The biggest thing I learned was how to generate, call and integrate with Microsoft azure's cognitive services. Although it was a challenge at first, learning how to integrate Microsoft's models into our hack was an amazing learning experience. " *-Kris*
>
>
>
## What's next for QuickCode
We plan on continuing development and making this product available on the market. We first hope to include more functionality within Javascript, then extending to support other languages. From here, we want to integrate a group development environment, where users can work on files and projects together (version control). During the hackathon we also planned to have voice recognition to recognize and highlight which user is inputting (speaking) which code. | winning |
## Inspiration
Every year, hundreds of thousands of preventable deaths occur due to the lack of first aid knowledge in our societies. Many lives could be saved if the right people were in the right places at the right times. We aim to connect people by giving them the opportunity to help each other in times of medical need.
## What it does
It is a mobile application aimed at connecting members of our society in times of urgent medical need. Users can sign up as respondents, which allows them to be notified when people within a 300-meter radius are having a medical emergency. This can help users receive first aid prior to the arrival of an ambulance or healthcare professional, greatly increasing their chances of survival. This application fills the gap between making the 911 call and having the ambulance arrive.
## How we built it
The app is Android-native and relies heavily on the Google Cloud Platform. User registration and authentication are done through Firebase Auth. Additionally, user data, locations, help requests, and responses are all communicated through the Firebase Realtime Database. Lastly, the Firebase ML Kit was used to provide text recognition for the app's registration page: users can take a picture of their ID and have their information extracted.
## Challenges we ran into
There were numerous challenges in terms of handling the flow of data through the Firebase Realtime Database and providing the correct data to authorized users.
## Accomplishments that we're proud of
We were able to build a functioning prototype! Additionally we were able to track and update user locations in a MapFragment and ended up doing/implementing things that we had never done before. | ## Inspiration
We wanted to do something fun and exciting, nothing too serious. Slang is a vital component of thriving in today's society. Ever heard Travis Scott go, "My dawg would prolly do it for a Louis belt"? Even most millennials are not familiar with this slang. Therefore, we are leveraging the power of today's modern platform, Urban Dictionary, to educate people about today's ways, showing how today's music is changing with the slang thrown in.
## What it does
You choose your desired song, and it will print out the lyrics for you and even sing them in a robotic voice. It will then look up the Urban Dictionary meaning of the slang, replace the original slang with it, and attempt to sing the result.
## How I built it
We utilized Python's Flask framework as well as numerous Python natural language processing libraries. We created the front end with the Bootstrap framework, making use of Kaggle datasets and the Zdict API.
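A tiny sketch of the core lyric-rewriting idea; the hard-coded slang dictionary below is a stand-in for the live Urban Dictionary/Zdict lookups the app actually performs.

```python
import re

# Stand-in for live Urban Dictionary / Zdict lookups.
SLANG = {
    "dawg": "close friend",
    "prolly": "probably",
}

def translate_lyrics(lyrics: str) -> str:
    def swap(match):
        word = match.group(0)
        return SLANG.get(word.lower(), word)
    return re.sub(r"[A-Za-z']+", swap, lyrics)

print(translate_lyrics("My dawg would prolly do it for a Louis belt"))
# -> "My close friend would probably do it for a Louis belt"
```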
## Challenges I ran into
Redirect issues with Flask were frequent, and the excessive API calls made the program super slow.
## Accomplishments that I'm proud of
The excellent UI design along with the amazing outcomes that can be produced from the translation of slang
## What I learned
A lot of things we learned
## What's next for SlangSlack
We are going to transform the way today's millennials keep up with growing trends in slang. | ## Inspiration
We were interested in disaster relief for those impacted by hurricanes like Dorian and Maria: for people who don't know which areas are affected, and for first responders who don't know which infrastructure is damaged and can't deliver appropriate resources in time.
## What it does
This website shows the locations of the nearest natural disasters.
## How we built it
We used the Amazon API, Google Cloud, the Google Maps API, Python, and JavaScript.
## Challenges we ran into
We had not been to a hackathon before, so we weren't sure how in-depth or general our problem should be. We started with an app that first responders can use during a natural disaster to input vitals.
## Accomplishments that we're proud of
A website that can map the GPS locations of flood data that we are confident in and a uniquely trained model for urban flooding.
## What we learned
We learned about Google Cloud APIs, AWS S3 Visual Recognition Software and about how to operate in a hackathon.
## What's next for Crisis Apps | winning |
# Inspiration
We’ve all been in a situation where collaborators’ strengths all lie in different areas, and finding “the perfect” team to work with is more difficult than expected. We wanted to make something that could help us find people with similar strengths without painstakingly scouring dozens of github accounts.
# What it does
MatchMadeIn.Tech is a platform where users are automatically matched with other github users who share similar commit frequencies, language familiarity, and more!
# How we built it
We used a modern stack that includes React for the front end and Python Flask for the back end. Our model is a K-Means clustering model, which we implemented using scikit-learn, storing the trained model in a PickleDB. We leveraged GitHub's API to pull user contribution data and language preference data for over 3,000 users, optimizing our querying using GraphQL.
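A minimal sketch of the clustering step described above; the feature layout (weekly commit counts plus language shares), file names, and the match rule are our own illustrative assumptions.

```python
import pickle

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Illustrative features per GitHub user: weekly commit counts + top-language shares.
X = np.load("user_features.npy")             # placeholder file, shape (n_users, n_features)
X_scaled = StandardScaler().fit_transform(X)

kmeans = KMeans(n_clusters=12, n_init=10, random_state=0).fit(X_scaled)

with open("kmeans_model.pkl", "wb") as f:    # stand-in for the PickleDB storage step
    pickle.dump(kmeans, f)

def candidate_matches(user_idx):
    """Everyone in the same cluster as the given user is a match candidate."""
    labels = kmeans.labels_
    return np.where(labels == labels[user_idx])[0]
```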
# Challenges we ran into
A big issue we faced was how to query the GitHub API to get a full representation of all the users on the platform. Because there are over 100 million registered users on GitHub, many of which are bots or accounts with no contribution history, we needed a way to filter these users.
Another obstacle we ran into was implementing the K-Means clustering model. This was our first time using any machine learning algorithm other than ChatGPT, so it was a very big learning curve. With multiple people working on querying the data and training the model, our documentation on how the data is stored in code needed to be perfect, especially because the model required all the data to be in the same format.
# Accomplishments that we're proud of
Getting the backend to actually work! We decided to step out of our comfort zone and train our own statistical inference model. There were definitely times we felt discouraged, but we’re all proud of each other for pushing through and bringing this idea to life!
# What we learned
We learned that there's a real appetite for a more meaningful, niche-focused dating app in the tech community. We also learned that while the tech is essential, user privacy and experience are just as crucial for the success of a platform like this.
# What's next for MatchMadeIn.Tech
We’d love to add more metrics to determine user compatibility such as coding style, similar organizations, and similar feature use (such as the project board!). | ## Inspiration
We wanted to be able to connect with mentors. There are very few opportunities to do that outside of LinkedIn where many of the mentors are in a foreign field to our interests'.
## What it does
A networking website that connects mentors with mentees. It uses a weighted matching algorithm based on mentors' specializations and mentees' interests to prioritize matches.
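The production algorithm is written in JavaScript; purely as an illustration, a weighted interest/specialization score could look like the Python sketch below, with the weights and profile fields being our own assumptions.

```python
def match_score(mentee_interests, mentor_specializations, weights=None):
    """Weighted overlap between a mentee's interests and a mentor's specializations."""
    weights = weights or {}
    return sum(weights.get(topic, 1.0)
               for topic in set(mentee_interests) & set(mentor_specializations))

def rank_mentors(mentee, mentors, weights=None):
    """Return mentors sorted so the best-scoring match comes first."""
    return sorted(mentors,
                  key=lambda m: match_score(mentee["interests"], m["specializations"], weights),
                  reverse=True)

mentee = {"interests": ["machine learning", "fintech", "career advice"]}
mentors = [
    {"name": "A", "specializations": ["fintech", "career advice"]},
    {"name": "B", "specializations": ["game dev"]},
]
print(rank_mentors(mentee, mentors, weights={"fintech": 2.0})[0]["name"])   # -> "A"
```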
## How we built it
Google Firebase is used for our NoSQL database which holds all user data. The other website elements were programmed using JavaScript and HTML.
## Challenges we ran into
There was no suitable matching algorithm module on Node.js that did not have version mismatches so we abandoned Node.js and programmed our own weighted matching algorithm. Also, our functions did not work since our code completed execution before Google Firebase returned the data from its API call, so we had to make all of our functions asynchronous.
## Accomplishments that we're proud of
We programmed our own weighted matching algorithm based on interest and specialization. Also, we refactored our entire code to make it suitable for asynchronous execution.
## What we learned
We learned how to use Google Firebase, Node.js and JavaScript from scratch. Additionally, we learned advanced programming concepts such as asynchronous programming.
## What's next for Pyre
We would like to add interactive elements such as integrated text chat between matched members. Additionally, we would like to incorporate distance between mentor and mentee into our matching algorithm. | ## Inspiration
As we were brainstorming ideas that we felt we could grow and expand upon we came to a common consensus. Finding a roommate for residence. We all found the previous program was very unappealing and difficult, during our first year we heard so many roommate horror stories. We felt providing a more modern and accustomed model, that hasn’t been explored before, towards finding a roommate would extremely benefit the process and effectiveness of securing a future roommate and more importantly a friend for the next generations.
## What it does
The program is a fully functional full stack web app that allows users to log in or create a profile, and find other students for roommates. The program includes a chat functionality which creates a chat room between two people (only once they've both matched with one another). The product also automatically recommends users as roommates based on an AI model which reads data from a database filled with metrics that the user sets when creating their profile, such as their program of study, sleep schedule, etc.
## How we built it
We first tackled the front end aspect by creating a rough template of what we imagined our final product would look like with React.js. After we set up a database using MonogoDB to act as our backend to the project and to store our users profiles, data, and messages. Once our database was secured we connected the backend to the frontend through Python using libraries such as Flask, PyMongo, and Panadas. Our python code would read through the database and use an AI model to compute a score for the compatibility between two people. Finally, we cleaned up our frontend and formatted our application to look nice and pretty.
## Challenges we ran into
As we had no experience with full-stack development, figuring out how to develop the backend and connect it with the front end was a very difficult challenge. Thankfully, using our resources and some intuition, we were able to overcome this. With such a drastic knowledge gap going into the project, another challenge was keeping our team on time and on track. We needed to make sure we weren't spending too much time learning something that wouldn't help our project or too much time trying to incorporate a function that wouldn't have a large impact on the abilities of the final product.
## Accomplishments that we're proud of
We are proud that we were able to create a fully functioning full-stack application with practically no knowledge in that field. We're especially proud of our accomplishment of learning so many new things in such a short amount of time. The feeling of satisfaction and pride we felt after coming up with a unique idea, overcoming many challenges, and finally reaching our desired goal is immeasurable.
## What we learned
None of us knew how to use the technologies we used during the project. There was definitely a learning curve with incorporating multiple technologies that we had not used before into a cohesive app, but we learned how to properly divide and plan out our workload between the four of us. We gained experience in assessing our own strengths and weaknesses as well as our teammates and gained strong knowledge in multiple languages, full-stack concepts, AI, libraries, and databases.
## What's next for RoomMate
In the near future, we plan to fully host our website on a public domain name, rather than having it work from the console. We also hope to expand our questionnaire and survey in order to further tailor our search results depending on users' preferences and improve users' experience with the site. | partial |
## Inspiration
This project was inspired by the countless hours that went into trying to decide on a place to eat.
## What it does
Using the Yelp Fusion API, our app provides a restaurant suggestion based on the top 20 restaurants nearby. The user should rate any restaurants on the list that they have been to, and information on the location and ratings is used to suggest the top restaurant that the user should go to.
## How we built it
The app was built using Expo.io. Starting on the homepage, the user is prompted for information on their ratings of restaurants that they have been to in the list of restaurants nearby. The data is then sent to a Compute Machine on the Google Cloud Platform, where a machine learning model is created using the k-nearest neighbors algorithm of the scikit-learn machine learning package. Once a model has been trained, information on the user's current location is used to make a prediction, which is the suggested restaurant that the user should go to.
## Challenges we ran into
We ran into many challenges while working on Magic Ate Ball. Since we were both inexperienced with using Expo.io and GCP, a lot of the time was spent trying to make the app function as we wanted it to as well as setting up the server.
## Accomplishments that we're proud of
We are very proud that we were able to create a functional app that has many of the features we had originally planned. As we were unable to create a functional app in QHacks 2018, this was a very big improvement. Although we ran into many issues along the way, we managed to overcome them one step at a time. We were each able to do our parts, and pulled together a functioning project.
## What we learned
We were both able to learn a lot while doing our respective parts of this project. Over this experience, we gained a lot of knowledge on the ins and outs of a machine learning app, how it works and how everything ties together.
## What's next for Magic Ate Ball
There were still many features of Magic Ate Ball that we did not have time to implement, such as taking into account user preferences on types of food, whether they favor proximity, ratings, or price, and being able to suggest restaurants based on similar foods to the user's preferences. | # Our Project: SlurSensor
Assembly AI's SlurSensor is a detection system for harmful speeches in online video games. Using AssemblyAI’s Audio Intelligence APIs, we can distinguish players who display verbally abusive behaviors in the video game community. Data extracted is plotted to have a track of the possible abusive language along the video or streaming. Thus, large video game companies and streaming platforms can use this data-monitoring to identify the most toxic users to create a safer environment for their community. We are willing to take part in the new generation of voice-controlled games and streaming.
## Problematics
Gaming is a competitive environment where the majority of members in the gaming community resorts to bullying. What’s more, even with today’s security technology, the online gaming industry still suffers from endless negative users practicing violence, cyberbullying, hate speech, harassment, death threats, and direct attacks against different groups of people including minors. Bullying victims generally face verbal harassment in multiplayer lobbies and in-game chats. According to ResearchGate & ScienceDaily, cyberbullying victims are 1.9 times more likely to commit suicide.
How are companies dealing with this problem?
Video game companies detect toxic players via in-game reporting which most of the time, is not quite precise. Additionally, games like Clash of Clans use a text detection system that only replaces the ‘inappropriate’ words with asterisks. However, these prevention methods have not been helpful at all, since cyberbullying statistics do nothing but to increase daily.
## What is SlurSensor?
With AssemblyAI Audio Intelligence Content Monitoring and Sentiment Analysis APIs, video game companies could monitor users and finally implement a secure system by training the AI based on every user’s audio files; thus, detecting if any sensitive content is spoken in those files and pinpointing exactly when and what was spoken so that after a short period. Our final aim is to train the system so that it can successfully identify when someone is just joking around and when it is being an intentional threat, insult, and so on. For each audio, we generate a graph to better comprehend the harmful behavior of each user. Therefore, these video game platforms could finally create a safe and regulated environment for all players and try to eliminate negative and toxic users once and for all.
## Tech implementation / Tools
1. AssemblyAI Audio Intelligence:
- Content Moderation API
- Sentiment Analysis API
2. VS Code
3. Python
* matplotlib.pyplot to generate graphs and figures
* Requests module to create HTTP requests to interact with API
4. NodeJS/Express
5. HTML/CSS/Bootstrap
6. Heroku
## Mission
Thanks to this protection implementation, thousands of gaming users will safely coexist with each other in both a well-protected and monitored environment. Understanding the severity and confidence of the scores accurately will allow us to detect the intentions of the user so companies can identify them, take further action, and prevent mental health issues such as suicide. Our goal is to find only those hurtful comments without restricting the users to express themselves with others. Implementing this project correctly will make the detection more efficient and as a result, users won’t have to worry about reporting toxic players on their own anymore.
We are aware that these APIs are available to the public and are ready to use in practically no time. Nonetheless, what we do with them is what matters. Having a groundbreaking technology means nothing if it does not have a beneficial purpose and positive impact on the community.
## What we learned:
During the last 36 hours, we learned to think outside the box, challenge ourselves, and acquire skills we never thought we could gain. Furthermore, meeting incredible people who took this experience to the next level has been a privilege; realizing that we can create something beneficial and unique in such a short time is truly remarkable.
In addition, we learned how to use AssemblyAI. We learned how to work with optional command line arguments in python. We also learned how to use version control with git/Github, such as using branches and pull requests to simultaneously work on different tasks at the same time. We learned how to incorporate python scripts into Express/NodeJS projects. Finally, we learned how to work with file uploads in Javascript.
## Possible setbacks / Next steps:
* Possible Setbacks:
Some companies might not be willing to change or replace their current user’s security/protection systems.
Users flow could decrease due to these restrictions.
Conflict of interest.
* Next Steps:
Using machine learning to increase the accuracy of the program to fulfill the goal of correctly identifying if a player is truly bothering or joking with others.
Provide a more effective visual and detailed analysis of each detected toxic user to ease the data outcome interpretation. We are looking only for toxic users to make a healthier environment.
## Team Members
* Salvador Chavez - B.Sc. in Engineering Physics, Tecnológico de Monterrey
* Nguyen Tran - B.A in Computer Science, Columbia University
* Jesús Guzman - B.Sc. in Engineering Physics, Tecnológico de Monterrey
* Geoffrey Wu - B.Sc. in Computer Science, Columbia University | ## What it does
InternetVane is an IOT wind/weather vane. Set and calibrate the InternetVane and the arrow will point in the real-time direction of the wind at your location.
First, the arrow is calibrated to the direction of the magnetometer. Then, the GRPS Shield works with twilio and google maps geo location api to retrieve the GPS location of the device. Using the location, the EarthNetworks API delivers real-time wind information to InternetVane . Then, the correct, IRL direction of the wind is calculated. The arrow's orientation is updated and recalculated every 30 seconds to provide the user with the most accurate visualization possible.
## How we built it
We used an Arduino UNO with a GPRS Shield, twillio functions, google maps geo-location api, magnetometer, stepper motor, and other various hardware.
The elegant, laser cut enclosure is the cherry ontop!
## Challenges we ran into
We were rather unfamiliar with laser cutting and 3d printing. There were many trials and errors to get the perfect, fitted enclosure.
Quite a bit of time was spent deciding how to calibrate the device. The arrow needs to align with the magnetometer before moving to the appropriate direction. Many hardware options were considered before deciding on the specific choice of switch. | losing |
## Inspiration
You haven't seen your friends in real life for a long, long time. Now that COVID restrictions are loosening, you decide that it's a perfect time for the squad to hang out again. You don't care where you'll all be meeting up; you just want a chance to speak with them in person. Let WhereYouApp decide where your next meetup location should be.
## What it does
WhereYouApp allows you to create Hangouts and invite your friends to them. All invitees can mark dates where they're unavailable, and then vote on a hangout location. A list of hangout locations is automatically generated based on proximity to all invitees as well as when they are open. Once votes are finalized, it generates a travel plan for all of your friends, allowing for options such as carpooling.
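The backend is NestJS, but the candidate-generation idea is simple enough to sketch in a few lines of Python. This is only an illustration: the venue fields, opening hours, and scoring below are assumptions, not the real OpenStreetMap-backed implementation.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def rank_venues(invitees, venues, meetup_hour):
    """Keep venues open at the chosen hour, then sort by total travel distance."""
    open_venues = [v for v in venues if v["opens"] <= meetup_hour < v["closes"]]
    return sorted(
        open_venues,
        key=lambda v: sum(haversine_km(p, v["location"]) for p in invitees),
    )

# Example: two invitees, two candidate venues
invitees = [(45.42, -75.69), (45.40, -75.72)]
venues = [
    {"name": "Cafe A", "location": (45.41, -75.70), "opens": 9, "closes": 22},
    {"name": "Park B", "location": (45.50, -75.60), "opens": 6, "closes": 20},
]
print([v["name"] for v in rank_venues(invitees, venues, meetup_hour=18)])
```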
## How we built it
NestJS for the backend, storing information on CockroachDB.
OpenStreetMap for geographic-related calculations.
SolidJS for the frontend, displaying the map information with Leaflet.
## Challenges we ran into
## Accomplishments that we're proud of
## What we learned
## What's next for WhereYouApp | ## 💡 Inspiration:
The inspiration behind our Post-Secondary Admission Website was to help high school students transition to post-secondary education smoothly.
We understand the challenges and anxiety that come with the process of applying to post-secondary schools, and we wanted to create a platform to guide and support students throughout the entire application process.
## 🤖 What it does:
Our website has multiple pages, including a home page, a chatbot feature, and essay and resume review pages.
Our chatbot feature allows students to connect with an AI personal advisor to receive guidance and support for their post-secondary education.
Our essay and resume review pages provide expert feedback to students to help them create a compelling application that stands out from the rest.
## 🧠 How we built it:
Our team of four contributors developed the website using w3.css, a lightweight and responsive CSS framework.
We used JavaScript for our chatbot feature and PHP for our backend database.
## 🧩 Challenges we ran into:
One of the biggest challenges we ran into was developing our AI personal advisor feature to provide relevant and accurate advice to students.
It required a lot of research and trial and error to create an effective chatbot that could guide students through the application process.
## 😎 Accomplishments that we're proud of:
We're proud of creating a website that helps high school students achieve their goals and pursue their passions.
We're also proud of the positive feedback we've received from our clients, with 90% of them being admitted to their desired post-secondary schools and 85% having positive outcomes.
## 🥸 What we learned:
Through the development of our website, we learned the importance of creating a user-friendly and accessible platform for students.
We also learned the significance of providing personalized guidance and support to students, as every student's journey to post-secondary education is unique.
## 🥳 What's next for Post-Secondary Admission Website:
In the future, we plan to expand our services to include more post-secondary schools and offer additional features to further support students in their post-secondary journey.
We also plan to enhance our AI personal advisor feature to provide more accurate and personalized advice to students. | ## Inspiration
We thought it would be nice if, for example, while working in the Computer Science building, you could send out a little post asking for help from people around you.
It would also enable greater interconnectivity between people at an event without needing to subscribe to anything.
## What it does
Users create posts that are then attached to their location, complete with a picture and a description. Other people can then view it in two ways.
**1)** On a map, with markers indicating the location of the post that can be tapped on for more detail.
**2)** As a live feed, with the details of all the posts that are in your current location.
The posts don't last long, however, and only posts within a certain radius are visible to you.
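The visibility rule is easy to sketch, even though our real server is Node.js/Express; the lifetime and radius values below are made up for illustration, not the app's actual settings.

```python
import time
from math import radians, sin, cos, asin, sqrt

POST_TTL_SECONDS = 2 * 60 * 60   # assumed post lifetime
VISIBLE_RADIUS_M = 500           # assumed visibility radius

def distance_m(a, b):
    """Great-circle distance in metres between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(h))

def visible_posts(posts, viewer_location, now=None):
    """Return posts that are still alive and within the visibility radius."""
    now = now or time.time()
    return [
        p for p in posts
        if now - p["created_at"] < POST_TTL_SECONDS
        and distance_m(viewer_location, p["location"]) <= VISIBLE_RADIUS_M
    ]
```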
## How we built it
Individual pages were built using HTML, CSS and JS, which would then interact with a server built using Node.js and Express.js. The database, which used CockroachDB, was hosted with Amazon Web Services. We used PHP to upload images onto a separate private server. Finally, the app was packaged using Apache Cordova to be runnable on the phone. Heroku allowed us to upload the Node.js portion to the cloud.
## Challenges we ran into
Setting up and using CockroachDB was difficult because we were unfamiliar with it. We were also not used to using so many technologies at once. | losing |
## Inspiration
We are students at Minerva University, which uses an innovative curriculum and learning methods. Online forum classes are how we learn, and they require active participation. While in class, we might doze off or forget something that was discussed, which hurts our grades. We always wanted a live, automatic AI note taker so we wouldn't need to take notes all the time.
## What it does
Our product, Notefly, turns meeting speech into a live transcript. It then condenses the transcript into short note blocks: roughly 40 words of speech become about 10 words of notes. The note blocks are condensed into bullet points, and the bullet points into main-idea summary notes. All of these summaries are visible while in the meeting, so everyone knows what is being discussed and can see important details, like a customer's email address.
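Roughly, the summary ladder looks like this; a minimal sketch where the chunk sizes are illustrative and `summarize` is a naive stand-in for the Gemini call we actually make.

```python
def summarize(text: str, target_words: int) -> str:
    # Naive stand-in for the LLM call (we use Gemini for this step).
    return " ".join(text.split()[:target_words])

def build_note_ladder(transcript_segments: list[str]) -> dict:
    # 1. Each ~40-word transcript segment becomes a ~10-word note block.
    note_blocks = [summarize(seg, 10) for seg in transcript_segments]
    # 2. Groups of note blocks are condensed into bullet points.
    bullets = [summarize(" ".join(note_blocks[i:i + 5]), 15)
               for i in range(0, len(note_blocks), 5)]
    # 3. Bullet points roll up into a main-idea summary of the whole meeting.
    return {
        "notes": note_blocks,
        "bullets": bullets,
        "summary": summarize(" ".join(bullets), 60),
    }
```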
## How we built it
We started by using Agora's SDK to create basic web conferencing software, with Next.js for the frontend. We then used Agora's transcription features and figured out how to retrieve, store, and process that data using Fetch.ai agents. We used Gemini for processing and MongoDB for storing data.
## Challenges we ran into
Our tech stack didn't exactly match the project. Storing transcripts was a BIG problem: we thought Agora had a place to store the data, but it didn't. We tried sending it to SingleStore, but that didn't work properly either. Getting the Fetch.ai agents to retrieve and send data and communicate with each other was also a problem, and making the app update live was another challenge.
## Accomplishments that we're proud of
IT FUC'IN WORRKKS!!!
## What we learned
How to create a video calling app, something we had zero knowledge about, using tools we also had zero knowledge about. Deciding on the idea also took much longer than it should have; next time we will focus not on the perfect idea but on the most applicable and team-friendly one.
## What's next for Notefly - AI Meeting Notes on the Fly
After an all-nighter, the app is still far from done, and there is room to improve the summary logic. We can turn this into a more scalable product that lets people host their own meetings in the cloud; for now it runs on localhost against Agora's servers, so sadly we can't send out try-it links yet.
In this age of big tech companies and data privacy scares, it's more important than ever to make sure your data is safe. There have been many applications made to replicate existing products with the added benefit of data privacy, but one type of data that has yet to be decentralized is job application data. You could reveal a lot of private personal data while applying to jobs through job posting websites. We decided to make a site that guarantees this won't happen and also helps people automate the busy work involved with applying to jobs
## What it does
JOBd is a job postings website where users own all of their data and can automate a lot of the job search process. It uses Blockstack technology to ensure that all of your employment history, demographic info, and other personal details used during job applications are stored in a location you trust and that it can't be saved and used by the website. It also uses stdlib to automate email replies to job application updates and creates spreadsheets for user recordkeeping.
## How we built it
We used the Gaia storage system from Blockstack to store users' personal profile info and resumes. We used stdlib to easily create API workflows to integrate Gmail and Google sheets into our project. We built the website using React, Material UI, Mobx, and TypeScript.
## Challenges we ran into
Learning how to use Blockstack and stdlib during the hackathon, struggling to create a cohesive project with different skillsets, and maintaining sanity.
## Accomplishments that we're proud of
We got a functional website working and used the technologies and APIs we set out to use. We also came up with an idea we haven't found elsewhere and implemented the valuable features we came up with. | ## Inspiration
Imagine you're sitting in your favorite coffee shop and a unicorn startup idea pops into your head. You open your laptop and choose from a myriad selection of productivity tools to jot your idea down. It’s so fresh in your brain, you don’t want to waste any time so, fervently you type, thinking of your new idea and its tangential components. After a rush of pure ideation, you take a breath to admire your work, but disappointment. Unfortunately, now the hard work begins, you go back though your work, excavating key ideas and organizing them.
***Eddy is a brainstorming tool that brings autopilot to ideation. Sit down. Speak. And watch Eddy organize your ideas for you.***
## Learnings
Melding speech recognition and natural language processing tools required us to learn how to transcribe live audio, determine sentences from a corpus of text, and calculate the similarity of each sentence. Using complex and novel technology, each team member took a holistic approach and learned new implementation skills on all sides of the stack.
## Features
1. **Live mindmap**—Automatically organize your stream of consciousness by simply talking. Using semantic search, Eddy organizes your ideas into coherent groups to help you find the signal through the noise. A rough sketch of this grouping appears below the feature list.
2. **Summary Generation**—Helpful for live note taking, our summary feature converts the graph into a Markdown-like format.
3. **One-click UI**—Simply hit the record button and let your ideas do the talking.
4. **Team Meetings**—No more notetakers: facilitate team discussions through visualizations and generated notes in the background.

## Challenges
1. **Live Speech Chunking** - To extract coherent ideas from a user’s speech, while processing the audio live, we had to design a paradigm that parses overlapping intervals of speech, creates a disjoint union of the sentences, and then sends these two distinct groups to our NLP model for similarity.
2. **API Rate Limits**—OpenAI rate limits required a more efficient processing mechanism for the audio and fewer round-trip requests for keyword extraction and embeddings.
3. **Filler Sentences**—Not every sentence contains a concrete and distinct idea. Some sentences go nowhere and these can clog up the graph visually.
4. **Visualization**—Force graph is a premium feature of React Flow. To mimic this intuitive design as much as possible, we added some randomness of placement; however, building a better node placement system could help declutter and prettify the graph.
## Future Directions
**AI Inspiration Enhancement**—Using generative AI, it would be straightforward to add enhancement capabilities such as generating images for coherent ideas, or business plans.
**Live Notes**—Eddy can be a helpful tool for transcribing and organizing meeting and lecture notes. With improvements to our summary feature, Eddy will be able to create detailed notes from a live recording of a meeting.
## Built with
**UI:** React, Chakra UI, React Flow, Figma
**AI:** HuggingFace, OpenAI Whisper, OpenAI GPT-3, OpenAI Embeddings, NLTK
**API:** FastAPI
# Supplementary Material
## Mindmap Algorithm
 | partial |
# 🤖🖌️ [VizArt Computer Vision Drawing Platform](https://vizart.tech)
Create and share your artwork with the world using VizArt - a simple yet powerful air drawing platform.

## 💫 Inspiration
>
> "Art is the signature of civilizations." - Beverly Sills
>
>
>
Art is a gateway to creative expression. With [VizArt](https://vizart.tech/create), we are pushing the boundaries of what's possible with computer vision and enabling a new level of artistic expression. ***We envision a world where people can interact with both the physical and digital realms in creative ways.***
We started by pushing the limits of what's possible with customizable deep learning, streaming media, and AR technologies. With VizArt, you can draw in the air, interact with the real world digitally, and share your creations with your friends!
>
> "Art is the reflection of life, and life is the reflection of art." - Unknow
>
>
>
Air writing is made possible with hand gestures, such as a pen gesture to draw and an eraser gesture to erase lines. With VizArt, you can turn your ideas into reality by sketching in the air.



Our computer vision algorithm enables you to interact with the world using a color picker gesture and a snipping tool to manipulate real-world objects.

>
> "Art is not what you see, but what you make others see." - Claude Monet
>
>
>
The features I listed above are great! But what's the point of creating something if you can't share it with the world? That's why we've built a platform for you to showcase your art. You'll be able to record and share your drawings with friends.


I hope you will enjoy using VizArt and share it with your friends. Remember: Make good gifts, Make good art.
# ❤️ Use Cases
### Drawing Competition/Game
VizArt can be used to host a fun and interactive drawing competition or game. Players can challenge each other to create the best masterpiece, using the computer vision features such as the color picker and eraser.
### Whiteboard Replacement
VizArt is a great alternative to traditional whiteboards. It can be used in classrooms and offices to present ideas, collaborate with others, and make annotations. Its computer vision features make drawing and erasing easier.
### People with Disabilities
VizArt enables people with disabilities to express their creativity. Its computer vision capabilities facilitate drawing, erasing, and annotating without the need for physical tools or contact.
### Strategy Games
VizArt can be used to create and play strategy games with friends. Players can draw their own boards and pieces, and then use the computer vision features to move them around the board. This allows for a more interactive and engaging experience than traditional board games.
### Remote Collaboration
With VizArt, teams can collaborate remotely and in real-time. The platform is equipped with features such as the color picker, eraser, and snipping tool, making it easy to interact with the environment. It also has a sharing platform where users can record and share their drawings with anyone. This makes VizArt a great tool for remote collaboration and creativity.
# 👋 Gestures Tutorial





# ⚒️ Engineering
Ah, this is where even more fun begins!
## Stack
### Frontend
We designed the frontend with Figma and after a few iterations, we had an initial design to begin working with. The frontend was made with React and Typescript and styled with Sass.
### Backend
We wrote the backend in Flask. To implement uploading videos along with their thumbnails, we simply use the filesystem as a database.
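A simplified sketch of the upload endpoint idea; the route, field names, and file naming here are assumptions, not the exact ones in our backend.

```python
import os
from flask import Flask, request, jsonify

app = Flask(__name__)
UPLOAD_DIR = "uploads"
os.makedirs(UPLOAD_DIR, exist_ok=True)

@app.route("/videos", methods=["POST"])
def upload_video():
    video = request.files["video"]          # recorded drawing session
    thumb = request.files["thumbnail"]      # preview image for the gallery
    video_id = str(len(os.listdir(UPLOAD_DIR)) // 2)   # naive id from file count
    video.save(os.path.join(UPLOAD_DIR, f"{video_id}.webm"))
    thumb.save(os.path.join(UPLOAD_DIR, f"{video_id}.png"))
    return jsonify({"id": video_id})
```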
## Computer Vision AI
We use MediaPipe to grab the coordinates of the hand joints from uploaded frames. With the coordinates, we plot with CanvasRenderingContext2D on the canvas and use vector calculations to determine the gesture. Then, for image generation, we use the DeepAI open-source library.
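A simplified Python sketch of the kind of landmark check we mean for the "pen" gesture (our app does the equivalent math on the frontend canvas); the landmark indices are MediaPipe's, while the heuristic and thresholds are illustrative.

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1)

def is_pen_gesture(landmarks) -> bool:
    # Index finger extended (tip above its middle joint on screen)...
    index_up = landmarks[8].y < landmarks[6].y
    # ...while the middle finger stays curled.
    middle_down = landmarks[12].y > landmarks[10].y
    return index_up and middle_down

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        if is_pen_gesture(lm):
            print("draw at", lm[8].x, lm[8].y)   # fingertip in normalized coords
cap.release()
```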
# Experimentation
We experimented with using generative AI to generate images; however, we ran out of time.


# 👨💻 Team (”The Sprint Team”)
@Sheheryar Pavaz
@Anton Otaner
@Jingxiang Mo
@Tommy He | # QThrive
Web-based chatbot to facilitate journalling and self-care. Built for QHacks 2017 @ Queen's University. | ## Inspiration
Millions of people around the world are either blind, or partially sighted. For those who's vision is impaired, but not lost, there are tools that can help them see better. By increasing contrast and detecting lines in an image, some people might be able to see clearer.
## What it does
We developed an AR headset that processes the view in front of it and displays a high contrast image. It also has the capability to recognize certain images and can bring their existence to the attention of the wearer (one example we used was looking for crosswalk signs) with an outline and a vocal alert.
## How we built it
OpenCV was used to process the image stream from a webcam mounted on the VR headset, the image is processed with a Canny edge detector to find edges and contours. Further a BFMatcher is used to find objects that resemble a given image file, which is highlighted if found.
## Challenges we ran into
We originally hoped to use an oculus rift, but we were not able to drive the headset with the available hardware. We opted to use an Adafruit display mounted inside a Samsung VR headset instead, and it worked quite well!
## Accomplishments that we're proud of
Our development platform was based on macOS 10.12, Python 3.5 and OpenCV 3.1.0, and OpenCV would not cooperate with our OS. We spent many hours compiling and configuring our environment until it finally worked. This was no small feat. We were also able to create a smooth interface using multiprocessing, which operated much better than we expected.
## What we learned
Without the proper environment, your code is useless.
## What's next for EyeSee
Existing solutions exist and are better suited for general use. However, a DIY solution is endlessly customizable, and we hope this project inspires other developers to create projects that help other people.
## Links
Feel free to read more about visual impairment, and how to help;
<https://w3c.github.io/low-vision-a11y-tf/requirements.html> | winning |
## Reimagining Patient Education and Treatment Delivery through Gamification
Imagine walking into a doctor's office to find out you've been diagnosed with a chronic illness. All of a sudden, you have a slew of diverse healthcare appointments, ongoing medication or lifestyle adjustments, lots of education about the condition, and more. While in the clinic/hospital, you can at least ask the doctor questions and try to make sense of your condition & management plan. But once you leave to go home, **you're left largely on your own**.
We found that there is a significant disconnect between physicians and patients after patients are discharged and diagnosed with a particular condition. Physicians will hand patients a piece of paper with suggested items to follow as part of a "treatment plan". But after this diagnosis meeting, it is hard for the physicians to keep up-to-date with their patients on the progress of the plan. The result? Not surprisingly, patients **quickly fall off and don’t adhere** to their treatment plans, costing the healthcare system **upwards of $300 billion** as they get readmitted due to worsening conditions that may have been prevented.
But it doesn’t have to be that way…
We're building an engaging end-to-end experience for patients managing chronic conditions, starting with one of the most prevalent ones - diabetes. **More than 100 million U.S. adults are now living with diabetes or prediabetes.**
## How does Glucose Guardian Work?
Glucose Guardian is a scalable way to gamify education for chronic conditions using an existing clinical technique called "teach-back" (see here: [link](https://patientengagementhit.com/features/developing-patient-teach-back-to-improve-patient-education)). We plan to partner with clinics and organizations, scrape their existing websites/documents where they house all their information about the chronic condition, and instantly convert that into short (up to 2 min) voice modules.
Glucose Guardian users can complete these short, guided, voice-based modules that teach and validate their understanding of their medical condition. Participation and correctness earn points which go towards real-life rewards for which we plan to partner with rewards organizations/corporate programs.
Glucose Guardian users can also go to the app to enter their progress on various aspects of their personalized treatment plan. Their activity on this part of the app is also incentive-driven.
This is inspired by non-health products our team has experience with: very low-barrier, audio-driven games that have been proven to drive user engagement through the roof.
## How we built it
We've simplified how we can use gamification to transform patient education & treatment adherence by making it more digestible and fun. We ran through some design thinking sessions to work out how we could create a solution that wouldn’t simply look great but could be implemented clinically and be HIPAA compliant.
We then built Glucose Guardian as a native iOS application using Swift. Behind the scenes, we use Python toolkits to perform some of our text matching for patient education modules, and we utilize AWS for infrastructure needs.
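As a small sketch of the teach-back scoring idea, the patient's spoken answer (already transcribed) is compared against the key point from the module. Python's standard `difflib` here is only a stand-in for the matching we actually do, and the threshold is illustrative.

```python
from difflib import SequenceMatcher

def teachback_score(expected: str, spoken: str) -> float:
    """Rough similarity between the module's key point and the patient's answer."""
    return SequenceMatcher(None, expected.lower(), spoken.lower()).ratio()

expected = "check your blood sugar before every meal"
spoken = "i should check my sugar before each meal"
score = teachback_score(expected, spoken)
print("pass" if score > 0.6 else "needs another try", round(score, 2))
```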
## Challenges we ran into
It was difficult to navigate the pre-existing market of patient adherence apps and create a solution that was unique and adaptable to clinical workflow. To tackle this, we dedicated ample time to step through user journeys - patients, physicians and allied health professionals. Through this strategy, we identified education as our focus because it is critical to treatment adherence and a patient-centric solution.
## We're proud of this
We've built something that has the potential to fulfill a large unmet need in the healthcare space, and we're excited to see how the app is received by beta testers, healthcare partners, and corporate wellness organizations.
## Learning Points
Glucose Guardian has given our cross-disciplinary team the chance to learn more about the intersection of software + healthcare. Through developing speech-to-text features, designing UIs, scraping data, and walking through patient journeys, we've maximized our time to learn as much as possible in order to deliver the biggest impact.
## Looking Ahead
As per the namesake, so far we've implemented one use case (diabetes) but are planning to expand to many other diseases. We'd also like to continue building other flows beyond patient education. This includes components such as the gamified digital treatment plan which can utilize existing data from wearables and wellness apps to provide a consolidated view on the patient's post-discharge health.
Beyond that, we also see potential for our platform to serve as a treasure trove of data for clinical research and medical training. We're excited to keep building and keep creating more impact. | ### Inspiration
Like many students, we face uncertainty and difficulty when searching for positions in the field of computer science. Our goal is to create a way to tailor your resume to each job posting to increase your odds of an interview or position.
Our aim is to empower individuals by creating a tool that seamlessly tailors resumes to specific job postings, significantly enhancing the chances of securing interviews and positions.
### What it does
ResumeTuner revolutionizes the job application process by employing cutting-edge technology. The platform takes a user's resume and a job description and feeds it into a Large Language Model through the Replicate API. The result is a meticulously tailored resume optimized for the specific requirements of the given job, providing users with a distinct advantage in the competitive job market.
### How we built it
Frontend: React / Next.js + Tailwind + Material UI
Backend: Python / FastAPI + Replicate + TinyMCE
### Challenges
Creating a full stack web application and integrating it with other APIs and components was challenging at times, but we pushed through to complete our project.
## Accomplishments that we're proud of
We're proud that we were able to efficiently manage our time and build a tool that we wanted to use. | ## Inspiration
My inspiration was really just my daily life. I use apps like MyFitnessPal and LibreLink to keep track of all these things, but I feel like they don't paint the whole picture for someone with my condition. I'm really making this app for people like myself who have a challenging time dealing with diabetes, to simplify an aspect of their lives at least a little bit.
## What it does
The mobile app keeps track of your blood sugars for the day by taking data from either your wearable sensor or finger pricks, and puts it side by side with the exercise that you've done that day and the food you've eaten. It lets you clearly quantify how much you eat, exercise, and take insulin individually, and it also helps you see the relationship those three activities have with each other. The MyFitnessPal API does a really good job of tracking the macronutrients in each meal, and it gets easier the more you use it.
## How we built it
I built the app using React Native, and it was my first time using it. I plan to integrate the MyFitnessPal API for the fitness and meals portion of the app and the Terra API to get sensor data, as well as the option to manually upload the CSV file of glucose logs that most glucometers can export.
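A rough sketch of lining up exported glucose readings with logged meals; the column names and 30-minute window are assumptions and vary by glucometer.

```python
import pandas as pd

# In the app this frame would come from pd.read_csv() on the glucometer's export.
glucose = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-03-01 08:10", "2024-03-01 12:35"]),
    "glucose_mmol": [7.8, 9.4],
})
meals = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-03-01 08:05", "2024-03-01 12:40"]),
    "carbs_g": [45, 60],
})

# For each meal, attach the nearest glucose reading within 30 minutes.
merged = pd.merge_asof(
    meals.sort_values("timestamp"),
    glucose.sort_values("timestamp"),
    on="timestamp",
    direction="nearest",
    tolerance=pd.Timedelta("30min"),
)
print(merged)
```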
## Challenges we ran into
## Accomplishments that we're proud of
Creating something meaningful to myself and other people using a passion/skill that is also meaningful to me.
## What we learned
I learned a lot about how to organize my files while making a large project, that companies are very stingy when it comes to healthcare-related APIs, and how to actually create a cross-platform mobile app.
## What's next for Diathletic
I plan to make the app fully functional, because right now there is a lot of dummy data. I want to be able to use this app in my everyday life, because it's great to actually see the effects of something you have completed. I also REALLY hope that one stranger finds any sort of value in the software that I created.
## Realm Inspiration
Our inspiration stemmed from our fascination with the growing fields of AR and virtual worlds, from full-body tracking to 3D visualization. We were interested in realizing ideas in this space, specifically with sensor-detected movements and seamlessly integrated 3D gestures. We felt the best way to show our interest in this technology, and its potential, was to communicate using it. This is what led us to create Realm, a technology that allows users to create dynamic, collaborative presentations with voice commands, image searches, and complete body tracking for the most customizable and interactive presentations. We envision an increased ease in dynamic presentations and limitless collaborative workspaces with improvements to technology like Realm.
## Realm Tech Stack
Web View (AWS SageMaker, S3, Lex, DynamoDB and ReactJS): Realm's stack relies heavily on AWS. We begin by receiving images from the frontend and passing them into SageMaker, where the images are tagged according to their content. These tags, and the images themselves, are put into an S3 bucket. Amazon Lex is used for dialog flow, where text is parsed and tools, animations or simple images are chosen. The Amazon Lex commands are completed by parsing through the S3 bucket, selecting the desired image, and storing the image URL with all other on-screen images in DynamoDB. The list of URLs is posted on an endpoint that Swift calls to render.
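A condensed sketch of the S3 + DynamoDB step using boto3; the bucket name, table name, and item fields are placeholders, and the SageMaker tagging call is omitted.

```python
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("realm-scene-images")   # hypothetical table

def store_tagged_image(path: str, key: str, tags: list[str]) -> str:
    # Upload the image, then record its URL and tags for the AR client to fetch.
    s3.upload_file(path, "realm-image-bucket", key)               # hypothetical bucket
    url = f"https://realm-image-bucket.s3.amazonaws.com/{key}"
    table.put_item(Item={"imageKey": key, "url": url, "tags": tags})
    return url
```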
AR View (ARKit, Swift): The Realm app renders text, images, slides and SCN animations as pixel-perfect AR models that are interactive and backed by a physics engine. Some of the models we have included in our demo are presentation functionality and rain interacting with an umbrella. Swift 3 allows full-body tracking, and we configure the tools to provide optimal tracking and placement gestures. Users can move objects in their hands, place objects, and interact with 3D items to enhance the presentation.
## Applications of Realm:
We hope to see our idea implemented in real workplaces in the future. We see classrooms using Realm to provide students interactive spaces to learn, professionals a way to create interactive presentations, teams a way to come together and collaborate as easily as possible, and so much more. Extensions include creating more animated/interactive AR features and real-time collaboration methods. We hope to further polish our features for usage in industries such as AR/VR gaming & marketing.
We were inspired to build Schmart after researching pain points in grocery shopping. We realized how difficult it is to stick to your health goals or reduce your environmental impact while grocery shopping. Inspired by innovative technology that already exists, we wanted to create an app that conveniently empowers anyone to shop toward their goals with less friction.
## What it does
Our solution: gamify the grocery shopping experience by letting the user set goals before shopping, then scan products in real time using AR and AI to find products that meet those goals, earning badges and rewards (PC Optimum points) along the way.
## How we built it
This product was designed on Figma, and we built the backend using Flask and Python, with the data stored in SQLite3. We then built the front end with React Native.
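A sketch of the goal-matching idea behind a scan: look the product up in SQLite and score it against the shopper's goals. The schema, fields, and point weights are invented for this example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (upc TEXT, name TEXT, sugar_g REAL, recyclable INTEGER)")
conn.execute("INSERT INTO products VALUES ('0123', 'Granola bar', 12.0, 1)")

def score_product(upc: str, goals: dict) -> int:
    row = conn.execute(
        "SELECT sugar_g, recyclable FROM products WHERE upc = ?", (upc,)
    ).fetchone()
    if row is None:
        return 0
    sugar_g, recyclable = row
    points = 0
    if goals.get("low_sugar") and sugar_g <= 10:   # reward low-sugar picks
        points += 10
    if goals.get("eco") and recyclable:            # reward recyclable packaging
        points += 5
    return points

print(score_product("0123", {"low_sugar": True, "eco": True}))  # -> 5
```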
## Challenges we ran into
Some team members had school deadlines during the hackathon, so we could not fully concentrate on the hackathon coding. In addition, our team was not too familiar with React Native, so development of the front end took longer than expected.
## Accomplishments that we're proud of
We are extremely proud that we were able to build and deploy an end-to-end product in such a short timeframe. We are happy to empower people while they shop, make the experience much more enjoyable, and solve real problems that exist while shopping.
## What we learned
Communication is key. This project would not have been possible without the relentless work of all our team members striving to make the world a better place with our product. Whether it was using technology we had never used before or sharing our knowledge with the rest of the group, we all wanted to create a product with a positive impact, and because of this we were successful in creating it.
## What's next for Schmart
We hope everyone can use Schmart on their phones as a mobile app in the future. We can see it being used in grocery stores (and hopefully all stores). Meeting health and environmental goals should be barrier-free, and as an app that anyone can use, Schmart makes that possible.
Our inspiration for this project was to leverage new AI technologies such as text to image, text generation and natural language processing to enhance the education space. We wanted to harness the power of machine learning to inspire creativity and improve the way students learn and interact with educational content. We believe that these cutting-edge technologies have the potential to revolutionize education and make learning more engaging, interactive, and personalized.
## What it does 🎮
Our project is a text and image generation tool that uses machine learning to create stories from prompts given by the user. The user can input a prompt, and the tool will generate a story with corresponding text and images. The user can also specify certain attributes such as characters, settings, and emotions to influence the story's outcome. Additionally, the tool allows users to export the generated story as a downloadable book in the PDF format. The goal of this project is to make story-telling interactive and fun for users.
## How we built it 🔨
We built our project using a combination of front-end and back-end technologies. For the front-end, we used React which allows us to create interactive user interfaces. On the back-end side, we chose Go as our main programming language and used the Gin framework to handle concurrency and scalability. To handle the communication between the resource intensive back-end tasks we used a combination of RabbitMQ as the message broker and Celery as the work queue. These technologies allowed us to efficiently handle the flow of data and messages between the different components of our project.
To generate the text and images for the stories, we leveraged the power of OpenAI's DALL-E 2 and GPT-3 models. These models are state-of-the-art in their respective fields and allow us to generate high-quality text and images for our stories. To improve the performance of our system, we used MongoDB to cache images and prompts. This allows us to quickly retrieve data without having to re-process it every time it is requested. To minimize the load on the server, we used socket.io for real-time communication; it allows us to keep the connection open, and once the work queue is done processing data, it sends a notification to the React client.
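A stripped-down sketch of the cached generation task: the broker URL, database names, and `generate_story_page` below are placeholders for our real configuration and the OpenAI calls.

```python
from celery import Celery
from pymongo import MongoClient

app = Celery("dreamai", broker="amqp://localhost")     # RabbitMQ as the message broker
cache = MongoClient()["dreamai"]["prompt_cache"]       # MongoDB cache of past prompts

def generate_story_page(prompt: str) -> dict:
    # Placeholder for the GPT-3 text + DALL-E 2 image calls.
    return {"text": f"A story about {prompt}...", "image_url": "https://example.com/img.png"}

@app.task
def story_page(prompt: str) -> dict:
    hit = cache.find_one({"prompt": prompt}, {"_id": 0})
    if hit:                                   # cached prompts skip the expensive model calls
        return hit
    page = {"prompt": prompt, **generate_story_page(prompt)}
    cache.insert_one(dict(page))
    # In the real app, a socket.io event then tells the React client the page is ready.
    return page
```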
## Challenges we ran into 🚩
One of the challenges we ran into during the development of this project was converting the generated text and images into a PDF format within the React front-end. There were several libraries available for this task, but many of them did not work well with the specific version of React we were using. Additionally, some of the libraries required additional configuration and setup, which added complexity to the project. We had to spend a significant amount of time researching and testing different solutions before we were able to find a library that worked well with our project and was easy to integrate into our codebase. This challenge highlighted the importance of thorough testing and research when working with new technologies and libraries.
## Accomplishments that we're proud of ⭐
One of the accomplishments we are most proud of in this project is our ability to leverage the latest technologies, particularly machine learning, to enhance the user experience. By incorporating natural language processing and image generation, we were able to create a tool that can generate high-quality stories with corresponding text and images. This not only makes the process of story-telling more interactive and fun, but also allows users to create unique and personalized stories.
## What we learned 📚
Throughout the development of this project, we learned a lot about building highly scalable data pipelines and infrastructure. We discovered the importance of choosing the right technology stack and tools to handle large amounts of data and ensure efficient communication between different components of the system. We also learned the importance of thorough testing and research when working with new technologies and libraries.
We also learned about the importance of using message brokers and work queues to handle data flow and communication between different components of the system, which allowed us to create a more robust and scalable infrastructure. We also learned about the use of NoSQL databases, such as MongoDB to cache data and improve performance. Additionally, we learned about the importance of using socket.io for real-time communication, which can minimize the load on the server.
Overall, we learned about the importance of using the right tools and technologies to build a highly scalable and efficient data pipeline and infrastructure, which is a critical component of any large-scale project.
## What's next for Dream.ai 🚀
There are several exciting features and improvements that we plan to implement in the future for Dream.ai. One of the main focuses will be on allowing users to export their generated stories to YouTube. This will allow users to easily share their stories with a wider audience and potentially reach a larger audience.
Another feature we plan to implement is user history. This will allow users to save and revisit old prompts and stories they have created, making it easier for them to pick up where they left off. We also plan to allow users to share their prompts on the site with other users, which will allow them to collaborate and create stories together.
Finally, we are planning to improve the overall user experience by incorporating more customization options, such as the ability to select different themes, characters and settings. We believe these features will further enhance the interactive and fun nature of the tool, making it even more engaging for users. | winning |
## Inspiration
Fall lab and design bay cleanout leads to some pretty interesting things being put out at the free tables. In this case, we were drawn in by a motorized Audi Spyder car. And then, we saw the Neurosity Crown headsets, and an idea was born. A single late-night call among team members, excited about the possibility of using a kiddy car for something bigger, was all it took. Why can't we learn about cool tech and have fun while we're at it?
Spyder is a way to control cars with our minds. Use cases include remote rescue, accessibility for non-able-bodied individuals, warehouse automation, and being extremely cool.
## What it does
Spyder uses the Neurosity Crown to take the brainwaves of an individual, train an AI model to detect and identify certain brainwave patterns, and turn them into outputs that humans can recognize. It's a dry brain-computer interface (BCI), which means electrodes are placed against the scalp to read the brain's electrical activity. Taking advantage of this non-invasive method of reading electrical impulses allows for greater accessibility to neural technology.
Collecting these impulses, we are then able to forward these commands to our Viam interface. Viam is a software platform that allows you to easily put together smart machines and robotic projects. It completely changed the way we coded this hackathon. We used it to integrate every single piece of hardware on the car. More about this below! :)
## How we built it
### Mechanical
The manual steering had to be converted to automatic. We did this in SolidWorks by creating a custom 3D printed rack and pinion steering mechanism with a motor mount that was mounted to the existing steering bracket. Custom gear sizing was used for the rack and pinion due to load-bearing constraints. This allows us to command it with a DC motor via Viam and turn the wheel of the car, while maintaining the aesthetics of the steering wheel.
### Hardware
A 12V battery is connected to a custom soldered power distribution board. This powers the car, the boards, and the steering motor. For the DC motors, they are connected to a Cytron motor controller that supplies 10A to both the drive and steering motors via pulse-width modulation (PWM).
A custom LED controller and buck converter PCB stepped down the voltage from 12V to 5V for the LED underglow lights and the Raspberry Pi 4. The Raspberry Pi 4 runs the Viam SDK (which controls all peripherals) and connects to the Neurosity Crown, whose signals feed the software that controls the motors. All the wiring is custom soldered, and many parts are custom-made to fit our needs.
### Software
Viam was an integral part of our software development and hardware bringup. It significantly reduced the amount of code, testing, and general pain we'd normally go through creating smart machine or robotics projects. Viam was instrumental in debugging and testing to see if our system was even viable and to quickly check for bugs. The ability to test features without writing drivers or custom code saved us a lot of time. An exciting feature was how we could take code from Viam and merge it with a Go backend, which is normally very difficult to do. Being able to integrate with Go was very cool - normally you have to use Python (Flask + the SDK). With Go, we get the extra backend benefits without the headache of integration!
Additional software included Python for the keyboard-control client and for testing and validating the mechanical and electrical hardware. We also used JavaScript and Node, with the Neurosity SDK and its Kinesis API, to grab the trained AI signals from the Crown. We then used WebSockets to send them over to the Raspberry Pi to drive the car.
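A sketch of the Pi-side receiver, assuming the Node process pushes Crown predictions (for example "push", "left") over a WebSocket; the message names are assumptions and the motor call is a placeholder for the Viam motor components we use.

```python
import asyncio
import websockets

async def set_drive(command: str) -> None:
    print("drive command:", command)     # placeholder for Viam motor power calls

async def handler(ws):
    async for message in ws:             # each message is one Crown prediction
        if message in {"push", "left", "right", "stop"}:
            await set_drive(message)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()           # run forever

if __name__ == "__main__":
    asyncio.run(main())
```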
## Challenges we ran into
Using the Neurosity Crown was the most challenging. Training the AI model to recognize a user's brainwaves and associate them with actions didn't always work. In addition, grabbing this data for more than one action per session was not possible which made controlling the car difficult as we couldn't fully realise our dream.
Additionally, it only caught fire once - which we consider to be a personal best. If anything, we created the world's fastest smoke machine.
## Accomplishments that we're proud of
We are proud of being able to complete a full mechatronics system within our 32 hours. We iterated through the engineering design process several times, pivoting multiple times to best suit our hardware availabilities and quickly making decisions to make sure we'd finish everything on time. It's a technically challenging project - diving into learning about neurotechnology and combining it with a new platform - Viam, to create something fun and useful.
## What we learned
Cars are really cool! Turns out we can do more than we thought with a simple kid car.
Viam is really cool! We learned through their workshop that we can easily attach peripherals to boards, use and train computer vision models, and even use SLAM! We spend so much time in class writing drivers, interfaces, and code for peripherals in robotics projects, but Viam has it covered. We were really excited to have had the chance to try it out!
Neurotech is really cool! Being able to try out technology that normally isn’t available or difficult to acquire and learn something completely new was a great experience.
## What's next for Spyder
* Backflipping car + wheelies
* Fully integrating the Viam CV for human safety concerning reaction time
* Integrating Adhawk glasses and other sensors to help determine user focus and control | ## Inspiration
We realized how difficult it is for visually impaired people to perceive objects coming toward them, whether they are out on the road or inside a building. They encounter potholes and stairs, and things get really hard for them. We decided to tackle the issue of accessibility to support the Government of Canada's initiative to make workplaces and public places completely accessible!
## What it does
This is an IoT device designed to be wearable or attachable to any visual aid already in use. It uses depth perception to perform obstacle detection and integrates Google Assistant for outdoor navigation and all the other "smart activities" the Assistant can do. The Assistant provides voice directions (which pair easily with Bluetooth devices) and the sensors help with avoiding obstacles, which increases situational awareness. Another beta feature identifies moving obstacles and plays sounds so the person can recognize them (e.g., barking sounds for a dog).
## How we built it
It's a Raspberry Pi-based device, and we integrated the Google Cloud SDK to use the Vision API, the Assistant, and the other features offered by GCP. We have sensors for depth perception, buzzers to play alert sounds, and a camera and microphone.
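A simplified version of the obstacle-alert loop, assuming an HC-SR04-style ultrasonic sensor; the GPIO pin numbers and the distance threshold are illustrative, not our exact wiring.

```python
import time
import RPi.GPIO as GPIO

TRIG, ECHO, BUZZER = 23, 24, 18        # illustrative BCM pin numbers
GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)
GPIO.setup(BUZZER, GPIO.OUT)

def distance_cm() -> float:
    GPIO.output(TRIG, True)
    time.sleep(0.00001)                 # 10 microsecond trigger pulse
    GPIO.output(TRIG, False)
    start = end = time.time()
    while GPIO.input(ECHO) == 0:        # wait for the echo pulse to start
        start = time.time()
    while GPIO.input(ECHO) == 1:        # ...and measure how long it stays high
        end = time.time()
    return (end - start) * 34300 / 2    # speed of sound, there and back

try:
    while True:
        GPIO.output(BUZZER, distance_cm() < 80)   # buzz when something is close
        time.sleep(0.2)
finally:
    GPIO.cleanup()
```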
## Challenges we ran into
It was hard for us to set up the Raspberry Pi, having had no background with it. We had to learn how to integrate cloud platforms with embedded systems and understand how microcontrollers work, especially since we don't come from an engineering background and two members are high school students. Multi-threading in an embedded architecture was also a challenge for us.
## Accomplishments that we're proud of
After hours of grinding, we were able to get the Raspberry Pi working and implement depth perception, location tracking with Google Assistant, and object recognition.
## What we learned
Working with hardware is tough, even though you could see what is happening, it was hard to interface software and hardware.
## What's next for i4Noi
We want to explore more ways in which i4Noi can make things more accessible for blind people. Since we already have Google Cloud integration, we could add another feature where we play sounds for living obstacles so special care can be taken; for example, when a dog comes in front, we produce barking sounds to alert the person. We would also like to implement multi-threading for our two processes and make this device as wearable as possible, so it can make a difference to the lives of the people. | ## Inspiration
Self driving cars are often in the news, being cited as an impressive new innovation and the future of transportation. Our team recognized the value of this automation in many sectors other than the main roads that we frequently drive in. Automated vehicles could also be used on a smaller scale, in warehouses, hospitals, homes, and grocery stores. However, the full suite of electronics in a self driving car would prove to be extremely costly in these situations, where there would be significantly less need for precision driving. In response, our team looked towards a more modest, affordable, and flexible method of introducing automated vehicles to various sectors of life.
## What it does
Our solution to this problem is a small but flexible robot with various levels of control capability corresponding to need. The first level is a simple line sensor, where our vehicle is able to move along predetermined paths. This would be useful in places such as warehouses or home security, where there are predictable paths and easily programmable stopping points. We hope to include a GUI that would let warehouse owners easily implement these routes in the future.
The second level is a smartphone-controlled Bluetooth robot, where we used the Arduino Bluetooth module along with a simple smartphone app to manually control the robot. This level of implementation, along with various sensors such as a camera, could prove valuable for remote home monitoring, from interacting with pets to observing small children.
Lastly, we introduced the possibility of interfacing with Google Assistant through voice commands. Right now, the extent of our implementation is controlling the direction of travel of our robot, but the code could easily be expanded. Our code turned out to be quite modular with the use of IFTTT, which could easily be extended into an app GUI where users program in their own features, such as a kitchen assistant or a robotic nurse in a hospital, where hands-off interaction is needed.
## How we built it
The base of our project is a simple car chassis, where we used the standard Arduino motor control and Bluetooth module. Navigation at this level is aided by two line sensors and an ultrasonic sensor. Power is provided through a makeshift power bank, which allows for easy and fast charging. The Google assistant implementation is a bit more complicated. We used a particle photon kit to send commands to the Arduino through wifi, aided by IFTTT.
## Challenges we ran into
The main challenge we faced was our implementation of the voice commands. Since we were inexperienced with Google Home and Wi-Fi modules, we had to start from the bottom. We were originally going to use the ESP8266 board, but due to the lack of components, we went with the Particle Photon instead. It was easy to set up the voice recognition on its own, but it was more challenging to actually send the data to the Arduino and in turn the motors, since the Photon is not that well documented at the moment. We eventually solved this problem by making an intermediate circuit and using Serial commands.
## Accomplishments that we're proud of
In terms of our mechanical design, we did not need to make any major modifications since our first model, apart from a few circuitry changes. We also managed to make the whole package relatively compact, allowing for easy expansion in the future. On the software side, we were proud that we were able to integrate so many control capabilities into a simple motor car, as well as making different programs and hardware interface with each other.
## What we learned
Since the theme for this hackathon is connectivity, all of us learned a great deal about how different devices interact with each other. Since we used Arduino, Particle Photon, Google Assistant and Bluetooth in our project, we had to spend a great deal of time learning how to write code and design circuits that worked with all four components. This was ultimately beneficial to us, since we also realized the endless possibilities that different IoT device combinations could bring.
## What's next for BoundlessBot:
We imagine BoundlessBot to be a basis for a wide range of consumer implementations, and taking this to an entrepreneurial level, we would ideally like our hardware and software to be extremely easy to personalize and use. People could design their own robot based on their own needs, and companies could have small fleets of these robots perform jobs based on cloud networks. We hope to host our own mini-programs store for developers to get creative. | winning |
## Inspiration
While thinking about the state of travelling today, our team has been really saddened to see the effect that COVID has had on areas that depend on tourism revenue to survive.
For this Makeathon, we wanted to build something that might help struggling venues and businesses that benefit from travel return to a more stable state by helping eliminate some of the barriers that make it difficult for tourists to travel or enjoy their vacations.
The inspiration behind the name 'Soteria' was from the Greek goddess of the same name, who was the entity responsible for safety and salvation, deliverance, and preservation from harm. And we felt that her name is best suited for our project which has the same purpose.
## What it does
* Monitor indoor air quality and environment using onboard sensors.
* Identify conditions that may be uncomfortable for those with chronic lung conditions or fine-dust allergies.
* Detect hazardous conditions and find their source using autonomous technology.
* Send data over the internet so caretakers can be alerted or informed when a place needs assistance because of a rising air quality index.
* Provide real-time data to people about the conditions of their staying place/room over the internet which will create greater transparency between businesses and tourists.
## How we built it
We built the system using the ESP32 microcontroller as the brain of the system and a few sensors: the DHT11 for the temperature and humidity in the area, the MQ135 for the CO2 content in the air, and the HC-SR04 ultrasonic distance sensor for distance measurement and deciding which way to go. Along with that, we used a servo to rotate the HC-SR04 to the left and right. We were initially using the L293D motor driver to drive the motors, but later we changed to relay control since we, unfortunately, fried it 😦 oops!
However, we overcame that with a quick solution by using the relays. 😁
We used Google Firebase to publish the data so that it can also be read by the UI front end that we made. A Node.js server gets live updates from the Firebase DB and sends them to a frontend dashboard application built in React.js.
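A rough sketch of how readings can be pushed to the Firebase Realtime Database over its REST interface (the project URL and field names are placeholders, and on the actual board this logic runs in the microcontroller firmware rather than desktop Python):

```
import time
import requests

FIREBASE_URL = "https://YOUR-PROJECT.firebaseio.com/readings.json"  # placeholder project URL

def publish(temp_c, humidity, co2_ppm):
    payload = {
        "temperature": temp_c,
        "humidity": humidity,
        "co2": co2_ppm,
        "timestamp": int(time.time()),
    }
    # POST to the .json endpoint appends the reading under an auto-generated key
    resp = requests.post(FIREBASE_URL, json=payload, timeout=5)
    resp.raise_for_status()

publish(24.5, 61.0, 740)
```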
## Challenges we ran into
Having the hardware and software sides of this project worked on from different continents. Both of us are more experienced with backend/hardware development, so it was fun but challenging to make a front-end dashboard.
## Accomplishments that we're proud of
We were really happy to have finished the project in just 24 hours and overcame almost all the technical challenges that we faced.
## What we learned
We learned about the different aspects of hardware and software that we both worked on respectively.
## What's next for Soteria
We want to equip the Soteria autonomous rover with emergency tools that can inform others as well as take action to minimize hazardous conditions inside the building, be it door and window unlocking capabilities or fire extinguishing.
Another goal we have next is to smooth out the interface further and continue working on it for making it a marketable device. | ## Inspiration
Very boring classes.
## What it does
Gives you a transcript and summary of what's going on in class in real time.
## How we built it
Integrated GPT-3 with OpenAI Whisper.
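Roughly, the pipeline looks like the sketch below: transcribe the lecture audio with the open-source Whisper package, then ask GPT-3 for a summary (shown with the pre-1.0 `openai` SDK that was current at the time; the file name and prompt are illustrative):

```
import openai
import whisper

openai.api_key = "YOUR_API_KEY"  # placeholder

# 1. Transcribe the classroom audio
model = whisper.load_model("base")
transcript = model.transcribe("lecture.wav")["text"]

# 2. Summarize the transcript with GPT-3
completion = openai.Completion.create(
    model="text-davinci-003",
    prompt="Summarize this lecture for a student:\n\n" + transcript,
    max_tokens=200,
)
print(completion.choices[0].text.strip())
```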
## Challenges we ran into
It was our first time using HTML, and we're proud we got that working.
## Accomplishments that we're proud of
That we got it working. Really useful project.
## What we learned
How to make a frontend and communicate with APIs
## What's next for ClassSnap
Better frontend | ## Inspiration
As students with busy lives, it's difficult to remember to water your plants especially when you're constantly thinking about more important matters. So as solution we thought it would be best to have an app that centralizes, monitors, and notifies users on the health of your plants.
## What it does
The system is set up with two main components: hardware and software. On the hardware side, we have multiple sensors placed around the plant that provide input on various parameters (e.g. moisture, temperature). Once extracted, the data is relayed to an online database (in our case Google Firebase), where it is then read by our front-end system, an Android app. The app currently allows user authentication and the ability to add and delete plants.
## How we built it
**The Hardware**:
The hardware setup for this hack was iterated on multiple times during the hacking phase due to setbacks with the hardware we were given. Originally we planned on using the DragonBoard 410c as a central hub for all the sensory input before transmitting it via Wi-Fi. However, the DragonBoard we got from the hardware lab had a corrupted version of Windows IoT, which meant we had to flash the entire device before starting. After flashing, we learned that DragonBoards (and Raspberry Pis) lack support for analog input, meaning the circuit required some sort of ADC (analog-to-digital converter). Afterwards, we decided to use the ESP8266 Wi-Fi boards to send data, as they better reflected the form factor of a realistic prototype and because the board itself supports analog input. In addition, we used an Arduino Uno to power the moisture sensor because it required 5V and the ESP outputs 3.3V (the Arduino acts as a 5V regulator).
**The Software**:
The app was made in Android Studio and was built with user interaction in mind, having users authenticate themselves and add their corresponding plants, which in the future would each have sensors. The app is also built with scalability in mind, as it uses Google Firebase for user authentication and sensor data logging.
## Challenges we ran into
The lack of support for the DragonBoard left us with many setbacks: endless boot cycles, lack of IO support, and flashing multiple OSs on the device. What put us off the most was having people tell us not to use it because of its difficulty. However, we still wanted to incorporate it in some way.
## Accomplishments that we're proud of
* Flashing the Dragonboard and booting it with Windows IOT core
* A working hardware/software setup that tracks the life of a plant using sensory input.
## What we learned
* Learned how to program the DragonBoard (in both Linux and Windows)
* Learned how to incorporate Firebase into our hack
## What's next for Dew Drop
* Take it to the garden world where users can track multiple plants at once and even support a self watering system | losing |
## Inspiration
Professor Michelle McAllister, formerly at Northeastern University, provided notes styled in such a manner. These "incomplete" notes were filled out by students during her lectures. A team member was her student and, having done very well in her class, wanted to create similarly-styled notes for other classes. The team member envisioned other students being able to style their notes in such a way so they could practice for tests and ace their exams!
## What it does
By entering their notes into a textbox, students can use Text2Test to create fill-in-the-blank questions in order to practice for tests.
## How we built it
We used AWS APIs, specifically Lambda, Comprehend, and API Gateway. Additionally, we used HTML, JavaScript, Python and Ruby.
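A simplified version of the Lambda handler, using boto3 and Comprehend's key-phrase detection to blank out phrases in the submitted notes (the event shape assumes an API Gateway proxy integration, and the field names are illustrative):

```
import json
import boto3

comprehend = boto3.client("comprehend")

def lambda_handler(event, context):
    notes = json.loads(event["body"])["notes"]
    phrases = comprehend.detect_key_phrases(Text=notes, LanguageCode="en")["KeyPhrases"]

    blanked = notes
    answers = []
    for p in phrases:
        if p["Score"] > 0.9:                       # only confident phrases become blanks
            blanked = blanked.replace(p["Text"], "_____", 1)
            answers.append(p["Text"])

    return {
        "statusCode": 200,
        "body": json.dumps({"question": blanked, "answers": answers}),
    }
```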
## Challenges we ran into
We had more ideas we wanted to implement, like running Optical Character Recognition (OCR) on an uploaded PDF scan of handwritten notes, but we would need more than 24 hours for such an ambitious idea. Having never used AWS before, we had to learn how to upload code to Lambda and connect the Comprehend and API Gateway services to it. We also had difficulty posting requests from the website to the Gateway API, but we eventually solved that problem. The Python code for Lambda and the website code are saved to GitHub.
## Accomplishments that we're proud of
This website allows students to create a profile and login to a portal. Students can schedule exams, upload their notes via a submission portal, add notes to a particular subject, generate customized tests based on their notes, and test their knowledge by entering answers to fill-in-the-blank questions.
## What we learned
We learned how to utilize many AWS APIs like Comprehend, Lambda, and Gateway. In addition, we learned how to post requests in the right format from outside entities (like our Ruby on Rails website) to the AWS Gateway API.
## What's next for Text2Test
We want to give students the ability to upload handwritten notes. To increase the numbers of learning tools, we wish to include a multiple-choice section with "smart" answer choices. Finally, we want to allow images to be used as inputs and test students on content pertaining to the images. | ## KEEP YOUR PHONE CLOSE, AND YOUR FRIENDS CLOSER
DRIFT is a web app designed for college students and young adults who want to party safely. With a need for 21st-century solutions to common dangers associated with parties, like alcohol overdose and sexual assault, Drift presents young adults with an effective way to stick together in the excitement and chaos of parties and social gatherings. Offering easy-to-use functionality that enables friends to form party groups in advance, keeps track of partygoers on a map, and with options like the “panic” and “emergency contacts” button, Drift is a must-have for anyone who wants to enjoy a night out knowing their friends have their back.
## What it does:
DRIFT keeps track of your friend group at parties to ensure that no friend gets left behind. If a member is noticed leaving the party, all friends will be notified of their location, and vice versa. DRIFT keeps track of members' last known locations for the entirety of the party and implements additional safety features, including a PANIC button and emergency contacts.
## How we built it:
We built it using HTML, CSS and JavaScript, and it uses the Twilio, ipgeolocation and Google Maps APIs to send text messages and keep track of locations, respectively. The site is currently hosted using Google Cloud's Firebase.
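The leave-the-party alert boils down to a distance check plus a Twilio text. A stripped-down sketch of that logic (credentials, phone numbers and the 150 m radius are placeholders, and the real site does this in JavaScript rather than Python):

```
from math import asin, cos, radians, sin, sqrt
from twilio.rest import Client

client = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials

def distance_m(lat1, lon1, lat2, lon2):
    # haversine distance between two coordinates, in metres
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371000 * 2 * asin(sqrt(a))

def check_member(member, party_lat, party_lon, group_numbers):
    # text the whole group if a member drifts outside the party radius
    if distance_m(member["lat"], member["lon"], party_lat, party_lon) > 150:
        for number in group_numbers:
            client.messages.create(
                body=f"{member['name']} has left the party area - last seen at "
                     f"{member['lat']:.5f}, {member['lon']:.5f}",
                from_="+15550000000",
                to=number,
            )
```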
## Challenges we ran into:
Implementing a database with Firebase.
## Accomplishments that we're proud of:
We believe this kind of technology can really make a positive impact on the safety of our generation and the ones to come, and we hope to see this project grow in the future.
## What we learned:
How to use Google's Firebase!
## What's next for DRIFT:
Database integration, more accurate location tracking.
*note: designed for smartphone use so UI appears best when opened on phone (or scaling to a smaller size)!* | ## Inspiration
To students, notes are an academic gold standard. Many work tirelessly to create immaculate lecture transcriptions and course study guides - but not everyone writes in a style that is accessible to everyone else, making the task of sharing notes sometimes fruitless. Moreover, board notes are often messy, or unreliable. Wouldn't it be nice for a crowdsourced bank of lecture notes to be minimized to just the bare essentials? But moreover, essentials tailored to your chosen style of reading and learning?
## What it does
The hope was to build a web app for students to share class notes, and be able to merge those notes into their own custom file, which would contain their favourite parts of everyone else's notes. In reality, we made an app that authenticates a user, and not much else.
## How we built it
We built a MERN stack app - MongoDB, Express.js, and Node.js for the back end, while React for the front end - and powered the AI that we were able to put together using Google Cloud.
## Challenges we ran into
We found user registration and authentication to be a huge hurdle to our development, stunting nearly all of our wished-for features. Moreover, the lack of a solid plan and framework made fitting the back end and the front end together rather difficult, as these pieces were all developed separately.
## Accomplishments that we're proud of
We're proud that we all learned a great deal about web development, front-end frameworks, databasing, and why we should use Flask. We're also incredibly proud of our designer, who put together the most impressive parts of our app. | partial |
## Inspiration
On our way to PennApps, our team was hungrily waiting in line at Shake Shack while trying to think of the best hack idea to bring. Unfortunately, rather than being able to sit comfortably and pull out our laptops to research, we were forced to stand in a long line to reach the cashier, only to be handed a clunky buzzer that countless other greasy fingered customer had laid their hands on. We decided that there has to be a better way; a way to simply walk into a restaurant, be able to spend more time with friends, and to stand in line as little as possible. So we made it.
## What it does
Q'd (pronounced queued) digitizes the process of waiting in line by allowing restaurants and events to host a line through the mobile app and for users to line up digitally through their phones as well. Also, it functions to give users a sense of the different opportunities around them by searching for nearby queues. Once in a queue, the user "takes a ticket" which decrements until they are the first person in line. In the meantime, they are free to do whatever they want and not be limited to the 2-D pathway of a line for the next minutes (or even hours).
When the user is soon to be the first in line, they are sent a push notification and requested to appear at the entrance where the host of the queue can check them off, let them in, and remove them from the queue. In addition to removing the hassle of line waiting, hosts of queues can access their Q'd Analytics to learn how many people were in their queues at what times and learn key metrics about the visitors to their queues.
## How we built it
Q'd comes in three parts; the native mobile app, the web app client, and the Hasura server.
1. The mobile iOS application was built with Apache Cordova, which allows the native iOS app to be written in pure HTML and JavaScript. This framework allows the application to work on Android, iOS, and the web, as well as to be incredibly responsive.
2. The web application is built with good ol' HTML, CSS, and JavaScript. Using the Materialize CSS framework gives the application a professional feel as well as resources such as AmChart that provide the user a clear understanding of their queue metrics.
3. Our beast of a server was constructed with the Hasura application, which allowed us to build our own data structure as well as use its API calls for the data across all of our platforms. Therefore, every method dealing with queues or queue analytics talks to our Hasura server through API calls and database use; a minimal example of such a call is sketched below.
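As a rough illustration, joining a queue from any of the clients is just a GraphQL mutation against the Hasura endpoint. The endpoint, table and column names below are assumptions, not our exact schema:

```
import requests

HASURA_URL = "https://your-app.hasura.app/v1/graphql"   # placeholder endpoint
HEADERS = {"x-hasura-admin-secret": "ADMIN_SECRET"}      # placeholder secret

JOIN_QUEUE = """
mutation JoinQueue($queue_id: Int!, $user_id: Int!) {
  insert_tickets_one(object: {queue_id: $queue_id, user_id: $user_id}) {
    id
    position
  }
}
"""

def join_queue(queue_id, user_id):
    resp = requests.post(
        HASURA_URL,
        json={"query": JOIN_QUEUE, "variables": {"queue_id": queue_id, "user_id": user_id}},
        headers=HEADERS,
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()["data"]["insert_tickets_one"]
```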
## Challenges we ran into
A key challenge we discovered was the implementation of Cordova and its associated plugins. Having been primarily Android developers, the native environment of the iOS application challenged our skills and gave us a lot to learn before we were ready to properly implement it.
Next, although less of a challenge, the Hasura application had a learning curve before we were able to really use it successfully. In particular, we had issues with relationships between different objects within the database. Nevertheless, we persevered and were able to get it working really well, which allowed for an easier time building the front end.
## Accomplishments that we're proud of
Overall, we're extremely proud of coming in with little knowledge about Cordova, iOS development, and only learning about Hasura at the hackathon, then being able to develop a fully responsive app using all of these technologies relatively well. While we considered making what we are comfortable with (particularly web apps), we wanted to push our limits to take the challenge to learn about mobile development and cloud databases.
Another accomplishment we're proud of is making it through our first hackathon longer than 24 hours :)
## What we learned
During our time developing Q'd, we were exposed to and became proficient in various technologies ranging from Cordova to Hasura. However, besides technology, we learned important lessons about taking the time to properly flesh out our ideas before jumping in headfirst. We devoted the first two hours of the hackathon to really understand what we wanted to accomplish with Q'd, so in the end, we can be truly satisfied with what we have been able to create.
## What's next for Q'd
In the future, we're looking towards enabling hosts of queues to include premium options for users to take advantage to skip lines of be part of more exclusive lines. Furthermore, we want to expand the data analytics that the hosts can take advantage of in order to improve their own revenue and to make a better experience for their visitors and customers. | ## The problem it solves :
During these tough times when humanity is struggling to survive, **it is essential to maintain social distancing and proper hygiene.** As a big crowd now is approaching the **vaccination centres**, it is obvious that there will be overcrowding.
This project implements virtual queues which will ensure social distancing and allow people to stand apart instead of crowding near the counter or the reception site, an evolving necessity in COVID settings!
**“With Quelix, you can just scan the QR code, enter the virtual world of queues and wait for your turn to arrive. Timely notifications will keep the user updated about their position in the Queue.”**
## Key-Features
* **Just scan the QR code!**
* **Enter the virtual world of queues and wait for your turn to arrive.**
* **Timely notifications/sound alerts will keep the user updated about his position/Time Left in the Queue.**
* **Automated Check-in Authentication System for Following the Queue.**
* **Admin Can Pause the Queue.**
* **Admins now have the power to remove anyone from the queue**
* Reduces Crowding to Great Extent.
* Efficient Operation with Minimum Cost/No additional hardware Required
* Completely Contactless
## Challenges we ran into :
* Simultaneous Synchronisation of admin & queue members with instant Updates.
* Implementing Queue Data structure in MongoDB
* Building OTP API just from scratch using Flask.
```
while quelix.on:
    if covid_cases.slope < 0:
        print(True)
# >>> True
```
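The OTP API mentioned above is essentially two Flask routes. A bare-bones sketch of it follows; the in-memory dictionary and random codes stand in for the real persistence and SMS/email delivery:

```
import secrets
from flask import Flask, jsonify, request

app = Flask(__name__)
otps = {}  # phone -> code; the real API would persist this and expire codes

@app.route("/otp/request", methods=["POST"])
def request_otp():
    phone = request.json["phone"]
    otps[phone] = f"{secrets.randbelow(1000000):06d}"
    # deliver otps[phone] via SMS/email here
    return jsonify({"status": "sent"})

@app.route("/otp/verify", methods=["POST"])
def verify_otp():
    data = request.json
    ok = otps.get(data["phone"]) == data["code"]
    return jsonify({"verified": ok})

if __name__ == "__main__":
    app.run()
```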
[Github repo](https://github.com/Dart9000/Quelix2.0)
[OTP-API-repo](https://github.com/Dart9000/OTP-flask-API)
[Deployment](https://quelix.herokuapp.com/) | ## Inspiration
With the world in a technology age, it is easy to lose track of human emotions when developing applications meant to make the world a better place. Searching for restaurants using multiple filters and reading reviews is oftentimes inefficient, leading the customer to give up searching and settle for something familiar. With a more personal approach, we hope to connect people to restaurants that they will love.
## What it does
Using Indico’s machine learning API for text analysis, we are able to create personality profiles for individuals and recommend them restaurants from people of a similar personality.
## How we built it
Backend: We started by drafting the architecture of the application, then defining the languages, frameworks and APIs to be used within the project. On the day of the hackathon we then created a set of mock data from the Yelp dataset. The dataset was imported into MongoDB and managed through mLab. In order to query the data, we used Node.js and Mongoose to communicate with the database.
Frontend: The front end is built off of the semantic ui framework. We used default layouts to start and then built on top of them as new functionality was required. The landing page was developed from something a member had done in the past using modernizer and bootstrap slideshow functionality to rotate through background images. Lastly we used ejs as our templating language as it integrates with express very easily.
## Challenges we ran into
1. We realized that the datasets we had compiled were not diverse enough to show a wide range of possible results.
2. The team had an overall big learning curve throughout the weekend as we all were picking up some new languages along the way.
3. There was an access limit to the resources that we were using towards testing efforts for our application, which we never predicted.
## Accomplishments that we're proud of
1. Learning new web technologies, frameworks and APIs that are available and hot in the market at the moment!
2. Using the time before the hackathon to brainstorm and discussing a little more in depth of each team member's task.
3. Collaboratively working together using version control through Git!
4. Asking for help and guidance when needed, which leads to a better understanding of how to implement certain features.
## What we learned
Node.js, Mongoose, Mlab, Heroku, No SQL Databases, API integration, Machine Learning & Sentiment Analysis!
## What's next for EatMotion
We hope that with our web app and with continued effort, we may be able to predict restaurant preferences for people with a higher degree of accuracy than before. | winning |
## Inspiration
It may have been the last day before an important exam, the first day at your job, or the start of your ambitious journey of learning a new language, when you were frustrated at the lack of engaging programming tutorials. It was impossible to get the basics down, or to stay focused, given the struggle of navigating through the different tutorials trying to find the perfect one to solve your problems.
Well, that's what led us to create Code Warriors. Code Warriors is a platform focused on encouraging the younger and older audience to learn how to code. Video games and programming are brought together to offer an engaging and fun way to learn how to code. Not only are you having fun, but you're constantly gaining new and meaningful skills!
## What it does
Code Warriors provides a gaming website where you can hone your skills in all the coding languages it offers, all while levelling up your character and following the storyline! As you follow Asmodeus the Python into the jungle of Pythania to find the lost amulet, you get to develop your skills in Python by solving puzzles that incorporate data types, if statements, for loops, operators, and more. Once you finish each mission/storyline, you unlock new items, characters, XP, and coins, which can help buy new storylines/coding languages to learn! In conclusion, Code Warriors offers a fun time that will make you forget you were even coding in the first place!
## How we built it
We built code warriors by splitting our team into two to focus on two specific points of the project.
The first team was the UI/UX team, which was tasked with creating the design of the website in Figma. This was important as we needed a team that could make our thoughts come to life in a short time, and design them nicely to make the website aesthetically pleasing.
The second team was the frontend team, which was tasked with using react to create the final product, the website. They take what the UI/UX team has created, and add the logic and function behind it to serve as a real product. The UI/UX team shortly joined them after their initial task was completed, as their task takes less time to complete.
## Challenges we ran into
The main challenge we faced was learning how to code with React. All of us had either basic/no experience with the language, so applying it to create code warriors was difficult. The main difficulties associated with this were organizing everything correctly, setting up the react-router to link pages, as well as setting up the compiler.
## Accomplishments that we're proud of
The first accomplishment we were proud of was setting up the login page. It accepts only registered usernames and passwords, and will not let you log in without them. We are also proud of the gamified look we gave the website, as it gives the impression that the user is playing a game. Lastly, we are proud of having the compiler embedded in the website, as it allows for a lot more user interaction and adds function to the website.
## What we learned
We learnt a lot about react, node, CSS, javascript, and tailwind. A lot of the syntax was new to us, as well as the applications of a lot of formatting options, such as padding, margins, and more. We learnt how to integrate tailwind with react, and how a lot of frontend programming works.
We also learnt how to efficiently split tasks as a team. We were lucky enough to see that our initial split up of the group into two teams worked, which is why we know that we can continue to use this strategy for future competitions, projects, and more.
## What's next for Code Warriors
What's next for code warriors is to add more lessons, integrate a full story behind the game, add more animations to give more of a game feel to it, as well as expand into different coding languages! The potential for code warriors is unlimited, and we can improve almost every aspect and expand the platform to proving a multitude of learning opportunities all while having an enjoyable experience.
## Important Info for the Figma Link
**When opening the link, go into the simulation and press z to fit screen and then go full screen to experience true user interaction** | ## Inspiration
Games on USB!
Back in the good old days when we had actual handheld console games, we used to carry around game cartridges with us the whole time. Cartridges not only carried the game data itself, but also your save files, your game progress and even your childhood memories. As a 20-something young adult who played too much Pokemon in his childhood days, I couldn't resist building a homage project in a gaming-themed hackathon. With some inspiration from the USB club, I decided to build a USB game cartridge that can be plugged into any computer to play the games stored in it. Just like a game cartridge! You can save the game, carry it around, and wherever you can plug a USB in, you can continue playing the game on any computer.
## What it does
Visit <https://uboy.esinx.net> and you will see... nothing! Well, it's because that's how consoles worked back in those days. Without a cartridge, you can't really play anything, can you? The same applies here! Without a USB, you won't be able to play any games. But once you have your games & saves ready on a USB, you'll be able to load up your game on uBoy and enjoy playing classic console games in your browser! Everything from the game data to the save files is stored on the USB, so you can carry them around and play them on any computer with a browser.
## How we built it
uBoy is heavily inspired by the [wasmboy](wasmboy.app) project, which brings Nintendo's GameBoy(GB) & GameBoy Color(GBC) emulation into the web. WASM, or WebAssembly is a technology that allows assembly code to be executed in a browser environment, which unleashes the capabilities of your computer. Previously written emulation projects were easily ported to the web thanks to WASM.
Saving to USB works using the File System Access API, a standard web API that allows web applications to read and write files on the user's local file system. This API is still in development, but it's already available in Chrome and Edge. Regardless, many browsers with modern web API support should be able to play uBoy!
## Challenges we ran into
* WASM is not the most efficient way to run an emulator. It's not as fast as native code, and it's not as easy to debug as JavaScript. It's a trade-off between performance and portability.
* The File System Access API is still in development, so there is not much of a way to interact with the USB itself.
* Its alternative, WebUSB, is available but lacks support for mass storage devices like the ones provided by the USB Club.
## Accomplishments that we're proud of
* Games on USB!
* Saving on USB!
* GameBoy & GameBoy Color emulation
* Cute UI to make you feel more nostalgic
## What we learned
* WebAssembly
* File System Access API
* GameBoy & GameBoy Color emulation
+ Game memory management
* How to create an interaction for web based gaming
## What's next for uBoy
* DS Emulation (hopefully)
* Trading using USB!
+ What if you plug multiple USBs into the computer? | ## Inspiration
As students, we have found that there are very few high-quality resources on investing for those who are interested but don't have enough resources. Furthermore, we have found that investing and saving money can be a stressful experience. We hope to change this for those who want to save better with the help of our app, hopefully making it fun in the process!
## What it does
Our app first asks a new client a brief questionnaire about themselves. Then, using their banking history, it generates 3 "demons", aka bad spending habits, to kill. Then, after the client chooses a habit to work on, it brings them to a dashboard where they can monitor their weekly progress on a task. Once the week is over, the app declares whether the client successfully beat the mission - if they did, they get rewarded with points which they can exchange for RBC Loyalty points!
## How we built it
We built the frontend using React + Tailwind, using Routes to display our different pages. We used Cohere for our AI services, both for generating personalized weekly goals and creating a more in-depth report. We used Firebase for authentication + cloud database to keep track of users. For our data of users and transactions, as well as making/managing loyalty points, we used the RBC API.
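For the weekly goals, the Cohere call is essentially a prompt built from the client's flagged spending habit. A hedged sketch of it (the API key, prompt wording, and habit fields are illustrative, not our production prompt):

```
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder

def weekly_goal(habit_name, weekly_spend):
    prompt = (
        f"A student spends about ${weekly_spend} per week on {habit_name}. "
        "Write one specific, encouraging goal they can try this week to cut that down."
    )
    response = co.generate(prompt=prompt, max_tokens=60, temperature=0.7)
    return response.generations[0].text.strip()

print(weekly_goal("bubble tea", 35))
```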
## Challenges we ran into
Piecing the APIs together was probably our most difficult challenge. Besides learning the different APIs in general, integrating the different technologies got quite tricky when we were trying to do multiple things at the same time!
Besides API integration, definitely working without any sleep though was the hardest part!
## Accomplishments that we're proud of
Definitely our biggest accomplishment was working so well together as a team. Despite only meeting each other the day before, we got along extremely well and were able to come up with some great ideas and execute under a lot of pressure (and sleep deprivation!) The biggest reward from this hackathon are the new friends we've found in each other :)
## What we learned
I think each of us learned very different things: this was Homey and Alex's first hackathon, where they learned how to work under a tight time constraint (and did extremely well!). Paige learned tons about React, frontend development, and working in a team. Vassily learned lots about his own strengths and weaknesses (surprisingly reliable at git, apparently, although he might have too much of a sweet tooth).
## What's next for Savvy Saver
Demos! After that, we'll just have to see :) | winning |
## What it does
Tickets is a secure, affordable, and painless system for registration and organization of in-person events. It utilizes public key cryptography to ensure the identity of visitors, while staying affordable for organizers, with no extra equipment or cost other than a cellphone. Additionally, it provides an easy method of requesting waiver and form signatures through Docusign.
## How we built it
We used Bluetooth Low Energy in order to provide easy communication between devices, PGP in order to verify the identities of both parties involved, and a variety of technologies, including Vue.js, MongoDB Stitch, and Bulma to make the final product.
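Conceptually, check-in verification looks like the sketch below: the visitor's phone sends a signed ticket over BLE and the organizer's phone checks the signature against the public key registered for that visitor. python-gnupg here is a stand-in for our actual mobile crypto layer, and the file name and fingerprint are made up:

```
import gnupg

gpg = gnupg.GPG()  # assumes the visitor's public key is already in the keyring

def check_in(signed_ticket: str, expected_fingerprint: str) -> bool:
    """signed_ticket is the ASCII-armored, signed payload received over BLE."""
    verified = gpg.verify(signed_ticket)
    return verified.valid and verified.fingerprint == expected_fingerprint

# Accept the guest only if the signature matches their registration
if check_in(open("ticket.asc").read(), "ABCD1234EF567890ABCD1234EF567890ABCD1234"):
    print("Welcome in!")
else:
    print("Invalid ticket")
```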
## Challenges we ran into
We tried working (and struggling) with NFC and other wireless technologies before settling on Bluetooth LE as the best option for our use case. We also spent a lot of time getting familiar with MongoDB Stitch and the Docusign API.
## Accomplishments that we're proud of
We're proud of successfully creating a polished and functional product in a short period of time.
## What we learned
This was our first time using MongoDB Stitch, as well as Bluetooth Low Energy.
## What's next for Tickets
An option to allow for payments for events, as well as more input formats and data collection. | ## Inspiration
The project was inspired by looking at the challenges that artists face when dealing with traditional record labels and distributors. Artists often have to give up ownership of their music, lose creative control, and receive only a small fraction of the revenue generated from streams. Record labels and intermediaries take the bulk of the earnings, leaving the artists with limited financial security. Being a music producer and a DJ myself, I really wanted to make a product with potential to shake up this entire industry for the better. The music artists spend a lot of time creating high quality music and they deserve to be paid for it much more than they are right now.
## What it does
Blockify lets artists harness the power of smart contracts by attaching them to their music when uploading it, automating the process of royalty payments, which is currently very time-consuming. Our primary goal is to remove the record labels and distributors from the industry, since they take a majority of the revenue which the artists generate from their streams for the hard work the artists do. By using a decentralized network to manage royalties and payments, there won't be any disputes regarding missed or delayed payments, and artists will have a clear understanding of how much money they are making from their streams, since they will be dealing with the streaming services directly. This would allow artists to have full ownership over their work and receive fair compensation from streams, which is currently far from the reality.
## How we built it
BlockChain: We used the Sui blockchain for its scalability and low transaction costs. Smart contracts were written in Move, the programming language of Sui, to automate royalty distribution.
Spotify API: We integrated Spotify's API to track streams in real time and trigger royalty payments.
Wallet Integration: Sui wallets were integrated to enable direct payments to artists, with real-time updates on royalties as songs are streamed.
Frontend: A user-friendly web interface was built using React to allow artists to connect their wallets, and track their earnings. The frontend interacts with the smart contracts via the Sui SDK.
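Off-chain, the royalty logic the contract automates is simple proportional math over stream counts. A toy sketch of that calculation (the per-stream rate, wallets and split shares are made up):

```
RATE_PER_STREAM = 0.003  # assumed payout per stream, in SUI

def royalty_payouts(stream_counts, splits):
    """stream_counts: {track_id: plays}; splits: {track_id: {wallet: share}} with shares summing to 1."""
    payouts = {}
    for track, plays in stream_counts.items():
        revenue = plays * RATE_PER_STREAM
        for wallet, share in splits[track].items():
            payouts[wallet] = payouts.get(wallet, 0) + revenue * share
    return payouts

print(royalty_payouts(
    {"track-1": 12000},
    {"track-1": {"0xartist": 0.9, "0xproducer": 0.1}},
))
```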
## Challenges we ran into
The most difficult challenge we faced was the smart contract development using the Move language. Unlike the languages used on better-known chains like Ethereum, Move is relatively new and specifically designed to handle asset management. Another challenge was trying to connect the smart wallets in the application and transfer money to the artist whenever a song was streamed, but thankfully the mentors from the Sui team were really helpful and guided us in the right path.
## Accomplishments that we're proud of
This was our first time working with blockchain, and my teammate and I were really proud of what we were able to achieve over the two days. We worked on creating smart contracts, and even though getting started was the hardest part, we were able to complete it and learnt some great stuff along the way. My teammate had previously worked with React, but I had zero experience with JavaScript, since I mostly work with other languages; we did the entire project in Node and React, and I was able to learn a lot of the concepts in such a short time, which I am very proud of myself for.
## What we learned
We learned a lot about blockchain technology and how we can apply it to real-world problems. One of the most significant lessons we learned was how smart contracts can be used to automate complex processes like royalty payments. We saw how blockchain provides an immutable and auditable record of every transaction, ensuring that every stream, payment, and contract interaction is permanently recorded and visible to all parties involved. Learning more and more about this technology every day just makes me realize how much potential it holds; it is certainly one aspect of technology which will shape the future. It is already being used in so many areas of life, and we are still only scratching the surface.
## What's next for Blockify
We plan to add more features, such as NFTs for exclusive content or fan engagement, allowing artists to create new revenue streams beyond streaming. There have been some real-life examples of artists selling NFT's to their fans and earning millions from it, so we would like to tap into that industry as well. Our next step would be to collaborate with other streaming services like Apple Music and eliminate record labels to the best of our abilities. | ## Inspiration
We were inspired by the custom algorithm of tinder and how it utilizes first impressions and preferences to narrow down larger catalogs to improve “shopping” experience. We also wanted to drive traffic to second-hand charity thrift shops and improve the system currently used by second-hand stores to price their items.
## What it does
The app has two main parts: the explore page, where users engage in a Tinder-like experience of seeing clothing options and swiping up or down. The emotions they experience when viewing a product are evaluated using Hume's Expression Measurement API and used, together with their swipe decision and additional factors, to calculate which products they're more likely to want. Users can see the video footage used in the measurements and the emotions recorded as a show of transparency. The other part is the donation page, where users can add a photo of items they are donating to thrift stores. The donated pieces are occasionally cycled into the explore page to gauge interest using the same expression measurement, which helps drive customer traffic to local charity stores. Users can view a donation count of the number of items they have donated, as well as a metric showing the conservation that occurred as a result of repurposing used clothes. The conservation metric encourages users to repurpose clothes that are still good quality but that they may no longer need, reducing waste and providing cheaper alternatives for clothing.
## How we built it
We employed TypeScript for the frontend and Flask for the backend. Additionally, we leveraged Gemini's multimodal API to extract characteristics of clothing from images, used Word2Vec to create a vector representation of these characteristics generated by GPT, and utilized Hume's API to analyze video streaming via a WebSocket connection, generating data related to emotions. We calculated our pricing for the donated items using a linear regression model using emotion data, prices, and other clothing data generated from advertisements placed in between other products on the Explore page.
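A condensed sketch of the two models described above: averaged Word2Vec vectors for the clothing tags, and a linear regression from those features (plus an emotion score) to price. The toy data is illustrative only:

```
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LinearRegression

# Tags produced for each item by the image-description step
items = [["denim", "jacket", "vintage"], ["floral", "summer", "dress"], ["wool", "sweater", "striped"]]
w2v = Word2Vec(items, vector_size=16, min_count=1, seed=1)

def item_vector(tags):
    return np.mean([w2v.wv[t] for t in tags], axis=0)

# Features = tag vector + average "joy" score recorded while viewing the item
joy = np.array([[0.62], [0.81], [0.44]])
X = np.hstack([np.vstack([item_vector(t) for t in items]), joy])
y = np.array([34.0, 48.0, 22.0])  # prices of comparable listed items

pricer = LinearRegression().fit(X, y)
new_item = np.hstack([item_vector(["denim", "vintage", "jacket"]), [0.7]])
print(round(float(pricer.predict([new_item])[0]), 2))
```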
## Challenges we ran into
Our linear regression model correlating prices to emotions did not exhibit a positive correlation as expected. Additionally, we faced issues with the swiping feature in the frontend, experienced delays in calculations and retrieving images, leading to potential lag on the page. We had to manage our use of API credits carefully and conduct thorough testing. Ensuring the correct data and format were passed between the frontend and backend was also a significant concern. Finally, we grappled with the complexity of weighing different characteristics of clothes and determining the optimal algorithms for preferences but ultimately found a vector-based approach worked best.
## Accomplishments that we're proud of
Innovative Algorithm: We developed a custom algorithm for personalizing clothing recommendations and narrowed down larger catalogues.
Improved Pricing: We improved the pricing system used by thrift and second-hand stores, providing a more efficient and accurate method for thrift stores to price their items.
Transparent Emotion Evaluation: Users can see the video footage used in emotion measurements, fostering transparency in how their preferences are analyzed and utilized.
Integration with Thrift Stores: Thrift stores can sell their items on the platform, increasing traffic and visibility for their most popular pieces, and driving more revenue for charities like the American Cancer Society or Salvation Army.
Linear Regression Model: Developing a pricing model using linear regression, incorporating emotion data, prices, and other clothing characteristics to determine optimal pricing for donated items.
## What we learned
We learned how to use the Hume API and advanced emotion recognition and how to integrate it into an application to provide customers with value. We also learned how to use word2vec to form vector representations and make classifications of data, which allowed us to compare clothing based on string classifications like style or pattern.
## What's next for Style Sync
We have a few additional features we want to add to our application. We want to improve the user experience and further encourage people to donate clothes by adding leaderboards and more "game-like" elements to make the process more exciting.
We want to increase the size of our clothing database, by scraping many more retailers and adding their catalogs to ours to provide more options to users and allow for more fine tuning of preferences. More products would also allow us to fix our regression model and come up with more accurate pricing for the donations tab.
We want to improve the integration onto platforms to reach a larger audience and we want to add an interfaces specifically for thrift stores to be able to post their items.
Additionally, we want to add log in, verification, and a user profile where users can view more precise information about the amount of money they've spent, previous purchases, trends over time, and other information. | winning |
## Inspiration
Our team focuses on real-world problems. One of our own classmates is blind, and we've witnessed firsthand the difficulties he encounters during lectures, particularly when it comes to accessing the information presented on the board.
It's a powerful reminder that innovation isn't just about creating flashy technology; it's about making a tangible impact on people's lives. "Hawkeye" isn't a theoretical concept; it's a practical solution born from a genuine need.
## What it does
"Hawkeye" utilizes Adhawk MindLink to provide essential visual information to the blind and visually impaired. Our application offers a wide range of functions:
* **Text Recognition**: "Hawkeye" can read aloud whiteboard text, screens, and all text that our users would not otherwise see.
* **Object Identification**: The application identifies text and objects in the user's environment, providing information about their size, shape, and position.
* **Answering Questions**: Hawkeye takes the place of Google for the visually impaired, using pure voice commands to search.
## How we built it
We built "Hawkeye" by combining state-of-the-art computer vision and natural language processing algorithms with the Adhawk MindLink hardware. The development process involved several key steps:
1. **Data Collection**: We used open-source AI models to recognize and describe text and object elements accurately.
2. **Input System**: We developed a user-friendly voice input system that can be picked up by anyone.
3. **Testing and Feedback**: Extensive testing and consultation with the AdHawk team was conducted to fine-tune the application's performance and usability.
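As a rough illustration of the text-recognition path described above (not our production pipeline), frames from the headset's world-facing camera can be OCR'd and spoken back to the wearer; the file name is a placeholder and pytesseract needs the Tesseract binary installed:

```
import cv2
import pytesseract
import pyttsx3

speaker = pyttsx3.init()

def read_frame_aloud(frame):
    """frame: a BGR image captured from the headset's world-facing camera."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(gray).strip()  # requires the tesseract binary
    if text:
        speaker.say(text)
        speaker.runAndWait()

read_frame_aloud(cv2.imread("whiteboard.jpg"))
```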
## Challenges we ran into
Building "Hawkeye" presented several challenges:
* **Real-time Processing**: We knew that real-time processing of so much data on a wearable device was possible, but did not know how much latency there would be. Fortunately, with many optimizations, we were able to get the processing to acceptable speeds.
* **Model Accuracy**: Ensuring high accuracy in text and object recognition, as well as facial recognition, required continuous refinement of our AI models.
* **Hardware Compatibility**: Adapting our software to work effectively with Adhawk MindLink's hardware posed compatibility challenges that we had to overcome.
## Accomplishments that we're proud of
We're immensely proud of what "Hawkeye" represents and the impact it can have on the lives of blind and visually impaired individuals. Our accomplishments include:
* **Empowerment**: Providing a tool that enhances the independence and quality of life for visually impaired individuals. To no longer rely upon transcribers and assistants is something that really matters.
* **Inclusivity**: Breaking down barriers to education and employment, making these opportunities more accessible.
* **Innovation**: Combining cutting-edge technology and AI to create a groundbreaking solution for a pressing societal issue.
* **User-Centric Design**: Prioritizing user feedback and needs throughout the development process to create a genuinely user-friendly application.
## What we learned
Throughout the development of "Hawkeye," we learned valuable lessons about the power of technology to transform lives. Key takeaways include:
* **Empathy**: Understanding the daily challenges faced by visually impaired individuals deepened our empathy and commitment to creating inclusive technology.
* **Technical Skills**: We honed our skills in computer vision, natural language processing, and hardware-software integration.
* **Ethical Considerations**: We gained insights into the ethical implications of AI technology, especially in areas like facial recognition.
* **Collaboration**: Effective teamwork and collaboration were instrumental in overcoming challenges and achieving our goals.
## What's next for Hawkeye
The journey for "Hawkeye" doesn't end here. In the future, we plan to:
* **Expand Functionality**: We aim to enhance "Hawkeye" by adding new features and capabilities, such as enhanced indoor navigation and support for more languages.
* **Accessibility**: We will continue to improve the user experience, ensuring that "Hawkeye" is accessible to as many people as possible.
* **Partnerships**: Collaborate with organizations and institutions to integrate "Hawkeye" into educational and workplace environments.
* **Advocacy**: Raise awareness about the importance of inclusive technology and advocate for its widespread adoption.
* **Community Engagement**: Foster a supportive user community for sharing experiences, ideas, and feedback to further improve "Hawkeye."
With "Hawkeye," our vision is to create a more inclusive and accessible world, where visual impairment is no longer a barrier to achieving one's dreams and aspirations. Together, we can make this vision a reality. | ## Inspiration
Millions of people around the world are either blind, or partially sighted. For those who's vision is impaired, but not lost, there are tools that can help them see better. By increasing contrast and detecting lines in an image, some people might be able to see clearer.
## What it does
We developed an AR headset that processes the view in front of it and displays a high contrast image. It also has the capability to recognize certain images and can bring their existence to the attention of the wearer (one example we used was looking for crosswalk signs) with an outline and a vocal alert.
## How we built it
OpenCV was used to process the image stream from a webcam mounted on the VR headset; the image is processed with a Canny edge detector to find edges and contours. Further, a BFMatcher is used to find objects that resemble a given image file, which are highlighted if found.
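The per-frame processing is close to the sketch below: Canny edges for the high-contrast view, plus ORB descriptors and a brute-force matcher against a reference image such as a crosswalk sign (the file names and match thresholds are illustrative):

```
import cv2

orb = cv2.ORB_create()
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

sign = cv2.imread("crosswalk_sign.jpg", cv2.IMREAD_GRAYSCALE)
sign_kp, sign_des = orb.detectAndCompute(sign, None)

def process(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)              # high-contrast outline for the display
    kp, des = orb.detectAndCompute(gray, None)
    found = False
    if des is not None:
        matches = bf.match(sign_des, des)
        found = len([m for m in matches if m.distance < 40]) > 25
    return edges, found

edges, sign_in_view = process(cv2.imread("street.jpg"))
print("Crosswalk sign detected!" if sign_in_view else "No sign in view")
```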
## Challenges we ran into
We originally hoped to use an Oculus Rift, but we were not able to drive the headset with the available hardware. We opted to use an Adafruit display mounted inside a Samsung VR headset instead, and it worked quite well!
## Accomplishments that we're proud of
Our development platform was based on macOS 10.12, Python 3.5 and OpenCV 3.1.0, and OpenCV would not cooperate with our OS. We spent many hours compiling and configuring our environment until it finally worked. This was no small feat. We were also able to create a smooth interface using multiprocessing, which operated much better than we expected.
## What we learned
Without the proper environment, your code is useless.
## What's next for EyeSee
Existing solutions exist and are better suited for general use. However, a DIY solution is endlessly customizable, and we hope this project inspires other developers to create projects that help other people.
## Links
Feel free to read more about visual impairment, and how to help;
<https://w3c.github.io/low-vision-a11y-tf/requirements.html> | ## Inspiration
Moved by the compassion displayed by those on the front lines of this pandemic, we wanted to help ease quarantine and symptom monitoring so as to reduce the burden on patients, nurses, doctors, and ICU staff. Our application serves to eliminate confusion and provide clear, level-headed guidance.
## What it does
## How we built it
We built our mobile app using Xcode and the Swift programming language. We designed our app for iOS use. We drafted the app on figma.
## Challenges we ran into
The primary challenge we faced was team communication. With our team members situated in diverse time zones, a few conflicting visions of the app arose that reduced our capacity to focus on the back end.
## Accomplishments that we're proud of
Despite our time zone differences, and despite the fact that our team members met for the first time on February 12th, our team was bonded by a desire to effect real change – to reduce suffering and provide common-sense COVID-19 guidance in a period marked by confusion and mistrust.
## What we learned
## What's next for COVID-19 Symptom Tracker
Logging symptoms can feel scary to do alone. Leaving the app user assured that their doctor has their back will help promote calm during the period of self-isolation. We would like the daily symptoms questionnaire to be emailed/texted to the patient's doctor. We plan to implement this functionality using the Twilio and SendGrid APIs. | partial |
## Inspiration
I need to have a ray tracer going and need to understand how they work, especially since I need to be able to write a Monte Carlo based renderer, and implement my own Monte Carlo based estimators and integrator solvers with custom sampling algorithms. I thought Unity would be a good base to start with versus OpenGL, WebGL, or Direct3D, since I am more used to Unity. I should be able to simulate most of the things I want with Unity, since I can access the above graphics libraries if necessary.
## What it does
Currently I did not get my ray tracer working fully, and am running into some errors. However, I have 3D models imported from Unity's built in assets, and was able to import molecules from ChemSpider and have them viewing in Unity very well.
## How I built it
## Challenges I ran into
-Wolfram Alpha API did not authenticate me, so I couldn't use it
-PubChem and ChemSpider API access was something I wasn't used to.
-Deprecated API usage documentation
## Accomplishments that I'm proud of
Actually getting a molecule imported into Unity and having it work (I can rotate it, and pinch and zoom on my phone)
## What I learned
A team is very good to have.
## What's next for Ariful's Unity Raytracer
Finish up the ray tracer, and try to see if I can make it real time.
Implement SQLite database and load up a bunch of chemical models with their .mol or .json data, and have it load onto the scene. | # [deepstreetview.com](http://deepstreetview.com)
## What it does
It turns Google Street View into works of art. Your world will never be the same.
## Inspiration
The first iteration of the website was a Chrome Extension: *VR Harambe Sighting* in which Harambe would be stochastically overlaid on the sidewalks of Google Street View using a suite of CV algorithms. Due to the nature of async flow, not all functionalities can be implemented in browser. As a result, I decided to pivot the app towards deep learning.
## The Tech
The frontend is written in vanilla Javascript. It overrides Google Streetview API calls with custom event handlers that redirect AJAX from Google's server to a custom server where tiles are processed on-the-fly. The backend is a Python Flask microservice proxied with Nginx. The heavy-lifting algorithm is abstracted away using torch and system calls. [Fast-neural-style](https://github.com/jcjohnson/fast-neural-style) was released one month ago and provides drastic improvement over the speed of image rendering. As opposed to a full-fledged Convolutional Neural Network(CNN), it was implemented with simple tensor arithmetic not dissimilar to those commonly found in everyday Computer Vision. Usually, rendering a huge image such as Google Streetview( 3300x2048 ) would take at least a couple of hours. This new algorithm allows on-the-fly rendering: video and server apps such as this one. Special thanks to 1and1 for sponsoring the server which is 4cpu with 16GB memory.
## Challenges
1. I started deepstreetview around 6pm Saturday and finished at 4am Sunday. It would have more features if I woke up earlier.
2. Google Maps API was a low-level "callback hell". In the age of ES2017, it should be rewritten with the help of async/await and AMD. I want to thank EMCAScript Guru Ron Chen for his guidance with the Google Maps API.
3. The very first successful build uses the same format as Google Streetview. The tile size is 512x512 so the output looks pieced together with visible gridlines between tiles. This is solved by stitching all the parts and cutting the result in half vertically (rationale being the middle part is usually the road). This makes the image much more integral and easy to look at. However, the downside is that it takes the server much longer (10x) to respond because of the nature of the algorithm.
4. The (7x4) tiles are fetched from Google in 512x512 so 28 requests have to be made. It was originally written synchronously but then converted to parallel using a simple threading model. This increased the speed from 2s to 210ms.
5. Computation is expensive so caching is necessary. Although the cache is implemented using /tmp/ and serialized python strings, it works like a charm so not a challenge when n is small.
6. The server does not have a GPU as 1and1 don't offer GPU instances. This drastically slows process down. With a GPU, responding should be within seconds.
7. GPU instances are unreliable and expensive. Definitely not worth it for a sensational project such as this one. I think the future of this is to cache enough processed street view images and throw them in a vault.
## Accomplishments
1. Buying a Namecheap domain.
2. The idea for the hack and the execution were pretty on point
3. The ML library Torch is easy to deploy. It took about 8 lines of code and works flawlessly.
4. This being a generally successful hack with lots of hacky code written is good
## Lessons learned
1. It's canonical to respect the API
2. Patience is the archenemy of programming
3. Neural nets are hard to devise but bring good vibes when deployed
4. Keeping up with The Hacker News will get you clever ideas
## What's next for DeepStreetView
I have an Ethereum mining rig in my dorm so I'll just use that to cache enough processed street view tiles and throw everything on AWS and forget about this project. And the python flask microservice is simply not robust enough to handle any load so that has to be rewritten. | ## Inspiration
A week or so ago, Nyle DiMarco, the model/actor/deaf activist, visited my school and enlightened our students about how his experience as a deaf person was in shows like Dancing with the Stars (where he danced without hearing the music) and America's Next Top Model. Many people on those sets and in the cast could not use American Sign Language (ASL) to communicate with him, and so he had no idea what was going on at times.
## What it does
SpeakAR listens to speech around the user and translates it into a language the user can understand. Especially for deaf users, the visual ASL translations are helpful because that is often their native language.

## How we built it
We utilized Unity's built-in dictation recognizer API to convert spoken word into text. Then, using a combination of gifs and 3D-modeled hands, we translated that text into ASL. The scripts were written in C#.
## Challenges we ran into
The Microsoft HoloLens requires very specific development tools, and because we had never used the laptops we brought to develop for it before, we had to start setting everything up from scratch. One of our laptops could not "Build" properly at all, so all four of us were stuck using only one laptop. Our original idea of implementing facial recognition could not work due to technical challenges, so we actually had to start over with a **completely new** idea at 5:30PM on Saturday night. The time constraint as well as the technical restrictions on Unity3D reduced the number of features/quality of features we could include in the app.
## Accomplishments that we're proud of
This was our first time developing with the HoloLens at a hackathon. We were completely new to using GIFs in Unity and displaying it in the Hololens. We are proud of overcoming our technical challenges and adapting the idea to suit the technology.
## What we learned
Make sure you have a laptop (with the proper software installed) that's compatible with the device you want to build for.
## What's next for SpeakAR
In the future, we hope to implement reverse communication to enable those fluent in ASL to sign in front of the HoloLens and automatically translate their movements into speech. We also want to include foreign language translation so it provides more value to **deaf and hearing** users. The overall quality and speed of translations can also be improved greatly. | losing |
## **Inspiration**
The inspiration behind our project stems from a shared aspiration to actively contribute to resolving the ongoing **environmental** predicaments originating from human activities. Witnessing the ramifications of climate change, pollution, deforestation, and biodiversity loss, we recognized the imperative to address these issues at an individual level. Thus, **ECOTRACK** was conceived as a response to this urgency, aiming to empower individuals with the means to adopt sustainable practices. This initiative is driven by the belief that even the smallest individual actions, when aggregated, possess the potential to yield substantial positive outcomes. By fostering a greater awareness and encouraging purposeful choices, we envision our project playing a pivotal role in promoting a more harmonious relationship between humanity and the environment.
## **What it does**
ECOTRACK is a website that was meticulously crafted to cultivate **sustainable lifestyles and nurture environmental awareness**. Its core functionality revolves around the evaluation of an individual's eco-friendliness, achieved through user input in response to a series of thoughtfully designed questions. This input encapsulates various aspects of daily life, ranging from transportation habits and energy consumption to waste management practices. The platform then generates tailored recommendations that resonate with each user's unique lifestyle and preferences. These suggestions serve as tangible steps toward reducing environmental impact, forming a bridge between personal actions and collective efforts to preserve our planet.
## **How we built it**
For our hackathon project, we set out to create a sustainability tracker that empowers users to assess and improve their eco-friendly habits. Our project was built using a combination of HTML, CSS, and JavaScript to deliver an intuitive and engaging user experience. To start off, we began by structuring the project using HTML to create the necessary elements and layout. We designed a user-friendly interface that showcased different elements for the pages. Along with this, we used CSS to create an aesthetic, user-friendly, and responsive design that aligned with the theme of sustainability and eco-friendliness. Lastly, the core functionality of our project was implemented using JavaScript, which was employed to randomize the selection of questions from our predefined question bank and calculate their sustainability score.
## **Challenges we ran into**
During the initial stages, we navigated the challenge of generating and selecting ideas. The process of ideation required a balance between creativity and practicality. We faced the task of exploring various concepts, each with its own potential, and then deliberating on which one would best serve the objectives we had set. The challenge lay in evaluating the feasibility, impact, and alignment of each idea with our aspirations. Through thorough discussions and critical analysis, we eventually arrived at ECOTRACK as the most appropriate and impactful solution, reflecting our commitment to addressing environmental awareness and sustainability.
## **Accomplishments that we're proud of**
Our achievements with ECOTRACK are a source of great pride for our team. Beyond the project's success as a whole, we are particularly pleased with our effective communication and collaboration. It's noteworthy that we had never met one another before, yet we spontaneously formed a team and successfully developed a functional solution. We believe that ECOTRACK holds the potential to evolve into a highly valuable community resource, which adds to our sense of fulfillment in overcoming challenges and creating something impactful through our joint efforts.
## **What we learned**
Our weekend experience taught us some valuable lessons, and one of the standout takeaways was effective collaboration in high-pressure situations. We grasped the significance of adhering to best practices while working on a project with time constraints. What stood out was the development of trust among team members, allowing us to confidently rely on each other's contributions to complete our assigned tasks, while offering assistance whenever possible.
## **What's next for ECOTRACK**
Looking ahead, ECOTRACK has exciting plans to make an even bigger impact. We're not just stopping here – we're building a community where people can come together to encourage and help each other along their green journey. This community will be a place to challenge each other, share ideas, stories, and successes, creating a united push for positive change.
Our next step involves creating a handy Chrome extension. This little tool will tag along with your online shopping adventures, keeping track of the carbon footprint of what you buy. It's all about giving you real-time info on the environmental impact of your choices, right when you need it.
And that's not all. We're aiming high – we want to make waves in the fashion world. Our goal is to team up with online shopping platforms, so when you're shopping for clothes, you'll get the scoop on how eco-friendly your choices are. This is especially important given the fast fashion issue, where rapid production and disposal of clothing contribute to environmental harm. By shining a light on the green side of fashion, we're hoping to inspire a new way of shopping that's kinder to the planet. Through these steps, ECOTRACK aims to spark a movement that reshapes how we shop and live, making sustainability the norm. | ## Inspiration
## What it does
Xwalk is a pair of glasses that help the blind cross the street. It has two primary functions: it helps the blind know when to walk by making a buzzing noise whenever there is a walk signal and it makes repetitive beeps when it sees someone else crossing the street so that he or she can assist the blind.
## How I built it
I needed to be able to see the surroundings so I chose to use the Raspberry Pi Zero W for its size and Wifi capabilities. For the walk signal recognition, I used OpenCV2 to train the recognizer. For the assistance part, I used the Google Vision API and ultrasensor to see if there is a person nearby to help
## Challenges I ran into
Time and Hardware were major challenges. It was often hard to tell whether software or hardware was the issue. I wished I had more time to 3D print custom fit glasses to put all the electronics in. Also training the data took super long.
## Accomplishments that I'm proud of
I'm proud of making functional glasses!
## What I learned
I learned how to debug hardware more efficiently!
## What's next for Xwalk
The image processing is very slow right now and I hope to make the image processing 2x faster. | ## Inspiration:
Our journey began with a simple, yet profound realization: sorting waste is confusing! We were motivated by the challenge many face in distinguishing recyclables from garbage, and we saw an opportunity to leverage technology to make a real environmental impact. We aimed to simplify recycling, making it accessible and accurate for everyone.
## What it does:
EcoSort uses a trained ML model to identify and classify waste. Users present an item to their device's webcam, take a photo, and our website instantly advises whether it is recyclable or garbage. It's user-friendly, efficient, and encourages responsible waste disposal.
## How we built it:
We used Teachable Machine to train our ML model, feeding it diverse data and tweaking values to ensure accuracy. Integrating the model with a webcam interface was critical, and we achieved this through careful coding and design, using web development technologies to create a seamless user experience.
## Challenges we ran into:
* The most significant challenge was developing a UI that was not only functional but also intuitive and visually appealing. Balancing these aspects took several iterations.
* Another challenge we faced, was the integration of our ML model with our UI.
* Ensuring our ML model accurately recognized a wide range of waste items was another hurdle, requiring extensive testing and data refinement.
## Accomplishments that we're proud of:
What makes us stand out, is the flexibility of our project. We recognize that each region has its own set of waste disposal guidelines. To address this, we made our project such that the user can select their region to get the most accurate results. We're proud of creating a tool that simplifies waste sorting and encourages eco-friendly practices. The potential impact of our tool in promoting environmentally responsible behaviour is something we find particularly rewarding.
## What we learned:
This project enhanced our skills in ML, UI/UX design, and web development. On a deeper level, we learned about the complexities of waste management and the potential of technology to drive sustainable change.
## What's next for EcoSort:
* We plan to expand our database to accommodate different types of waste and adapt to varied recycling policies across regions. This will make EcoSort a more universally applicable tool, further aiding our mission to streamline recycling for everyone.
* We are also in the process of hosting the EcoSort website as our immediate next step. At the moment, EcoSort works perfectly fine locally. However, in regards to hosting the site, we have started to deploy it but are unfortunately running into some hosting errors.
* Our [site](https://stella-gu.github.io/EcoSort/) is currently working | losing |
## Inspiration
Have you ever had a friend who consistently showed up late to gatherings, blaming it on oversleeping? Or perhaps you've experienced the frustration yourself of sleeping through multiple alarms, only to wake up groggy and disoriented? This common dilemma was the driving force behind the creation of our project.
Inspired by personal experiences and anecdotes shared among our team, we recognized a glaring gap in the effectiveness of traditional alarm clocks. While sound-based alarms have been the standard for decades, they often fall short when it comes to waking up heavy sleepers or those who easily fall back asleep after hitting the snooze button.
Fuelled by the desire to tackle this issue head-on, our team embarked on a brainstorming session to reimagine the modern alarm clock. We sought to create a solution that not only jolted users awake but also provided an unforgettable and energizing wake-up experience.
## What it does
Masoclock is designed to serve the same purpose of any alarm clock, to wake the person up. Unlike most modern alarm clocks, the Masoclock not only plays a loud abnoxious sound every couple seconds, but also triggers confetti poppers in an attempt to wake up the sleeper. (current prototype is set for 8:00am)
## How we built it
We started by getting basic alarm features implemented such as coding the display and buzzer. Next we figured out how to activate the servos using the Arduino and then we implemented a way to trigger the servos based on a certain amount of time. After getting all the modules to functions we designed a simple frame to house all the components using cardboard.
## Challenges we ran into
(1) Getting the LCD display to light up
It was the first time our group worked with an LCD display so we had to rely on the internet a lot to figure out how to wire and code the display. Eventually we found a the rgb\_lcd library that came with example code, studying the example code we managed to get the display to display text and display a counter which could be used to later display the time.
Finding a way to get the Arduino to keep track of time, since we didn't have a separate rtc module for our Arduino. Fortunately, we found out that. the Arduino Due's microprocessor has a built in RTC and the associated library. This allowed us to code in a specific starting time and let the microprocessor do all the calculations related to keeping track of time.
(2) Changing from a Raspberry Pi to an Arduino
Originally our project was going to be built with a Raspberry Pi, however we ran into problems trying to get our Pi to work. We tried many solutions and even went to the mentors for help, but eventually we decided to switch to an Arduino via a Qualcomm dev kit. This allowed us to get some major progress in such as getting the LCD display to light up and getting the servos to spin too.
(3) Getting the Nerf Gun and Water Gun to work
Originally the Masoclock was suppose to shoot Nerf gun darts and water via a watergun towards the sleeper but unfortunately due to time constraints we couldn't find a way to pull the triggers of both guns
## Accomplishments that we're proud of
Despite of all the challenges we encounter within the 24 hours time constraint, we are proud of all the following features that were successfully able to implement:
(1) Coding the Ardunio Due and servo
(2) Getting the display to show "Rise and Shine" along with the time being displayed exactly underneath it
(3) The buzzer being triggered at the exacted expected time and effect
(4) Getting the confetti blower to pop when the alarm is triggered
However, something that we all are extremely proud of is that despite all of us having a CS background with few hardware experiences, we were able to configure an alarm system from scratch and adapting on the spot to the difficulties that we encountered, especially having to shift our whole plan when we had to go from Raspberry Pi to an Arduino.
## What we learned
We learned how to both wire and implement code for various arduino modules and how to configure parts together to create a whole system starting from stratch. We learned how to implement our creative ideas into real life given time contraints.
## What's next for Masoclock
The future of Masoclock is the an alarm clock that has all three successful features being a water gun, nerf gun, and the confetti blower. The product design is simple, covered, functional, and very human. | ## Inspiration
In our daily lives, there are numerous occations where people need to wake up early morning, either going to work or going to school. Although we could try our best to sleep earlier the night before, but sometimes it is very hard. Therefore we made this ShoClock with the hope that it will be useful to someone.
## What it does
ShoClock is connected to bluetooth on your phone which will trigger to alarm music, which will cause an electric shock within a very safe range. This shock will not stop until RFID component touches the equipment, which can be put anywhere, on your doorstep, or in the washroom. Then it confirms that the user has waken up and get out of the bed.
## How we built it
We used bluetooth component, arduino boards, and RFID component to fit in the watch shell and edited the code to run the program
## Challenges we ran into
The safe current through human body should be the first consideration, and circuit design is very important to ensure users' safety.
It was also hard to test which kind of switch could be used and how the alarm tone could be processed by the arduino board.
Also, the program is a little bit difficult to realize, and needed many times of test and debugging.
## Accomplishments that we're proud of
We tried and all components could be fit in small watch shell. We figured out what bluetooth does during the communication of sender and receiver.
## What we learned
We learned more practical applications of what we learned at school, such as Fourier Transform and Arduino. Also, we learned how to use what we knew to do somthing realistic and solve practical problems.
## What's next for ShoClock
We could add more functionalities to it, such as show the current time and use's health condition. Use Fast Fourier Transform to allow users adding different songs they like to wake them up, and improve the accuracy of the watch, in case notifications come, and the shock starts. | ## Inspiration
Helping people who are visually and/or hearing impaired to have better and safer interactions.
## What it does
The sensor beeps when the user comes too close to an object or too close to a hot beverage/food.
The sign language recognition system translates sign language from a hearing impaired individual to english for a caregiver.
The glasses capture pictures of surroundings and convert them into speech for a visually imapired user.
## How we built it
We used Microsoft Azure's vision API,Open CV,Scikit Learn, Numpy, Django + REST Framework, to build the technology.
## Challenges we ran into
Making sure the computer recognizes the different signs.
## Accomplishments that we're proud of
Making a glove with a sensor that helps user navigate their path, recognizing sign language, and converting images of surroundings to speech.
## What we learned
Different technologies such as Azure, OpenCV
## What's next for Spectrum Vision
Hoping to gain more funding to increase the scale of the project. | losing |
## What it is 🕵️
MemoryLane is a unique and innovative mobile application designed to help users capture and relive their precious moments. Instead of being the one to curate an image for others, with a nostalgic touch, the app provides a personalized space for friends to document and remember shared memories ~ creating a digital journey through their life experiences.
Whether it's a cherished photo, an audio clip, a video or a special note, MemoryLane allows users to curate their memories in a visually appealing and organized manner. With its user-friendly interface and customizable features, MemoryLane aims to be a go-to platform for individuals seeking to celebrate, reflect upon, and share the meaningful moments that shape their lives.
## Inspiration ✨
The inspiration behind MemoryLane was born from a recognition of the impact that modern social media can have on our lives. While social platforms offer a convenient way to connect with others, they often come with the side effect of overwhelming timelines, constant notifications, and FOMO.
In an age where online interactions can sometimes feel fleeting and disconnected, MemoryLane seeks to offer a refuge—a space where users can curate and cherish their memories without the distractions of mainstream social media. The platform encourages users to engage in a more mindful reflection of their life experiences, fostering a sense of nostalgia and a deeper connection to the moments that matter.
## What it does 💻
**Home:**
* The Home section serves as the main dashboard where users can scroll through a personalized feed of their memories. This is displayed in chronological order of memories that haven't been viewed yet.
* It fetches and displays user-specific content, including photos, notes, and significant events, organized chronologically.
**Archive:**
* The Archive section provides users with a comprehensive repository of all their previously viewed memories
* It implements data retrieval mechanisms to fetch and display archived content in a structured and easily accessible format
* [stretch goal] include features such as search functionality and filtering options to enhance the user's ability to navigate through their extensive archive
**Create Memory:**
* The core feature of MemoryLane enables users to add new memories to share with other users
* Includes multi-media support
**Friends:**
* The Friends section focuses on social interactions, allowing users to connect with friends
* Unlike other social media, we do not support likes, comments or sharing in hopes of being motivation to reach out to the friend who shared a memory on other platforms
**Settings:**
* Incorporates user preferences, allowing adjustments to account settings including a filter for memories to be shared, incorporating Cohere's LLM to ensure topics marked as sensitive or toxic are not shown on Home feed
## How we built it 🔨
**Frontend:** React Native (We learned it during the hackathon!)
**Backend:** Node.js, AWS, Postgres
In this project, we utilized React Native for the frontend, embracing the opportunity to learn and apply it during the hackathon. On the backend, we employed Node.js, leveraged the power of AWS services (S3, AWS-RDS (postgres), AWS-SNS, EventBridge Scheduler).
Our AWS solution comprised the following key use-cases:
* AWS S3 (Simple Storage Service): Used to store and manage static assets, providing a reliable and scalable solution for handling images, videos, and other media assets in our application.
* AWS-RDS (Relational Database Service): Used to maintain a scalable and highly available postgres database backend.
* AWS-SNS (Amazon Simple Notification Service): Played a crucial role in enabling push notifications, allowing us to keep users informed and engaged with timely updates.
* AWS EventBridge Scheduler: Used to automate scheduled tasks and events within our application. This included managing background processes, triggering notifications, and ensuring seamless execution of time-sensitive operations, such as sending memories.
## Challenges we ran into ⚠️
* Finding and cleaning data set, and using Cohere API
* AWS connectivity
+ One significant challenge stemmed from configuring the AWS PostgreSQL database for optimal compatibility with Sequelize. Navigating the AWS environment and configuring the necessary settings, such as security groups, database credentials, and endpoint configurations, required careful attention to detail. Ensuring that the AWS infrastructure was set up to allow secure and efficient communication with our Node.js application became a pivotal aspect of the connectivity puzzle.
+ Furthermore, Sequelize, being a powerful Object-Relational Mapping (ORM) tool, introduced its own set of challenges. Mapping the database schema to Sequelize models, handling associations, and ensuring that Sequelize was configured correctly to interpret PostgreSQL-specific data types were crucial aspects. Dealing with intricacies in Sequelize's configuration, such as connection pooling and dialect-specific settings, added an additional layer of complexity.
* Native React Issues
+ There were many deprecated and altered libraries, so as a first time learner it was very hard to adjust
+ Expo Go's default error is "Keep text between the tags", but this would be non-descriptive and be related to whitespace. VSCode would not notice and extensive debugging.
* Deploying to Google Play (original plan)
+ :( what happened to free deployment to the Google Play store
+ After prepping our app for deployment we ran into a wall of a registration fee of $25, in the spirit of hackathons we decided this would not be a step we would take
## Accomplishments that we're proud of 🏆
Our proudest achievement lies in translating a visionary concept into reality. We embarked on a journey that started with a hand-drawn prototype and culminated in the development of a fully functional application ready for deployment on the Play Store. This transformative process showcases our dedication, creativity, and ability to bring ideas to life with precision and excellence.
## What we learned 🏫
* Nostalgia does not have to be sad
* Brain chemistry is unique! How do we form memories and why we may forget some? :)
## What's next for MemoryLane 💭
What's next for MemoryLane is an exciting journey of refinement and expansion. Discussing with our fellow hackers we already were shown interest in a social media platform that wasn't user-curating centric. As this was the team's first time developing using React Native, we plan to gather user feedback to enhance the user experience and implement additional features that resonate with our users. This includes refining the media scrolling functionality, optimizing performance, and incorporating more interactive and nostalgic elements. | ## Inspiration
While video-calling his grandmother, Tianyun was captivated by her nostalgic tales from her youth in China. It struck him how many of these cherished stories, rich in culture and emotion, remain untold or fade away as time progresses due to barriers like time constraints, lack of documentation, and the predominance of oral traditions.
For many people, however, it can be challenging to find time to hear stories from their elders. Along with limited documentation, and accessibility issues, many of these stories are getting lost as time passes.
**We believe these stories are *valuable* and *deserve* to be heard.** That’s why we created a tool that provides people with a dedicated team to help preserve these stories and legacies.
## What it does
Forget typing. Embrace voice. Our platform boasts a state-of-the-art Speech-To-Text interface. Leveraging cutting-edge LLM models combined with robust cloud infrastructures, we ensure swift and precise transcription. Whether your narration follows a structured storyline or meanders like a river, our bot Ivee skillfully crafts it into a beautiful, funny, or dramatic memoir.
## How we built it
Our initial step was an in-depth two-hour user experience research session. After crafting user personas, we identified our target audience: those who yearn to be acknowledged and remembered.
The next phase involved rapid setup and library installations. The team then split: the backend engineer dived into fine-tuning custom language models and optimizing database frameworks, the frontend designer focused on user authentication, navigation, and overall app structure, and the design team commenced the meticulous work of wireframing and conceptualization.
After an intense 35-hour development sprint, Ivee came to life. The designers brought to life a theme of nature into the application, symbolizing each story as a leaf, a life's collective memories as trees, and cultural groves of forests. The frontend squad meticulously sculpted an immersive onboarding journey, integrating seamless interactions with the backend, and spotlighting the TTS and STT features. Meanwhile, the backend experts integrated technologies from our esteemed sponsors: Hume.ai, Intel Developer Cloud, and Zilliz Vector Database.
Our initial segregation into Frontend, Design, Marketing, and Backend teams soon blurred as we realized the essence of collaboration. Every decision, every tweak was a collective effort, echoing the voice of the entire team.
## Challenges we ran into
Our foremost challenge was crafting an interface that exuded warmth, empathy, and familiarity, yet was technologically advanced. Through interactions with our relatives, we discovered overall negative sentiment toward AI, often stemming from dystopian portrayals in movies.
## Accomplishments that we're proud of
Our eureka moment was when we successfully demystified AI for our primary users. By employing intuitive metaphors and a user-centric design, we transformed AI from a daunting entity to an amiable ally. The intricate detailing in our design, the custom assets & themes, solving the challenges of optimizing 8 different APIs, and designing an intuitive & accessible onboarding experience are all highlights of our creativity.
## What we learned
Our journey underscored the true value of user-centric design. Conventional design principles had to be recalibrated to resonate with our unique user base. We created an AI tool to empower humanity, to help inspire, share, and preserve stories, not just write them. It was a profound lesson in accessibility and the art of placing users at the heart of every design choice.
## What's next for Ivee
The goal for Ivee was always to preserve important memories and moments in people's lives. Below are some really exciting features that our team would love to implement:
* Reinforcement Learning on responses to fit your narration style
* Rust to make everything faster
* **Multimodal** storytelling. We want to include clips of the most emotion-fueled clips, on top of the stylized and colour-coded text, we want to revolutionize the way we interact with stories.
* Custom handwriting for memoirs
* Use your voice and read your story in your voice using custom voices
In the future, we hope to implement additional features like photos and videos, as well as sharing features to help families and communities grow forests together. | ## Inspiration
The inspiration behind Memory Melody stemmed from a shared desire to evoke nostalgic feelings and create a sense of connection through familiar tunes and visual memories. We've all experienced moments of trying to recall a forgotten show, the sensation of déjà vu, or the longing to revisit simpler times. These shared sentiments served as the driving force behind the creation of Memory Melody.
## What it does
Memory Melody is a unique project that combines a nostalgia playlist and image generator. It allows users to relive the past by generating personalized playlists filled with iconic songs from different eras and creating nostalgic images reminiscent of a bygone time.
## How we built it
The project was built using a combination of front-end and back-end technologies. The playlist feature leverages music databases and user preferences to curate playlists tailored to individual tastes and memories. The image generator utilizes advanced algorithms to transform modern photos into nostalgic visuals, applying vintage filters and overlays.
## Challenges we ran into
Building Memory Melody presented its share of challenges. Integrating various APIs for music data, ensuring seamless user interactions, and optimizing the image generation process were among the technical hurdles we faced. Additionally, maintaining a cohesive user experience that captured the essence of nostalgia required thoughtful design considerations.
## Accomplishments that we're proud of
We're proud to have created a platform that successfully brings together music and visuals to deliver a unique nostalgic experience. The seamless integration of playlist curation and image generation reflects our commitment to providing users with a comprehensive journey down memory lane.
## What we learned
Throughout the development of Memory Melody, we learned valuable lessons about API integrations, image processing, and user experience design. Collaborating on a project that aims to tap into emotions and memories reinforced the significance of thoughtful development and design decisions.
## What's next for Memory Melody
Looking ahead, we plan to expand Memory Melody by incorporating user feedback, adding more customization options for playlists and images, and exploring partnerships with content creators. We envision Memory Melody evolving into a platform that continues to bring joy and nostalgia to users worldwide. | partial |
## Inspiration
Our inspiration is from just our everyday lives and seeing other students' pain points. As classes are transitioning back to in-person, students are having a harder time finding the discord/slack link without people pasting it into the zoom chat during lectures. Often it is not until the 5th week a student accidentally stumbles upon the link and realized they’ve been missing out on the community of classmates they’ve been relying on.
## What it does
SniffSniff helps sniff out the right discord/link with just a few clicks. If the channels have not been created for certain classes the user is trying to find, the user can create one and the link of the channel created will automatically upload it to our database so other classmates can sniff you out too.
Sniffodoo is a bot that helps sniff out important information from those slack/discord channels and allows you to skip all the hundreds of unread messages by summarizing important information such as homework tips, exam dates/prep, and the most critical information.
## How we built it
We first designed a wireframe for SniffSniff’s website through Figma, and then we built our website using Next.js and CSS tools. In order to garner data about classes and instructors for the new semester, we web-scraped berkeleytime.com using Selenium and Requests packages. Subsequently, we imported our data into the backend of our website.
The bot was built in python using Discord.py to scrape data from a channel and then using co:here’s NLP API to analyze, summarize and classify the data. We used a lot of different kind of models and endpoints from co:here's API to accomplish the functionality including text summarizers, information extractors, and sentiment analyzers. We had to construct the data for the prompts using real-life examples from the discord classes we're in.
## Challenges we ran into
We had a hard time figuring out new technology tools that were offered, such as co:here and cockroach. Furthermore, we had some doubts about what project we were going for in the beginning, but we quickly overcame that. Other challenges we also ran into include figuring out how to scrape the berkeleytime website, along with some design difficulties with CSS. The biggest challenge, however, was trying to stay sane and awake.
## Accomplishments that we're proud of
We are proud of everything we were able to accomplish in such a short amount of time. We have never done a project of such a scale in 24 hours, and we are also proud of how we were able to bond with one another and learn from each other during the process.
## What we learned
We learned that with time, good teamwork, and a whole lot of effort, amazing creations can be made within 24 hours. It really shows as long as you start and keep pushing, you can create anything!
## What's next for SniffSniff
Next for SniffSniff we definitely want to optimize Sniffodoo (our bot) to process information faster when it is called. We also want to create an email service using Sniffodoo to send out a summary of important information daily to ensure all students stay caught up without putting in too much effort. | ## Inspiration💡
As exam season concludes, we came up with the idea for our hackathon project after reflecting on our own experiences as students and discussing common challenges we face when studying. We found that searching through long textbooks and PDFs was a time-consuming and frustrating process, even with search tools such as CTRL + F. We wanted to create a solution that could simplify this process and help students save time. Additionally, we were inspired by the fact that our tool could be particularly useful for students with ADHD, dyslexia, or anyone who faces difficulty reading large pieces of text Ultimately, our goal was to create a tool that could help students focus on learning, efficiently at that, rather than spending unnecessary time on searching for information.
## What it does 🤨
[docuMind](https://github.com/cho4/PDFriend) is a web app that takes a PDF file as input and extracts text to train GPT-3.5. This allows for summaries and accurate answers according to textbook information.
## How we built it 👷♂️
To bring life to docuMind, we employed ReactJS as the frontend, while using Python with the Flask web framework and SQLite3 database. After extracting the text from the PDF file and doing some data cleaning, we used OpenAI to generate word embeddings that we used pickle to serialize and store in our database. Then we passed our prompts and data to langchain in order to provide a suitable answer to the user. In addition, we allow users to create accounts, login, and access chat history using querying and SQLite.
## Challenges we ran into 🏋️
One of the main challenges we faced during the hackathon was coming up with an idea for our project. We had a broad theme to work with, but it was difficult to brainstorm a solution that would be both feasible and useful. Another challenge we encountered was our lack of experience with Git, which at one point caused us to accidentally delete a source folder, spending a good chunk of time recovering it. This experience taught us the importance of backing up our work regularly and being more cautious when using Git. We also ran into some compatibility issues with the technologies we were using. Some of the tools and libraries we wanted to incorporate into our project were either not compatible with each other or presented problems, which required us to find workarounds or alternative solutions.
## Accomplishments that we're proud of 🙌
Each member on our team has different things we’re proud of, but generally we are all proud of the project we managed to put together despite our unfamiliarity with many technologies and concepts employed.
## What we learned 📚
We became much more familiar with the tools and techniques used in natural language processing, as well as frontend and backend development, connecting the two, and deploying an app. This experience has helped us to develop our technical skills and knowledge in this area and has inspired us to continue exploring this field further. Another important lesson we learned during the hackathon was the importance of time management. We spent a large portion of our time brainstorming and trying to come up with a project idea, which led to being slightly rushed when it came to the execution of our project. We also learned the importance of communication when working in a team setting. Since we were working on separate parts of the project at times, it was essential to keep each other updated on our progress and any changes we made. This helps prevent accidents like accidental code deletion or someone getting left behind so far they can’t push their code to our repository. Additionally, we learned the value of providing clear and concise documentation to help others understand our code and contributions to the project.
## What's next for docuMind 🔜
To enhance docuMind’s usability, we intend to implement features such as scanning handwritten PDFs, image and diagram recognition, multi-language support, audio input/output and cloud-based storage and collaboration tools. These additions could greatly expand the tool's utility and help users to easily organize and manage their documents. | ## Inspiration
In the midst of Cal Enrollment and Berkeley college life in general, getting advice on what classes we should take, what spots in Berkeley to explore and what cafe's are the best, are sometimes hard information to come by. Thankfully, the Berkeley Subreddit has tons of Berkeley students just like us, who have their own unique ideas and opinions. The best way to represent all of this information is through language models! This is why we created brOSKI, a chat bot who can be your friend and advisor about all things Berkeley.
## How We Built it
1. Scrape Reddit.
With recent updates Reddit's API no longer permits data scraping, so collecting data from reddit on the scale needed required a unique approach. There is only one comprehensive archive of Reddit's database, "Pushshift." The problem is the archive has been removed from the public by Reddit, so to get the data from Pushshift we had to find an old copy of the database and download it in fragments. Bear in mind, the Pushshift Reddit database contains a recorded history of all of reddit, not just the Berkeley subreddit, meaning we had to grapple with over 20 terabytes of text data, to get the data we needed from the Berkeley subreddit. The smallest fragments the database can be downloaded in is 450GBs. By selectively only downloading posts, not comments we trimmed down the downloaded fragment size to 150GBs. We then filtered the downloaded posts to contain only r/Berkeley posts. Now we also needed the comments for each post, so we queried the API by post ID to collect the comments for the r/Berkeley posts we downloaded. Repeated a few times we managed to download months worth of r/Berkeley data (posts and comments) for usage by our LLM algorithms.
1. Finetune LLM
The next step in the process, is to take the scraped and formatted data fine-tune our model of choice. For this project, we decided to use LLaMA 2 13b. We first tested smaller data samples, using both a small amount of epochs and high amount to see the difference in model "sound". We noticed that the model when overfit to the small dataset, became biased and uncensored, answering questions it probably shouldn't.
Once we had all our data scraped and formatted correctly, we passed the 3500 lines of Q&A style training data, and chose 2, 4, 6, 8, and 10 epochs. We discovered that mid to low epoch range seems to give the best results for our use case.
1. Front End
Initially, we planned to have an opening web page - quite similar to Chat GPT and bing AI with some example inputs along with few animations to make it seem more like iMessages or Instagram Direct Messages. However, due to time constraints, we are leaving that as future work. We designed our front end on Figma and primarily used HTML and CSS for our front end to put things in motion. At the end, we developed an API to connect our back end to the front end. | partial |
## Inspiration
We wanted to solve a unique problem we felt was impacting many people but was not receiving enough attention. With emerging and developing technology, we implemented neural network models to recognize objects and images, and converting them to an auditory output.
## What it does
XTS takes an **X** and turns it **T**o **S**peech.
## How we built it
We used PyTorch, Torchvision, and OpenCV using Python. This allowed us to utilize pre-trained convolutional neural network models and region-based convolutional neural network models without investing too much time into training an accurate model, as we had limited time to build this program.
## Challenges we ran into
While attempting to run the Python code, the video rendering and text-to-speech were out of sync and the frame-by-frame object recognition was limited in speed by our system's graphics processing and machine-learning model implementing capabilities. We also faced an issue while trying to use our computer's GPU for faster video rendering, which led to long periods of frustration trying to solve this issue due to backwards incompatibilities between module versions.
## Accomplishments that we're proud of
We are so proud that we were able to implement neural networks as well as implement object detection using Python. We were also happy to be able to test our program with various images and video recordings, and get an accurate output. Lastly we were able to create a sleek user-interface that would be able to integrate our program.
## What we learned
We learned how neural networks function and how to augment the machine learning model including dataset creation. We also learned object detection using Python. | ## Inspiration
snore or get pourd on yo pores
Coming into grade 12, the decision of going to a hackathon at this time was super ambitious. We knew coming to this hackathon we needed to be full focus 24/7. Problem being, we both procrastinate and push things to the last minute, so in doing so we created a project to help us
## What it does
It's a 3 stage project which has 3 phases to get the our attention. In the first stage we use a voice command and text message to get our own attention. If I'm still distracted, we commence into stage two where it sends a more serious voice command and then a phone call to my phone as I'm probably on my phone. If I decide to ignore the phone call, the project gets serious and commences the final stage where we bring out the big guns. When you ignore all 3 stages, we send a command that triggers the water gun and shoots the distracted victim which is myself, If I try to resist and run away the water gun automatically tracks me and shoots me wherever I go.
## How we built it
We built it using fully recyclable materials as the future innovators of tomorrow, our number one priority is the environment. We made our foundation fully off of trash cardboard, chopsticks, and hot glue. The turret was built using our hardware kit we brought from home and used 3 servos mounted on stilts to hold the water gun in the air. We have a software portion where we hacked a MindFlex to read off brainwaves to activate a water gun trigger. We used a string mechanism to activate the trigger and OpenCV to track the user's face.
## Challenges we ran into
Challenges we ran into was trying to multi-thread the Arduino and Python together. Connecting the MindFlex data with the Arduino was a pain in the ass, we had come up with many different solutions but none of them were efficient. The data was delayed, trying to read and write back and forth and the camera display speed was slowing down due to that, making the tracking worse. We eventually carried through and figured out the solution to it.
## Accomplishments that we're proud of
Accomplishments we are proud of is our engineering capabilities of creating a turret using spare scraps. Combining both the Arduino and MindFlex was something we've never done before and making it work was such a great feeling. Using Twilio and sending messages and calls is also new to us and such a new concept, but getting familiar with and using its capabilities opened a new door of opportunities for future projects.
## What we learned
We've learned many things from using Twilio and hacking into the MindFlex, we've learned a lot more about electronics and circuitry through this and procrastination. After creating this project, we've learned discipline as we never missed a deadline ever again.
## What's next for You snooze you lose. We dont lose
Coming into this hackathon, we had a lot of ambitious ideas that we had to scrap away due to the lack of materials. Including, a life-size human robot although we concluded with an automatic water gun turret controlled through brain control. We want to expand on this project with using brain signals as it's our first hackathon trying this out | ## Inspiration
We noticed a lot of stress among students around midterm season and wanted to utilize our programming skills to support them both mentally and academically. Our implementation was profoundly inspired by Jerry Xu's Simply Python Chatbot repository, which was built on a different framework called Keras. Through this project, we hoped to build a platform where students can freely reach out and find help whenever needed.
## What it does
Students can communicate their feelings, seek academic advice, or say anything else that is on their mind to the eTA. The eTA will respond with words of encouragement, point to helpful resources relevant to the student's coursework, or even make light conversation.
## How we built it
Our team used python as the main programming language including various frameworks, such as PyTorch for machine learning and Tkinter for the GUI. The machine learning model was trained by a manually produced dataset by considering possible user inputs and creating appropriate responses to given inputs.
## Challenges we ran into
It was difficult to fine tune the number of epochs of the machine learning algorithm in a way that it yielded the best final results. Using many of the necessary frameworks and packages generally posed a challenge as well.
## Accomplishments that we're proud of
We were impressed by the relative efficacy and stability of the final product, taking into account the fast-paced and time-sensitive nature of the event. We are also proud of the strong bonds that we have formed among team members through our collaborative efforts.
## What we learned
We discovered the versatility of machine learning algorithms but also their limitations in terms of accuracy and consistency under unexpected or ambiguous circumstances. We believe, however, that this drawback can be addressed with the usage of a more complex model, allotment of more resources, and a larger supply of training data.
## What's next for eTA
We would like to accommodate a wider variety of topics in the program by expanding the scope of the dataset--potentially through the collection of more diverse user inputs from a wider sample population at Berkeley. | partial |
# Picify
**Picify** is a [Flask](http://flask.pocoo.org) application that converts your photos into Spotify playlists that can be saved for later listening, providing a uniquely personal way to explore new music. The experience is built by integrating a wide range of services. Try it [here](http://picify.net/).
## Workflow
The main workflow for the app is as follows:
1. The user uploads a photo to the Picify Flask server.
2. The image is passed to the [Google Cloud Vision](https://cloud.google.com/vision/) API, where labels and entities are predicted and extracted. This information is then returned to the Flask server.
3. The labels and entities are filtered by a dynamic confidence threshold, which is iteratively lowered until a large enough set of descriptors for the image has been formed (see the sketch after this list).
4. Each of the descriptors in the above set is then expanded into associated "moods" using the [Datamuse API](https://www.datamuse.com/api/).
5. All of the descriptors and associated moods are filtered against a whitelist of "musically-relevant" terms compiled from sources such as [AllMusic](https://www.allmusic.com/moods) and [Every Noise at Once](http://everynoise.com/genrewords.html), excepting descriptors with extremely high confidence (for example, for a picture of Skrillex this might be "Skrillex").
6. Finally, the processed words are matched against existing Spotify playlists, which are sampled to form the final playlist.
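A minimal sketch of how steps 3 and 4 could look in Python; the threshold schedule, the Datamuse `ml` parameter, and the function names here are illustrative assumptions rather than Picify's exact implementation:

```python
import requests

def select_descriptors(labels, min_count=5, start=0.9, floor=0.5, step=0.1):
    """Iteratively lower the confidence threshold until enough descriptors survive.

    `labels` is a list of (description, score) pairs, e.g. built from the
    Cloud Vision label annotations.
    """
    threshold = start
    descriptors = []
    while threshold >= floor:
        descriptors = [d for d, score in labels if score >= threshold]
        if len(descriptors) >= min_count:
            break
        threshold -= step
    return descriptors

def expand_to_moods(descriptor, limit=10):
    """Ask Datamuse for words related to a descriptor, used here as candidate 'moods'."""
    resp = requests.get(
        "https://api.datamuse.com/words",
        params={"ml": descriptor, "max": limit},  # "means like" relation (assumed choice)
        timeout=5,
    )
    resp.raise_for_status()
    return [entry["word"] for entry in resp.json()]
```

The expanded moods would then be filtered against the whitelist in step 5 before playlist matching.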
## Contributors
* [Macguire Rintoul](https://github.com/mrintoul)
* [Matt Wiens](https://github.com/mwiens91)
* [Sophia Chan](https://github.com/schan27) | ## Inspiration
Music is something inherently personal to each and every one of us - our favorite tracks accompany us through our highs and lows, through tough workouts and relaxing evenings. Our aim is to encourage and capture that feeling of discovering that new song you just can't stop listening to. Music is an authentic expression of ourselves, and the perfect way to [meet new people](https://www.lovethispic.com/uploaded_images/206094-When-You-Meet-Someone-With-The-Same-Music-Taste-As-You.jpg) without the clichés of the typical social media platforms we're all sick of. We're both very passionate about reviving the soul of social media, so we were very excited to hear about this track and work on this project!
## What it does
Spotify keeps tabs on the tracks you can't get enough of. Why not make that data work *for you*? With one simple login, ensemble matches you with others who share your musical ear. Using our *state-of-the-art* machine learning algorithms, we show you other users who we think you'd like based on both their and your music taste. Love their tracks? Follow them and stay tuned. ensemble is a new way to truly connect on a meaningful level in an age of countless unoriginal social media platforms.
## How we built it
We wanted a robust application that could handle the complexities of a social network, while also providing us with an extensive toolkit to build out all the features we envisioned. Our frontend is built using [React](https://reactjs.org), a powerful and well-supported web framework that gives us the flexibility to build with ease. We utilized supporting frontend technologies like Bootstrap, HTML, and CSS to help create an attractive UI, the key aspect of any social media. For the backend, we used [Django](https://www.djangoproject.com) and [Django Rest Framework](https://www.django-rest-framework.org) to build a secure API that our frontend can easily interact with. For our recommendation algorithm, we used scikit-learn and numpy to power our machine learning needs. Finally, we used PostgreSQL for our DBMS and Heroku for deployment.
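A minimal sketch of how the matching could work, assuming each user's taste is condensed into a numeric vector (for example, averaged audio features of their top tracks); the representation and helper below are illustrative assumptions, not ensemble's exact code:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

def top_matches(user_vectors, user_ids, target_id, k=5):
    """Rank other users by cosine similarity of their taste vectors.

    `user_vectors` is an (n_users, n_features) array and `user_ids` maps rows
    to account IDs. Returns the k users most similar to `target_id`.
    """
    matrix = np.asarray(user_vectors)
    idx = user_ids.index(target_id)
    sims = cosine_similarity(matrix[idx:idx + 1], matrix)[0]
    ranked = np.argsort(-sims)  # indices sorted by descending similarity
    return [user_ids[i] for i in ranked if i != idx][:k]
```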
## Challenges we ran into
As with most social media platforms, users are key. Given the *very short* nature of hackathons, it obviously isn't feasible to attract a large number of users for development purposes. However, we needed a way to have users available for testing. Since ensemble is based on Spotify accounts and the Spotify API, this proved to be non-trivial. We took advantage of the Spotify API's recommendations endpoint to generate pseudo-data that resembles what a real person would have as their top tracks. With a fake name generator, we created as many fake profiles as we needed to flesh out our recommendation algorithm.
## Accomplishments that we're proud of
Our application is fully ready to use—it has all of the necessary authentication, authorization, and persistent storage. While we'd love to add even more features, we focused on implementing the core ones in their current state (if you use Spotify, feel free to log in and try it out!). You can find the live version [here](https://ensemble-dev.herokuapp.com). Despite all of the hassle of the deployment process, it was very fulfilling to see what we created, live and ready to be used by anyone in the world.
We're also proud of what we've accomplished in general! It's been a challenging yet immensely fulfilling day-and-a-half of ideation, design, and coding. Looking back at what we were able to create during this short time span, we're proud to have something to show for all the effort we've put into it.
## What we learned
We both learned a lot from working on this project. It's been a fast-paced weekend of continuously pushing new changes and features, and in doing so, we sharpened our skills in both React and Django. Additionally, utilizing the Spotify API was something neither of us had done before, and we learned a lot about OAuth 2.0 and web authentication in general.
## What's next for ensemble
Working on this project was a lot of fun, and we'd both love to keep it going in the future. There are a ton of features that we thought out but didn't have the time to implement in this time span. For example, we'd love to implement a direct messaging system, so you can directly contact and discuss your favorite songs/artists with the people you follow. The GitHub repository readme also contains complete and detailed instructions on how to set up your development environment to run the code, if anyone is interested in trying it out. Thanks for reading! | ## Inspiration
Our team was determined to challenge a major problem in society and create a practical solution. It occurred to us early on that **false facts** and **fake news** have become a growing problem, due to the availability of information over common forms of social media. Many initiatives and campaigns have recently used approaches such as ML fact checkers to identify and remove fake news across the Internet. Although we have seen this approach steadily improve over time, our group felt that there must be a way to innovate upon the foundations created by the ML.
In short, our aspirations to challenge an ever-growing issue within society, coupled with the thought of innovating upon current technological approaches to the solution, truly inspired what has become ETHentic.
## What it does
ETHentic is a **betting platform** with a twist. Rather than preying on luck, you play against the odds of truth and justice. Users are given random snippets of journalism and articles to review, and must determine whether the information presented within the article is false/fake news, or whether it is legitimate and truthful, **based on logical reasoning and honesty**.
Users must initially trade in Ether for a set number of tokens (0.30ETH = 100 tokens). One token can be used to review one article. Every article that is chosen from the Internet is first evaluated using an ML model, which determines whether the article is truthful or false. For a user to *win* the bet, they must evaluate the same choice as the ML model. By winning the bet, a user will receive a $0.40 gain on bet. This means a player is very capable of making a return on investment in the long run.
Any given article will only be reviewed by 100 unique users in total. Once the 100-review cap has been met, the article will retire, and the results will be published to the Ethereum blockchain. The results will include anonymous statistics on the ratio of true:false evaluations, the article source, and the ML's original evaluation. This data is public, immutable, and has a number of advantages. All results going forward will be capable of improving the ML model's ability to recognize false information, by comparing its assessments against the public's reviews and training the model in a cost-effective, open-source way.
To summarize, ETHentic is an incentivized, fun way to educate the public about recognizing fake news across social media, while improving the ability of current ML technology to recognize such information. We are improving the two current best approaches to beating fake news manipulation, by educating the public, and improving technology capabilities.
## How we built it
ETHentic uses a multitude of tools and software to make the application possible. First, we drew out our task flow. After sketching wireframes, we designed a prototype in Framer X. We conducted informal user research to inform our UI decisions, and built the frontend with React.
We used **Blockstack** Gaia to store user metadata, such as user authentication, betting history, token balance, and Ethereum wallet ID in a decentralized manner. We then used MongoDB and Mongoose to create a DB of articles and a counter for the amount of people who have viewed any given article. Once an article is added, we currently outsourced to Google's fact checker ML API to generate a true/false value. This was added to the associated article in Mongo **temporarily**.
Users who wanted to purchase tokens would receive a Metamask request, which would process an Ether transfer to an admin wallet that handles all the money in/money out. Once the payment is received, our node server would update the Blockstack user file with the correct amount of tokens.
Users who perform betting receive instant results on whether they were correct or wrong, and are prompted to accept their winnings from Metamask.
Every time the Mongo DB updates the counter, it checks if the count = 100. Upon an article reaching a count of 100, the article is removed from the DB and will no longer appear in the betting game. The ML's initial evaluation, the user results, and the source for the article are all published permanently onto the Ethereum blockchain. We used IPFS to create a hash that linked to this information, which meant that the cost of storing this data on the blockchain was massively decreased. We used Infura as a way to get access to IPFS without needing a heavier package or library. Storing on the blockchain allows for easy access to useful data that can be used in the future to train ML models at a rate that matches the user base's growth.
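A simplified sketch of that retirement check using pymongo; `pin_results_to_ipfs` and `store_hash_on_chain` are hypothetical stand-ins for the Infura/IPFS and Ethereum calls, not our actual functions.

```python
from pymongo import MongoClient, ReturnDocument

REVIEW_CAP = 100

def record_review(db, article_id, user_said_true, pin_results_to_ipfs, store_hash_on_chain):
    """Increment an article's review counter and retire it once the cap is hit."""
    article = db.articles.find_one_and_update(
        {"_id": article_id},
        {"$inc": {"count": 1, "true_votes": int(user_said_true)}},
        return_document=ReturnDocument.AFTER,
    )
    if article and article["count"] >= REVIEW_CAP:
        results = {
            "source": article["source"],
            "ml_verdict": article["ml_verdict"],
            "true_votes": article["true_votes"],
            "total_reviews": article["count"],
        }
        ipfs_hash = pin_results_to_ipfs(results)     # hypothetical helper
        store_hash_on_chain(ipfs_hash)               # hypothetical helper
        db.articles.delete_one({"_id": article_id})  # retire the article

# Usage, assuming a local MongoDB instance:
# db = MongoClient()["ethentic"]
# record_review(db, some_article_id, True, pin_results_to_ipfs, store_hash_on_chain)
```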
As for our brand concept, we used a green colour that reminded us of Ethereum Classic. Our logo is Lady Justice - she's blindfolded, holding a sword in one hand and a scale in the other. Her sword was created as a tribute to the Ethereum logo. We felt that Lady Justice was a good representation of what our project meant, because it gives users the power to be the judge of the content they view, equipping them with a sword and a scale. Our marketing website, ethergiveawayclaimnow.online, is a play on "false advertising" and not believing everything you see online, since we're not actually giving away Ether (sorry!). We thought this would be an interesting way to attract users.
## Challenges we ran into
Figuring out how to use and integrate new technologies such as Blockstack, Ethereum, etc., was the biggest challenge. Some of the documentation was also hard to follow, and because of the libraries being a little unstable/buggy, we were facing a lot of new errors and problems.
## Accomplishments that we're proud of
We are really proud of managing to create such an interesting, fun, yet practical potential solution to such a pressing issue. Overcoming the errors and bugs with little well documented resources, although frustrating at times, was another good experience.
## What we learned
We think this hack made us learn two main things:
1) Blockchain is more than just a cryptocurrency tool.
2) Sometimes even the most dubious subject areas can be made interesting.
The whole fake news problem is something that has traditionally been taken very seriously. We took the issue as an opportunity to create a solution through a different approach, which really stressed the lesson of thinking and viewing things in a multitude of perspectives.
## What's next for ETHentic
ETHentic is looking forward to the potential of continuing to develop the ML portion of the project, and making it available on test networks for others to use and play around with. | partial |
## Inspiration
Our vision is to revolutionize the way students learn.
## What it does
It is a mobile Android application that transcribes the words and sentences of a person speaking, in real time.
## How we built it
We used the Android Studio IDE to build the application and integrated the Cloud Speech-to-Text API to power Hear My Prof.
## Challenges we ran into
Setting up the entire work environment with Android Studio was long because the network was relatively slow.
## Accomplishments that we're proud of
We are proud of finishing it on time with a beautiful UI
## What we learned
We all learned how to use Android Studio, Kotlin, Git/Github and Cloud API
## What's next for Hear My Prof
* Text to Speech
* Computer vision to transcribe written text captured in a camera shot
* Feed API output into a Natural Language Engine for grammar and context analysis. | ## Inspiration
When attending crowded lectures or tutorials, it's fairly difficult to discern content from other ambient noise. What if a streamlined pipeline existed to isolate and amplify vocal audio while transcribing text from audio, and providing general context in the form of images? Is there any way to use this technology to lower barriers to education? These were the questions we asked ourselves when deciding the scope of our project.
## The Stack
**Front-end :** react-native
**Back-end :** python, flask, sqlalchemy, sqlite
**AI + Pipelining Tech :** OpenAI, google-speech-recognition, scipy, asteroid, RNNs, NLP
## What it does
We built a mobile app which allows users to record video and applies an AI-powered audio processing pipeline.
**Primary use case:** Hard of hearing aid which:
1. Isolates + amplifies sound from a recorded video (pre-trained RNN model)
2. Transcribes text from isolated audio (google-speech-recognition; see the sketch after this list)
3. Generates NLP context from transcription (NLP model)
4. Generates an associated image to topic being discussed (OpenAI API)
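A minimal sketch of the transcription step (step 2 above), assuming the `SpeechRecognition` package's free Google recognizer and that the isolation step has already produced a clean .wav; the file name is a placeholder.

```python
import speech_recognition as sr

def transcribe(wav_path: str) -> str:
    """Transcribe an isolated .wav file with the free Google web recognizer."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)   # read the whole file
    return recognizer.recognize_google(audio)

if __name__ == "__main__":
    print(transcribe("isolated_speech.wav"))   # placeholder path
```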
## How we built it
* Frameworked UI on Figma
* Started building UI using react-native
* Researched models for implementation
* Implemented neural networks and APIs
* Testing, Testing, Testing
## Challenges we ran into
Choosing the optimal model for each processing step required careful planning. Algorithm design was also important, as responses had to be sent back to the mobile device as fast as possible to improve app usability.
## Accomplishments that we're proud of + What we learned
* Very high accuracy achieved for transcription, NLP context, and .wav isolation
* Efficient UI development
* Effective use of each team member's strengths
## What's next for murmr
* Improve AI pipeline processing, modifying algorithms to decrease computation time
* Include multi-processing to return content faster
* Integrate user-interviews to improve usability + generally focus more on usability | ## Inspiration
As a group of university students from across North America, COVID-19 has put into perspective the uncertainty and instability that comes with online education. To ease this transition, we were inspired to create Notate — an unparalleled speech-to-text transcription platform backed by the power of Google Cloud’s Machine Learning algorithms. Although our team has come from different walks of life, we easily related to each others’ values for accessibility, equality, and education.
## What it does
Notate is a multi-user web conferencing app which allows students to create and access various study rooms to virtually interact with others worldwide and revolutionize the way notes are taken. It has the capacity to host up to 50 unique channels, each with 100+ attendees, so students can get help and advice from a multitude of sources. With the use of ML techniques, it allows for real-time speech-to-text transcription, so lectures and conversations are stored and categorized in different ways. Our smart hub system makes studying more effective and efficient, as we are able to sort and decipher conversations and extract relevant data which other students can use to learn, all in real time.
## How we built it
For the front end, we found an open source gatsby dashboard template to embed our content and features into quickly and efficiently. We used the daily.co video APIs to embed real-time video conferencing into our application, allowing users to actually create and join rooms. For the real time speech-to-text note taker, we made use of Google’s Cloud Speech-to-Text Api, and decided to use Express and Node.js to be able to access the API from our gatsby and react front end. To improve the UI and UX, we used bootstrap throughout our app.
## Challenges we ran into
In the span of 36 hours, perfecting the functionality of a multi-faceted application was a challenge in and of itself. Our obstacles ranged from API complexities to unexpected bugs from various features. Integrating a video streaming API which suited our needs and leveraging Machine Learning techniques to transcribe speech was a new feature which challenged us all.
## Accomplishments that we're proud of
As a team we had successfully integrated all the features we planned out with good functionality while allowing room to scale for the future. Having thought extensively regarding our goal and purpose, it was clear we had to immerse ourselves with new technologies in order to be successful. Creating a video streaming platform was new to all of us and it taught us a lot about API’s and integrating them into modern frontend technologies. At the end we were able to deliver an application which we believe would be an essential tool for all students experiencing remote learning due to COVID-19.
## What we learned
Having access to mentors who were just a click away opened up many doors for us. We were effectively able to learn to integrate a variety of APIs (Google Cloud Speech-to-Text and Daily Co.) into Notate. Furthermore, we were exposed to a myriad of new programming languages, such as Bootstrap, Express, and Gatsby. As university students, we shared the frustration of a lack of web development tools provided in computer science courses. Hack the 6ix helped further our knowledge in front-end programming. We also learned a lot from one another because we all brought different skill sets to the team. Resources like informative workshops, other hackers, and online tutorials truly gave us long-lasting and impactful skills beyond this hackathon.
## What's next for Notate?
As society enters the digital era, it is evident Notate has the potential to grow and expand worldwide for various demographics. The market for education-based video conferencing applications has grown immensely over the past few months, and will likely continue to do so. It is crucial that students can adapt to this sudden change in routine and Notate will be able to mitigate this change and help students continue to prosper.
We’d like to add more functionality and integration into Notate. In particular, we’d like to embed the Google Vision API to detect and extract text from user’s physical notes and add them to the user’s database of notes. This would improve one of our primary goals of being the hub for student notes.
We also see Notate expanding across platforms, and becoming a mobile app. As well, as the need for Notate grows we plan to create a Notate browser extension - if the Notate extension is turned on, a user can have their Zoom call transcribed in real time and added to their hub of notes on Notate. | partial |
## Inspiration
After asking our peers about what they, fellow college students, needed most, we realized that it all came down to money, grades, and cheap tutoring. Thus, we sought out to create a hack that would combine these fields to give students the utmost opportunity to achieve success.
## What it does
The student user creates a profile by choosing their university, entering their name and email, and a weakness and strength. Their weakness would be the subject they need help in the most, and their strength would be what they would tutor their matching student with. The program matches the user with a student with opposite strengths and weaknesses so that both students exchange skills without any money involved! So, it is totally college-budget friendly, it allows for new student connections, and it creates opportunities for boosting grades.
## How we built it
We created a website using HTML and backend programs with PHP and MySQL. Our relational database is stored on Amazon Web Services.
## Challenges we ran into
During the early stages of the challenge, we had difficulty executing our ideas. We didn't know which route to pick on creating the website, as we were primarily experienced in Java but wanted to consider other options. We experienced a large learning curve, as this was our first time using HTML and PHP.
## Accomplishments that we're proud of
We learned HTML and PHP in one day! In addition, we recovered quickly after changing from a Java-based website to our final plan. We also learned about the power of the Terminal and various other languages. *It's safe to say that we've been inspired to learn more skills for the future!*
## What's next for Ploos Mloos
We plan on hosting Ploos Mloos on a server, and making it even more useful and accessible to students across the country! | ## Inspiration
💡 High school is stressful enough. Course choices are important decisions that high school students make, with the potential to impact their entire lives. With university admissions getting increasingly competitive, wise decisions regarding course selection are integral. Often, high school students have to go scavenging to find information about courses, making the process unnecessarily hard.
## What it does
📚 Our product allows students to have access to all available courses in one place. Schools can set up the courses they can offer by completing one simple form which, in turn allows students to use our product as a catalogue and “go shopping” for their education. Course Chekr creates a one-stop shop for all your course selection needs.
## How we built it
💻 This project was done using HTML and CSS. Figma was utilized for prototyping earlier on in the hackathon.
## Challenges we ran into
🕷 We did run into a few bugs while completing the project. We faced some difficulty with the constraints of the form and dropdown menu, as we found a small change would dramatically change where everything was positioned. At the start, since we both had not touched web development in years, we had to review a lot of content that had slipped out of our minds. In retrospect, we are both glad that we took this decision, as taking a risk on something we hadn't done in years was quite the learning experience.
## Accomplishments that we're proud of
🏫 We are proud of creating a product that can be utilized by the people around us. Since we are both high school students, we have firsthand seen how stressful course selection can be. Creating a solution to a problem that not only helps us but the people around us is very fulfilling.
🎨 We are also proud of a clean UI design. Since neither of us have much experience in incorporating visually appealing UI into our websites, it was definitely a challenge to give the website a comforting feel. Luckily in the end we were able to gain some experience in creating engaging web user interfaces.
## What we learned
🖥 This project allowed us to review some of the fundamentals of web development that we have not touched for years. We also got the experience to create a full-fledged product in 36 hours, something which requires high levels of preparation and dedication. Additionally, we got experience in QA Testing through testing for bugs and project management by guiding each other.
## What's next for Course Chekr: The High School Course Manager
🛠 There are still a few features we would have liked to implement in Course Chekr if time had permitted. Firstly, we wanted to add filters that could make finding the courses you are interested in easier. Whether it be a search engine or a filter that could shorten a list of hundreds of courses, a feature like that would be really beneficial in a product like ours. In the future, we are interested in expanding this product and potentially having it tested in a few schools. A experience like this would give us insight into the positives and negatives of Course Chekr while also testing its success in the market. | ## Inspiration
Ideas for interactions from:
* <http://paperprograms.org/>
* <http://dynamicland.org/>
but I wanted to go from the existing computer down, rather than from the bottom up, and make something that was a twist on the existing desktop: Web browser, Terminal, chat apps, keyboard, windows.
## What it does
Maps your Mac desktop windows onto pieces of paper + tracks a keyboard and lets you focus on whichever one is closest to the keyboard. Goal is to make something you might use day-to-day as a full computer.
## How I built it
A webcam and pico projector mounted above desk + OpenCV doing basic computer vision to find all the pieces of paper and the keyboard.
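A stripped-down sketch of the paper-finding step on a single overhead frame; the threshold and area cut-off are illustrative, and the real pipeline also has to cope with the projector feedback mentioned below.

```python
import cv2

def find_papers(frame, min_area=5000):
    """Return 4-point contours that look like sheets of paper."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Simple binary threshold: bright paper against a darker desk.
    _, binary = cv2.threshold(gray, 180, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    papers = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:
            continue
        approx = cv2.approxPolyDP(contour, 0.02 * cv2.arcLength(contour, True), True)
        if len(approx) == 4:        # quadrilateral, so probably a page
            papers.append(approx)
    return papers

cap = cv2.VideoCapture(0)           # overhead webcam
ok, frame = cap.read()
if ok:
    print(f"found {len(find_papers(frame))} page-like shapes")
cap.release()
```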
## Challenges I ran into
* Reliable tracking under different light conditions.
* Feedback effects from projected light.
* Tracking the keyboard reliably.
* Hooking into macOS to control window focus
## Accomplishments that I'm proud of
Learning some CV stuff, simplifying the pipelines I saw online by a lot and getting better performance (binary thresholds are great), getting a surprisingly usable system.
Cool emergent things like combining pieces of paper + the side ideas I mention below.
## What I learned
Some interesting side ideas here:
* Playing with the calibrated camera is fun on its own; you can render it in place and get a cool ghost effect
* Would be fun to use a deep learning thing to identify and compute with arbitrary objects
## What's next for Computertop Desk
* Pointing tool (laser pointer?)
* More robust CV pipeline? Machine learning?
* Optimizations: run stuff on GPU, cut latency down, improve throughput
* More 'multiplayer' stuff: arbitrary rotations of pages, multiple keyboards at once | losing |
GoDaddy Domain User Website:
<https://thetraffixperiment.info/>
## Inspiration
Our team was inspired by a noticeable yet unsolved modern challenge faced by millions of regular people every day: inefficient traffic. The more time we spend stalled on the road, exhaust fumes piling into the sky, the greater our carbon footprint and the worse our mood. Traffic signals are usually moderated by hard-coded interval times, which is not always sensible; how many times have you driven to an empty intersection, only to be stopped by a red light despite the complete and absolute absence of cars on the perpendicular road? How much time could you save if you could just fly through those empty intersections (without breaking the law, of course!)? That is how TraFFIX was born.
TraFFIX employs image recognition AI to count the number of cars present at an intersection. An AI model detects and labels different objects on a screen, as well as quantifies the number visible on the screen. This data can then be analyzed and the traffic light intervals are thus adjusted.
To test on a real intersection was not feasible; unfortunately, as regular University and High-school students, we did not have access to the traffic light control systems. As such, we created a miniature intersection model in order to demonstrate and execute our idea. Traffic light templates were 3D printed and LEDs were placed in the holes. An arduino was used and programmed to control each of the lights and the duration for which they were turned on.
As mentioned above, we did not have access to a real-life intersection to test and develop our idea. We solved this by creating a miniature model.
Integrating the AI model and extracting and analyzing the data was a challenging task that we overcame and succeeded in. Traffix leverages a convolutional neural network (CNN) trained to detect vehicles, pedestrians, and cyclists using image detection from cameras installed at intersections.
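The writeup does not name the exact detector, so here is a hedged sketch of the counting step using the `ultralytics` YOLOv8 package as a stand-in, with COCO class names; the model file is a placeholder choice.

```python
from ultralytics import YOLO

VEHICLE_CLASSES = {"car", "truck", "bus", "motorcycle", "bicycle"}
model = YOLO("yolov8n.pt")   # small pretrained COCO model (placeholder)

def count_road_users(frame):
    """Count detected vehicles and pedestrians in one camera frame."""
    result = model(frame, verbose=False)[0]
    counts = {"vehicles": 0, "pedestrians": 0}
    for cls_id in result.boxes.cls.tolist():
        name = result.names[int(cls_id)]
        if name in VEHICLE_CLASSES:
            counts["vehicles"] += 1
        elif name == "person":
            counts["pedestrians"] += 1
    return counts
```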
## What we learned
We gained experience and skills in working with AI databases as well as data analysis and management. Additionally, we explored the complexity of real world problems; the variables to consider in a real-time real life road situation was astounding. We learned to navigate these, managing what we could, and distributing tasks.
Moving forward, TraFFIX will work on making even more clever systems, and traffic lights that communicate with each other: we will begin engineering better ROADS, not just intersections. | ## Inspiration
We wanted to create a project that could help improve multiple aspects of society at once, while still being a simple and **practical** implementation, where the results can be understood by people who are not experts in technology. We believe we can help the City of London by showing, from period to period, what alterations need to be made in the traffic lights to help increase the flow of traffic, decrease **commute times**, improve the **economy** and decrease **emissions**.
## What it does
Our algorithm runs in a simulation and learns, through AI, how to **optimize the intervals** of the traffic lights to decrease the average number of cars stopped at any given point in the simulation.
We picked an intersection next to where we live (**Oxford St W & Wharncliffe Rd N**) and used data from the **City of London Open Data** website to populate and try to simulate the intersection. The Traffic Volume data was especially helpful. Also, we believe the similarity is fairly high, considering we had limited hours.
Average number of cars stopped at any given time **before** optimizing: 3.22088
Average number of cars stopped at any given time **after** optimizing: 1.653533
Improvement: about **96.9%**.
## How We built it
We used and modified an open-source implementation of the simulator, called CityFlow, running inside a Docker container. There, we created our own model of an intersection similar to the one near where we live.
The optimizer and gradients were created in Python using machine learning (scikit-learn), with auxiliary libraries for calculating gradients.
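The full learning loop is too large to show here, but the core idea can be sketched as scoring candidate green-interval plans by the average number of stopped cars the simulator reports, shown as a simple hill-climb rather than our full reinforcement-learning setup; `run_cityflow_episode` is a hypothetical wrapper around a CityFlow rollout, not a real API.

```python
import random

def run_cityflow_episode(green_ns: int, green_ew: int) -> float:
    """Hypothetical stand-in for a CityFlow rollout; returns avg. stopped cars."""
    # Placeholder dynamics purely for illustration.
    return abs(green_ns - 32) * 0.05 + abs(green_ew - 18) * 0.04 + random.random() * 0.1

def optimize_intervals(steps=200, seed=0):
    random.seed(seed)
    plan = {"green_ns": 30, "green_ew": 30}
    best = run_cityflow_episode(**plan)
    for _ in range(steps):
        candidate = {k: max(5, v + random.choice((-2, 2))) for k, v in plan.items()}
        score = run_cityflow_episode(**candidate)
        if score < best:   # fewer stopped cars is better
            plan, best = candidate, score
    return plan, best

print(optimize_intervals())
```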
## Challenges We ran into
Creating the **reinforcement learning model** was definitely the hardest challenge. We didn't have that much experience, so we struggled. But in the end, we managed to create a solution that worked, so we are proud of that.
## Accomplishments that We're proud of
We are proud of having finished a hack that can be used in real life by cities.
## What We learned
We learned how to use docker, improve our c++ and python knowledge, and had fun working as a team.
## What's next for SmartCommute
We plan on making our simulations bigger, more realistic, and improving our reinforcement learning algorithm so it learns better.
Our vision was to be able to provide automated **monthly traffic advice** to the City of London on how to improve the traffic lights, **at a low cost**, without having to add additional hardware to the traffic lights.
Sincerely,
Caio and Matthew
Check our [Github Link](https://github.com/CaioCamatta/SmartCommute) for GIFs of before & after
#### Before

#### After
 | # Picify
**Picify** is a [Flask](http://flask.pocoo.org) application that converts your photos into Spotify playlists that can be saved for later listening, providing a uniquely personal way to explore new music. The experience is facilitated by interacting and integrating a wide range of services. Try it [here](http://picify.net/).
## Workflow
The main workflow for the app is as follows:
1. The user uploads a photo to the Picify Flask server.
2. The image is passed onto the [Google Cloud Vision](https://cloud.google.com/vision/) API, where labels and entities are predicted/extracted. This information then gets passed back to the Flask server.
3. The labels and entities are filtered by a dynamic confidence threshold which is iteratively lowered until a large enough set of descriptors for the image can be formed.
4. Each of the descriptors in the above set is then expanded into associated "moods" using the [Datamuse API](https://www.datamuse.com/api/).
5. All of the descriptors and associated moods are filtered against a whitelist of "musically-relevant" terms compiled from sources such as [AllMusic](https://www.allmusic.com/moods) and [Every Noise at Once](http://everynoise.com/genrewords.html), excepting descriptors with extremely high confidence (for example, for a picture of Skrillex this might be "Skrillex").
6. Finally, the processed words are matched against existing Spotify playlists, which are sampled to form the final playlist.
## Contributors
* [Macguire Rintoul](https://github.com/mrintoul)
* [Matt Wiens](https://github.com/mwiens91)
* [Sophia Chan](https://github.com/schan27) | losing |
## Inspiration
Globally, over 92 million tons of textile waste are generated annually, contributing to overflowing landfills and environmental degradation. What's more, the fashion industry is responsible for 10% of global carbon emissions, with fast fashion being a significant contributor due to its rapid production cycles and disposal of unsold items. The inspiration behind our project, ReStyle, is rooted in the urgent need to address the environmental impact of fast fashion. Witnessing the alarming levels of clothing waste and carbon emissions prompted our team to develop a solution that empowers individuals to make sustainable choices effortlessly. We believe in reshaping the future of fashion by promoting a circular economy and encouraging responsible consumer behaviour.
## What it does
ReStyle is a revolutionary platform that leverages AI matching to transform how people buy and sell pre-loved clothing items. The platform simplifies the selling process for users, incentivizing them to resell rather than contribute to the environmental crisis of clothing ending up in landfills. Our advanced AI matching algorithm analyzes user preferences, creating tailored recommendations for buyers and ensuring a seamless connection between sellers and buyers.
## How we built it
We used React Native and Expo to build the front end, creating different screens and components for the clothing matching, camera, and user profile functionality. The backend functionality was made possible using Firebase and the OpenAI API. Each user's style preferences are saved in a Firebase Realtime Database, as are the style descriptions for each piece of clothing, and when a user takes a picture of a piece of clothing, the OpenAI API is called to generate a description for that piece of clothing, and this description is saved to the DB. When the user is on the home page, they will see the top pieces of clothing that match with their style, retrieved from the DB and the matches generated using the OpenAI API.
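A sketch of the description step, assuming the current OpenAI Python SDK and a vision-capable model; the model name, prompt, and image URL are placeholders rather than our exact configuration.

```python
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment

def describe_clothing(image_url: str) -> str:
    """Ask a vision-capable model for a one-sentence style description of a garment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this piece of clothing's style in one sentence."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
    )
    return response.choices[0].message.content

# The returned description is what gets written to the Firebase Realtime Database.
```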
## Challenges we ran into
* Our entire team was new to the technologies we utilized.
* This included React Native, Expo, Firebase, OpenAI.
## Accomplishments that we're proud of
* Efficient and even work distribution between all team members
* A visually aesthetic and accurate and working application!
## What we learned
* React Native
* Expo
* Firebase
* OpenAI
## What's next for ReStyle
Continuously refine our AI matching algorithm, incorporating machine learning advancements to provide even more accurate and personalized recommendations for users, enabling users to save clothing that they are interested in. | ## Inspiration
One of our groupmates, *ahem* Jessie *ahem*, has an addiction to shopping at a fast fashion retailer which won't be named. So the team collectively came together to come up with a solution: BYEBUY. Instead of buying hauls on clothing that are cheap and don't always look good, BYEBUY uses the power of stable diffusion and your lovely selfies to show you exactly how you would look like in those pieces of clothing you've been eyeing! This solution will not only save your wallet, but will help stop the cycle of mass buying fast fashion.
## What it does
It takes a picture of you (you can upload or take a photo in-app), it takes a link of the clothing item you want to try on, and combination! BYEBUY will use our AI model to show you how you would look with those clothes on!
## How we built it
BYEBUY is a Next.js webapp that:
1. Takes in a link and uses puppeteer and cheerio to web scrape data
2. Uses react-webcam to take a picture of you
3. Takes your photo + the web data and uses AI inpainting with a stable diffusion model to show you how the garments would look on you
## Challenges we ran into
**Straying from our original idea**: Our original idea was to use AR to show you in live time how clothing would look on you. However, we soon realized how challenging 3d modelling could be, especially since we wanted to use AI to turn images of clothing into 3d models...maybe another time.
**Fetching APIs and passing data around**: This was something we were all supposed to be seasoned at, but it turns out an API we used lacked documentation, so it took a lot of trial and error (and caffeine) to figure out the calls and how to pass the output of one API endpoint on to the next.
## Accomplishments that we're proud of
We are proud we came up with a solution to a problem that affects 95% of Canada's population (totally not a made up stat btw).
Just kidding, we are proud that despite being a group who was put together last minute and didn't know each other before HTV9, we worked together, talked together, laughed together, made matching bead bracelets, and created a project that we (or at least one of the members in the group) would use!
Also proud that we came up with relatively creative solutions to our problems! Like combining web scraping with AI so that users will have (almost) no limitations to the e-commerce sites they want to virtually try clothes on from!
## What we learned
* How to use Blender
* How to call an API (without docs T-T)
* How to web scrape
* Basics of stable diffusion !
* TailwindCSS
## What's next for BYEBUY
* Incorporating more e-commerce sites | ## Inspiration
Our biggest inspiration came from our grandparents, who often felt lonely and struggled to find help. Specifically, one of us has a grandpa with dementia. He lives alone and finds it hard to receive help since most of his relatives live far away and he has reduced motor skills. Knowing this, we were determined to create a product -- and a friend -- that would be able to help the elderly with their health while also being fun to be around! Ted makes this dream a reality, transforming lives and promoting better welfare.
## What it does
Ted is able to...
* be a little cutie pie
* chat with speaker, reactive movements based on conversation (waves at you when greeting, idle bobbing)
* read heart rate, determine health levels, provide help accordingly
* Drives towards a person in need through using the RC car, utilizing object detection and speech recognition
* dance to Michael Jackson
## How we built it
* popsicle sticks, cardboards, tons of hot glue etc.
* sacrifice of my fingers
* Play.HT and Claude 3.5 Sonnet
* YOLOv8
* AssemblyAI
* Selenium
* Arduino, servos, and many sound sensors to determine the direction of the speaker
## Challenges we ran into
One of the challenges we ran into during development was making sure every part was secure. With limited materials, we found that parts would often shift or move out of place after a few test runs, which was frustrating to keep fixing. However, instead of trying the same techniques again, we persevered by trying new ways of applying the materials, which eventually led to a successful solution!
Having two speech-to-text models open at the same time caused some issues (and I still haven't fixed it yet...). Creating reactive movements was difficult too, but we achieved it through the use of keywords and a long list of preset moves.
## Accomplishments that we're proud of
* Fluid head and arm movements of Ted
* Very pretty design on the car, poster board
* Very snappy response times with realistic voice
## What we learned
* power of friendship
* don't be afraid to try new things!
## What's next for Ted
* integrating more features to enhance Ted's ability to aid peoples' needs --> ex. ability to measure blood pressure | partial |
## Inspiration
As software engineers, we constantly seek ways to optimize efficiency and productivity. While we thrive on tackling challenging problems, sometimes we need assistance or a nudge to remember that support is available. Our app assists engineers by monitoring their states and employs Machine Learning to predict their efficiency in resolving issues.
## What it does
Our app leverages LLMs to predict the complexity of GitHub issues based on their title, description, and the stress level of the assigned software engineer. To gauge the stress level, we utilize a machine learning model that examines the developer’s sleep patterns, sourced from TerraAPI. The app provides task completion time estimates and periodically checks in with the developer, suggesting when to seek help. All this is integrated into a visually appealing and responsive front-end that fits effortlessly into a developer's routine.
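As a hedged illustration of the stress model (not the trained production model), a scikit-learn regressor can map nightly sleep features pulled from Terra to a stress score; the feature names and toy data below are made up.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy rows: [hours_asleep, sleep_efficiency, awakenings] -> stress score (0-10).
X = np.array([[7.5, 0.92, 1], [6.0, 0.80, 3], [4.5, 0.65, 6], [8.0, 0.95, 0]])
y = np.array([2.0, 4.5, 8.0, 1.5])

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

last_night = np.array([[5.5, 0.70, 4]])   # e.g. parsed from a Terra webhook payload
print(f"predicted stress: {model.predict(last_night)[0]:.1f}/10")
```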
## How we built it
A range of technologies power our app. The front-end is crafted with Electron and ReactJS, offering compatibility across numerous operating systems. On the backend, we harness the potential of webhooks, Terra API, ChatGPT API, Scikit-learn, Flask, NodeJS, and ExpressJS. The core programming languages deployed include JavaScript, Python, HTML, and CSS.
## Challenges we ran into
Constructing the app was a blend of excitement and hurdles due to the multifaceted issues at hand. Setting up multiple webhooks was essential for real-time model updates, as they depend on current data such as fresh Github issues and health metrics from wearables. Additionally, we ventured into sourcing datasets and crafting machine learning models for predicting an engineer's stress levels and employed natural language processing for issue resolution time estimates.
## Accomplishments that we're proud of
In our journey, we scripted close to 15,000 lines of code and overcame numerous challenges. Our preliminary vision had the front end majorly scripted in JavaScript, HTML, and CSS — a considerable endeavor in contemporary development. The pinnacle of our pride is the realization of our app, all achieved within a 3-day hackathon.
## What we learned
Our team was unfamiliar to one another before the hackathon. Yet, our decision to trust each other paid off as everyone contributed valiantly. We honed our skills in task delegation among the four engineers and encountered and overcame issues previously uncharted for us, like running multiple webhooks and integrating a desktop application with an array of server-side technologies.
## What's next for TBox 16 Pro Max (titanium purple)
The future brims with potential for this project. Our aspirations include introducing real-time stress management using intricate time-series models. User customization options are also on the horizon to enrich our time predictions. And certainly, front-end personalizations, like dark mode and themes, are part of our roadmap. | ## Inspiration
Our inspiration for this project came from newfound research stating the capabilities of models to perform the work of data engineers and provide accurate tools for analysis. We realized that such work is impactful in various sectors, including finance, climate change, medical devices, and much more. We decided to test our solution on various datasets to see the potential in its impact.
## What it does
A lot of things - we will let you know soon.
## How we built it
For our project, we developed a sophisticated query pipeline that integrates a chatbot interface with a SQL database. This setup enables users to make database queries effortlessly through natural language inputs. We utilized SQLAlchemy to handle the database connection and ORM functionalities, ensuring smooth interaction with the SQL database. To bridge the gap between user queries and database commands, we employed LangChain, which translates the natural language inputs from the chatbot into SQL queries. To further enhance the query pipeline, we integrated Llama Index, which facilitates sequential reasoning, allowing the chatbot to handle more complex queries that require step-by-step logic. Additionally, we added a dynamic dashboard feature using Plotly. This dashboard allows users to visualize query results in an interactive and visually appealing manner, providing insightful data representations. This seamless integration of chatbot querying, sequential reasoning, and data visualization makes our system robust, user-friendly, and highly efficient for data access and analysis.
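A condensed sketch of the core natural-language-to-SQL hop; to keep it version-agnostic it calls the OpenAI SDK directly instead of reproducing our exact LangChain/LlamaIndex wiring, and the schema, model name, and question are placeholders.

```python
from openai import OpenAI
from sqlalchemy import create_engine, text

client = OpenAI()
engine = create_engine("sqlite:///sales.db")   # placeholder database

SCHEMA = "Table sales(region TEXT, month TEXT, revenue REAL)"

def answer(question: str):
    """Turn a chat question into SQL, run it, and return the rows."""
    prompt = (f"{SCHEMA}\nWrite a single SQLite query that answers: {question}\n"
              "Return only the SQL.")
    sql = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content.strip().strip("`")
    with engine.connect() as conn:
        return list(conn.execute(text(sql)))

# e.g. answer("Which region had the highest revenue last month?")
```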
## Challenges we ran into
Participating in the hackathon was a highly rewarding yet challenging experience. One primary obstacle was integrating a large language model (LLM) and chatbot functionality into our project. We faced compatibility issues with our back-end server and third-party APIs, and encountered unexpected bugs when training the AI model with specific datasets. Quick troubleshooting was necessary under tight deadlines.
Another challenge was maintaining effective communication within our remote team. Coordinating efforts and ensuring everyone was aligned led to occasional misunderstandings and delays. Despite these hurdles, the hackathon taught us invaluable lessons in problem-solving, collaboration, and time management, preparing us better for future AI-driven projects.
## Accomplishments that we're proud of
We successfully employed sequential reasoning within the LLM, enabling it to not only infer the next steps but also to accurately follow the appropriate chain of actions that a data analyst would take. This advanced capability ensures that complex queries are handled with precision, mirroring the logical progression a professional analyst would utilize. Additionally, our integration of SQLAlchemy streamlined the connection and ORM functionalities with our SQL database, while LangChain effectively translated natural language inputs from the chatbot into accurate SQL queries. We further enhanced the user experience by implementing a dynamic dashboard with Plotly, allowing for interactive and visually appealing data visualizations. These accomplishments culminated in a robust, user-friendly system that excels in both data access and analysis.
## What we learned
Through implementing our agent pipeline, we learned how to integrate various APIs, along with the step-by-step process a data engineer or analyst actually follows.
## What's next for Stratify
For our next steps, we plan to add full UI integration to enhance the user experience, making our system even more intuitive and accessible. We aim to expand our data capabilities by incorporating datasets from various other industries, broadening the scope and applicability of our project. Additionally, we will focus on further testing to ensure the robustness and reliability of our system. This will involve rigorous validation and optimization to fine-tune the performance and accuracy of our query pipeline, chatbot interface, and visualization dashboard. By pursuing these enhancements, we strive to make our platform a comprehensive, versatile, and highly reliable tool for data analysis and visualization across different domains. | ## Inspiration
We really are passionate about hardware; however, many hackers in the community, especially those studying software-focused degrees, miss out on the experience of working on hardware projects and on vertical integration.
To remedy this, we came up with modware. Modware provides the toolkit for software-focused developers to branch out into hardware and/or to add some verticality to their current software stack with easy to integrate hardware interactions and displays.
## What it does
The modware toolkit is a baseboard that interfaces with different common hardware modules through magnetic power and data connection lines as they are placed onto the baseboard.
Once modules are placed on the board and are detected, the user then has three options with the modules: to create a "wired" connection between an input-type module (knob) and an output-type module (LCD screen), to push a POST request to any user-provided URL, or to issue a GET request to pull information from any user-provided URL.
These three functionalities together allow a software-focused developer to create their own hardware interactions without ever touching the tedious aspects of hardware (easy hardware prototyping), to use different modules to interact with software applications they have already built (easy hardware interface prototyping), and to use different modules to create a physical representation of events/data from software applications they have already built (easy hardware interface prototyping).
## How we built it
Modware is a very large project with a very big stack: ranging from a fullstack web application with a server and database, to a desktop application performing graph traversal optimization algorithms, all the way down to sending I2C signals and reading analog voltage.
We had to handle the communication protocols between all the levels of modware very carefully. One of the interesting points of communication is using neodymium magnets to conduct power and data from all of the modules to a central microcontroller. Location data is also tracked using a 9-stage voltage divider, a series circuit running through all 9 locations on the modware baseboard.
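A small sketch of how a reading from that 9-stage divider can be mapped back to a slot index, assuming a 10-bit ADC and evenly spaced taps; the real board's resistor values differ.

```python
ADC_MAX = 1023    # 10-bit analog read
NUM_SLOTS = 9

def slot_from_reading(adc_value: int) -> int:
    """Map an analog reading from the series divider to a board position (0-8)."""
    step = ADC_MAX / NUM_SLOTS
    return min(int(adc_value // step), NUM_SLOTS - 1)   # clamp the top edge

for reading in (40, 350, 1010):   # example ADC samples
    print(reading, "-> slot", slot_from_reading(reading))
```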
All of the data gathered at the central microcontroller is then sent to a local database over wifi to be accessed by the desktop application. Here the desktop application uses case analysis to solve the NP-hard problem of creating optimal wire connections, with proper geometry and distance rendering, as new connections are created, destroyed, and modified by the user. The desktop application also handles all of the API communications logic.
The local database is also synced with a database up in the cloud on Heroku, which uses the gathered information to wrap up APIs in order for the modware hardware to be able to communicate with any software that a user may write both in providing data as well as receiving data.
## Challenges we ran into
The neodymium magnets that we used were plated in nickel, a highly conductive material. However, magnets will lose their magnetism when exposed to high heat, and neodymium magnets are no different. So we had to be extremely careful to solder everything correctly on the first try so as not to waste the magnetism in our magnets. It also proved very difficult to actually get solid data, power, and voltage readings across the magnets due to minute differences in laser-cut holes, glue residue, etc. We had to make both hardware and software changes to make sure that the connections behaved ideally.
## Accomplishments that we're proud of
We are proud that we were able to build and integrate such a huge end-to-end project. We also ended up with a fairly robust magnetic interface system by the end of the project, allowing for single or double sized modules of both input and output types to easily interact with the central microcontroller.
## What's next for ModWare
More modules! | losing |
>
> `2023-10-10 Update`
>
> We've moved all of our project information to our GitHub repo so that it's up to date.
> Our project is completely open source, so please feel free to contribute if you want!
> <https://github.com/soobinrho/BeeMovr>
>
>
> | ## Inspiration
An individual living in Canada wastes approximately 183 kilograms of solid food per year. This equates to $35 billion worth of food. A study that asked why so much food is wasted illustrated that about 57% thought that their food goes bad too quickly, while another 44% of people say the food is past the expiration date.
## What it does
LetsEat is an assistant, comprising a server, an app, and a Google Home Mini, that reminds users of food that is going to expire soon and encourages them to cook it into a meal before it goes bad.
## How we built it
We used a variety of leading technologies, including Firebase for the database and cloud functions, and the Google Assistant API with Dialogflow. On the mobile side, we have a system for effortlessly uploading receipts using Microsoft Cognitive Services optical character recognition (OCR). The Android app is written using RxKotlin, RxAndroid, and Retrofit on an MVP architecture.
## Challenges we ran into
One of the biggest challenges that we ran into was fleshing out our idea. Every time we thought we solved an issue in our concept, another one appeared. We iterated over our system design, app design, Google Action conversation design, integration design, over and over again for around 6 hours into the event. During development, we faced the learning curve of Firebase Cloud Functions, setting up Google Actions using DialogueFlow, and setting up socket connections.
## What we learned
We learned a lot more about how voice user interaction design worked. | ## Inspiration
**DronAR** was inspired by a love of cool technology. Drones are hot right now, so the question is: why not combine them with VR? The result is an awesome product that makes drone management more **visually intuitive**, letting users interact with drones in ways never done before.
## What it does
**DronAR** allows users to view real-time information about their drones, such as positional data and status. Using this information, users can make on-the-spot decisions about how to interact with their drone.
## How I built it
Unity + Vuforia for AR. Node + Socket.IO + Express + Azure for backend
## Challenges I ran into
C# is *beautiful*
## What's next for DronAR
Adding SLAM in order to make it easier to interact with the AR items. | winning |
## Inspiration
Rizzmo was heavily inspired by our dissatisfaction with Gizmo's poor-quality science simulations.
## What it does
Rizzmo is a low-poly exploration game set in a thriving coral reef region. Learn about the coral reef ecosystem by exploring in a submarine and clicking on various marine life for descriptions and biological fun facts.
## How we built it
The project was built with Unity, and thus C#. Some of the 3D models were made using Blender.
## Challenges we ran into
Switching from Unreal to Unity / Setting up Unity Version Control / Pulling and Checking in with Unity Workspace / Underwater rendering (no tutorial)
## Accomplishments that we're proud of
Visually appealing / Completing the project / Quickly learning Unity & Blender
## What we learned
How to use Blender / How to use Unity 3D / How to do underwater rendering / How to animate in Unity / How to power nap
## What's next for Rizzmo
Implement more STEM related simulations and make it more accessible. | ## Inspiration
We wanted to take an ancient video game that jump-started the video game industry (Asteroids) and be able to revive it using Virtual Reality.
## What it does
You are spawned in a world of randomly generated asteroids and must approach one of four green asteroids to beat the stage. Lasers are used to destroy asteroids after a certain number of collisions. Forty asteroids are always present during gameplay and are attracted to you via gravity.
## How we built it
We utilized Unity (C#) alongside the HTC Vive libraries. The Asset Store was used to source celestial images for our skybox environment.
## Challenges we ran into
Our graphical user interface is not able to be projected to the HTC Vive. Each asteroid exerts an attractive force on the player; it was difficult to optimize how all of these forces were rendered and to prevent them from interfering with each other. Generalizing projectile functionality across the game and menu scenes was also difficult.
## Accomplishments that we're proud of
For most of the group, this was the first time using Unity, and for all members it was the first time using an HTC Vive. We are proud of learning the Unity workflow and development environment, and of solving the problem of randomly generated asteroids interfering with other game objects.
## What we learned
We understood how to create interactive 3D Video Games alongside with a Virtual Reality environment. We learned how to map inputs from VR Controllers to the program.
## What's next for Estar Guars
Applying a scoring system and upgrades menu would be ideal for the game. Improve controls and polishing object collision animations. Figuring out a GUI that displays on to the HTC Vive. Having an interactive story mode to create more dynamic objects and environments. | ## Inspiration
We're always reminded of how scary carbon emission can be on a larger scale, but we never know exactly how much we really contribute. Without individuals knowing about the effects they produce on the environment, we fall into the risk of following the model of "Tragedy of the Commons" - an idea that an individual will disregard environmental concerns for the sake of him or herself because of his or her perceived low contribution to environmental damage. With this regard, we aimed to work on a program that would provide more awareness and allow the user to become more "conscious" about his or her actions.
## What it does
The application can break down into multiple parts: AC Recommender, Overall Consumption Stats, and Acceleration Checker.
**AC Recommender**
The AC Recommender suggests, on the vehicle's head unit, whether the driver should turn on the AC, returning a recommendation that depends on whether the user's needs can be met. Given the desired vehicle temperature and the current vehicle speed, the AC Recommender first calculates the temperature wind index to estimate what temperature can be reached in the vehicle if the windows are down. For instance, if the reduced temperature is below the desired temperature threshold and the vehicle speed is below 55 mph, then the AC Recommender will tell the user to roll down their windows. This 55 mph represents the maximum speed a vehicle can travel before AC becomes more fuel-efficient (due to the increased drag on the vehicle).
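The recommendation boils down to a small decision rule; this sketch uses a simplified wind-cooling estimate in place of the app's temperature wind index calculation, so the numbers are illustrative only.

```python
AC_SPEED_LIMIT_MPH = 55   # above this, drag makes open windows cost more fuel than AC

def windows_down_temp(outside_temp_f: float, speed_mph: float) -> float:
    """Very rough estimate of cabin temperature with the windows down (illustrative)."""
    return outside_temp_f - min(speed_mph, 60) * 0.1

def recommend(outside_temp_f: float, desired_temp_f: float, speed_mph: float) -> str:
    cooled = windows_down_temp(outside_temp_f, speed_mph)
    if speed_mph < AC_SPEED_LIMIT_MPH and cooled <= desired_temp_f:
        return "Roll down the windows"
    return "Turn on the AC"

print(recommend(outside_temp_f=80, desired_temp_f=74, speed_mph=40))
```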
**Overall Consumption Stats**
This part of the application displays the total amount of fuel consumed during an entire trip. Furthermore, the application will report the total carbon footprint in kg emission of CO2. This type of emission data allows the user to be more aware of their trip during driving, seeing the immediate and profound effects of long-term travel.
**Acceleration Checker**
This part of the application keeps track of how much the user accelerates over a certain threshold throughout his or her entire trip. The knowledge of rapid accelerations will contribute towards providing user feedback on optimizing driving to overall reduce fuel consumption.
## How we built it
We utilized the Ford API to retrieve various vehicle data, which we then used to output useful statistics and recommendations on the program UI on the vehicle's head unit. All of this was done in Android Studio.
## Challenges we ran into
Due to limited time and being first-time hackers, we did not have time to implement everything we wanted. Furthermore, understanding Ford's API was an initial challenge, but a fun experience nonetheless. We eventually overcame the initial challenge and appreciated the ease of data retrieval and the use of the API.
## Accomplishments that we're proud of
We are proud of the ideas and options we've made for the program. Additionally, we are proud that our program is able to provide information and motivation to the user to further continue themselves or inspire others about CO2 emissions.
## What we learned
We learned how to use an API and how profound vehicle emission truly is.
## What's next for Econscious
We want to add and modify two more program features to Econscious.
For the accelerator checker, we would extend its functionality to also include rapid braking checking. The acceleration and braking data will be used to provide an additional energy consumption statistic that reflects the effects of driving too aggressively. Further, the feature would better advise the user on how to drive more efficiently to save as much fuel as possible while maintaining a reasonable speed.
An additional feature we would like to add is the Idle-Time checker. This feature will measure the total time the user vehicle is idle, but running, during his or her entire trip. The feature would also include the total emission during this idle time to further provide the user information on the effects of idling to be more conscious of carbon output.
Lastly, we are aiming to implement the above features as a mobile android app that would retain all this data for future reference that will be useful for tracking progress. | partial |
## Inspiration
Knowtworthy is a startup that all three of us founded together, with the mission to make meetings awesome. We have spent this past summer at the University of Toronto’s Entrepreneurship Hatchery’s incubator executing on our vision. We’ve built a sweet platform that solves many of the issues surrounding meetings but we wanted a glimpse of the future: entirely automated meetings. So we decided to challenge ourselves and create something that the world has never seen before: sentiment analysis for meetings while transcribing and attributing all speech.
## What it does
While we focused on meetings specifically, as we built the software we realized that the applications for real-time sentiment analysis are far more varied than initially anticipated. Voice transcription and diarisation are very powerful for keeping track of what happened during a meeting but sentiment can be used anywhere from the boardroom to the classroom to a psychologist’s office.
## How I built it
We felt a web app was best suited for software like this so that it can be accessible to anyone at any time. We built the frontend on React leveraging Material UI, React-Motion, Socket IO and ChartJS. The backed was built on Node (with Express) as well as python for some computational tasks. We used GRPC, Docker and Kubernetes to launch the software, making it scalable right out of the box.
For all relevant processing, we used Google Speech-to-text, Google Diarization, Stanford Empath, SKLearn and Glove (for word-to-vec).
## Challenges I ran into
Integrating so many moving parts into one cohesive platform was a challenge to keep organized but we used trello to stay on track throughout the 36 hours.
Audio encoding was also quite challenging as we ran up against some limitations of javascript while trying to stream audio in the correct and acceptable format.
Apart from that, we didn’t encounter any major roadblocks but we were each working for almost the entire 36-hour stretch as there were a lot of features to implement.
## Accomplishments that I'm proud of
We are super proud of the fact that we were able to pull it off as we knew this was a challenging task to start and we ran into some unexpected roadblocks. There is nothing else like this software currently on the market so being first is always awesome.
## What I learned
We learned a whole lot about integration both on the frontend and the backend. We prototyped before coding, introduced animations to improve user experience, too much about computer store numbers (:p) and doing a whole lot of stuff all in real time.
## What's next for Knowtworthy Sentiment
Knowtworthy Sentiment aligns well with our startup’s vision for the future of meetings so we will continue to develop it and make it more robust before integrating it directly into our existing software. If you want to check out our stuff you can do so here: <https://knowtworthy.com/> | ## Inspiration
While caught in the the excitement of coming up with project ideas, we found ourselves forgetting to follow up on action items brought up in the discussion. We felt that it would come in handy to have our own virtual meeting assistant to keep track of our ideas. We moved on to integrate features like automating the process of creating JIRA issues and providing a full transcript for participants to view in retrospect.
## What it does
*Minutes Made* acts as your own personal team assistant during meetings. It takes meeting minutes, creates transcripts, finds key tags and features and automates the process of creating Jira tickets for you.
It works in multiple spoken languages, and uses voice biometrics to identify key speakers.
For security, the data is encrypted locally - and since it is serverless, no sensitive data is exposed.
## How we built it
Minutes Made leverages Azure Cognitive Services for to translate between languages, identify speakers from voice patterns, and convert speech to text. It then uses custom natural language processing to parse out key issues. Interactions with slack and Jira are done through STDLIB.
## Challenges we ran into
We originally used Python libraries to manually perform the natural language processing, but found they didn't quite meet our demands with accuracy and latency. We found that Azure Cognitive services worked better. However, we did end up developing our own natural language processing algorithms to handle some of the functionality as well (e.g. creating Jira issues) since Azure didn't have everything we wanted.
As the speech conversion is done in real-time, it was necessary for our solution to be extremely performant. We needed an efficient way to store and fetch the chat transcripts. This was a difficult demand to meet, but we managed to rectify our issue with a Redis caching layer to fetch the chat transcripts quickly and persist to disk between sessions.
## Accomplishments that we're proud of
This was the first time that we all worked together, and we're glad that we were able to get a solution that actually worked and that we would actually use in real life. We became proficient with technology that we've never seen before and used it to build a nice product and an experience we're all grateful for.
## What we learned
This was a great learning experience for understanding cloud biometrics, and speech recognition technologies. We familiarised ourselves with STDLIB, and working with Jira and Slack APIs. Basically, we learned a lot about the technology we used and a lot about each other ❤️!
## What's next for Minutes Made
Next we plan to add more integrations to translate more languages and creating Github issues, Salesforce tickets, etc. We could also improve the natural language processing to handle more functions and edge cases. As we're using fairly new tech, there's a lot of room for improvement in the future. | ## Inspiration
I got the inspiration from the Mirum challenge, which was to be able to recognize emotion in speech and text.
## What it does
It records speech from people for a set time, separating individual transcripts based on small pauses in between each person talking. It then transcribes this to a JSON string using the Google Speech API and passes this string into the IBM Watson Tone Analyzer API to analyze the emotion in each snippet.
## How I built it
I had to connect to the Google Cloud SDK and Watson Developer Cloud first, and learn some python that was necessary to get them working. I then wrote one script file, recording audio with pyaudio and using the APIs for the other two to get JSON data back.
## Challenges I ran into
I had trouble making a GUI, so I abandoned trying to make it. I didn't have enough practice with making GUIs in Python before this hackathon, and the use of the APIs were time-consuming already. Another challenge I ran into was getting the google-cloud-sdk to work on my laptop, as it seemed that there were conflicting files or missing files at times.
## Accomplishments that I'm proud of
I'm proud that I got the google-cloud-sdk set up and got the Speech API to work, as well as get an API which I had never heard of to work, the IBM Watson one.
## What I learned
To keep trying to get control of APIs, but ask for help from others who might've set theirs up already. I also learned to manage my time more effectively. This is my second hackathon, and I got a lot more work done than I did last time.
## What's next for Emotional Talks
I want to add a GUI that will make it easy for viewers to analyze their conversations, and perhaps also use some future Speech APIs to better process the speech part. This could potentially be sold to businesses for use in customer care calls. | winning |
## What it does
MemoryLane is an app designed to support individuals coping with dementia by aiding in the recall of daily tasks, medication schedules, and essential dates.
The app personalizes memories through its reminisce panel, providing a contextualized experience for users. Additionally, MemoryLane ensures timely reminders through WhatsApp, facilitating adherence to daily living routines such as medication administration and appointment attendance.
## How we built it
The back end was developed using Flask and Python and MongoDB. Next.js was employed for the app's front-end development. Additionally, the app integrates the Google Cloud speech-to-text API to process audio messages from users, converting them into commands for execution. It also utilizes the InfoBip SDK for caregivers to establish timely messaging reminders through a calendar within the application.
## Challenges we ran into
An initial hurdle we encountered involved selecting a front-end framework for the app. We transitioned from React to Next due to the seamless integration of styling provided by Next, a decision that proved to be efficient and time-saving. The subsequent challenge revolved around ensuring the functionality of text messaging.
## Accomplishments that we're proud of
The accomplishments we have achieved thus far are truly significant milestones for us. We had the opportunity to explore and learn new technologies that were previously unfamiliar to us. The integration of voice recognition, text messaging, and the development of an easily accessible interface tailored to our audience is what fills us with pride.
## What's next for Memory Lane
We aim for MemoryLane to incorporate additional accessibility features and support integration with other systems for implementing activities that offer memory exercises. Additionally, we envision MemoryLane forming partnerships with existing systems dedicated to supporting individuals with dementia. Recognizing the importance of overcoming organizational language barriers in healthcare systems, we advocate for the formal use of interoperability within the reminder aspect of the application. This integration aims to provide caregivers with a seamless means of receiving the latest health updates, eliminating any friction in accessing essential information. | ## Inspiration
COVID revolutionized the way we communicate and work by normalizing remoteness. **It also created a massive emotional drought -** we weren't made to empathize with each other through screens. As video conferencing turns into the new normal, marketers and managers continue to struggle with remote communication's lack of nuance and engagement, and those with sensory impairments likely have it even worse.
Given our team's experience with AI/ML, we wanted to leverage data to bridge this gap. Beginning with the idea to use computer vision to help sensory impaired users detect emotion, we generalized our use-case to emotional analytics and real-time emotion identification for video conferencing in general.
## What it does
In MVP form, empath.ly is a video conferencing web application with integrated emotional analytics and real-time emotion identification. During a video call, we analyze the emotions of each user sixty times a second through their camera feed.
After the call, a dashboard is available which displays emotional data and data-derived insights such as the most common emotions throughout the call, changes in emotions over time, and notable contrasts or sudden changes in emotion
**Update as of 7.15am: We've also implemented an accessibility feature which colors the screen based on the emotions detected, to aid people with learning disabilities/emotion recognition difficulties.**
**Update as of 7.36am: We've implemented another accessibility feature which uses text-to-speech to report emotions to users with visual impairments.**
## How we built it
Our backend is powered by Tensorflow, Keras and OpenCV. We use an ML model to detect the emotions of each user, sixty times a second. Each detection is stored in an array, and these are collectively stored in an array of arrays to be displayed later on the analytics dashboard. On the frontend, we built the video conferencing app using React and Agora SDK, and imported the ML model using Tensorflow.JS.
## Challenges we ran into
Initially, we attempted to train our own facial recognition model on 10-13k datapoints from Kaggle - the maximum load our laptops could handle. However, the results weren't too accurate, and we ran into issues integrating this with the frontend later on. The model's still available on our repository, and with access to a PC, we're confident we would have been able to use it.
## Accomplishments that we're proud of
We overcame our ML roadblock and managed to produce a fully-functioning, well-designed web app with a solid business case for B2B data analytics and B2C accessibility.
And our two last minute accessibility add-ons!
## What we learned
It's okay - and, in fact, in the spirit of hacking - to innovate and leverage on pre-existing builds. After our initial ML model failed, we found and utilized a pretrained model which proved to be more accurate and effective.
Inspiring users' trust and choosing a target market receptive to AI-based analytics is also important - that's why our go-to-market will focus on tech companies that rely on remote work and are staffed by younger employees.
## What's next for empath.ly
From short-term to long-term stretch goals:
* We want to add on AssemblyAI's NLP model to deliver better insights. For example, the dashboard could highlight times when a certain sentiment in speech triggered a visual emotional reaction from the audience.
* We want to port this to mobile and enable camera feed functionality, so those with visual impairments or other difficulties recognizing emotions can rely on our live detection for in-person interactions.
* We want to apply our project to the metaverse by allowing users' avatars to emote based on emotions detected from the user. | # ReFriender
## Inspiration
The theme of restoration is usually connotated with tangible things such as the environment. In this project, though, we wanted to take a creative approach by restoring one of the most meaningful things in our life: our social connections. Everyone has old friends that they lost their touch with, or people they could not really get to know. ReFriender has born as a solution to this. With its advanced algorithm to find the user's old connections using their current ones, ReFriender is an app for everyone that wants to get back in touch with their old friends to see what they are up to. Especially during the pandemic where people may feel lonely, ReFriender is THE app to spark some new (or reborn the old) connections.
## What it does
Our algorithm to find the old friend's of the user was an essential component of this project that we are proud of. The algorithm works by first getting the user's friend (in other words, first degree of separation) either through Facebook API (had it allowed developer's to get user's friend list without Facebook reviewing our app for 5 days) or through getting the user's location and making them confirm their friend(s) on the app. Then, the algorithm checks the friends of the user's friends (second degree of separation), and we consider these people to be "potential old friends (or never got to meet in detail)" of the user. The more shared friends the user has with those that are separated by two degrees (friends' friends), the higher weight those people have. Hence, our algorithm displays the people with most mutual friends the user has, but those that are not actually a friend of the user. The user then decides whether the person actually is an old friend of them, and the algorithm updates accordingly.
When the user chooses a person as their old friend, they get to start a conversation with that person to see what they have been up to using the app's chat function. The user is also able to edit their profile to update their information, such as the school they go to.
## How we built it
### Backend
The backend was designed using Python Flask and the Firebase Firestore Admin SDK, in order to provide a REST API for interacting with the Cloud Firestore Database. The Cloud Firestore database was used to store user data and chat history, and the Flask API will interact with this NoSQL database to provide useful functions for the frontend. Facebook API has been limited due to Facebook's new policy that requires a review by Facebook, which may take 5 days and not possible during our Hackathon.
### Frontend
The frontend is developed in React. Using hooks and components, the React frontend uses react-router-dom to navigate through pages and provide different types of functionality to the site. It also features dynamic content and live updating, using a connection to the backend.
## Challenges we ran into
The whole project composed of little challenges in which we improved ourselves tremendously within the span of 36 hours. Trying to think of an algorithm that would accurately find a user's old friend was one challenge, trying to implement it first to Python and then to JavaScript was another. Basically, using Python as a backend and React as the frontend itself was one of the biggest challenges we had, but overcoming that was the biggest satisfaction we had as well.
Finally, recent changes in Facebook and Instagram API has hindered and limited our project. Facebook API, in order to get the list of friends a user has, requires the app to be reviewed, which takes at least 5 days so not possible within this hackathon. If we were able to use the Facebook API, we would be able to get the first contact of the users immediately, which would make the finding old friend algorithm much more accurate.
## Accomplishments that we're proud of
We are proud of integrating Python Flask backend with React frontend, a concept that we will be using from now on as well. We are also proud of actually coming up with an accurate algorithm that can find the old friends of a user, and making this algorithm self-updating (using knowledge obtained by connections between the friends of the user with other people) made the accuracy only go higher.
## What we learned
Since half of our team consisted of Python developers and the other half with React / JavaScript developers, we learnt to integrate a Python backend with React frontend, something none of us ever attempted to try before. Now that we can integrate backend and frontend with different languages, a new window of opportunity has opened to us in future projects we will make, whether it's in Hackathon or not. Basically, this project allowed us to overcome the biggest challenge between programmers: the difference in languages that we have expertise in. Adding to our Log-in and Sign-in knowledge, we learnt to how to make a Chat app and how to make a person suggesting app like Tinder (although our concept is completely different) within the scope of 36 hours, so it was tiresome yet rewarding. Finally, although limited by its full capability due to its recent privacy term conditions, we were still able to implement Facebook API to our project, and that was something new to us. We learnt to get user information through Facebook, and after being approved by Facebook to use their advanced API, we have a clear idea of how to improve our app.
## What's next for ReFriender
We are very proud of what we have accomplished within this Hackathon's time limit. We have login/signin/profile sections, a functioning chat part, and our algorithm works as intended. The fact that advanced Facebook API functions require a review by Facebook that takes 5 minutes has limited our app's old-friend-finder accuracy, as we couldn't get acces to the user's friend list. What's next for ReFriender, then, is to implement this feature after our app is approved by Facebook. Styling improvements are possible as always. | winning |
## Inspiration
Callista's nephew is on the spectrum of ADHD, her cousin has found it increasingly difficult to keep him engaged with the learning material he's been given. He enjoys playing games on his iPad and loves space. We wanted to create a solution to this problem by using technology.
## What it does
This app intersects learning with augmented reality. We used the study of the sun, earth, and moon for our initial prototype. Upon launching the app, you are able to pick a study module which will then bring you to the interactive AR experience.
## How we built it
We built the majority of the app on Xcode using Swift programming language and AR Kit.
## Challenges we ran into
We've never dealt with AR technology and have limited experience in mobile app development. Initially, we tried building 3d models with Blender but found that we were able to achieve drawing out the planets using AR kit instead.
## Accomplishments that we're proud of
Being able to successfully launch the AR part of the app and fully translating the design to the app.
## What we learned
Ideation is actually one of the most difficult parts of the hackathon. Being able to break down problems into sizable chunks is key when trying to create a product.
## What's next for Ignite.ed
We're looking to conduct user testings to see how children interact with the app, and what features would benefit them better. We hope to also work with teachers and developmental child psychologist to see how we can better suit this app and its technology to children with special needs. | ## Inspiration
Many young kids, especially children with learning disabilities, have difficulties staying focused and engaged with learning. In order to solve this problem, we decided to create a learning tool for children, designed to gamify the process of learning vocabulary for new languages in a visual, interactive way.
## What it does
ARlingo is a mobile application that encourages children to participate in active learning, enhancing their language learning by allowing them to visualize the words they are learning (such as animals in French or playground objects in Spanish), while also providing an interactive interface. After generating 3D models of objects in the user's environment, the user is prompted to select what the object's name is in their language of choice. Rather than learning new vocabulary through a piece of paper, kids can now interact with virtual models and stay engaged throughout their learning process.
## How we built it
We used Android Studio and Java to create our mobile app, as well as Google's ARCore SDK and Sceneform to implement AR and generate 3D models. Once a user logs in to through the main menu, they can select a language and which difficulty level they would like to learn. Once the user's phone camera opens, we used ARCore and Sceneform to detect surfaces in order to place a 3D model in the user's real-world environment. Each 3D object has a multiple choice question where the user can select what the name of the language is in the language of their choice.
## Challenges we ran into
Since our team had zero experience and knowledge on AR, it was a challenge for us to learn a tool that was completely new to us while also creating a working product with it. We had to do a lot of research to decide which tools to use, as well as for troubleshooting and understanding AR concepts.
## Accomplishments that we're proud of
There were members in the team who are part of this project and are experiencing a hackathon for the first time. We all worked hard together as a team and practiced pair programming, ensuring we were constantly communicating with each other. This is an accomplishment we're proud of because we successfully completed a working prototype using a tool that was entirely new to us, and had fun learning something new!
## What we learned
We not only learned how to use Android Studio again after so long, but we also learned about AR and how to integrate it into a mobile application for the first time. We wanted to take on a challenge and try something new, so we decided to go outside of our comfort zone and use a tool that we had no experience with.
## What's next for ARlingo
We would like to improve the ARlingo radio buttons for the multiple choice questions and implement them fully, so that once an answer is selected, the app will tell the user whether their answer was correct. We would also like to add more 3D models into a database, as well as more difficulty levels. Our application could also potentially be integrated into school curriculum, with each level corresponding to a different grade level. | ## Inspiration
We wanted to bring financial literacy into being a part of your everyday life while also bringing in futuristic applications such as augmented reality to really motivate people into learning about finance and business every day. We were looking at a fintech solution that didn't look towards enabling financial information to only bankers or the investment community but also to the young and curious who can learn in an interesting way based on the products they use everyday.
## What it does
Our mobile app looks at company logos, identifies the company and grabs the financial information, recent company news and company financial statements of the company and displays the data in an Augmented Reality dashboard. Furthermore, we allow speech recognition to better help those unfamiliar with financial jargon to better save and invest.
## How we built it
Built using wikitude SDK that can handle Augmented Reality for mobile applications and uses a mix of Financial data apis with Highcharts and other charting/data visualization libraries for the dashboard.
## Challenges we ran into
Augmented reality is very hard, especially when combined with image recognition. There were many workarounds and long delays spent debugging near-unknown issues. Moreover, we were building using Android, something that none of us had prior experience with which made it harder.
## Accomplishments that we're proud of
Circumventing challenges as a professional team and maintaining a strong team atmosphere no matter what to bring something that we believe is truly cool and fun-to-use.
## What we learned
Lots of things about Augmented Reality, graphics and Android mobile app development.
## What's next for ARnance
Potential to build in more charts, financials and better speech/chatbot abilities into out application. There is also a direction to be more interactive with using hands to play around with our dashboard once we figure that part out. | losing |
Hello and thank you for judging my project. I am listing below two different links and an explanation of what the two different videos are. Due to the time constraints of some hackathons, I have a shorter video for those who require a lower time. As default I will be placing the lower time video up above, but if you have time or your hackathon allows so please go ahead and watch the full video at the link below. Thanks!
[3 Minute Video Demo](https://youtu.be/8tns9b9Fl7o)
[5 Minute Demo & Presentation](https://youtu.be/Rpx7LNqh7nw)
For any questions or concerns, please email me at [[email protected]](mailto:[email protected])
## Inspiration
Resource extraction has tripled since 1970. That leaves us on track to run out of non-renewable resources by 2060. To fight this extremely dangerous issue, I used my app development skills to help everyone support the environment.
As a human and a person residing in this environment, I felt that I needed to used my technological development skills in order to help us take care of the environment better, especially in industrial countries such as the United States. In order to do my part in the movement to help sustain the environment. I used the symbolism of the LORAX to name LORAX app; inspired to help the environment.
\_ side note: when referencing firebase I mean firebase as a whole since two different databases were used; one to upload images and the other to upload data (ex. form data) in realtime. Firestore is the specific realtime database for user data versus firebase storage for image uploading \_
## Main Features of the App
To start out we are prompted with the **authentication panel** where we are able to either sign in with an existing email or sign up with a new account. Since we are new, we will go ahead and create a new account. After registering we are signed in and now are at the home page of the app. Here I will type in my name, email and password and log in. Now if we go back to firebase authentication, we see a new user pop up over here and a new user is added to firestore with their user associated data such as their **points, their user ID, Name and email.** Now lets go back to the main app. Here at the home page we can see the various things we can do. Lets start out with the Rewards tab where we can choose rewards depending on the amount of points we have.
If we press redeem rewards, it takes us to the rewards tab, where we can choose various coupons from companies and redeem them with the points we have. Since we start out with zero points, we can‘t redeem any rewards right now.
Let's go back to the home page.
The first three pages I will introduce are apart of the point incentive system for purchasing items that help the environment
If we press the view requests button, we are then navigated to a page where we are able to view our requests we have made in the past. These requests are used in order to redeem points from items you have purchased that help support the environment. Here we would we able to **view some details and the status of the requests**, but since we haven’t submitted any yet, we see there are none upon refreshing. Let’s come back to this page after submitting a request.
If we go back, we can now press the request rewards button. By pressing it, we are navigated to a form where we are able to **submit details regarding our purchase and an image of proof to ensure the user truly did indeed purchase the item**. After pressing submit, **this data and image is pushed to firebase’s realtime storage (for picture) and Firestore (other data)** which I will show in a moment.
Here if we go to firebase, we see a document with the details of our request we submitted and if we go to storage we are able to **view the image that we submitted**. And here we see the details. Here we can review the details, approve the status and assign points to the user based on their requests. Now let’s go back to the app itself.
Now let’s go to the view requests tab again now that we have submitted our request. If we go there, we see our request, the status of the request and other details such as how many points you received if the request was approved, the time, the date and other such details.
Now to the Footprint Calculator tab, where you are able to input some details and see the global footprint you have on the environment and its resources based on your house, food and overall lifestyle. Here I will type in some data and see the results. **Here its says I would take up 8 earths, if everyone used the same amount of resources as me.** The goal of this is to be able to reach only one earth since then Earth and its resources would be able to sustain for a much longer time. We can also share it with our friends to encourage them to do the same.
Now to the last tab, is the savings tab. Here we are able to find daily tasks we can simply do to no only save thousands and thousands of dollars but also heavily help sustain and help the environment. \**Here we have some things we can do to save in terms of transportation and by clicking on the saving, we are navigated to a website where we are able to view what we can do to achieve these savings and do it ourselves. \**
This has been the demonstration of the LORAX app and thank you for listening.
## How I built it
For the navigation, I used react native navigation in order to create the authentication navigator and the tab and stack navigators in each of the respective tabs.
## For the incentive system
I used Google Firebase’s Firestore in order to view, add and upload details and images to the cloud for reviewal and data transfer. For authentication, I also used **Google Firebase’s Authentication** which allowed me to create custom user data such as their user, the points associated with it and the complaints associated with their **user ID**. Overall, **Firebase made it EXTREMELY easy** to create a high level application. For this entire application, I used Google Firebase for the backend.
## For the UI
for the tabs such as Request Submitter, Request Viewer I used React-native-base library to create modern looking components which allowed me to create a modern looking application.
## For the Prize Redemption section and Savings Sections
I created the UI from scratch trialing and erroring with different designs and shadow effects to make it look cool. The user react-native-deeplinking to navigate to the specific websites for the savings tab.
## For the Footprint Calculator
I embedded the **Global Footprint Network’s Footprint Calculator** with my application in this tab to be able to use it for the reference of the user of this app. The website is shown in the **tab app and is functional on that UI**, similar to the website.
I used expo for wifi-application testing, allowing me to develop the app without any wires over the wifi network.
For the Request submission tab, I used react-native-base components to create the form UI elements and firebase to upload the data.
For the Request Viewer, I used firebase to retrieve and view the data as seen.
## Challenges I ran into
Some last second challenges I ran to was the manipulation of the database on Google Firebase. While creating the video in fact, I realize that some of the parameters were missing and were not being updated properly. I eventually realized that the naming conventions for some of the parameters being updated both in the state and in firebase got mixed up. Another issue I encountered was being able to retrieve the image from firebase. I was able to log the url, however, due to some issues with the state, I wasnt able to get the uri to the image component, and due to lack of time I left that off. Firebase made it very very easy to push, read and upload files after installing their dependencies.
Thanks to all the great documentation and other tutorials I was able to effectively implement the rest.
## What I learned
I learned a lot. Prior to this, I had not had experience with **data modelling, and creating custom user data points. \*\*However, due to my previous experience with \*\*firebase, and some documentation referencing** I was able to see firebase’s built in commands allowing me to query and add specific User ID’s to the the database, allowing me to search for data base on their UIDs. Overall, it was a great experience learning how to model data, using authentication and create custom user data and modify that using google firebase.
## Theme and How This Helps The Environment
Overall, this application used **incentives and educates** the user about their impact on the environment to better help the environment.
## Design
I created a comprehensive and simple UI to make it easy for users to navigate and understand the purposes of the Application. Additionally, I used previously mentioned utilities in order to create a modern look.
## What's next for LORAX (Luring Others to Retain our Abode Extensively)
I hope to create my **own backend in the future**, using **ML** and an **AI** to classify these images and details to automate the submission process and **create my own footprint calculator** rather than using the one provided by the global footprint network. | ## Inspiration
In today's age, people have become more and more divisive on their opinions. We've found that discussion nowadays can just result in people shouting instead of trying to understand each other.
## What it does
**Change my Mind** helps to alleviate this problem. Our app is designed to help you find people to discuss a variety of different topics. They can range from silly scenarios to more serious situations. (Eg. Is a Hot Dog a sandwich? Is mass surveillance needed?)
Once you've picked a topic and your opinion of it, you'll be matched with a user with the opposing opinion and put into a chat room. You'll have 10 mins to chat with this person and hopefully discover your similarities and differences in perspective.
After the chat is over, you ask you to rate the maturity level of the person you interacted with. This metric allows us to increase the success rate of future discussions as both users matched will have reputations for maturity.
## How we built it
**Tech Stack**
* Front-end/UI
+ Flutter and dart
+ Adobe XD
* Backend
+ Firebase
- Cloud Firestore
- Cloud Storage
- Firebase Authentication
**Details**
* Front end was built after developing UI mockups/designs
* Heavy use of advanced widgets and animations throughout the app
* Creation of multiple widgets that are reused around the app
* Backend uses gmail authentication with firebase.
* Topics for debate are uploaded using node.js to cloud firestore and are displayed in the app using specific firebase packages.
* Images are stored in firebase storage to keep the source files together.
## Challenges we ran into
* Initially connecting Firebase to the front-end
* Managing state while implementing multiple complicated animations
* Designing backend and mapping users with each other and allowing them to chat.
## Accomplishments that we're proud of
* The user interface we made and animations on the screens
* Sign up and login using Firebase Authentication
* Saving user info into Firestore and storing images in Firebase storage
* Creation of beautiful widgets.
## What we're learned
* Deeper dive into State Management in flutter
* How to make UI/UX with fonts and colour palates
* Learned how to use Cloud functions in Google Cloud Platform
* Built on top of our knowledge of Firestore.
## What's next for Change My Mind
* More topics and User settings
* Implementing ML to match users based on maturity and other metrics
* Potential Monetization of the app, premium analysis on user conversations
* Clean up the Coooooode! Better implementation of state management specifically implementation of provide or BLOC. | ## Inspiration 💡
"The first step toward change is awareness". This inspired us to make the younger generation visualize that each and every choice we make leaves a significant impact on the Earth. With a little more awareness and consciousness, thus ecoTrackAR aims in bringing that awareness to the younger generation through an innovative way by using AR and IoT integration.
## What it does ⚙️
ecoTrackAR is a web app that aims to provide the younger generation and students an innovative way to view how their choices are impacting the environment. The app essentially helps users visualize their carbon footprint in accordance with their everyday choices by accumulating points and presenting them in an AR format. The user would also be able to connect their devices such as a Smart watch which would track and send information to the app using IoT integration. The user would be able to see their forest grow every time they make better choices for the environment and would be able to redeem their points for gift cards.
## How we built it 🛠️
The frontend and design for the app was made with Figma. The backend and IoT integration was done using NodeJS for server management, Express was used as a web framework on top of NodeJS, Javascript, as the baseline programming language, Firebase functions for deployment, Firebase real-time database to host our database, and Postman for API testing. The AR/VR aspect was done using echoAR, Vectary and Unity.
## Challenges we ran into 🚧
One of the biggest challenges we ran into was when the echoAR platform server was down almost 12 hours before our submission period. This is why opted to use a different platform for temporary visualization called Vectary. Moreover, this was the first time everyone from our team worked on AR/VR so there was a lot of research involved.
Another one of the challenges we faced was how to think like an innovator, at the beginning of the project, we had an unclear idea of what we wanted to make, but had the tools we wanted to use. However, with the help of the workshops and mentors, we were able to brainstorm and eventually come up with a clear, concise idea for our hack, something that the team is very proud of.
## Accomplishments that we're proud of 🌟
We are proud of creating the idea that we had originally envisioned. We came into this hackathon with an open mind and were able to put our learning to the test and create something impactful for the community.
## What we learned 📖
We were able to learn and work with some new technologies for the first time. Munazza and Angel worked on echoAR, Unity and modelling for the first time. Ritvik and Piyush worked with firebase and node-red for the first time.
## What's next for ecoTrackAR 🚀
ecoTrackAR would like to include more integrations with common IoT devices, we are also looking to expand into more apps that would be accessible to everyone. Along with that, we would also like to implement more valuable offers that users would be able to redeem their points such as donating to a local charity. | winning |
# For the Kidz
The three of us were once part of an organization that provided after-school coding classes and lab-assisting services to local under-resourced middle and high schools. While we all enjoyed our individual experiences, we agreed that the lack of adequate hardware (ie. broken Chromebooks) as well as software that assumed certain basic tech skills (ie. typing) were not the best for teaching the particular group of under-resourced students in our classrooms.
Thus, this hackathon, we strived to meet two goals:
(a) Move the learning platform to a mobile device, which is more accessible to students.
(b) Make the results of the code more visual and clear via gamification and graphics.
We designed our app on Figma, then prototyped it on Invision Studio. In parallel, we used Android Studio, Java, and ButterKnife to develop the app.
Our largest challenges were in the development of the app, initially trying to leverage Google's Blockly Library in order to provide functional code blocks without reinventing the wheel. However, due to the lack of documentation and resources of individuals who used Blockly for Android applications (support for Android Dev. was deprecated), we decided to switch towards building everything from the ground up to create a prototype with basic functionality. On our coding journey, we learned about MVP architecture, modularizing our code with fragments, and implementing drag-and-drop features.
Our Invision Studio Demo:
[link](https://www.youtube.com/watch?v=upxkcIj16j0) | ## Inspiration
Software engineering and development have been among the most dynamically changing fields over the years. Our voice is one of our most powerful tools, and one which we often take for granted - we share ideas and inspiration and can communicate instantly on a global scale. However, according to recent statistics, 3 out of 1000 children are born with a detectable level of hearing loss in one or both ears. While many developers have pursued methods of converting sign language to English, we identified a need for increased accessibility in the opposite way. In an effort to propel forth the innovators of tomorrow, we wanted to develop a learning tool for children with hearing impairments to accelerate their ability to find their own voice.
## What it does
Our Sign Language Trainer is primarily designed to teach children sign language by introducing them to the ASL alphabet with simple words and effective visuals. Additionally, the application implements voice-to-text to provide feedback on word pronunciation to allow children with a hearing impairment to practice speaking from a young age.
## How we built it
The program was built using node.js as well as HTML. We also made a version of the web app for Android using Java.
## Challenges we ran into
The majority of our team was not familiar with HTML nor node.js, which caused many roadblocks during the majority of the hack. Throughout the competition, although we had an idea of what we wanted to accomplish, our plans continued to change as we learned new things about the available software. The main challenge that we dealt with was the large amount of learning involved with creating a website from scratch as we kept running into new problems that we had never seen before. We also had to overcome issues with hosting the domain using Firebase.
## Accomplishments that we proud of
While also our largest challenge, the amount of content we had to learn about web development was our favorite accomplishment. We were really happy to have completed a product from scratch with almost no prior knowledge of the tools we used to build it. The addition of creating a phone app for our project was the cherry on top.
## What I learned
1. Learned about node.js, HTML, and CSS
2. The ability to understand and structure code through analysis and documentation
3. How to cooperate as a team and brainstorm effectively
4. GitHub was new for some members of the team, but by the end, we were all using it like a pro
5. How to implement android applications with android studio
6. How to fully develop our own website
## What's next for floraisigne
If we were to continue this project, we would try and add additional hardware to exponentially increase the accuracy of the solution. We would also implement a neural network system to allow the product to convert sign language into English Language. | ## Inspiration
The moment we formed our team, we all knew we wanted to create a hack for social good which could even aid in a natural calamity. According to the reports of World Health Organization, there are 285 million people worldwide who are either blind or partially blind. Due to the relevance of this massive issue in our world today, we chose to target our technology towards people with visual impairments. After discussing several ideas, we settled on an application to ease communication and navigation during natural calamities for people with visual impairment.
## What it does
It is an Android application developed to help the visually impaired navigate through natural calamities using peer to peer audio and video streaming by creating a mesh network that does not rely on wireless or cellular connectivity in the event of a failure. In layman terms, it allows people to talk with and aid each other during a disastrous event in which the internet connection is down.
## How we built it
We decided to build an application on the Android operating system. Therefore, we used the official integrated development environment: Android Studio. By integrating Google's nearby API into our java files and using NFC and bluetooth resources, we were able to establish a peer to peer communication network between devices requiring no internet or cellular connectivity.
## Challenges we ran into
Our entire concept of communication without an internet or cellular network was a tremendous challenge. Specifically, the two greatest challenges were to ensure the seamless integration of audio and video between devices. Establishing a smooth connection for audio was difficult as it was being streamed live in real time and had to be transferred without the use of a network. Furthermore, transmitting a video posed to be even a greater challenge. Since we couldn’t transmit data of the video over the network either, we had to convert the video to bytes and then transmit to assure no transmission loss. However, we persevered through these challenges and, as a result, were able to create a peer to peer network solution.
## Accomplishments that we're proud of
We are all proud of having created a fully implemented application that works across the android platform. But we are even more proud of the various skills we gained in order to code a successful peer to peer mesh network through which we could transfer audio and video.
## What we learned
We learned how to utilize Android Studio to create peer to peer mesh networks between different physical devices. More importantly, we learned how to live stream real time audio and transmit video with no internet connectivity.
## What's next for Navig8.
We are very proud of all that we have accomplished so far. However, this is just the beginning. Next, we would like to implement location-based mapping. In order to accomplish this, we would create a floor plan of a certain building and use cameras such as Intel’s RealSense to guide visually impaired people to the nearest exit point in a given building. | losing |
## Inspiration and What it does
We often go out with a lot of amazing friends for trips, restaurants, tourism, weekend expedition and what not. Every encounter has an associated messenger groupchat. We want a way to split money which is better than to discuss on the groupchat, ask people their public key/usernames and pay on a different platform. We've integrated them so that we can do transactions and chat at a single place.
We (our team) believe that **"The Future of money is Digital Currency "** (-Bill Gates), and so, we've integrated payment with Algorand's AlgoCoins with the chat. To make the process as simple as possible without being less robust, we extract payment information out of text as well as voice messages.
## How I built it
We used Google Cloud NLP and IBM Watson Natural Language Understanding apis to extract the relevant information. Voice messages are first converted to text using Rev.ai speech-to-text. We complete the payment using the blockchain set up with Algorand API. All scripts and database will be hosted on the AWS server
## Challenges I ran into
It turned out to be unexpectedly hard to accurately find out the payer and payee. Dealing with the blockchain part was a great learning experience.
## Accomplishments that I'm proud of
that we were able to make it work in less than 24 hours
## What I learned
A lot of different APIs
## What's next for Mess-Blockchain-enger
Different kinds of currencies, more messaging platforms | ## Inspiration
With the excitement of blockchain and the ever growing concerns regarding privacy, we wanted to disrupt one of the largest technology standards yet: Email. Email accounts are mostly centralized and contain highly valuable data, making one small breach, or corrupt act can serious jeopardize millions of people. The solution, lies with the blockchain. Providing encryption and anonymity, with no chance of anyone but you reading your email.
Our technology is named after Soteria, the goddess of safety and salvation, deliverance, and preservation from harm, which we believe perfectly represents our goals and aspirations with this project.
## What it does
First off, is the blockchain and message protocol. Similar to PGP protocol it offers \_ security \_, and \_ anonymity \_, while also **ensuring that messages can never be lost**. On top of that, we built a messenger application loaded with security features, such as our facial recognition access option. The only way to communicate with others is by sharing your 'address' with each other through a convenient QRCode system. This prevents anyone from obtaining a way to contact you without your **full discretion**, goodbye spam/scam email.
## How we built it
First, we built the block chain with a simple Python Flask API interface. The overall protocol is simple and can be built upon by many applications. Next, and all the remained was making an application to take advantage of the block chain. To do so, we built a React-Native mobile messenger app, with quick testing though Expo. The app features key and address generation, which then can be shared through QR codes so we implemented a scan and be scanned flow for engaging in communications, a fully consensual agreement, so that not anyone can message anyone. We then added an extra layer of security by harnessing Microsoft Azures Face API cognitive services with facial recognition. So every time the user opens the app they must scan their face for access, ensuring only the owner can view his messages, if they so desire.
## Challenges we ran into
Our biggest challenge came from the encryption/decryption process that we had to integrate into our mobile application. Since our platform was react native, running testing instances through Expo, we ran into many specific libraries which were not yet supported by the combination of Expo and React. Learning about cryptography and standard practices also played a major role and challenge as total security is hard to find.
## Accomplishments that we're proud of
We are really proud of our blockchain for its simplicity, while taking on a huge challenge. We also really like all the features we managed to pack into our app. None of us had too much React experience but we think we managed to accomplish a lot given the time. We also all came out as good friends still, which is a big plus when we all really like to be right :)
## What we learned
Some of us learned our appreciation for React Native, while some learned the opposite. On top of that we learned so much about security, and cryptography, and furthered our beliefs in the power of decentralization.
## What's next for The Soteria Network
Once we have our main application built we plan to start working on the tokens and distribution. With a bit more work and adoption we will find ourselves in a very possible position to pursue an ICO. This would then enable us to further develop and enhance our protocol and messaging app. We see lots of potential in our creation and believe privacy and consensual communication is an essential factor in our ever increasingly social networking world. | ## Inspiration
On October 17, 2018 the government of Canada legalized marijuana at the federal level. While this has its benefits, it has one big drawback; increasing the risk of accidents caused by intoxicated/high driving. According to MADD Canada, an estimated 55.4% of road crash deaths were as a result of alcohol and/or drugs with cannabis being the most common drug. While breathalyzers exist to detect drunk drivers, there are currently no accurate roadside tests for detection of cannabis use in drivers. That is why we wanted to develop an application that uses state of the art technology to accurately determine whether an individual is intoxicated or not from cannabis use.
## What it does
Using 3 roadside tests, law enforcement will be able to accurately test for cannabis use in a driver. The three tests include one for common facial expressions associated with cannabis use, such as red eyes and relaxed face muscles, eye pupil dilation changes to light, and a balance test. After the three assessments, D.U.High will give a percentage confidence for whether or not the individual is high or not.
## How we built it
Our team created an Android and iOS mobile application using react-native, which communicates with a NodeJs and Flask API, to get data from custom Azure machine learning models including computer vision and video analytics.
## Challenges we ran into
Training the model to accurately predict the three different tests took quite a while, as the data we used was personally gathered from the internet to insure its quality. Also, hosting the flask REST API proved to be a challenge, as nobody in our team had worked with deployable REST APIs before. For our mobile engineer, it was a challenge to learn all that was needed, as she had attended a hackathon before. This was an incredible learning opportunity for the both of us.
## Accomplishments that we're proud of
We're proud of having created an MVP, from concept to production, in just 24 hours! Both of us set out to create a robust and difficult to make application, so that we could use it as a learning opportunity while solving a critical issue in society. In particular, we're proud of:
* Building a fully functional dual platform mobile application developed using React-Native
* Using several custom Microsoft Azure APIs and Machine Learning Models for facial, video and emotion detection.
* Developing a custom machine learning recommendation engine using custom picked data for categorization .
## What we learned
We learned how to connect mobile applications to custom built REST APIs, to host said REST APIs online, how to get past CORS, and how to train custom vision AI models.
## What's next for D.U.High
We wish to integrated D.U.High in real law enforcement sobriety tests, in order to both improve the machine learning model, and to make sure that the roads are safer from intoxicated drivers. We would also like to develop more tests, including speech analytics and dexterity tests. With more data in our model, we hope to one day make D.U.High an accurate testing tool for law enforcement. | partial |
**Finding a problem**
Education policy and infrastructure tend to neglect students with accessibility issues. They are oftentimes left on the backburner while funding and resources go into research and strengthening the existing curriculum. Thousands of college students struggle with taking notes in class due to various learning disabilities that make it difficult to process information quickly or write down information in real time.
Over the past decade, Offices of Accessible Education (OAE) have been trying to help support these students by hiring student note-takers and increasing ASL translators in classes, but OAE is constrained by limited funding and low interest from students to become notetakers.
This problem has been particularly relevant for our TreeHacks group. In the past year, we have become notetakers for our friends because there are not enough OAE notetakers in class. Being note writers gave us insight into what notes are valuable for those who are incredibly bright and capable but struggle to write. This manual process where we take notes for our friends has helped us become closer as friends, but it also reveals a systemic issue of accessible notes for all.
Coming into this weekend, we knew note taking was an especially interesting space. GPT3 had also been on our mind as we had recently heard from our neurodivergent friends about how it helped them think about concepts from different perspectives and break down complicated topics.
**Failure and revision**
Our initial idea was to turn videos into transcripts and feed these transcripts into GPT-3 to create the lecture notes. This idea did not work out because we quickly learned the transcript for a 60-90 minute video was too large to feed into GPT-3.
Instead, we decided to incorporate slide data to segment the video and use slide changes to organize the notes into distinct topics. Our overall idea had three parts: extract timestamps the transcript should be split at by detecting slide changes in the video, transcribe the text for each video segment, and pass in each segment of text into a gpt3 model, fine-tuned with prompt engineering and examples of good notes.
We ran into challenges every step of the way as we worked with new technologies and dealt with the beast of multi-gigabyte video files. Our main challenge was identifying slide transitions in a video so we could segment the video based on these slide transitions (which signified shifts in topics). We initially started with heuristics-based approaches to identify pixel shifts. We did this by iterating through frames using OpenCV and computing metrics such as the logarithmic sum of the bitwise XORs between images. This approach resulted in several false positives because the compressed video quality was not high enough to distinguish shifts in a few words on the slide. Instead, we trained a neural network using PyTorch on both pairs of frames across slide boundaries and pairs from within the same slide. Our neural net was able to segment videos based on individual slides, giving structure and organization to an unwieldy video file. The final result of this preprocessing step is an array of timestamps where slides change.
Next, this array was used to segment the audio input, which we did using Google Cloud’s Speech to Text API. This was initially challenging as we did not have experience with cloud-based services like Google Cloud and struggled to set up the various authentication tokens and permissions. We also ran into the issue of the videos taking a very long time, which we fixed by splitting the video into smaller clips and then implementing multithreading approaches to run the speech to text processes in parallel.
**New discoveries**
Our greatest discoveries lay in the fine-tuning of our multimodal model. We implemented a variety of prompt engineering techniques to coax our generative language model into producing the type of notes we wanted from it. In order to overcome the limited context size of the GPT-3 model we utilized, we iteratively fed chunks of the video transcript into the OpenAI API at once. We also employed both positive and negative prompt training to incentivize our model to produce output similar to our desired notes in the output latent space. We were careful to manage the external context provided to the model to allow it to focus on the right topics while avoiding extraneous tangents that would be incorrect. Finally, we sternly warned the model to follow our instructions, which did wonders for its obedience.
These challenges and solutions seem seamless, but our team was on the brink of not finishing many times throughout Saturday. The worst was around 10 PM. I distinctly remember my eyes slowly closing, a series of crumpled papers scattered nearby the trash can. Each of us was drowning in new frameworks and technologies. We began to question, how could a group of students, barely out of intro-level computer science, think to improve education.
The rest of the hour went in a haze until we rallied around a text from a friend who sent us some amazing CS notes we had written for them. Their heartfelt words of encouragement about how our notes had helped them get through the quarter gave us the energy to persevere and finish this project.
**Learning about ourselves**
We found ourselves, after a good amount of pizza and a bit of caffeine, diving back into documentation for react, google text to speech, and docker. For hours, our eyes grew heavy, but their luster never faded. More troubles arose. There were problems implementing a payment system and never-ending CSS challenges. Ultimately, our love of exploring technologies we were unfamiliar with helped fuel our inner passion.
We knew we wanted to integrate Checkbook.io’s unique payments tool, and though we found their API well architectured, we struggled to connect to it from our edge-compute centric application. Checkbook’s documentation was incredibly helpful, however, and we were able to adapt the code that they had written for a NodeJS server-side backend into our browser runtime to avoid needing to spin up an entirely separate finance service. We are thankful to Checkbook.io for the support their team gave us during the event!
Finally, at 7 AM, we connected the backend of our website with the fine-tuned gpt3 model. I clicked on CS106B and was greeted with an array of lectures to choose from. After choosing last week’s lecture, a clean set of notes were exported in LaTeX, perfect for me to refer to when working on the PSET later today!
We jumped off of the couches we had been sitting on for the last twelve hours and cheered. A phrase bounced inside my mouth like a rubber ball, “I did it!”
**Product features**
Real time video to notes upload
Multithreaded video upload framework
Database of lecture notes for popular classes
Neural network to organize video into slide segments
Multithreaded video to transcript pipeline | ## Inspiration
Cliff is dyslexic, so reading is difficult and slow for him and makes school really difficult.
But, he loves books and listens to 100+ audiobooks/yr. However, most books don't have an audiobook, especially not textbooks for schools, and articles that are passed out in class. This is an issue not only for the 160M people in the developed world with dyslexia but also for the 250M people with low vision acuity.
After moving to the U.S. at age 13, Cliff also needed something to help him translate assignments he didn't understand in school.
Most people become nearsighted as they get older, but often don't have their glasses with them. This makes it hard to read forms when needed. Being able to listen instead of reading is a really effective solution here.
## What it does
Audiobook maker allows a user to scan a physical book with their phone to produced a digital copy that can be played as an audiobook instantaneously in whatever language they choose. It also lets you read the book with text at whatever size you like to help people who have low vision acuity or are missing their glasses.
## How we built it
In Swift and iOS using Google ML and a few clever algorithms we developed to produce high-quality scanning, and high quality reading with low processing time.
## Challenges we ran into
We had to redesign a lot of the features to make the app user experience flow well and to allow the processing to happen fast enough.
## Accomplishments that we're proud of
We reduced the time it took to scan a book by 15X after one design iteration and reduced the processing time it took to OCR (Optical Character Recognition) the book from an hour plus, to instantaneously using an algorithm we built.
We allow the user to have audiobooks on their phone, in multiple languages, that take up virtually no space on the phone.
## What we learned
How to work with Google ML, how to work around OCR processing time. How to suffer through git Xcode Storyboard merge conflicts, how to use Amazon's AWS/Alexa's machine learning platform.
## What's next for Audiobook Maker
Deployment and use across the world by people who have Dyslexia or Low vision acuity, who are learning a new language or who just don't have their reading glasses but still want to function. We envision our app being used primarily for education in schools - specifically schools that have low-income populations who can't afford to buy multiple of books or audiobooks in multiple languages and formats.
## Treehack themes
treehacks education Verticle > personalization > learning styles (build a learning platform, tailored to the learning styles of auditory learners) - I'm an auditory learner, I've dreamed a tool like this since the time I was 8 years old and struggling to learn to read. I'm so excited that now it exists and every student with dyslexia or a learning difference will have access to it.
treehacks education Verticle > personalization > multilingual education ( English as a second-language students often get overlooked, Are there ways to leverage technology to create more open, multilingual classrooms?) Our software allows any book to become polylingual.
treehacks education Verticle > accessibility > refugee education (What are ways technology can be used to bring content and make education accessible to refugees? How can we make the transition to education in a new country smoother?) - Make it so they can listen to material in their mother tongue if needed. or have a voice read along with them in English. Make it so that they can carry their books wherever they go by scanning a book once and then having it for life.
treehacks education Verticle >language & literacy > mobile apps for English literacy (How can you build mobile apps to increase English fluency and literacy amongst students and adults?) -One of the best ways to learn how to read is to listen to someone else doing it and to follow yourself. Audiobook maker lets you do that. From a practical perspective - learning how to read is hard and it is difficult for an adult learning a new language to achieve proficiency and a high reading speed. To bridge that gap Audiobook Maker makes sure that every person can and understand and learn from any text they encounter.
treehacks education Verticle >language & literacy > in-person learning (many people want to learn second languages) - Audiobook maker allows users to live in a foreign countrys and understand more of what is going on. It allows users to challenge themselves to read or listen to more of their daily work in the language they are trying to learn, and it can help users understand while they studying a foreign language in the case that the meaning of text in a book or elsewhere is not clear.
We worked a lot with Google ML and Amazon AWS. | ## Inspiration
Many students have learning disabilities that negatively impact their education. After doing some research, we learned that there were nearly 150,000 students in Ontario alone!
UofT offers note-taking services to students that are registered with accessibility services, but the service relies on the quality and availability of fellow student volunteers. After learning about Cohere, we realized that we could create a product that could more reliably provide this support to students.
## What it does
Scribes aims to address these issues by providing a service that simplifies and enhances accessibility in the note-taking process. The web app was built with accessibility as the main priority, delivered through high contrast, readable fonts and clear hierarchies.
To start saving lecture sessions, the user can simply sign up for free with their school email! Scribes allows students to record live lectures on either their phone or laptop, and then receive a live transcript that can be highlighted and commented on in real-time. Once class is done, the annotated transcript is summarized by Cohere's advanced NLP algorithms to provide an easily digestible overview of the session material.
The student is also presented with definitions and additional context to better understand key terms and difficult concepts. The recorded sessions and personalized notes are always accessible through the student's tailored dashboard, where they can organize their study material through the selection of tags and folders.
## How we built it
*Designers:*
* Conducted research to gain a better understanding of our target demographic, their pain points, and our general project scope.
* Produced wireframes to map out the user experience.
* Gathered visual inspiration and applied our own design system toward creating a final design for the web app.
*Devs:*
* Used Python and AssemblyAI API to convert audio files into text transcription
* Used Cohere to summarize text, and adjusted hyperparameters in order to generate accurate and succinct summarizations of the input text
* Used Flask to create a backend API to send data from the frontend to the backend and to retrieve data from the Cohere API
*Team:*
* Brainstormed potential project ideas and features specific to this flow.
* Shared ideas and portions of the project to combine into one overall project.
## Challenges we ran into
* Troubleshooting with code: some of our earlier approaches required components that were out of date/no longer hosted, and we had to adjust by shifting the input type
* The short timeframe was a barrier preventing us from implementing stylized front-end code. To make the most out of our time, we designed the interface on Figma, developed the back-end to transcribe the sessions, and created a simple front-end document to showcase the functionality and potential methods of integration.
* Figuring out which platform and which technologies to use such that our project would be reflective of our original idea, easy and fast to develop, and also extensible for future improvements
## Accomplishments that we're proud of
Over the course of 36 hours, we’ve managed to work together and create an effective business pitch, a Figma prototype for our web app, and a working website that transcribes and summarizes audio files.
## What we learned
Our team members learned a great deal this weekend, including: creating pitches, working in a tight timeframe, networking, learning about good accessibility practices in design when designing for those with learning needs, how to work with and train advanced machine learning models, python dependencies, working with APIs and Virtual Environments.
## What's next for Scribes
If provided with more time, we plan to implement other features — such as automatically generated cue cards, a bot to answer questions regarding session content, and collaborative notes. As we prioritize accessibility and ease-of-use, we would also conduct usability testing to continue ensuring that our users are at the forefront of our product.
To cover these additional goals, we may apply to receive funding dedicated to accessibility, such as the Government of Canada’s Enabling Education Fund. We could also partner with news platforms, wiki catalogs, and other informational websites to receive more funding and bridge the gap in accessing more knowledge online. We believe that everyone is equally deserving of receiving proper access to education and the necessary support it takes to help them make the most of it. | winning |
## Inspiration
RoomRival drew inspiration from the desire to revolutionize indoor gaming experiences. We wanted to create an innovative and dynamic platform that transforms ordinary spaces into competitive battlegrounds, bringing people together through technology, strategy, and social interaction while integrating the website with MappedIn API.
## What it does
RoomRival is a fast-paced indoor game where players strategically claim rooms using QR codes (corresponds to each room), and tries to claim as many as possible. Players are able to steal rooms that have already been claimed by other players. The real-time leaderboard shows the ranking of players and the points they have gained. In addition, the real-time map also shows the room status of the map - whether a room is claimed (the claimed room is coloured according to the players colour) or not claimed.
## How we built it
Crafting RoomRival involved using React for dynamic UI, Socket.io for real-time communication, JavaScript for versatile functionality, QR-Scanner for efficient room claiming once a player scans, Node.js for a robust backend, and MappedIn for precise mapping and visualization. This comprehensive tech stack allowed us to seamlessly integrate innovative features, overcoming challenges and delivering an immersive indoor gaming experience.
## Challenges we ran into
Getting Socket.io up and running was tough, and we struggled with creating a real-time mapping system using MappedIn. Hosting and making the website work on phones posed some challenges, as well as making sure the dynamic look of the website worked well with the technical stuff behind it. Overcoming these issues needed us to work together and stick to making RoomRival awesome.
## Accomplishments that we're proud of
We find satisfaction in successfully blending technology and real-world gaming, transforming RoomRival into an engaging and dynamic platform that captivates players. Our accomplishment lies in creating an immersive gaming experience that seamlessly fuses innovation with entertainment.
## What we learned
Optimizing QR functionality with React, real-time communication with Socket.io, and backend operations with Node.js were integral lessons. Challenges in setting up Socket.io and real-time mapping with MappedIn, along with hosting dynamics, provided valuable insights. These learnings contributed to the successful creation of RoomRival.
## What's next for RoomRival
RoomRival's future includes ongoing refinement, feature enhancements, and expanding the gaming community. We aim to introduce new challenges and expanding the map to not just UBC campus but anywhere the user wants, as well as explore opportunities for partnerships to continually elevate the RoomRival experience. | ## Inspiration
We were inspired by websites such as backyard.co which allow users to have video chats and play various games together. However one of the main issues with websites such as these or any video chat rooms such as zoom is that people are reluctant to put on their video. So to combat this issue, we wanted to create a similar website which encouraged people to turn on their video cameras, by making games that heavily relied on or used videos to function.
## What it does
Right now, the website only allows for the creation of multiple rooms, each room allowing up to 200 participants to join and share screen, use the chat box, and of course, share video and audio.
## How we built it
We used a combination of javascript api, react frontend, and node express backend. We connected to the cockroachDB in hopes of storing active user sessions. We also used heroku to deploy the site. To get the videos to work we used the Daily.co API.
## Challenges we ran into
One of the earliest challenges we ran into was learning how to use the Daily.co API. Connecting it to the express server we created and connecting the server to the front end took a good portion of our time. The biggest challenge we ran into however was using cockroachDB. We had many issues just connecting to the database and seeing as neither of us had any prior knowledge or experience with cockroachDB were we unable to get more use out of the database in the time given.
## Accomplishments that we're proud of
Setting up the video call and chat system using the Daily.co API. We also set up the infrastructure to expand our app with games.
## What we learned
As this was our first time using cockroachDB, react, and express we learned a lot about developing a full stack project and using APIs. We learned how to connect a backend server to the front end and how to connect to the database as well as heroku deployment.
## What's next for bestdomaingetsfree.tech
Our next steps would be configuring the database to store active sessions and to implement the games we have created.
We purchased the domain name but domain.com had a review process we have to wait for so at the time of submission the domain is not working. | ## Inspiration
We were inspired by the topic of UofTHacks X theme **Exploration**. The game itself requires **players to explore** all the clues inside the rooms. We also want **"explore" ourselves** in UofTHacks X. Since our team does not have experience in game development, we also witness our 24-hour-accomplishment in this new area.
We were inspired by **Metaverse** AND **Virtual Reality (VR)**. We believe that Metaverse will be the next generation of the Internet. Metaverse is a collective virtual sharing space. Metaverse is formed by a combination of physical reality, augmented reality (AR), and virtual reality (VR) to enable users to interact virtually. VR is widely used in game development. Therefore, we decided to design a VR game.
## What it does
Escape room is a first-person, multiplayer VR game that allows users to discover clues, solve puzzles, and accomplish tasks in rooms in order to accomplish a specific goal in a limited amount of time.
## How we built it
We found the 3D models, and animations from the Unity asserts store and import the models to different scenes in our project. We used **Unity** as our development platform and used **GitHub** to manage our project. In order to allow multiplay in our game, we used the **photon engine**. For our VR development, we used the **OpenXR** plug-in in Unity.
## Challenges we ran into
One of the challenges we ran into was setting up the VR devices. We used **Valve Index** as our VR devices. As Valve Index only supports DisplayPort output, but our laptop only supports HDMI input. We spent lots of time looking for the adapter and could not find one. After asking for university laptops, and friends, we found a DisplayPort input-supportive device.
Another challenge we have is that we are not experienced in game development. And we start our project by script. However, we find awesome tutorials on YouTube and learn game development in a short period of time.
## Accomplishments that we're proud of
We are proud of allowing multiplayer in our game, we learned the photon engine within one hour and applied it in our project. We are also proud of creating a VR game using the OpenXR toolkit with no previous experience in game development.
## What we learned
We learned about Unity and C# from YouTube. We also learned the photon engine that allows multiuser play in our game. Moreover, we learned the OpenXR plug-in for our VR development. To better manage our project, we also learned more about GitHub.
## What's next for Escape Room
We want to allow users to self-design their rooms and create puzzles by themselves.
We plan to design more puzzles in our game.
We also want to improve the overall user experience by allowing our game runs smoothly. | partial |
# Trusty Paws
## Inspiration
We believe that every pet deserves to find a home. Animal shelters have often had to euthanize animals due to capacity issues and lack of adoptions. Our objective with Trusty Paws is to increase the rate of adoptions of these animals by increasing the exposure of shelters to potential pet owners as well as to provide an accessible platform for users to browse and adopt their next best friend.
Trusty Paws also aims to be an online hub for all things related to your pet, from offering an online marketplace for pet products to finding a vet near you.
## What it does
Our vision for Trusty Paws is to create a platform that brings together all the players that contribute to the health and happiness of our pawed friends, while allowing users to support local businesses. Each user, shelter, seller, and veterinarian contributes to a different aspect of Trusty Paws.
**Users**:
Users are your everyday users who own pets or are looking to adopt pets. They will be able to access the marketplace to buy items for their pets, browse through each shelter's pet profiles to fill out adoption requests, and find the nearest pet clinic.
**Shelters**:
Shelters accounts will be able to create pet profiles for each of their animals that are up for adoption! Each pet will have its own profile that can be customized with pictures and other fields providing further information on them. The shelter receives an adoption request form each time an user applies to adopt one of their pets.
**Sellers**:
Sellers will be able to set up stores showing all of their product listings, which include, but are not limited to, food, toys, accessories, and many other. Our marketplace will provide the opportunity for local businesses that have been affected by Covid-19 to reach their target audience while abiding by health and safety guidelines. For users, it will be a convenient way to satisfy all the needs of their pet in one place. Finally, our search bar will allow users to search for specific items for a quick and efficient shopping experience.
**Veterinarians**:
Veterinarians will be able to set up a profile for their clinic, with all the pertinent information such as their opening hours, services provided, and location.
## How we built it
For the front-end, React., Bootstrap and Materialized CSS were used to acquire the visual effects of the current website. In fact, the very first step we undertook was to draft an initial prototype of the product on Figma to ensure all the requirements and required features were met. After a few iterations of redesigning, we each dove into developing the necessary individual components, forms, and pages for the website. Once all components were completed, the next step was to route pages together in order to achieve a seamless navigation of the website.
We used Firebase within Node.Js to implement a framwork for the back-end. Using Firebase, we implemented a NoSQL database using Cloud Firestore. Data for users (all types), pets, products, and adoption forms along with their respective fields were stored as documents in their respective collections.
Finally, we used Google's Distance Matrix API to compute distances between two addresses and find the nearest services when necessary, such as the closest vet clinics or the closest shelters.
## Challenges we ran into
Although we were successful at accomplishing the major features of the website, we encountered many challenges throughout the weekend. As we started working on Trusty Paws, we realized that the initial implementation was not as user-friendly as we wanted it to be. We then decided to take a step back and to return to the initial design phase. Another challenge we ran into was that most of the team was unfamiliar with the development tools necessary for this project, such as Firebase, Node.Js, bootstrap, and redux.
## Accomplishments that we're proud of
We are proud that our team learned so much over the course of a few days.
## What's next for Trusty Paws
We want to keep improving our users' experience by optimizing the current features. We also want to improve the design and user friendliness of the interface. | ## Inspiration
One of our own member's worry about his puppy inspired us to create this project, so he could keep an eye on him.
## What it does
Our app essentially monitors your dog(s) and determines their mood/emotional state based on their sound and body language, and optionally notifies the owner about any changes in it. Specifically, if the dog becomes agitated for any reasons, manages to escape wherever they are supposed to be, or if they fall asleep or wake up.
## How we built it
We built the behavioral detection using OpenCV and TensorFlow with a publicly available neural network. The notification system utilizes the Twilio API to notify owners via SMS. The app's user interface was created using JavaScript, CSS, and HTML.
## Challenges we ran into
We found it difficult to identify the emotional state of the dog using only a camera feed. Designing and writing a clean and efficient UI that worked with both desktop and mobile platforms was also challenging.
## Accomplishments that we're proud of
Our largest achievement was determining whether the dog was agitated, sleeping, or had just escaped using computer vision. We are also very proud of our UI design.
## What we learned
We learned some more about utilizing computer vision and neural networks.
## What's next for PupTrack
KittyTrack, possibly
Improving the detection, so it is more useful for our team member | ## Inspiration
We encounter dozens of potential furry friends every day when we browse the Internet. We believe that each one of these interactions has the potential to lead to a successful adoption of an animal in need. We also believe in the power of human emotions as a call to action. We built Pupper to leverage your facial expression information to suggest the adoption of pets nearby, which an important component of Phase I of the hurdles that Best Friends identifies which currently deter the adoption of animals in need.
## What it does
Pupper shows you a feed of puppies while recording your facial expression via the webcam. Using facial landmark information, we are able to guess what type of emotional response you have to the puppy. We can track the likelihood of joy, sorrow, anger and surprise. We calculate a weighted score of these emotions to determine whether or not you should adopt a pet of the same breed.
## How I built it
React and Redux with calls to a variety of APIs.
## Challenges I ran into
Integrating and testing the emotion recognition API that we used.
## Accomplishments that I'm proud of
It works!
## What I learned
How awesome React is when developing web apps.
## What's next for Pupper
We want to make it much more flexible so that this can be used as an extension to Instagram, Google Search and basically any other website that shows you pets or puppies so that we maximize the number of succesful adoptions that Pupper encourages. | winning |
Yeezy Vision is a Chrome extension that lets you view the web from the eyes of the 21st century's greatest artist - Kanye West. Billions of people have been deprived of the life changing perspective that Mr. West has cultivated. This has gone on for far too long. We had to make a change.
Our extension helps the common man view the world as Yeezus does. Inferior text is replaced with Kanye-based nouns, less important faces are replaced by Kanye's face and links are replaced by the year of Ye - #2020.
## How we built it
We split the functionality to 4 important categories: Image replacement, text and link replacement, website and chrome extension. We did the image and text replacement using http post and gets from the microsoft azure server, bootstrap for the website and wrote a replacement script in javascript.
## What's next for Yeezy Vision
Imma let you finish, but Yeezy Vision will be THE GREATEST CHROME EXTENSION OF ALL TIME. Change your world today at yeezyvision.tech. | ## Inspiration
Nowadays, large corporations are spending more and more money nowadays on digital media advertising, but their data collection tools have not been improving at the same rate. Nike spent over $3.03 billion on advertising alone in 2014, which amounted to approximately $100 per second, yet they only received a marginal increase in profits that year. This is where Scout comes in.
## What it does
Scout uses a webcam to capture facial feature data about the user. It sends this data through a facial recognition engine in Microsoft Azure's Cognitive Services to determine demographics information, such as gender and age. It also captures facial expressions throughout an Internet browsing session, say a video commercial, and applies sentiment analysis machine learning algorithms to instantaneously determine the user's emotional state at any given point during the video. This is also done through Microsoft Azure's Cognitive Services. Content publishers can then aggregate this data and analyze it later to determine which creatives were positive and which creatives generated a negative sentiment.
Scout follows an opt-in philosophy, so users must actively turn on the webcam to be a subject in Scout. We highly encourage content publishers to incentivize users to participate in Scout (something like $100/second) so that both parties can benefit from this platform.
We also take privacy very seriously! That is why photos taken through the webcam by Scout are not persisted anywhere and we do not collect any personal user information.
## How we built it
The platform is built on top of a Flask server hosted on an Ubuntu 16.04 instance in Azure's Virtual Machines service. We use nginx, uWSGI, and supervisord to run and maintain our web application. The front-end is built with Google's Materialize UI and we use Plotly for complex analytics visualization.
The facial recognition and sentiment analysis intelligence modules are from Azure's Cognitive Services suite, and we use Azure's SQL Server to persist aggregated data. We also have an Azure Chatbot Service for data analysts to quickly see insights.
## Challenges we ran into
**CORS CORS CORS!.** Cross-Origin Resource Sharing was a huge pain in the head for us. We divided the project into three main components: the Flask backend, the UI/UX visualization, and the webcam photo collection+analysis. We each developed our modules independently of each other, but when we tried to integrate them together, we ran into a huge number of CORS issues with the REST API endpoints that were on our Flask server. We were able to resolve this with a couple of extra libraries but definitely a challenge figuring out where these errors were coming from.
SSL was another issue we ran into. In 2015, Google released a new WebRTC Policy that prevented webcam's from being accessed on insecure (HTTP) sites in Chrome, with the exception of localhost. This forced us to use OpenSSL to generate self-signed certificates and reconfigure our nginx routes to serve our site over HTTPS. As one can imagine, this caused havoc for our testing suites and our original endpoints. It forced us to resift through most of the code we had already written to accommodate this change in protocol. We don't like implementing HTTPS, and neither does Flask apparently. On top of our code, we had to reconfigure the firewalls on our servers which only added more time wasted in this short hackathon.
## Accomplishments that we're proud of
We were able to multi-process our consumer application to handle the massive amount of data we were sending back to the server (2 photos taken by the webcam each second, each photo is relatively high quality and high memory).
We were also able to get our chat bot to communicate with our REST endpoints on our Flask server, so any metric in our web portal is also accessible in Messenger, Skype, Kik, or whatever messaging platform you prefer. This allows marketing analysts who are frequently on the road to easily review the emotional data on Scout's platform.
## What we learned
When you stack cups, start with a 3x3 base and stack them in inverted directions.
## What's next for Scout
You tell us! Please feel free to contact us with your ideas, questions, comments, and concerns! | ## Inspiration
Being a regular online shopper, I always have this problem of deciding whether the product suits me or not. Stock photos on e-commerce site can be different from how it looks in real life. Shoppers need other visual sources of the product to know if they really like the product.
## What it does
Gap.It is a Chrome extension that allows online shopper to see how other people are wearing/using a product through User Generated Content (UGC) on social media by just one-click away. Base on the current product page that he/she is on, Gap.It will do a search through Twitter to see if there is any submitted content related to the product.
## How I built it
I used JavaScript to do a on-demand search on Twitter to see if there is any photos that matches with the product on the e-commerce site. The Chrome extension is built to search for products on e-commerce sites of Gap Inc. and its brands.
## Challenges I ran into
The original idea was to pull data from Instagram because of they have more photos as compared to Twitter. However, the current Instagram API does not support searching of photo caption. It was also hard to retrieve product photos based on hashtags.
## Accomplishments that I'm proud of
Had zero knowledge of building a Chrome extension and I am glad to be able to finish the project independently.
## What I learned
Chrome extension, Instagram and Twitter APIs.
## What's next for Gap.It
Better UI and cater for more e-commerce sites. | partial |
## Inspiration
Since the start of the pandemic, the government has been creating simulations to predict the events of how the disease would progress. Although we can't create something to the degree of accuracy they were able to due to a lack of data and expertise in virology, we tried to create a logical model for diseases and how exactly methods of restoration such as vaccines and cures would work within the population.
## What it does
Starting with a population of healthy people (represented in green) and an infected patient (represented in red), the simulation will take you through each step of how the disease spreads, kills, and is prevented. An infected patient will attempt to spread the disease each iteration based on distance, and each infected patient will have a chance to either die or recover from the disease. Once recovered, they'll be shown with a blue border as they've obtained phase one of "immunity" which can only be amplified to phase two by a successfully developed vaccine. This works because when your body is exposed to the virus, it develops an antibody for it, which has a similar purpose as a vaccine.
## How we built it
We build this web application using HTML, JavaScript, and CSS. We used Google Cloud Compute to host this onto our Domain.com-powered website, [openids.tech](https://openids.tech). After spending many minutes debating which function to use to calculate the probability of a person to be infected when at a certain distance, we finally came up with the following function: f(x) = (-x^2/d^2) + 1 where x is the distance between an infected patient and a healthy human, and d is the maximum distance at which infection can occur. This is probably not an accurate function, but we worked with what we could to try to come up with one that was as realistic as possible.
## Challenges we ran into
We ran into many challenges such as even the most basic of challenges: for example, using Google cloud to host our website and using the wrong variable name. Aside from that, we struggled on making an accurate yet efficient way of determining the distance between **all** people that are alive and on the canvas. In the end, we settled for Manhattan distance, which is less accurate but faster than Euclidean distance.
## Accomplishments that we're proud of
We are proud of being able to being able to figure out how to code a half-decent web application in a language that none of us were initially familiar with. We also managed to accomplish integrating graphs into webpages with newly-generated data.
## What we learned
We have learned that virtual private servers are quite difficult to set up and deploy into a production environment. Moreover, we have learned much more about asynchronous programming languages and how order of execution is very important.
## What's next for Open-IDS
In the future, we are planning to implement more realistic movement into our project, such as when a person gets infected, they move to a location where they are away from other people like self-isolation. We hope that we can continue to improve this simulator, to the point where it can be used practically in order to make wise and informed decisions. | ## Introduction
Hey! I'm William. I care about doing efficient Computer Vision and I think that this project highlights some great potential ideas for ensuring we can stay efficient.
## Inspiration
I've recently been very interested in making ML (and specifically Computer Vision) more efficient.
There exist [many](https://ai.googleblog.com/2022/02/good-news-about-carbon-footprint-of.html) [recent](https://arxiv.org/abs/1906.02243) [analyses](https://arxiv.org/abs/2104.10350) of making ML more efficient. These papers showcase a variety of different approaches to improve model efficiency and reduce global impact. If you're not sold on why model efficiency matters, I encourage you to skim over some of those papers.
Some key takeaways are as follows:
* A majority of energy is spent serving models than training (90%-10%) (Patterson et al.)
* There are three major focuses we can use to improve model implementation efficiency
* More Efficient Architectures
--Model inference cost (in FLOPs), as well as model size (number of parameters), is growing exponentially. It is necessary to design and utilize algorithms that learn and infer more efficiently. The authors highlight two promising approaches, Mixture of Expert models and approaches which perform more efficient attention (BigBird, LinFormer, Nystromformer).
* More Efficient Hardware
--The only reason we are able to train these larger models is because of more complex hardware. Specialized hardware, such as TPUs or Cerebras' WSE shows promising results in performance per watt. Authors find that specialized hardware can improve efficiency 2-5x.
* More Efficient Energy (Datacenter/Energy Generation)
--The location that these models are run on also has a significant impact on the efficiency of inference time. By computing locally, as well as in a place where we can efficiently generate energy, we can reduce the impact of our models.
These goals, however, conflict with many current approaches in Machine Learning implementation.
In the NLP space, we are quickly moving towards models that are being trained for longer on larger, multi-lingual datasets. Recent SOTA works (Megatron-LM) are nearly 10 Trillion parameters. Training GPT-3 (once!) takes the CO2 equivalent of three round trip flights from SF-NY (1,287 MWh).
In Computer Vision, a push towards using attention-based architectures as well as focusing on higher-resolution or videos (rather than images) has led to a sharp increase in the cost to train and perform inference in a model.
### We propose a new workflow to assist with the implementation of efficient Computer Vision.
Rather than training a single model from scratch, we separate the encoding and fine-tuning aspects of a task.
Specifically, we train a single, large, self-supervised model on an unlabelled, high-resolution dataset. After building a strong encoding, we can fine-tune small, efficient prediction heads on those encodings to solve a specific task. We utilize Scale.ai's platform to automate labeling.
## What it does

We use [UIUC's Almacam](https://www.youtube.com/watch?v=h6hzVOwaN_4) as a proof of concept for this topic.
We want to show that you can take some arbitrary data source and build out an extremely efficient, data-drift resistant, (mostly) task-agnostic inference pipeline.
Our contribution has x major components:
1. Automatic data collection pipeline
-- We design a small server that automatically collects data from the UIUC AlmaCam and uploads it to the Scale.ai server as well as uploading to some data centers.
2. Self Supervision for Image Encoding
-- Instead of training a model end-to-end on local/inefficient hardware, we train in a (theoretical) data center. This allows us to still leverage extremely large models on extremely large data while maintaining long-term stability and efficiency.
-- We train a self-supervised ViT/Nystromformer hybrid model on high resolution\* images. We require the use of efficient (Nystromformer) attention and we have large (48x48) image patches to accommodate for the high resolution images.
3. Fine Tuned Heads
-- We showcase that this approach maintains accuracy by fine-tuning very small prediction heads on the image encoding alone (NONE of the original images is passed to the heads).
-- We show that these prediction heads can be quickly and easily trained on a new task. This means we do not have to retrain the larger encoding model when we want to apply it to a new task!
\*We train on 576x960, normally people train on 224x224 so this is ~11x larger. We can train on the raw (1920x1080, almost 4x larger) resolution without adverse effects, but not over a weekend!
## How we built it
Everything is written in Python. I use PyTorch for ML things and Flask for server things.
#### Automatic data collection pipeline:
This was relatively straightforward. We have Flask automatically execute some bash scripts to first grab the .m3u8 playlist URL from YouTube. Then, we have ffmpeg scrape frames from that m3u8 file into a separate directory. After a certain number of images are downloaded, we hit the Scale.ai API to upload these images and associated metadata (time of day) to their database.
Originally, I had configured this server to automatically create a batch (a new set of images to be labeled) but I found it easier to manually do it (since creating batches can be expensive and I only got $250 of credit).
#### Self Supervision for Image Encoding
So, the data collection pipeline is feeding new data into some files and we want to train on that in a self-supervised way (since this allows the encoder model to remain task agnostic).
If you're interested, here's the [wandb logs](https://wandb.ai/weustis/treehacks/overview?workspace=user-weustis).
##### Model Architecture
We train a [Vision Transformer](https://arxiv.org/pdf/2010.11929.pdf)-based architecture on 576x960 resolution images. We replace normal attention (which scales poorly with many tokens) with [Nystrom-based attention](https://arxiv.org/abs/2102.03902). This allows us to approximate self-attention with O(n) memory complexity (rather than O(n^2) with normal attention). This is necessary since we do not know our downstream task and so we **must** maintain high-resolution images.
Our best model has a token dimension of 1024, a depth of 10, 10 heads, and 256 landmarks. If you'd like to compare it to existing models, it's a bit in between ViT-S and ViT-B
##### Training Scheme
We use the [Masked Autoencoders Are Scalable Vision Learners](https://arxiv.org/pdf/2111.06377.pdf) paper to establish our approach for self-supervision. As seen in the graphic below:

This approach masks regions of the image and provides the image encoder with the non-masked regions. The image encoder builds a strong encoding then passes those tokens to a decoder, which predicts the masked regions.
We mask 70% of our patches.
The best model was around 80M parameters, as well as an extra 30M for the decoder part (which is only needed when training the self-supervised model).
##### Logistics
We train on 4xA100 for 1.5 hours. Our model was still improving but you gotta move fast during a hackathon so I spent less time on hyperparameter tuning.
#### Fine Tuned Heads
The fine-tuned head can be adjusted depending on the task. We only train on predicting the number of people in an image as well as predicting the time of day. These heads were about 2M parameters.
We did not have time to ablate the head size but I strongly suspect it could be reduced drastically (5% of current size or less) for simple tasks like those above.
## Challenges we ran into
In no particular order:
* I did not know bash or ffmpeg very well, so I was a bit of a struggle to get that to download files from a YouTube live stream.
* My original uploads to Scale.ai did not include the original image path in the metadata, so when I downloaded the labeled results, I was unable to link them back to my local images :(. I created a new dataset and included the image path in the metadata.
-There is an extremely fine balance between the size of patches and the number of patches. Generally, more (and thus smaller) patches are better (since you can represent more complexity). However, this still makes our memory and training footprint prohibitively larger. On the other hand, if the patches are too big we also run the risk of masking out an entire person, which would be impossible for the model to reconstruct and lead to poor encoding representation. It took a lot of fine-tuning.
-My lack of desire to tune hyperparameters early meant I wasted a very long time with a very bad learning rate
-I planned to feed these encodings into a [DETR](https://arxiv.org/pdf/2005.12872.pdf) architecture to perform object/bounding box detection but I didn't have time. It isn't hard, but the DETR paper is more time-consuming to implement than I originally expected.
## Accomplishments that we're proud of
I truly believe this idea holds significant merit for a future of performing efficient inference. It has its faults- relying on some data center to serve model encodings introduces latency that may not be possible in some applications (see: self-driving vehicles). However, the benefit of executing upwards of 99% of your FLOPs in a location where it can be 40x more efficient should not be understated.
This model shows extremely high generalization capacity.
The two tasks we do fine-tune on are time (of day) prediction and a number of people (a stand-in for bound box prediction).
For time of day, we train on 300 samples and are able to generally predict the time of day within 15 minutes (though, all of the data was collected over the weekend, so there's probably some overfitting occurring since the test and train set are from the same day.).
For a number of people, we train on only 30 samples and are able to get an average error of about .7 people! This is absolutely amazing considering how well the embedding of only 512 numbers was able to describe the 1.6M pixel values in an original image.
The automated data pipeline is extremely smooth. It felt very nice to be working on other tasks while collecting data, creating batches, and getting those batches labeled at the same time.
## What we learned
A ton about Bash and Flask. I learned what Scale.ai actually does, which was really fun. I learned a significant amount about the efficiency of models and, maybe more importantly, the inefficiencies. It was quite shocking for me to read that usually only 10% of the energy spent on a model was during training.
## What's next for Fine Tuned Heads
I really want to see how well the object detection works with only encodings. Theoretically, it should be fine, since these embeddings are strong enough to (almost perfectly) reconstruct the original image, so the information on where people are in the image is clearly present.
Automating more of the process would be the next major step. Currently, the 'datacenter' is a DGX A100, which honestly is a pretty killer data center but it could be on a TPU, further increasing efficiency.
Looking extremely long-term, a major question becomes how to keep the self-supervised model up to date. Consistent 'fine-tuning' with new data should prevent too much data drift, but every time you fine-tune the self-supervised model you need to retrain all of the prediction heads (which, granted, is quite easy as fast). I believe this can be solved with some codebook solutions like that seen in VQ-VAE2, but I'd have to think about it more. | ## Inspiration
Our team identified two intertwined health problems in developing countries:
1) Lack of easy-to-obtain medical advice due to economic, social and geographic problems, and
2) Difficulty of public health data collection in rural communities.
This weekend, we built SMS Doc, a single platform to help solve both of these problems at the same time. SMS Doc is an SMS-based healthcare information service for underserved populations around the globe.
Why text messages? Well, cell phones are extremely prevalent worldwide [1], but connection to the internet is not [2]. So, in many ways, SMS is the *perfect* platform for reaching our audience in the developing world: no data plan or smartphone necessary.
## What it does
Our product:
1) Democratizes healthcare information for people without Internet access by providing a guided diagnosis of symptoms the user is experiencing, and
2) Has a web application component for charitable NGOs and health orgs, populated with symptom data combined with time and location data.
That 2nd point in particular is what takes SMS Doc's impact from personal to global: by allowing people in developing countries access to medical diagnoses, we gain self-reported information on their condition. This information is then directly accessible by national health organizations and NGOs to help distribute aid appropriately, and importantly allows for epidemiological study.
**The big picture:** we'll have the data and the foresight to stop big epidemics much earlier on, so we'll be less likely to repeat crises like 2014's Ebola outbreak.
## Under the hood
* *Nexmo (Vonage) API* allowed us to keep our diagnosis platform exclusively on SMS, simplifying communication with the client on the frontend so we could worry more about data processing on the backend. **Sometimes the best UX comes with no UI**
* Some in-house natural language processing for making sense of user's replies
* *MongoDB* allowed us to easily store and access data about symptoms, conditions, and patient metadata
* *Infermedica API* for the symptoms and diagnosis pipeline: this API helps us figure out the right follow-up questions to ask the user, as well as the probability that the user has a certain condition.
* *Google Maps API* for locating nearby hospitals and clinics for the user to consider visiting.
All of this hosted on a Digital Ocean cloud droplet. The results are hooked-through to a node.js webapp which can be searched for relevant keywords, symptoms and conditions and then displays heatmaps over the relevant world locations.
## What's next for SMS Doc?
* Medical reports as output: we can tell the clinic that, for example, a 30-year old male exhibiting certain symptoms was recently diagnosed with a given illness and referred to them. This can allow them to prepare treatment, understand the local health needs, etc.
* Epidemiology data can be handed to national health boards as triggers for travel warnings.
* Allow medical professionals to communicate with patients through our SMS platform. The diagnosis system can be continually improved in sensitivity and breadth.
* More local language support
[1] <http://www.statista.com/statistics/274774/forecast-of-mobile-phone-users-worldwide/>
[2] <http://www.internetlivestats.com/internet-users/> | partial |
## Inspiration
The field of insurtech has long been overlooked. However, with economic losses from natural disasters rising every year and a persistent divide between households and insurance companies, the field is becoming more important than ever. Studies show that natural disasters cause several hundred billion US dollars of economic loss annually, yet only about a third of that loss is insured. Because much of this loss falls on families and households, we felt strongly that, especially in a year like 2020, we needed to build something that could streamline the disaster insurance process and let households know when they need disaster insurance. That's where Insurguard comes in.
## What it does
Our solution, Insurguard, is a web application that enables users to compare, contrast, and even apply for natural disaster insurance plans, all in one simple and streamlined platform. In addition, Insurguard will notify users and recommend that they apply for disaster insurance if their household is detected to be in harm's way.
Using a proprietary web scraping algorithm, Insurguard presents the user with a multitude of insurance plans for every natural disaster type imaginable. Users can compare and contrast plans until they find one that is best suited to them. Once users find a favorable insurance plan, they can apply and pay for it right through the app. Additionally, Insurguard constantly checks whether the user's household is within the path of a natural disaster. If it is, it will instantly send them a notification advising them to apply for disaster insurance as soon as possible!
## How we built it
For our tech stack, we used Python for the backend, Flask servers in the middle, and React.js on the frontend. We used API calls and web scraping in the Python backend to retrieve our information, then exposed it through API endpoints on our Flask server. The React.js frontend calls these endpoints over HTTP, sending user information in the request body, and then processes and displays the response.
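As a rough illustration of that flow (the endpoint name, fields, and the scraping function here are placeholders rather than production code):

```python
# Sketch: the React frontend POSTs user info; Flask returns aggregated plans as JSON.
from flask import Flask, request, jsonify

app = Flask(__name__)

def fetch_plans(disaster_type: str, zip_code: str) -> list[dict]:
    # Placeholder for the web-scraping step; returns mocked plan data here.
    return [
        {"provider": "ExampleInsure", "type": disaster_type,
         "monthly_premium": 42.0, "coverage": 250_000},
        {"provider": "SampleMutual", "type": disaster_type,
         "monthly_premium": 55.5, "coverage": 400_000},
    ]

@app.route("/api/plans", methods=["POST"])
def plans():
    body = request.get_json(force=True)
    results = fetch_plans(body.get("disasterType", "flood"), body.get("zipCode", ""))
    # The frontend compares and contrasts these, so sort by price before returning.
    return jsonify(sorted(results, key=lambda p: p["monthly_premium"]))
```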
## Challenges we ran into
We had a ton of issues with time. It was difficult to fit everything into 36 hours of work, especially for such a complex web application. Beyond that, finding a website to scrape for insurance quote information was also very difficult; we struggled to find a site directly tied to pricing natural disaster insurance, and hopefully, in the future, that will be much easier.
## Accomplishments that we're proud of
We found what it took to work as a team. Coming into this project, we were not familiar with each other at all. We were able to learn about each other's strengths and weaknesses and use them to our advantage. Everyone took on their own role, and it allowed our team to flourish with almost no problems at all working together.
## What we learned
We learned a lot about Flask and Python, which helped us solidify our understanding of open API calls. Beyond that, we became really well versed in React.js and, most importantly, in working as a team. Our teamwork skills have improved exponentially, no doubt because we were strangers before and now we are great friends.
## What's next for Insurguard
Going forward, the team plans to implement an insurance quote system right into the app. Using satellite imaging, Insurguard will be able to use computer vision to estimate the financial loss. The estimate will then be sent directly to the user's insurance company, which, after review, will compensate the user directly through the app itself.
Every year, around 40 million tons of food is wasted in the United States, an estimated 35 percent of the entire US food supply. When we waste food, we also waste all the energy and water it takes to grow, harvest, transport, and package it. And if food goes to the landfill and rots, it produces methane, a greenhouse gas that is 26x more potent than carbon dioxide.
Drilling down further into the problem, research shows that 43% of all food waste in the United States actually comes from homes, which is more than food services and grocery stores combined.
Reducing food waste is one of the easiest and most powerful actions anyone can take to help sustainability, so this weekend our team chose to build an IoT and mobile solution that tackles food waste at home.
## 🤔 What it does
**GreenFridge** is an end-to-end IoT, data, and mobile application solution that tracks what users have in their fridge, reminds them of food that is about to go bad, and educates them on ways to do their part in minimizing their carbon footprint. Its key features include:
* Mobile feature 1: Displays a live view of the food items currently in the fridge and their shelf life
* Mobile feature 2: Displays fridge power usage, temperature and humidity data, CO2 emissions per week, and the correlation between how often the fridge door was opened and power consumption
* Mobile feature 3: Recommends sustainable alternatives and ways to offset the carbon dioxide emissions of the food items currently in the fridge
* Integration 1: Twilio messages with reminders about food that is going to go bad today and recommendations on ways to preserve it
[](https://postimg.cc/cgGtS2dL)
[](https://postimg.cc/Z9MjxTPD)
## 🦾 How we built it
* **IoT:** Raspberry Pi Camera Local Object Detection, Humidity Sensor, Temperature Sensor
* **Backend:** Postman, Google Cloud Platform, Flask
* **Web Scraping:** Python
* **Google Cloud Services:** AutoML, Vision, Vertex AI, Compute Engine, Maps
* **Database:** CockroachDB, Cloud Storage, Firestore
* **Twilio:** Auto SMS
* **Frontend:** React, JS, HTML, CSS, Figma
[](https://postimg.cc/5Yyr3QWv)
## 👨🏻🤝👨🏽 (PennApps Track) Sustainability Prize
We believe, and a lot of research also suggests, that reducing food waste is the easiest and most powerful way to make a dent in sustainability. Like the Nest Thermostat, which saves users roughly 13% on energy usage, we envision GreenFridge in every home, helping users reduce food waste and educating them on shrinking their carbon footprint, bringing an outsized impact on sustainability.
## 🛠 (Sponsored by Citadel & Penn Detkin Lab) Best Hardware Hack
Thank you Penn Detkin Lab for lending us Raspberry Pi camera and other tools late at night, so that we can use local object detection to enable the rest of our solution!
## ☁️ Best Use of Google Cloud - MLH
Our application uses Vertex AI (AutoML) for a custom ML model, Compute Engine to deploy the Flask server, Cloud Storage to store images for training, Firestore to update data on items in the fridge in real time, and Connectors AI.
## 🤖 Most Creative Use of Twilio - MLH
We used Twilio SMS to serve reminders of food that is going to go bad on that day and recommendations on ways to preserve them. Twilio was so intuitive to use, even for someone who is completely new!
[](https://postimg.cc/p5D9rZXJ)
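A minimal sketch of that reminder step, assuming Twilio credentials live in environment variables and the expiring-item list comes from the fridge database (numbers and names here are illustrative):

```python
# Sketch: send an SMS listing items that expire today, with a preservation tip.
import os
from twilio.rest import Client

def send_expiry_reminder(to_number: str, expiring_items: list[str]) -> None:
    client = Client(os.environ["TWILIO_ACCOUNT_SID"],
                    os.environ["TWILIO_AUTH_TOKEN"])
    items = ", ".join(expiring_items)
    client.messages.create(
        body=f"GreenFridge: {items} will go bad today. Tip: cook, freeze, or blend them tonight!",
        from_=os.environ["TWILIO_FROM_NUMBER"],
        to=to_number,
    )

# Example call (hypothetical number and items):
# send_expiry_reminder("+15551234567", ["spinach", "strawberries"])
```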
## Challenges we ran into
* We tried hard and followed the documentation, but could not get the CockroachDB networking set up
## Accomplishments that we're proud of
* Finding an object detection algorithm that is efficient on Raspberry Pi
* Interfacing humidity and temperature sensors, modeling surge in power consumption over time
* Training custom computer vision model and deploying for real time predictions
* Setting up Google Cloud infrastructure to enable efficient use of their tech stack - compute engine, API services, ML toolkit, data storage
## What we learned
* We were surprised by how far tooling has come, allowing even learners to deploy complex technologies
* We were surprised by how big an opportunity it is to make a dent in sustainability by reducing food waste
Our planet faces a critical environmental crisis driven by:
High carbon footprints from daily human activities such as transportation, energy consumption, and waste generation.
These issues contribute to:
Significant greenhouse gas emissions, pollution of our oceans, harm to wildlife, and accelerated climate change.
Despite growing awareness, individuals and communities often lack:
Tools and knowledge to effectively reduce their environmental impact.
## What it does
Purpose: A user-friendly tool designed to help individuals understand and reduce their carbon footprint.
Functionality: Calculates users' carbon emissions based on their daily activities and provides personalized recommendations to lower their impact.
User Interaction: Accessible via voice input, making it easy for users to engage with Carbon Groot.
## How we built it
We utilized JavaScript, Node.js, Postman, TypeScript, React, Hume AI, and You.com.
## Challenges we ran into
Integrating HumeAI's text input function and you.com's API.
## Accomplishments that we're proud of
We created a well-rounded climate tech project. We set up various functionalities to keep users engaged and track their carbon footprint.
## What we learned
Using various APIs that complement each other.
## What's next for Carbon Groot
Future enhancements: Text input option, social media sharing, sync with smart appliances, push notifications, app development, and community events page. | partial |
## Inspiration
In today's always-on world, we are more connected than ever. The internet is an amazing way to connect with those close to us; however, it is also used to spread hateful messages. Our inspiration came from a surprisingly common issue among YouTubers and other people prominent on social media: negative comments (even from anonymous strangers) hurt more than people realise. There have been cases of YouTubers developing mental illnesses like depression as a result of consistently receiving negative (and hateful) comments on the internet. We decided that this overlooked issue deserved attention, and that we could develop a solution not only for these individuals, but for the rest of us as well.
## What it does
Blok.it is a Google Chrome extension that analyzes web content for any hateful messages or content and renders it unreadable to the user. Rather than just censoring a particular word or words, the entire phrase or web element is censored. The HTML and CSS formatting remains, so nothing funky happens to the layout and design of the website.
## How we built it
The majority of the app is built in JavaScript and jQuery, with some HTML and CSS for interaction with the user.
## Challenges we ran into
Working with Chrome extensions was something very new to us and we had to learn some new JS in order to tackle this challenge. We also ran into the issue of spending too much time deciding on an idea and how to implement it.
## Accomplishments that we're proud of
Managing to create something after starting and scraping multiple different projects (this was our third or fourth project and we started pretty late)
## What we learned
* Learned how to make Chrome extensions
* Improved our JS ability
* Learned how to work with a new group of people (all of us are first-time hackathon-ers and none of us had extensive software experience)
## What's next for Blok.it
Improving the censoring algorithms. Most hateful messages are censored, but some non-hateful messages are being inadvertently marked as hateful and being censored as well. Getting rid of these false positives is first on our list of future goals. | ## Inspiration
During our brainstorming phase, we cycled through a lot of useful ideas that later turned out to be actual products on the market or completed projects. After four separate instances of this and hours of scouring the web, we finally found our true calling at QHacks: building a solution that determines whether an idea has already been done before.
## What It Does
Our application, called Hack2, is an intelligent search engine that uses Machine Learning to compare the user’s ideas to products that currently exist. It takes in an idea name and description, aggregates data from multiple sources, and displays a list of products with a percent similarity to the idea the user had. For ultimate ease of use, our application has both Android and web versions.
## How We Built It
We started off by creating a list of websites where we could find ideas that people have done. We came up with four sites: Product Hunt, Devpost, GitHub, and Google Play Store. We then worked on developing the Android app side of our solution, starting with mock-ups of our UI using Adobe XD. We then replicated the mock-ups in Android Studio using Kotlin and XML.
Next was the Machine Learning part of our solution. Although there exist many machine learning algorithms that can compute phrase similarity, devising an algorithm to compute document-level similarity proved much more elusive. We ended up combining Microsoft’s Text Analytics API with an algorithm known as Sentence2Vec in order to handle multiple sentences with reasonable accuracy. The weights used by the Sentence2Vec algorithm were learned by repurposing Google's word2vec ANN and applying it to a corpus containing technical terminology (see Challenges section). The final trained model was integrated into a Flask server and uploaded onto an Azure VM instance to serve as a REST endpoint for the rest of our API.
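A simplified sketch of that document-similarity step (not the exact Sentence2Vec weighting; it assumes the custom word vectors have already been loaded into a dictionary):

```python
# Simplified similarity: average word vectors into a document vector,
# then compare the user's idea to each product with cosine similarity.
import numpy as np

def doc_vector(text: str, word_vectors: dict[str, np.ndarray], dim: int = 300) -> np.ndarray:
    words = [w for w in text.lower().split() if w in word_vectors]
    if not words:
        return np.zeros(dim)
    return np.mean([word_vectors[w] for w in words], axis=0)

def percent_similarity(idea: str, product: str, word_vectors: dict[str, np.ndarray]) -> float:
    a, b = doc_vector(idea, word_vectors), doc_vector(product, word_vectors)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return round(100 * float(a @ b / denom), 1) if denom else 0.0
```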
We then set out to build the web scraping functionality of our API, which would query the aforementioned sites, pull relevant information, and pass that information to the pre-trained model. Having already set up a service on Microsoft Azure, we decided to “stick with the flow” and build this architecture using Azure’s serverless compute functions.
After finishing the Android app and backend development, we decided to add a web app to make the service more accessible, made using React.
## Challenges We Ran Into
From a data perspective, one challenge was obtaining an accurate vector representation of words appearing in quasi-technical documents such as Github READMEs and Devpost abstracts. Since these terms do not appear often in everyday usage, we saw a degraded performance when initially experimenting with pretrained models. As a result, we ended up training our own word vectors on a custom corpus consisting of “hacker-friendly” vocabulary from technology sources. This word2vec matrix proved much more performant than pretrained models.
We also ran into quite a few issues getting our backend up and running, as it was our first using Microsoft Azure. Specifically, Azure functions do not currently support Python fully, meaning that we did not have the developer tools we expected to be able to leverage and could not run the web scraping scripts we had written. We also had issues with efficiency, as the Python libraries we worked with did not easily support asynchronous action. We ended up resolving this issue by refactoring our cloud compute functions with multithreaded capabilities.
## What We Learned
We learned a lot about Microsoft Azure’s Cloud Service, mobile development and web app development. We also learned a lot about brainstorming, and how a viable and creative solution could be right under our nose the entire time.
On the machine learning side, we learned about the difficulty of document similarity analysis, especially when context is important (an area of our application that could use work)
## What’s Next for Hack2
The next step would be to explore more advanced methods of measuring document similarity, especially methods that can “understand” semantic relationships between different terms in a document. Such a tool might allow for more accurate, context-sensitive searches (e.g. learning the meaning of “uber for…”). One particular area we wish to explore are LSTM Siamese Neural Networks, which “remember” previous classifications moving forward. | ## Inspiration
One of the most exciting parts about Hackathons is the showcasing of the final product, well-earned after hours upon hours of sleep-deprived hacking. Part of the presentation work lies in the Devpost entry. I wanted to build an application that can rate the quality of a given entry to help people write better Devpost posts, which can help them better represent their amazing work.
## What it does
The Chrome extension can be used on a valid Devpost entry web page. Once the user clicks "RATE", the extension automatically scrapes the relevant text and sends it to a Heroku Flask server for analysis. The final score given to a project entry is an aggregate of many factors, such as descriptiveness, the use of technical vocabulary, and the score given by an ML model trained against thousands of project entries. The user can use the score as a reference to improve their entry.
## How I built it
I used UiPath as an automation tool to collect, clean, and label data across thousands of projects from major hackathons over the past few years. After getting the necessary data, I trained an ML model to predict the probability that a given Devpost entry is among the winning projects. I also used the data to calculate other useful metrics, such as the distribution of project entry lengths, the average amount of terminology used, etc. These models are then uploaded to a Heroku cloud server, where I can get aggregated ratings for texts through a web API. Lastly, I built a JavaScript Chrome extension that detects Devpost web pages, scrapes data from the page, and presents the ratings to the user in a small pop-up.
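A stripped-down sketch of the scoring endpoint (the pickled model, weights, and vocabulary list are placeholders for the real aggregation):

```python
# Sketch: receive scraped Devpost text, combine handcrafted metrics with a
# model probability, and return a 0-100 score.
import pickle
from flask import Flask, request, jsonify

app = Flask(__name__)
try:
    with open("winner_model.pkl", "rb") as f:   # hypothetical pickled sklearn pipeline
        model = pickle.load(f)
except FileNotFoundError:
    model = None

TECH_TERMS = {"api", "react", "flask", "ml", "database", "cloud"}  # toy vocabulary

@app.route("/rate", methods=["POST"])
def rate():
    text = request.get_json(force=True).get("text", "")
    words = text.lower().split()
    length_score = min(len(words) / 400, 1.0)                       # descriptiveness
    vocab_score = len(TECH_TERMS.intersection(words)) / len(TECH_TERMS)
    model_score = float(model.predict_proba([text])[0][1]) if model else 0.5
    final = round(100 * (0.3 * length_score + 0.2 * vocab_score + 0.5 * model_score))
    return jsonify({"score": final})
```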
## Challenges I ran into
Firstly, I am not familiar with website development, so it took me a hell of a long time to figure out how to build a Chrome extension that collects data and uses external web APIs. The data collection part was also tricky: even with great graphical automation tools at hand, large-scale web scraping was still very difficult for someone relatively inexperienced with website development like me.
## Accomplishments that I'm proud of
I am very glad that I managed to finish the project on time. It was quite an overwhelming amount of work for a single person. I am also glad that I got to work with data from absolute scratch.
## What I learned
Data collection, hosting an ML model in the cloud, and building Chrome extensions with various features
## What's next for Rate The Hack!
I want to refine the features and rating scheme | winning |
## Inspiration
No one likes being stranded at late hours in an unknown place with unreliable transit as the only safe, affordable option to get home. Between paying for an expensive taxi ride yourself and sharing a taxi with random street-goers, the current options aren't looking great. WeGo aims to streamline taxi ride sharing, creating a safe, efficient, and affordable option.
## What it does
WeGo connects you with people around you who have similar destinations and are also looking to share a taxi. The application aims to reduce taxi costs by splitting rides, improve taxi efficiency by intelligently routing taxis, and improve sustainability by encouraging ride sharing.
### User Process
1. User logs in to the app/web
2. Nearby riders requesting rides are shown
3. The user then may choose to "request" a ride, by entering a destination.
4. Once the system finds a suitable group of people within close proximity, the user will be sent the taxi pickup and rider information. (The taxi request is initiated.)
5. User hops on the taxi, along with other members of the application!
## How we built it
The user begins by logging in through their web browser (ReactJS) or mobile device (Android). Through API calls to our NodeJS backend, our system analyzes outstanding requests and intelligently groups people together based on location, user ratings & similar destination - all in real time.
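As a rough illustration of the grouping step (the real matching runs in the Node.js backend; this is a simplified, language-agnostic version written in Python, with made-up distance thresholds):

```python
# Toy greedy matcher: put riders in the same taxi when both their pickups and
# their destinations are within a few kilometres of each other.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Request:
    rider_id: str
    pickup: tuple[float, float]   # (lat, lon)
    dest: tuple[float, float]

def km(a: tuple[float, float], b: tuple[float, float]) -> float:
    # Haversine great-circle distance in kilometres
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371 * 2 * asin(sqrt(h))

def group_requests(requests: list[Request], max_pickup_km=1.0, max_dest_km=3.0, seats=3):
    groups, used = [], set()
    for r in requests:
        if r.rider_id in used:
            continue
        group, _ = [r], used.add(r.rider_id)
        for other in requests:
            if other.rider_id in used or len(group) >= seats:
                continue
            if km(r.pickup, other.pickup) <= max_pickup_km and km(r.dest, other.dest) <= max_dest_km:
                group.append(other)
                used.add(other.rider_id)
        groups.append(group)   # each group becomes one taxi request
    return groups
```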
## Challenges we ran into
A big hurdle we faced was the complexity of our ride analysis algorithm. To create the most cost efficient solution for the user, we wanted to always try to fill up taxi cars completely. This, along with scaling up our system to support multiple locations with high taxi request traffic was definitely a challenge for our team.
## Accomplishments that we're proud of
Looking back on our work over the 24 hours, our team is really excited about a few things. First, encouraging sustainability on a city-wide scale is really important to us. With the future leaning towards autonomous vehicles and taxis, we see having a system like WeGo in place as necessary for what's ahead.
On the technical side, we're really excited to have a single, robust backend that can serve our multiple front end apps. We see this as something necessary for mass adoption of any product, especially for solving a problem like ours.
## What we learned
Our team members definitely learned quite a few things over the last 24 hours at nwHacks! (Both technical and non-technical!)
Working under a time crunch, we really had to rethink how we managed our time to ensure we were always working efficiently towards our goal. Coming from different backgrounds, team members learned new technical skills such as interfacing with the Google Maps API, using Node.js on the backend, or developing native mobile apps with Android Studio. Through all of this, we learned that persistence is key when solving a new problem outside of your comfort zone. (Sometimes you need to throw everything and the kitchen sink at the problem at hand!)
## What's next for WeGo
The team wants to look at improving the overall user experience with a better UI, figuring out better tools for specifically what we're looking for, and adding improved taxi and payment integration services.
During last year's World Wide Developers Conference, Apple introduced a host of new innovative frameworks (including but not limited to CoreML and ARKit) which placed traditionally expensive and complex operations such as machine learning and augmented reality in the hands of developers such as myself. This incredible opportunity was one that I wanted to take advantage of at PennApps this year, and Lyft's powerful yet approachable API (and SDK!) struck me as the perfect match for ARKit.
## What it does
Utilizing these powerful technologies, Wher integrates with Lyft to further enhance the process of finding and requesting a ride by improving ease of use, safety, and even entertainment. One issue that presents itself with overhead navigation is, quite simply, the lack of a third dimension. A traditional overhead view tends to complicate on-foot navigation more than it helps and, even more importantly, requires the user to bury their face in their phone. This pulls attention from the user's surroundings and poses a threat to their safety, especially in busy cities. Wher resolves these concerns by bringing the Lyft experience into augmented reality, which allows users to truly see the location of their driver and destination, pay more attention to where they are going, and have a more enjoyable and modern experience in the process.
## How I built it
I built Wher using several of Apple's Frameworks including ARKit, MapKit, CoreLocation, and UIKit, which allowed me to build the foundation for the app and the "scene" necessary to create and display an Augmented Reality plane. Using the Lyft API I was able to gather information regarding available drivers in the area, including their exact position (real time), cost, ETA, and the service they offered. This information was used to populate the scene and deep link into the Lyft app itself to request a ride and complete the transaction.
## Challenges I ran into
While both Apple's well-documented frameworks and Lyft's API simplified the learning required to take on the project, there were still several technical hurdles to overcome. The first issue I faced was Lyft's API itself: while it was great in many respects, Lyft has yet to create a branch fit for use with Swift 4 and iOS 11 (required to use ARKit), which meant I had to rewrite certain portions of their LyftURLEncodingScheme and LyftButton classes in order to continue with the project. Another challenge was finding a way to represent a variance in coordinates and 'simulate distance' so as to make the AR experience authentic. This, similar to the first challenge, became manageable with enough thought and math. One of the last significant challenges I encountered and overcame was drawing driver "bubbles" in the AR plane without graphics glitches.
## Accomplishments that I'm proud of
Despite the many challenges that this project presented, I am very happy that I persisted and worked to complete it. Most importantly, I'm proud of just how cool it is to see something so simple represented in AR, and how different it is from a traditional 2D View. I am also very proud to say that this is something I can see myself using any time I need to catch a Lyft.
## What I learned
With PennApps being my first Hackathon, I was unsure what to expect and what exactly I wanted to accomplish. As a result, I greatly overestimated how many features I could fit into Wher and was forced to cut back on what I could add. As a result, I learned a lesson in managing expectations.
## What's next for Wher (with Lyft)
In the short term, adding a social aspect and allowing for "friends" to organize and mark designated meet up spots for a Lyft, to greater simply the process of a night out on the town. In the long term, I hope to be speaking with Lyft! | ## Inspiration
We want a more sustainable way for individuals to travel from point A to point B, by sharing a ride.
## What it does
It allows users to post their routes to a carpool request board for other users (drivers) to see and potentially accept. We decided that it should be aimed at students for security reasons and to build a more tight-knit user base to begin with.
## How we built it
We brainstormed to create our Figma prototype and then started building the front end. Once we were done with that, we started on the backend connections.
## Challenges we ran into
Connecting the backend and the frontend
## What if we had more time?
We wanted to add a chat system so that the users can communicate within the app.
Allow users to filter the requests page based on the current location area. | partial |
## Inspiration
Struggling with a hackathon concept made our team realize the importance of gathering inspiration and connecting ideas. This led us to further research products in the market that are aimed towards this space, and we found many pain points that made the process less efficient, including an influx of onboarding processes, new tools, and distracting interfaces.
In many cases, people use multiple platforms to gather various kinds of inspiration ranging from Pinterest, Notes, etc. This inspired our team to create a product that addresses these pain points and make a more user-friendly experience for collecting ideas on one platform.
## What it does
Myos is a very efficient moodboarding app that lets users attach various files, including images, audio, and drawings, through flexible methods such as drag-and-drop, copy-and-paste, and uploading local files. Users can organize the files into groups to create a "gallery" of inspiration for their various projects or moodboarding-related needs.
Myos adopts a DO NOW, WORRY LATER framework. This means no UI distractions, no templates, and no onboarding pop-ups in your face every few seconds. Myos emphasizes focus, structure, and reduced clutter through a clean and intuitive UI, letting users get to their work right away.
## How we built it
We designed our product on Figma and prototyped its key interactions. A ReactJS frontend was made to implement a file upload function and a display gallery.
## Challenges we ran into
It was hard for our team to think of an idea for the hackathon, and due to time constraints, the end product wasn’t the plan we went in with. We weren’t ready for the lack of energy that comes with in-person hackathons! That being said, we really tried to hone in on any pain points that we personally deal with to understand how we can make them more efficient.
## Accomplishments that we're proud of
Despite our challenges, our team pulled through and managed to submit a project. For many of us, this was also our first in-person hackathon! Numerous skills were gained and put to use, with everyone learning a lot and getting the most out of HTN.
## What we learned
Thinking of an idea can be very challenging and it is best to think of an idea beforehand. For some of us, this was the first in-person or the first ever hackathon we attended, and it was a really eye-opening and rewarding experience.
## What's next for Myos
Next, we’d finish the functionalities for creating new folders/groups, sorting the content by tags, and saving everything with a database or using localstorage. | ## Inspiration
University can be very stressful at times and sometimes the resources you need are not readily available to you. This web application can help students relax by completing positive "mindfulness" tasks to beat their monsters. We decided to build a web application specifically for mobile since we thought that the app would be most effective if it is very accessible to anyone. We implemented a social aspect to it as well by allowing users to make friends to make the experience more enjoyable. Everything in the design was carefully considered and we always had mindfulness in mind, by doing some research into how colours and shapes can affect your mood.
## What it does
Mood prompts you to choose your current mood on your first login of every day, which allows us to play music that matches your mood as well as create monsters that you can defeat by completing tasks meant to ease your mind. You can also add friends and check on their progress, and in theory interact with each other through the app by working together to defeat the monsters; however, we haven't been able to implement this functionality yet.
## How we built it
The project uses Ruby on Rails to implement the backend and JavaScript and React Bootstrap on the front end. We also used GitHub for source control management.
## Challenges we ran into
Starting the project, a lot of us had issues downloading the relevant software since we started from a boilerplate, but through collaboration we were all able to figure it out and get going. Most of us were still beginning to learn web development and thus had limited experience programming in JavaScript and CSS, but again, through teamwork and the help of some very insightful mentors, we were able to pick up these skills very quickly, with each of us contributing a fair portion to the project.
## Accomplishments that we are proud of
Every single member in this group has learned a lot from this experience, no matter their previous experience going into the hackathon. The more experienced members did an amazing job helping those who had less experience, and those with less experience were able to quickly learn the elements to creating a website. The thing we are most proud of is that we were able to accomplish almost every one of our goals that we had set prior to the hackathon, building a fully functioning website that we hope will be able to help others. Every one of us is proud to have been a part of this amazing project, and look forward to doing more like it.
## What's next for Mood
Our goal is to give help to those who need it through this accessible app, and to make it something users would want to use. To improve the app, we would like to add more mindfulness tasks, implement a 'friend collaboration' aspect, and potentially build a mobile application (iOS and Android) so that it is even more accessible.
## Inspiration
Fun Mobile AR Experiences such as Pokemon Go
## What it does
First, a single player hides a virtual penguin somewhere in the room. Then, the app creates hundreds of obstacles for the other players in AR. The player that finds the penguin first wins!
## How we built it
We used AWS and Node.js to create a server that handles real-time communication between all players. We also used Socket.IO so that we could easily broadcast information to all players.
## Challenges we ran into
For the majority of the hackathon, we were aiming to use Apple's Multipeer Connectivity framework for realtime peer-to-peer communication. Although we wrote significant code using this framework, we had to switch to Socket IO due to connectivity issues.
Furthermore, shared AR experiences are a very new field with a lot of technical challenges, and it was very exciting to work through bugs to ensure that all users see similar obstacles throughout the room.
## Accomplishments that we're proud of
For two of us, it was our very first iOS application. We had never used Swift before, and we had a lot of fun learning to use Xcode. As a team, we had never worked with AR or Apple's ARKit before.
We are proud we were able to make a fun and easy-to-use AR experience. We were also happy we were able to use retro styling in our application.
## What we learned
* Creating shared AR experiences is challenging but fun
* How to work with iOS's Multipeer framework
* How to use ARKit
## What's next for ScavengAR
* Look out for an app store release soon! | losing |
## Inspiration
It took us a while to think of an idea for this project: after a long day of Zoom school, we sat down on Friday with very little motivation to do work. As we pushed through this lack of drive, our friends in the other room would offer little encouragements to keep us going, and we started to realize just how powerful those comments are. For everyone working online, and university students in particular, balancing life on and off the screen is difficult. We often find ourselves forgetting daily tasks like drinking enough water or just taking a small break, and when we do, there is very often negativity towards the idea of rest. This is where You're Doing Great comes in.
## What it does
Our web application is focused on helping students and online workers alike stay motivated throughout the day while making the time and space to care for their physical and mental health. Users are able to select different kinds of activities that they want to be reminded about (e.g. drinking water, eating food, movement, etc.) and they can also input messages that they find personally motivational. Then, throughout the day (at their own predetermined intervals) they will receive random positive messages, either through text or call, that will inspire and encourage. There is also an additional feature where users can send messages to friends so that they can share warmth and support because we are all going through it together. Lastly, we understand that sometimes positivity and understanding aren't enough for what someone is going through and so we have a list of further resources available on our site.
## How we built it
We built it using:
* AWS
+ DynamoDB
+ Lambda
+ Cognito
+ APIGateway
+ Amplify
* React
+ Redux
+ React-Dom
+ MaterialUI
* serverless
* Twilio
* Domain.com
* Netlify
## Challenges we ran into
Centring divs should not be so difficult :(
Transferring the name servers from domain.com to Netlify
Serverless deploying with dependencies
## Accomplishments that we're proud of
Our logo!
It works :)
## What we learned
We learned how to host a domain and we improved our front-end HTML/CSS skills
## What's next for You're Doing Great
We could always implement more reminder features and we could refine our friends feature so that people can only include selected individuals. Additionally, we could add a chatbot functionality so that users could do a little check in when they get a message. | ## Inspiration
While using ridesharing apps such as Uber and Lyft, passengers, particularly those of marginalized identities, have reported feeling unsafe or uncomfortable being alone in a car. From our user interviews, every woman mentioned personal safety as one of her top concerns within a rideshare. About 23% of American women have reported a driver for inappropriate behavior. Many apps have attempted to mitigate this issue by creating rideshare services that hire only female drivers; however, these apps have quickly been shut down due to discrimination laws. Additionally, around 40% of Uber and Lyft drivers are white males, possibly because many minorities may feel uncomfortable in certain situations as a driver. We aimed to create a rideshare app that would provide the same sense of safety and comfort that the aforementioned apps aimed to provide, while making sure that all backgrounds are represented and accounted for.
## What it does
Our app, Driversity (stylized DRiveristy), works similarly to other ridesharing apps, with features put in place to assure that both riders and drivers feel safe. The most important feature we'd like to highlight is a feature that allows the user to be alerted if a driver goes off the correct path to the destination designated by the rider. The app will then ask the user if they would like to call 911 to notify them of the driver's actions. Additionally, many of the user interviews we conducted stated that many women prefer to walk around, especially at night, while waiting for a rideshare driver to pick them up for safety concerns. The app provides an option for users to select in order to allow them to walk around while waiting for their rideshare, also notifying the driver of their dynamic location. After selecting a destination, the user will be able to select a driver from a selection of three drivers on the app. On this selection screen, the app details both identity and personality traits of the drivers, so that riders can select drivers they feel comfortable riding with. Users also have the option to provide feedback on their trip afterward, as well as rating the driver on various aspects such as cleanliness, safe driving, and comfort level. The app will also use these ratings to suggest drivers to users that users similar to them rated highly.
## How we built it
We built it using Android Studio in Java for full-stack development. We used the Google Maps JavaScript API to display the map when users select destinations and track their own location. We used Firebase to store information and to authenticate the user. We used DocuSign so that drivers can sign preliminary papers. We used OpenXC to calculate whether a driver was traveling safely and at the speed limit. To give drivers benefits, we give them the choice to invest 5% of their income, which will grow naturally as the market rises.
## Challenges we ran into
We weren't very familiar with Android Studio, so we first attempted to use React Native for our application, but we struggled a lot implementing many of the APIs we were using with React Native, so we decided to use Android Studio as we originally intended.
## What's next for Driversity
We would like to develop more features on the driver's side that would help the drivers feel more comfortable as well. We also would like to include the usage of the Amadeus travel APIs. | ## Inspiration
Going home for winter break and seeing friends was a great time, but throughout all the banter, I realized that our conversations often took a darker turn, and I worried for my friends' mental health. During the school year, it was a busy time and I wasn't able to stay in touch with my friends as well as I had wanted to. After this realization, I also began to question my own mental health - was I neglecting my health?
We were inspired to build a web app that would increase awareness of how friends are doing mentally and that could also provide analytics for ourselves. We saw good potential in text because of the massive volume of digital communication and because digital messages can often reveal feelings that may stay hidden in everyday conversation.
## What it does
It parses user text input into a sentiment score, using Microsoft Azure, where 0 is very negative and 1 is very positive. Over a day, it averages the input for a specific user and logs the text files. Friends of the user can view the weekly emoji graphs, and receive a text message if it seems like the user is going through a rough spot and needs someone to talk to.
We also have an emoji map for displaying sentiments of senders around the world, allowing us to see events that invoke emotional responses in a particular area. We hope that this is useful data for increased global and cultural awareness.
## How we built it
We used React.js for the frontend and Flask with Python for the backend. We used Azure for the sentiment analysis and Twilio to send text messages.
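A small sketch of the sentiment step, assuming the azure-ai-textanalytics client library, with the "positive" confidence used as the 0-to-1 score described above (the endpoint/key variable names are assumptions):

```python
# Sketch: turn a message into a 0 (negative) to 1 (positive) score with Azure,
# then keep a running daily average per user. Requires Azure credentials in env vars.
import os
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint=os.environ["AZURE_TEXT_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_TEXT_KEY"]),
)

daily_scores: dict[str, list[float]] = {}  # user -> scores logged today

def score_message(user: str, text: str) -> float:
    doc = client.analyze_sentiment([text])[0]
    score = doc.confidence_scores.positive   # 0.0 (very negative) .. 1.0 (very positive)
    daily_scores.setdefault(user, []).append(score)
    return score

def daily_average(user: str) -> float:
    scores = daily_scores.get(user, [])
    return sum(scores) / len(scores) if scores else 0.5
```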
## Challenges we ran into
One of the biggest bottlenecks was connecting our frontend and backend. Additionally, we had security concerns around cross-origin resource sharing that made it much more difficult to interface with all the different databases. Having so many APIs that we wanted to connect made things difficult too.
## Accomplishments that we're proud of
We were able to create a full-stack web app on our own, despite the challenges. Some members of the team had never worked on the frontend before, and it was a great, fun experience learning how to use JS, Flask, and HTML.
## What we learned
We learned about full stack web app development and the different languages required. We also became more aware of the moving parts behind a web app, how they communicate with each other, and the challenges associated with that.
## What's next for ment.ally
Our original idea was actually a Chrome extension that could detect emotionally charged messages as the user types them and offer alternatives, in an attempt to reduce miscommunication that is potentially hurtful to both sides. We would like to build off of our existing sentiment analysis capabilities to do this. Our next step would be to parse what the user is typing and underline any overly strong phrases (similar to how Word underlines misspelt words in red). Then we could set up a database that maps common emotionally charged phrases to milder ones and offers those as suggestions, possibly along with the reason (e.g. "words in this sentence can trigger feelings of anger!").
## Inspiration
All three of us love traveling and want to fly to new places in the world on a budget. By combining Google computer vision to recognize interesting places to go with JetBlue's flight deals to find the best flights, we hope we've created a product that college students can use to explore the world!
## What it does
*Envision* parses each picture on the websites the user browses and predicts a destination airport from the entities the computer vision model detects in the image. It finds the best JetBlue flight deal based on current location, price, and similarity of destination, and returns a best-deal recommendation in a hover-over Chrome extension. It links to a website that shows more information about the flight, including flight type, fare type, etc., as well as pictures of the destination found through the Google Places API. *Envision* travel effortlessly today! :))
## How we built it
We built *Envision* using JetBlue's Deals data and the Google Cloud Platform (Google Vision API and Google Places API). First, we scraped images from Google Image Search for every JetBlue airport location. Running every image through the Google Vision API, we received a list of entities found in the most common images. With a Chrome extension tracking images on every webpage, each picture is then translated through computer vision into an entity list, and that list is used to find the most similar destination to recommend. Using JetBlue's Deals data, we found the best-deal flight from the closest airport based on current location, routed to our target destination, and used the Google Places API (Nearby Search, Text Search, and Place Details combined) to find the most suitable flight to take. Using Heroku and Flask to host our web services, we created a workflow that leads to a React website with more information about the recommended flights and photos of the destination similar to the original images on the browsed website.
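A condensed sketch of the image-to-destination matching (it requires Google Cloud credentials, and the per-destination label sets shown here are hypothetical stand-ins for the ones precomputed from the scraped airport images):

```python
# Sketch: label an image with Cloud Vision, then pick the JetBlue destination
# whose precomputed label set overlaps the most (Jaccard similarity).
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # needs GCP credentials in the environment

# Hypothetical sets; the real ones were built from scraped images of each airport city.
DESTINATION_LABELS = {
    "SJU": {"beach", "palm tree", "ocean", "tropics"},
    "DEN": {"mountain", "snow", "ski", "rock"},
}

def labels_for(image_bytes: bytes) -> set[str]:
    response = client.label_detection(image=vision.Image(content=image_bytes))
    return {label.description.lower() for label in response.label_annotations}

def best_destination(image_bytes: bytes) -> str:
    labels = labels_for(image_bytes)
    def jaccard(s: set[str]) -> float:
        return len(labels & s) / len(labels | s) if labels | s else 0.0
    return max(DESTINATION_LABELS, key=lambda code: jaccard(DESTINATION_LABELS[code]))
```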
## Challenges we ran into
There are many steps in our processing pipeline, which includes receiving images from the Chrome extension, parsing each image into an entity list with Google Cloud Vision, finding a recommended location and the best JetBlue flight to that region, and surfacing similar images from the area that link back to the original image on a separate website. Connecting every part together through endpoints took a lot of figuring out near the end!
## Accomplishments that we're proud of
Creating a working product!
## What we learned
Lots of web & API calling.
## What's next for Envision
Creating a more intuitive user interface for *Envision*, on both the Chrome extension and the website.
Despite the fact that there are more airlines than most people can count, very few seem to differentiate themselves in regard to pre-purchase interaction with the customer. This results in the rise of middle-man sales dealers (Expedia, Travelocity, etc.) willing to fill the void.
We believed that by designing a search methodology built from the ground up, predicated on how consumers actually approach planning a vacation, we could create a win-win scenario: first, enlightening travellers to the many options they have based on their personalized constraints (schedule, price) and preferences (general geographic location, type of vacation, specific city preference); and second, allowing airlines to better market their reduced-fare, last-minute getaway packages in an attempt to minimize the cost of empty seats on flights.
## What it does
On the topic of impacts, North Star does two things. It helps consumers plan travel in a more intuitive and natural way than ever before, and it helps prevent unnecessary losses for airlines.
Airlines participate by allowing us access to their data or APIs (JetBlue and priceline supplied us with data sets here), and consumers benefit by using our site.
**TL;DR: People can take more vacations. Airlines lose less money.**
## How we built it
So many of these things were done concurrently, but if we were to break them up into individual actions, they'd be as follows.
1. Downloaded data sets from JetBlue and Priceline
1b. Parsed Wikipedia for accessory data (flight code --> airport name --> airport city --> city longitude/latitude)
2. Set up a server on Azure
3. Populated the SQL server
4. Integrated Facebook
5. Wrote the search algorithm / built the front end for user inputs (a sketch of the core query appears after this list)
5b. Designed all static images (radio button icons for flight properties, company logo, etc.)
6. Designed the map interface and planned how the data would influence dynamic changes in the map
6b. Used Beaker and Priceline data to show analytics on charts so that consumers can better see how current flight prices compare to historical values, or compare prices by airline
7. Built the 'infinite scroll' functionality and polished up the front end
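A bare-bones sketch of the kind of query behind step 5, illustrated with sqlite3 and an assumed `flights` table (the real database lived on Azure SQL, and preference scoring would re-rank these rows afterwards):

```python
# Sketch of the constraint part of the search: filter flights by origin,
# travel window, and budget, then rank cheapest-first.
import sqlite3

def find_flights(db_path: str, origin: str, earliest: str, latest: str, max_price: float):
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    rows = conn.execute(
        """
        SELECT dest_airport, dest_city, depart_date, price
        FROM flights
        WHERE origin_airport = ?
          AND depart_date BETWEEN ? AND ?
          AND price <= ?
        ORDER BY price ASC
        LIMIT 50
        """,
        (origin, earliest, latest, max_price),
    ).fetchall()
    conn.close()
    return [dict(r) for r in rows]

# e.g. find_flights("northstar.db", "BOS", "2016-02-12", "2016-02-15", 250.0)
```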
## Challenges we ran into
Not having an API to dynamically call information makes for a lot of wasted time when we consistently need to remake/repopulate the database.
We messed up with how we imported the dates and times of flights in our SQL database. This was huge, since times and dates are 2 out of the 3 most crucial pieces of information (after location).
Python really really **really** doesn't like unicode. Our parser broke 2-3 times when it encountered hyphenated airport names or accented letters.
## Accomplishments that we're proud of
**1. Scalability** (database layout, algorithm efficiency)
This is our team's third consecutive data hack submission at a hackathon, so concepts like efficiency and resource utilization are things that are prioritized whenever possible. We believe that after multiple projects that utilized 'bandaid' fixes under the 'Fuck it, ship it' mentality, we've finally started to do things the 'right' way, and in a way that is scalable.
**2. Innovation**
We were able to attack a seemingly stale methodology from a fresh perspective. Disruption is cool, and being the new kid on the block is always fun.
**3. We learned a lot.**
Beaker, Facebook API, Microsoft Azure, and tons of SQL troubleshooting. Despite the fact that we all have very distinct and unique skills with a bit of overlap, we all ended up doing far more work that was outside of our respective comfort zones than ever before for this multi-leveled hack.
This memorable exchange summarizes it pretty well:
Brian Chen: *"Please, no more SQL. I can't handle any more."*
Nick: *"You gon' learn today, you gon' learn real good about SQL, inside and out".*
## What we learned
Take care of your most important data points/structures first. Data is love, data is life - except when it takes a combined **6 hours** to repeatedly repopulate your database after improper planning. ALWAYS think your database through ahead of time!
Beaker is *awesome*. Nick loves working with data, and for him, Beaker is a gift from the heavens. He wishes it could work with MATLAB too, but he doesn't want to push his luck.
## What's next for North Star
**1. Increase the amount of data that we have!**
We were given finite data sets by sponsors, so by having live access to either APIs or more time to parse websites, the data would only further serve to increase the effectiveness of North Star.
**2. Continued Brand Differentiation**
Despite the fact that many other travel sites supply tons of functionality with their seemingly endless filters, it all comes across in a very cold and emotionless way. For most people, travelling is a fun (or at least emotional) experience, and we believe that the process leading up to it should be no different. If North Star were to be continued, it should embrace its different style of approach by getting user feedback and continuing to make things 'shiny' and 'more polished'. Although it's not frequently spoken about, user experience on a travel site could be the difference between a consumer packing their bags for an overnight getaway or sitting home on their couch to Netflix and chill.
**3. Innovative Data Processing/Empowerment**
By incorporating Priceline data (which contains data from a multitude of airlines) and combining it with Beaker, we have the power to offer a platform with a stronger price analysis tool than any other competitor.
**4. Integrate Consumer Behavioral Science**
Beyond the conscious preferences of the consumer, a multitude of market research papers reveal trends that differentiate spending patterns within the airline industry. By having access to more data about our users (geographic location, approximate income level by zip code, race, age), we could use these demographic values to alter the meta-rankings generated by our database search algorithm, showing results more in line with what the consumer will ultimately be interested in considering. One example is that millennials are 60% more likely than non-millennials to book a flight that is discounted. Another, comparing the same two groups, is that millennials are significantly more likely to pay a premium to upgrade to an airline seat with more legroom, while also disproportionately prioritizing budget-oriented airlines. We successfully integrated the Facebook API to easily get demographic information about our consumers; however, without significant time (or money) to access these market research studies, we didn't have much to work with during the hackathon to implement justified meta-rankings. We were mindful of this, though, and set up our database in such a way that we could easily implement these factors as multipliers/scalars later.
At first, we were motivated to use JetBlue's dataset to generate models to predict flight patterns in the future. When we found out the specific hack that we were being asked to implement, we realized that it would be an excellent opportunity to create a web application that serves as both a simple interface for consumers and an effective sales technique to fill distressed packages.
## What it does
**Dibs** displays a neatly-organized grid of potential destinations for the wanderlust traveler. Beautiful photographs taken from relevant Instagram posts that illustrate the scenery of the destination in question are overlaid with the name of the location. If the consumer is interested, they click and are provided with the bare minimum information: the ground covered by their flight, a photograph of the resort they will be staying at, and a stream of photos that display the activities that take place at this location. If they decide that the time and trip are right, they click "Book Now", which takes them to JetBlue's website to book their getaway.
## How I built it
Since this is a mockup, we carefully selected the most appealing images we could find to serve as front-page eye-catchers. These components are generated by **react.js**. The follow-up page uses the **Google Maps** API to illustrate the flight path on a map, while the streaming images are animated by **react.js** (these images are also curated; more on that in the next section). **Flask** was used as a web framework and **Heroku** was used to host the application.
## Challenges I ran into
The most significant challenges we had to deal with were API issues. For instance, Instagram's API is neither thorough nor reliable: posts older than 7 days cannot be fetched, and direct media retrieval is only possible through a pipeline of API calls. As a result, we were only able to retrieve posts that were geographically relevant: content was random and often inappropriate. After recognizing these issues, we attempted to use Twitter's API to retrieve the same information in an analogous way. This proved challenging by virtue of Twitter's strict usage limits: we were unable to retrieve the information required before getting locked out.
## Accomplishments that I'm proud of
The design is sleek, simple, and arguably quite effective. If marketed towards those who tend to take vacations on a whim, they are provided with only as much as they need to make their decision: pictures, encouragement, and scope of travel. Individuals who are likely to impulsively travel are not interested in the particulars, such as (but not limited to) price, specific activities/itinerary, and departure time.
## What I learned
We learned a lot about the cross-compatibility of APIs and how things don't always work out as well as you'd like them to. Less-experienced members of the team were given an opportunity to learn about **react.js**.
## What's next for Dibs
*Dibs* boasts an incredibly simple interface that could be implemented in a very short amount of time. If refined and developed by means of substantial research, Dibs could plausibly reduce the loss incurred by JetBlue when getaway packages are left unclaimed. | partial |
## Inspiration
Technology in schools today is given to those classrooms that can afford it. Our goal was to create a tablet that leveraged modern touch screen technology while keeping the cost below $20 so that it could be much cheaper to integrate with classrooms than other forms of tech like full laptops.
## What it does
EDT is a credit-card-sized tablet device with a couple of tailor-made apps to empower teachers and students in classrooms. Users can currently run four apps: a graphing calculator, a note sharing app, a flash cards app, and a pop-quiz clicker app.
* The graphing calculator allows the user to do basic arithmetic operations, and graph linear equations.
* The note sharing app allows students to take down colorful notes and then share them with their teacher (or vice-versa).
* The flash cards app allows students to make virtual flash cards and then practice with them as a studying technique.
* The clicker app allows teachers to run in-class pop quizzes where students use their tablets to submit answers.
EDT has two different device types: a "teacher" device that lets teachers do things such as set answers for pop-quizzes, and a "student" device that lets students share things only with their teachers and take quizzes in real-time.
## How we built it
We built EDT using a NodeMCU 1.0 ESP12E WiFi Chip and an ILI9341 Touch Screen. Most programming was done in the Arduino IDE using C++, while a small portion of the code (our backend) was written using Node.js.
## Challenges we ran into
We initially planned on using a mesh-networking scheme to let the devices communicate freely without a WiFi network, but found it nearly impossible to get a reliable connection going between two chips. To get around this, we ended up switching to a centralized server that hosts the apps' data.
We also ran into a lot of problems with Arduino strings, since their default string class isn't very good, and we had no OS-layer to prevent things like forgetting null-terminators or segfaults.
## Accomplishments that we're proud of
EDT devices can share entire notes and screens with each other, as well as hold fake pop quizzes with each other. They can also graph linear equations just like classic graphing calculators.
## What we learned
1. Get a better String class than the default Arduino one.
2. Don't be afraid of simpler solutions. We wanted to do Mesh Networking but were running into major problems about two-thirds of the way through the hack. By switching to a simple client-server architecture we achieved a massive ease of use that let us implement more features, and a lot more stability.
## What's next for EDT - A Lightweight Tablet for Education
More supported educational apps such as: a visual-programming tool that supports simple block-programming, a text editor, a messaging system, and a more-indepth UI for everything. | ## Inspiration:
We wanted to combine our passions of art and computer science to form a product that produces some benefit to the world.
## What it does:
Our app converts measured audio readings into images through integer arrays, as well as value ranges that are assigned specific colors and shapes to be displayed on the user's screen. Our program features two audio options, the first allows the user to speak, sing, or play an instrument into the sound sensor, and the second option allows the user to upload an audio file that will automatically be played for our sensor to detect. Our code also features theme options, which are different variations of the shape and color settings. Users can chose an art theme, such as abstract, modern, or impressionist, which will each produce different images for the same audio input.
## How we built it:
Our first task was using Arduino sound sensor to detect the voltages produced by an audio file. We began this process by applying Firmata onto our Arduino so that it could be controlled using python. Then we defined our port and analog pin 2 so that we could take the voltage reading and convert them into an array of decimals.
Once we obtained the decimal values from the Arduino we used python's Pygame module to program a visual display. We used the draw attribute to correlate the drawing of certain shapes and colours to certain voltages. Then we used a for loop to iterate through the length of the array so that an image would be drawn for each value that was recorded by the Arduino.
We also decided to build a figma-based prototype to present how our app would prompt the user for inputs and display the final output.
## Challenges we ran into:
We are all beginner programmers, and we ran into a lot of information roadblocks, where we weren't sure how to approach certain aspects of our program. Some of our challenges included figuring out how to work with Arduino in python, getting the sound sensor to work, as well as learning how to work with the pygame module. A big issue we ran into was that our code functioned but would produce similar images for different audio inputs, making the program appear to function but not achieve our initial goal of producing unique outputs for each audio input.
## Accomplishments that we're proud of
We're proud that we were able to produce an output from our code. We expected to run into a lot of error messages in our initial trials, but we were capable of tackling all the logic and syntax errors that appeared by researching and using our (limited) prior knowledge from class. We are also proud that we got the Arduino board functioning as none of us had experience working with the sound sensor. Another accomplishment of ours was our figma prototype, as we were able to build a professional and fully functioning prototype of our app with no prior experience working with figma.
## What we learned
We gained a lot of technical skills throughout the hackathon, as well as interpersonal skills. We learnt how to optimize our collaboration by incorporating everyone's skill sets and dividing up tasks, which allowed us to tackle the creative, technical and communicational aspects of this challenge in a timely manner.
## What's next for Voltify
Our current prototype is a combination of many components, such as the audio processing code, the visual output code, and the front end app design. The next step would to combine them and streamline their connections. Specifically, we would want to find a way for the two code processes to work simultaneously, outputting the developing image as the audio segment plays. In the future we would also want to make our product independent of the Arduino to improve accessibility, as we know we can achieve a similar product using mobile device microphones. We would also want to refine the image development process, giving the audio more control over the final art piece. We would also want to make the drawings more artistically appealing, which would require a lot of trial and error to see what systems work best together to produce an artistic output. The use of the pygame module limited the types of shapes we could use in our drawing, so we would also like to find a module that allows a wider range of shape and line options to produce more unique art pieces. | We wanted to change the education industry with new ways to share ideas through online social media. We have an online e-poem creator tool for kids to use and at the end to share their poem through SMS/text. We wanted to make it easier for teachers to help students learn from each other while simplifying the teachers task as well as encouraging teamwork in a class room environment.
Our program allows students to enter their desired poem line by line and submit it. After they are done submitting they can share their post with friends using text. Once you click the share button, our software will text a friend's phone number the exact same poem you just created.
We used JavaScript for taking in input,HTML5,and CSS for the design of the web pages. We modified Standard Library's API to send the users input through text to a specific number.
As beginners to programming our group ran into a bunch of challenges such adapting to the absence of a group member,learning new languages,and having only two programmers for our project.
We are very proud about learning new languages such as HTML,CSS,and JavaScript and implementing them to our program. We attended a multitude of workshops and tried to apply them to our project. Our logo was drawn in chalk and went through many iterations to seem more kid friendly. Our group was able to modify the Standard Library's text message API to make sharing poems possible.
Our company hopes to implement e-books and add new features to it by implementing text to speech and converting text into different languages and catering to the disabled. | winning |
# Doctors Within Borders
### A crowdsourcing app that improves first response time to emergencies by connecting city 911 dispatchers with certified civilians
## 1. The Challenge
In Toronto, ambulances get to the patient in 9 minutes 90% of the time. We all know
that the first few minutes after an emergency occurs are critical, and the difference of
just a few minutes could mean the difference between life and death.
Doctors Within Borders aims to get the closest responder within 5 minutes of
the patient to arrive on scene so as to give the patient the help needed earlier.
## 2. Main Features
### a. Web view: The Dispatcher
The dispatcher takes down information about an ongoing emergency from a 911 call, and dispatches a Doctor with the help of our dashboard.
### b. Mobile view: The Doctor
A Doctor is a certified individual who is registered with Doctors Within Borders. Each Doctor is identified by their unique code.
The Doctor can choose when they are on duty.
On-duty Doctors are notified whenever a new emergency occurs that is both within a reasonable distance and the Doctor's certified skill level.
## 3. The Technology
The app uses *Flask* to run a server, which communicates between the web app and the mobile app. The server supports an API which is used by the web and mobile app to get information on doctor positions, identify emergencies, and dispatch doctors. The web app was created in *Angular 2* with *Bootstrap 4*. The mobile app was created with *Ionic 3*.
Created by Asic Chen, Christine KC Cheng, Andrey Boris Khesin and Dmitry Ten. | ## Inspiration
We love to travel and have found that we typically have to use multiple sources in the process of creating an itinerary. With Path Planner, the user can get their whole tripped planned for them in one place.
## What it does
Build you an itinerary for any destination you desire
## How we built it
Using react and nextjs we developed the webpage and used chatGpt API to pull an itinerary with out given prompts
## Challenges we ran into
Displaying a Map to help choose a destination
## Accomplishments that we're proud of
Engineering an efficient prompt that allows us to get detailed itinerary information and display it in a user friendly fashion.
## What we learned
How to use leaflet with react and next.js and utilizing leaflet to input an interactive map that can be visualized on our web pages.
How to engineer precise prompts using open AI's playground.
## What's next for Path Planner
Integrating prices for hotels, cars, and flights as well as having a login page so that you can store your different itineraries and preferences. This would require creating a backend as well. | # **MedKnight**
#### Professional medical care in seconds, when the seconds matter
## Inspiration
Natural disasters often put emergency medical responders (EMTs, paramedics, combat medics, etc.) in positions where they must assume responsibilities beyond the scope of their day-to-day job. Inspired by this reality, we created MedKnight, an AR solution designed to empower first responders. By leveraging cutting-edge computer vision and AR technology, MedKnight bridges the gap in medical expertise, providing first responders with life-saving guidance when every second counts.
## What it does
MedKnight helps first responders perform critical, time-sensitive medical procedures on the scene by offering personalized, step-by-step assistance. The system ensures that even "out-of-scope" operations can be executed with greater confidence. MedKnight also integrates safety protocols to warn users if they deviate from the correct procedure and includes a streamlined dashboard that streams the responder’s field of view (FOV) to offsite medical professionals for additional support and oversight.
## How we built it
We built MedKnight using a combination of AR and AI technologies to create a seamless, real-time assistant:
* **Meta Quest 3**: Provides live video feed from the first responder’s FOV using a Meta SDK within Unity for an integrated environment.
* **OpenAI (GPT models)**: Handles real-time response generation, offering dynamic, contextual assistance throughout procedures.
* **Dall-E**: Generates visual references and instructions to guide first responders through complex tasks.
* **Deepgram**: Enables speech-to-text and text-to-speech conversion, creating an emotional and human-like interaction with the user during critical moments.
* **Fetch.ai**: Manages our system with LLM-based agents, facilitating task automation and improving system performance through iterative feedback.
* **Flask (Python)**: Manages the backend, connecting all systems with a custom-built API.
* **SingleStore**: Powers our database for efficient and scalable data storage.
## SingleStore
We used SingleStore as our database solution for efficient storage and retrieval of critical information. It allowed us to store chat logs between the user and the assistant, as well as performance logs that analyzed the user’s actions and determined whether they were about to deviate from the medical procedure. This data was then used to render the medical dashboard, providing real-time insights, and for internal API logic to ensure smooth interactions within our system.
## Fetch.ai
Fetch.ai provided the framework that powered the agents driving our entire system design. With Fetch.ai, we developed an agent capable of dynamically responding to any situation the user presented. Their technology allowed us to easily integrate robust endpoints and REST APIs for seamless server interaction. One of the most valuable aspects of Fetch.ai was its ability to let us create and test performance-driven agents. We built two types of agents: one that automatically followed the entire procedure and another that responded based on manual input from the user. The flexibility of Fetch.ai’s framework enabled us to continuously refine and improve our agents with ease.
## Deepgram
Deepgram gave us powerful, easy-to-use functionality for both text-to-speech and speech-to-text conversion. Their API was extremely user-friendly, and we were even able to integrate the speech-to-text feature directly into our Unity application. It was a smooth and efficient experience, allowing us to incorporate new, cutting-edge speech technologies that enhanced user interaction and made the process more intuitive.
## Challenges we ran into
One major challenge was the limitation on accessing AR video streams from Meta devices due to privacy restrictions. To work around this, we used an external phone camera attached to the headset to capture the field of view. We also encountered microphone rendering issues, where data could be picked up in sandbox modes but not in the actual Virtual Development Environment, leading us to scale back our Meta integration. Additionally, managing REST API endpoints within Fetch.ai posed difficulties that we overcame through testing, and configuring SingleStore's firewall settings was tricky but eventually resolved. Despite these obstacles, we showcased our solutions as proof of concept.
## Accomplishments that we're proud of
We’re proud of integrating multiple technologies into a cohesive solution that can genuinely assist first responders in life-or-death situations. Our use of cutting-edge AR, AI, and speech technologies allows MedKnight to provide real-time support while maintaining accuracy and safety. Successfully creating a prototype despite the hardware and API challenges was a significant achievement for the team, and was a grind till the last minute. We are also proud of developing an AR product as our team has never worked with AR/VR.
## What we learned
Throughout this project, we learned how to efficiently combine multiple AI and AR technologies into a single, scalable solution. We also gained valuable insights into handling privacy restrictions and hardware limitations. Additionally, we learned about the importance of testing and refining agent-based systems using Fetch.ai to create robust and responsive automation. Our greatest learning take away however was how to manage such a robust backend with a lot of internal API calls.
## What's next for MedKnight
Our next step is to expand MedKnight’s VR environment to include detailed 3D renderings of procedures, allowing users to actively visualize each step. We also plan to extend MedKnight’s capabilities to cover more medical applications and eventually explore other domains, such as cooking or automotive repair, where real-time procedural guidance can be similarly impactful. | winning |
## Inspiration
Our friend Holly had the idea for an app that makes it easier for groups of friends to meet up.
## What it does
Just enter you and your friends' locations, and what kind of place you want to meet up at (cafe, restaurant, or park), and see which places are most convenient for you. It also links to Venmo to make it easy to pay your friends!
## How we built it
Python, Flask, Google Maps Javascript API, Materialize
## Challenges we ran into
Everything from not having the right API key for Google Maps to merge conflicts in Git.
## Accomplishments that we're proud of
The entire thing!
## What we learned
Everyone on our team learned a lot! For many of us, it was our first hackathon, so we learned the basics of HTML, CSS, and Git version control. For those of us with front-end experience already, we learned how to manipulate the various Google Maps APIs, asynchronous JavaScript callbacks, and Materialize, a CSS framework.
## What's next for Centr
$$$$$$$$$ | ## Inspiration
As university students, we often struggled to manage our own finances. A lot of money was spent without even realizing it, and small costs started to add up quickly. Recognizing this challenge, we thought of a solution that could help students like us take control of their spending. Presenting WealthHack — an intelligent tool to track expenses and manage budgets effortlessly.
## What it does
WealthHack allows users to input their daily expenses, set a budget, and track their spending against this budget. The app provides insights into spending habits, helping users stay within their financial limits and make informed decisions. It also offers personalized financial tips and reminders to help users stay on track.
## How we built it
We built WealthHack using a combination of front-end and back-end technologies. The front-end is powered by React, providing a seamless and interactive user interface. The back-end is developed using Flask, which handles data processing and storage. We utilized local storage for quick development during the hackathon, ensuring user data is stored securely on their device.
## Challenges we ran into
* Dependencies: We often lost track of the dependencies required for the FE and BE servers. This led to us keeping our Discord pins and ReadMe.md up-to-date with the latest dependencies we use in our projects.
* HTTP Requests: While Axios is a powerful tool for GET and POST requests, we ran into request errors with OpenAI API, and our backend SQLite DB. With proper CORS configuration, we were able to get past this.
* State Changes: We have multiple components passing and relying on data across our application. There was unexpected loss in data when being passed around with event handlers and hooks. Careful planning and modularization of components helped to simplify this process.
## Accomplishments that we're proud of
We’re proud of creating a functional and user-friendly budget tracker within a short period of time. The financial tips adds an extra layer of usability that enhances the user experience. We're also proud of how the app can serve as a practical tool for students and new migrants in managing their finances.
## What we learned
* Documentation and initial planning is crucial for not losing track in the middle of the project. Once we sat down and designed a user flow for our application, it became easy to divide up tasks.
* Prompt Engineering is crucial to narrow down the result you are looking for. We learnt a lot about crafting an initial prompt for our chatbot.
## What's next for WealthHack
* Chrome Extension: Create a browser extension that assists users with online shopping by providing budget-friendly recommendations and financial advice. The extension will alert users if a purchase is within their budget or if it exceeds their financial limits.
* .csv Upload Support: Ability to upload existing excel sheets and formatted bank statements for parsing and input into the database. This will allow users to quickly seek financial support, without meticulous data entry. | ## Inspiration
We thought it would be nice if, for example, while working in the Computer Science building, you could send out a little post asking for help from people around you.
Also, it would also enable greater interconnectivity between people at an event without needing to subscribe to anything.
## What it does
Users create posts that are then attached to their location, complete with a picture and a description. Other people can then view it two ways.
**1)** On a map, with markers indicating the location of the post that can be tapped on for more detail.
**2)** As a live feed, with the details of all the posts that are in your current location.
The posts don't last long, however, and only posts within a certain radius are visible to you.
## How we built it
Individual pages were built using HTML, CSS and JS, which would then interact with a server built using Node.js and Express.js. The database, which used cockroachDB, was hosted with Amazon Web Services. We used PHP to upload images onto a separate private server. Finally, the app was packaged using Apache Cordova to be runnable on the phone. Heroku allowed us to upload the Node.js portion on the cloud.
## Challenges we ran into
Setting up and using CockroachDB was difficult because we were unfamiliar with it. We were also not used to using so many technologies at once. | losing |
## Inspiration
We started this project because we observed the emotional toll loneliness, grief, and dementia challenges can take on individuals and their caregivers, and we wanted to create a solution that could make a meaningful difference in their lives. Loneliness is especially tough on them, affecting both their mental and physical health. Our aim was simple: to offer them some companionship and comfort, a reminder of the good memories, and a way to feel less alone.
## What it does
Using the latest AI technology, we create lifelike AI clones of their closest loved ones, whether living or passed away, allowing seniors to engage in text or phone conversations whenever they wish. For dementia patients and their caregivers, our product goes a step further by enabling the upload of cherished memories to the AI clones. This not only preserves these memories but also allows dementia patients to relive them, reducing the need to constantly ask caregivers about their loved ones. Moreover, for those who may forget that a loved one has passed away, our technology helps provide continuous connection, eliminating the need to repeatedly go through the grief of loss. It's a lifeline to cherished memories and emotional well-being for our senior community.
## How we built it
We've developed a platform that transforms seniors' loved ones into AI companions accessible through web browsers, SMS, and voice calls. This technology empowers caregivers to customize AI companions for the senior they are in charge of, defining the companion personality and backstory with real data to foster the most authentic interaction, resulting in more engaging and personalized conversations. These interactions are stored indefinitely, enabling companions to learn and enhance their realism with each use.
We used the following tools. . .
Auth --> Clerk + Firebase
App logic --> Next.js
VectorDB --> Pinecone
LLM orchestration --> Langchain.js
Text model --> OpenAI
Text streaming --> ai sdk
Conversation history --> Upstash
Deployment --> Ngrok
Text and call with companion --> Twilio
Landing Page --> HTML, CSS, Javascript
Caregiver Frontend --> Angular
Voice Clone --> ElevenLabs
## Challenges we ran into
One of the significant challenges we encountered was in building and integrating two distinct front ends. We developed a Next.js front end tailored for senior citizens' ease of use, while a more administrative Angular front end was created for caregivers. Additionally, we designed a landing page using a combination of HTML, CSS, and JavaScript. The varying languages and technologies posed a challenge in ensuring seamless connectivity and cohesive operation between these different components. We faced difficulties in bridging the gap and achieving the level of integration we aimed for, but through collaborative efforts and problem-solving, we managed to create a harmonious user experience.
## Accomplishments that we're proud of
We all tackled frameworks we weren't so great at, like Next.js. And, together, we built a super cool project with tons of real-world uses. It's a big deal for us because it's not just tech, it's helping seniors and caregivers in many ways. We've shown how we can learn and innovate, and we're stoked about it!
## What we learned
We learned how to construct vector databases, which was an essential part of our project. Additionally, we discovered the intricacies of connecting the front-end and back-end components through APIs.
## What's next for NavAlone
We would like to improve the AI clone phone call feature. We recognize that older users often prefer phone calls over texting, and we want to make this experience as realistic, fast, and user-friendly as possible. | ## Inspiration
We want to fix healthcare! 48% of physicians in the US are burned out, which is a driver for higher rates of medical error, lower patient satisfaction, higher rates of depression and suicide. Three graduate students at Stanford have been applying design thinking to the burnout epidemic. A CS grad from USC joined us for TreeHacks!
We conducted 300 hours of interviews, learned iteratively using low-fidelity prototypes, to discover,
i) There was no “check engine” light that went off warning individuals to “re-balance”
ii) Current wellness services weren’t designed for individuals working 80+ hour weeks
iii) Employers will pay a premium to prevent burnout
And Code Coral was born.
## What it does
Our platform helps highly-trained individuals and teams working in stressful environments proactively manage their burnout. The platform captures your phones’ digital phenotype to monitor the key predictors of burnout using machine learning. With timely, bite-sized reminders we reinforce individuals’ atomic wellness habits and provide personalized services from laundry to life-coaching.
Check out more information about our project goals: <https://youtu.be/zjV3KeNv-ok>
## How we built it
We built the backend using a combination of API's to Fitbit/Googlemaps/Apple Health/Beiwe; Built a machine learning algorithm and relied on an App Builder for the front end.
## Challenges we ran into
API's not working the way we want. Collecting and aggregating "tagged" data for our machine learning algorithm. Trying to figure out which features are the most relevant!
## Accomplishments that we're proud of
We had figured out a unique solution to addressing burnout but hadn't written any lines of code yet! We are really proud to have gotten this project off the ground!
i) Setting up a system to collect digital phenotyping features from a smart phone ii) Building machine learning experiments to hypothesis test going from our digital phenotype to metrics of burnout iii) We figured out how to detect anomalies using an individual's baseline data on driving, walking and time at home using the Microsoft Azure platform iv) Build a working front end with actual data!
Note - login information to codecoral.net: username - test password - testtest
## What we learned
We are learning how to set up AWS, a functioning back end, building supervised learning models, integrating data from many source to give new insights. We also flexed our web development skills.
## What's next for Coral Board
We would like to connect the backend data and validating our platform with real data! | ## Inspiration
1 in 2 Canadians will personally experience a mental health issue by age 40, with minority communities at a greater risk. As the mental health epidemic surges and support at its capacity, we sought to build something to connect trained volunteer companions with people in distress in several ways for convenience.
## What it does
Vulnerable individuals are able to call or text any available trained volunteers during a crisis. If needed, they are also able to schedule an in-person meet-up for additional assistance. A 24/7 chatbot is also available to assist through appropriate conversation. You are able to do this anonymously, anywhere, on any device to increase accessibility and comfort.
## How I built it
Using Figma, we designed the front end and exported the frame into Reacts using Acovode for back end development.
## Challenges I ran into
Setting up the firebase to connect to the front end react app.
## Accomplishments that I'm proud of
Proud of the final look of the app/site with its clean, minimalistic design.
## What I learned
The need for mental health accessibility is essential but unmet still with all the recent efforts. Using Figma, firebase and trying out many open-source platforms to build apps.
## What's next for HearMeOut
We hope to increase chatbot’s support and teach it to diagnose mental disorders using publicly accessible data. We also hope to develop a modeled approach with specific guidelines and rules in a variety of languages. | partial |
## 💡 Inspiration 💡
The idea for Foodbox was inspired by the growing problem of food waste in our society. According to the United Nations, one-third of all food produced globally is wasted, which not only has a significant environmental impact but also a social one, as so many people are struggling with food insecurity. We realized that grocery stores often have a surplus of certain products that don't sell well, and we wanted to find a way to help them move this inventory and avoid waste. At the same time, we saw an opportunity for consumers to discover new products at a discounted price.
Another inspiration for Foodbox was the increasing awareness of the environmental impact of our food choices. Food production is a major contributor to greenhouse gas emissions and deforestation, and by reducing food waste, we can also reduce our environmental footprint. We believe that Foodbox is a small but meaningful step in the right direction, and we hope that it will inspire others to think creatively about how they can also make a positive impact on the planet and their communities.
## 🛠️ What it does 🛠️
Foodbox is a platform that helps to reduce food waste and promote sustainability. We partner with local grocery stores to offer a 20% discount on their least popular items and package them in a "food mystery box" for our customers to purchase. This way, grocery stores can move their inventory and avoid waste, while customers can discover new products at discounted prices. The items in the box are drawn randomly, so customers can try new things and experiment with different products. Our platform also has a rating system where customers can rate the products they received and this will help the store to know which products are popular among customers. With Foodbox, we hope to create a win-win situation for both the stores and customers, while also making a positive impact on the environment.
## 🏗️ How we built it 🏗️
Our team built the Foodbox web application using Next.js, a framework for building server-rendered React applications. We used Tailwind CSS for styling and Supabase PostgreSQL as our database management system. The final application was deployed using Vercel, a platform for hosting and deploying web applications. With these technologies, we were able to create a fast, responsive, and visually appealing application that is easy to use and navigate. The combination of these technologies allowed us to quickly and efficiently build the application, allowing us to focus on the core functionality of the project.
## 🧗♂️ Challenges we ran into 🧗♂️
Upon meeting initially on the morning of the hackathon, we quickly put together a really good digital receipt app idea! But after doing some research and realizing our great new original idea wasn’t so original after all, we ended up thinking of a food loot box idea in the line for lunch! The best ideas really do come from the strangest places.
## 🏆 Accomplishments that we're proud of 🏆
Kelly: I’d had experience using Adobe XD for mockups at my job as a part-time web designer, but I’d never really used Figma to do the same thing! I’m proud that I managed to learn how to use new software proficiently in one night which allowed me to easily cooperate with my front-end teammates!
Abraham: I’m familiar with React and front-end development in general but I have not used many of the frameworks we used in this project. I was excited to learn and implement Next.js and Tailwind CSS and I am proud of the pages I’ve created using them. I’m also proud of our adaptability to change when we had to accept that our old idea was no longer viable and come up with a new one, throwing away all our previous planning.
Amir: I'm really proud of the idea my team came up with for Foodbox and how we made it happen. I also had a blast playing around with Framer motion and Tailwind, it helped me level up my CSS and React animation skills. It was a great hackathon experience!
Calder: I got to work with 2 new technologies: FastAPI and Beautifulsoup. I learned a ton about parsing HTML and creating an API! Expanding my skills on the server side and working with my awesome teammates made this hackathon a special one.
## 📕 What we learned 📕
During this hackathon, my team and I faced a lot of challenges, but we also had a lot of fun. We learned that it's important to take breaks and have fun to stay motivated and refreshed. We also made some awesome connections with sponsors and other teams. We were able to learn from other participants and sponsors about their experiences and ideas. This helped us to have different perspectives on the problem we were trying to solve and helped us to come up with more creative solutions. And let me tell you, we totally crushed the cup-stacking competition thanks to the strategy we learned. Overall, it was a great learning experience for all of us. We walked away with new skills, new connections, and a sense of accomplishment.
## ⏭️ What's next for Foodbox ⏭️
The next step for Foodbox is to continue to expand our partnerships with local grocery stores. By working with more stores, we can offer a wider variety of products to our customers and help even more stores avoid food waste. We also plan to continue to improve the customer experience by gathering feedback and using it to make our platform more user-friendly. We also plan to expand our efforts to educate our customers about food waste and the environmental impact of their food choices. Finally, we will be looking at other ways to leverage technology, such as artificial intelligence, to further personalize the experience for our customers and make even better recommendations based on their preferences. | ## Inspiration
Noise sensitivity is common in autism, but it can also affect individuals without autism. Research shows that 50 to 70 percent of people with autism experience hypersensitivity to everyday sounds. This inspired us to create a wearable device to help individuals with heightened sensory sensitivities manage noise pollution. Our goal is to provide a dynamic solution that adapts to changing sound environments, offering a more comfortable and controlled auditory experience.
## What it does
SoundShield is a wearable device that adapts to noisy environments by automatically adjusting calming background audio and applying noise reduction. It helps individuals with sensory sensitivities block out overwhelming sounds while keeping them connected to their surroundings. The device also alerts users if someone is behind them, enhancing both awareness and comfort. It filters out unwanted noise using real-time audio processing and only plays calming music if the noise level becomes too high. If it detects a person speaking or if the noise is low enough to be important, such as human speech, it doesn't apply filters or background music.
## How we built it
We developed SoundShield using a combination of real-time audio processing and computer vision, integrated with a Raspberry Pi Zero, a headphone, and a camera. The system continuously monitors ambient sound levels and dynamically adjusts music accordingly. It filters noise based on amplitude and frequency, applying noise reduction techniques such as Spectral Subtraction, Dynamic Range Compression to ensure users only hear filtered audio. The system plays calming background music when noise levels become overwhelming. If the detected noise is low, such as human speech, it leaves the sound unfiltered. Additionally, if a person is detected behind the user and the sound amplitude is high, the system alerts the user, ensuring they are aware of their surroundings.
## Challenges we ran into
Processing audio in real-time while distinguishing sounds based on frequency was a significant challenge, especially with the limited computing power of the Raspberry Pi Zero. Additionally, building the hardware and integrating it with the software posed difficulties, especially when ensuring smooth, real-time performance across audio and computer vision tasks.
## Accomplishments that we're proud of
We successfully integrated computer vision, audio processing, and hardware components into a functional prototype. Our device provides a real-world solution, offering a personalized and seamless sensory experience for individuals with heightened sensitivities. We are especially proud of how the system dynamically adapts to both auditory and visual stimuli.
## What we learned
We learned about the complexities of real-time audio processing and how difficult it can be to distinguish between different sounds based on frequency. We also gained valuable experience in integrating audio processing with computer vision on a resource-constrained device like the Raspberry Pi Zero. Most importantly, we deepened our understanding of the sensory challenges faced by individuals with autism and how technology can be tailored to assist them.
## What's next for SoundSheild
We plan to add a heart rate sensor to detect when the user is becoming stressed, which would increase the noise reduction score and automatically play calming music. Additionally, we want to improve the system's processing power and enhance its ability to distinguish between human speech and other noises. We're also researching specific frequencies that can help differentiate between meaningful sounds, like human speech, and unwanted noise to further refine the user experience. | ## Inspiration
Food is a basic human need. As someone who often finds themselves wandering the aisles of Target, I know firsthand how easy it is to get lost among the countless products and displays. The experience can quickly become overwhelming, leading to forgotten items and a less-than-efficient shopping trip. This project was born from the desire to transform that chaos into a seamless shopping experience. We aim to create a tool that not only helps users stay organized with their grocery lists but also guides them through the store in a way that makes shopping enjoyable and stress-free.
## What it does
**TAShopping** is a smart grocery list app that records your grocery list in an intuitive user interface and generates a personalized route in **(almost)** any Target location across the United States. Users can easily add items to their lists, and the app will optimize their shopping journey by mapping out the most efficient path through the store.
## How we built it
* **Data Aggregation:** We utilized `Selenium` for web scraping, gathering product information and store layouts from Target's website.
* **Object Storage:** `Amazon S3` was used for storing images and other static files related to the products.
* **User Data Storage:** User preferences and grocery lists are securely stored using `Google Firebase`.
* **Backend Compute:** The backend is powered by `AWS Lambda`, allowing for serverless computing that scales with demand.
* **Data Categorization:** User items are classified with `Google Gemini`
* **API:** `AWS API Endpoint` provides a reliable way to interact with the backend services and handle requests from the front end.
* **Webapp:** The web application is developed using `Reflex`, providing a responsive and modern interface for users.
* **iPhone App:** The iPhone application is built with `Swift`, ensuring a seamless experience for iOS users.
## Challenges we ran into
* **Data Aggregation:** Encountered challenges with the rigidity of `Selenium` for scraping dynamic content and navigating web page structures.
* **Object Storage:** N/A (No significant issues reported)
* **User Data Storage:** N/A (No significant issues reported)
* **Backend Compute:** Faced long compute times; resolved this by breaking the Lambda function into smaller, more manageable pieces for quicker processing.
* **Backend Compute:** Dockerized various builds to ensure compatibility with the AWS Linux environment and streamline deployment.
* **API:** Managed the complexities of dealing with and securing credentials to ensure safe API access.
* **Webapp:** Struggled with a lack of documentation for `Reflex`, along with complicated Python dependencies that slowed development.
* **iPhone App:** N/A (No significant issues reported)
## Accomplishments that we're proud of
* Successfully delivered a finished product with a relatively good user experience that has received positive feedback.
* Achieved support for hundreds of Target stores across the United States, enabling a wide range of users to benefit from the app.
## What we learned
>
> We learned a lot about:
>
>
> * **Gemini:** Gained insights into effective data aggregation and user interface design.
> * **AWS:** Improved our understanding of cloud computing and serverless architecture with AWS Lambda.
> * **Docker:** Mastered the process of containerization for development and deployment, ensuring consistency across environments.
> * **Reflex:** Overcame challenges related to the framework, gaining hands-on experience with Python web development.
> * **Firebase:** Understood user authentication and real-time database capabilities through Google Firebase.
> * **User Experience (UX) Design:** Emphasized the importance of intuitive navigation and clear presentation of information in app design.
> * **Version Control:** Enhanced our collaboration skills and code management practices using Git.
>
>
>
## What's next for TAShopping
>
> There are many exciting features on the horizon, including:
>
>
> * **Google SSO for web app user data:** Implementing Single Sign-On functionality to simplify user authentication.
> * **Better UX for grocery list manipulation:** Improving the user interface for adding, removing, and organizing items on grocery lists.
> * **More stores:** Expanding support to additional retailers, including Walmart and Home Depot, to broaden our user base and shopping capabilities.
>
>
> | winning |
## The Idea Behind the App
When you're grabbing dinner at a restaurant, how do you choose which meal to order?
Perhaps you look at the price, calorie count, or, if you're trying to be environmentally-conscious, the carbon footprint of your meal.
COyou is an Android app which reveals the carbon footprint of the meals on a restaurant menu, so that you can make the most informed choice about the food you eat.
### Why is this important?
Food production is responsible for a quarter of all greenhouse gas emissions which contribute to global warming. In particular, meat and other animal products are responsible for more than half of food-related greenhouse gas emissions. However, even among fruits and vegetables, the environmental impact of different foods varies considerably. Therefore, being able to determine the carbon footprint of what you eat can make a difference to the planet.
## How COyou Works
Using pandas, we cleaned our [carbon emissions per ingredients data](https://link.springer.com/article/10.1007/s11367-019-01597-8#Sec24) and mapped it by ingredient-emission.
We used Firebase to enable OCR, so that the app recognizes the text of the menu items. Using Google Cloud's Natural Language API, we broke each menu item name down into entities. For example:
```
1. scrambled eggs -> "eggs",
2. yogurt & granola -> "yogurt", "granola"
```
If an entry is found (for simpler menu items such as "apple juice"), the CO2 emission is immediately returned. Otherwise, we make an API call to USDA Food Central's database, which returns a list of ingredients for the menu item. Then, we map the ingredients to its CO2 emissions and sum the individual CO2 emissions of each ingredient in the dish. Finally, we display a list of the ingredients and the total CO2 emissions of each dish.
## The Creation of COyou
We used Android Studio, Google Firebase, Google NLP API, and an enthusiasm for food and restaurants in the creation of COyou.
## Sources and Further Reading
1. [Determining the climate impact of food for use in a climate tax—design of a consistent and transparent model](https://link.springer.com/article/10.1007/s11367-019-01597-8#Sec24)
2. [Carbon Footprint Factsheet](http://css.umich.edu/factsheets/carbon-footprint-factsheet)
3. [Climate change food calculator: What's your diet's carbon footprint?](https://www.bbc.com/news/science-environment-4645971)
4. [USDA Food Data Central API](https://fdc.nal.usda.gov/index.htm) | ## Inspiration
When it comes to finding solutions to global issues, we often feel helpless: making us feel as if our small impact will not help the bigger picture. Climate change is a critical concern of our age; however, the extent of this matter often reaches beyond what one person can do....or so we think!
Inspired by the feeling of "not much we can do", we created *eatco*. *Eatco* allows the user to gain live updates and learn how their usage of the platform helps fight climate change. This allows us to not only present users with a medium to make an impact but also helps spread information about how mother nature can heal.
## What it does
While *eatco* is centered around providing an eco-friendly alternative lifestyle, we narrowed our approach to something everyone loves and can apt to; food! Other than the plenty of health benefits of adopting a vegetarian diet — such as lowering cholesterol intake and protecting against cardiovascular diseases — having a meatless diet also allows you to reduce greenhouse gas emissions which contribute to 60% of our climate crisis. Providing users with a vegetarian (or vegan!) alternative to their favourite foods, *eatco* aims to use small wins to create a big impact on the issue of global warming. Moreover, with an option to connect their *eatco* account with Spotify we engage our users and make them love the cooking process even more by using their personal song choices, mixed with the flavours of our recipe, to create a personalized playlist for every recipe.
## How we built it
For the front-end component of the website, we created our web-app pages in React and used HTML5 with CSS3 to style the site. There are three main pages the site routes to: the main app, and the login and register page. The login pages utilized a minimalist aesthetic with a CSS style sheet integrated into an HTML file while the recipe pages used React for the database. Because we wanted to keep the user experience cohesive and reduce the delay with rendering different pages through the backend, the main app — recipe searching and viewing — occurs on one page. We also wanted to reduce the wait time for fetching search results so rather than rendering a new page and searching again for the same query we use React to hide and render the appropriate components. We built the backend using the Flask framework. The required functionalities were implemented using specific libraries in python as well as certain APIs. For example, our web search API utilized the googlesearch and beautifulsoup4 libraries to access search results for vegetarian alternatives and return relevant data using web scraping. We also made use of Spotify Web API to access metadata about the user’s favourite artists and tracks to generate a personalized playlist based on the recipe being made. Lastly, we used a mongoDB database to store and access user-specific information such as their username, trees saved, recipes viewed, etc. We made multiple GET and POST requests to update the user’s info, i.e. saved recipes and recipes viewed, as well as making use of our web scraping API that retrieves recipe search results using the recipe query users submit.
## Challenges we ran into
In terms of the front-end, we should have considered implementing Routing earlier because when it came to doing so afterward, it would be too complicated to split up the main app page into different routes; this however ended up working out alright as we decided to keep the main page on one main component. Moreover, integrating animation transitions with React was something we hadn’t done and if we had more time we would’ve liked to add it in. Finally, only one of us working on the front-end was familiar with React so balancing what was familiar (HTML) and being able to integrate it into the React workflow took some time. Implementing the backend, particularly the spotify playlist feature, was quite tedious since some aspects of the spotify web API were not as well explained in online resources and hence, we had to rely solely on documentation. Furthermore, having web scraping and APIs in our project meant that we had to parse a lot of dictionaries and lists, making sure that all our keys were exactly correct. Additionally, since dictionaries in Python can have single quotes, when converting these to JSONs we had many issues with not having them be double quotes. The JSONs for the recipes also often had quotation marks in the title, so we had to carefully replace these before the recipes were themselves returned. Later, we also ran into issues with rate limiting which made it difficult to consistently test our application as it would send too many requests in a small period of time. As a result, we had to increase the pause interval between requests when testing which made it a slow and time consuming process. Integrating the Spotify API calls on the backend with the frontend proved quite difficult. This involved making sure that the authentication and redirects were done properly. We first planned to do this with a popup that called back to the original recipe page, but with the enormous amount of complexity of this task, we switched to have the playlist open in a separate page.
## Accomplishments that we're proud of
Besides our main idea of allowing users to create a better carbon footprint for themselves, we are proud of accomplishing our Spotify integration. Using the Spotify API and metadata was something none of the team had worked with before and we're glad we learned the new skill because it adds great character to the site. We all love music and being able to use metadata for personalized playlists satisfied our inner musical geek and the integration turned out great so we're really happy with the feature. Along with our vast recipe database this far, we are also proud of our integration! Creating a full-stack database application can be tough and putting together all of our different parts was quite hard, especially as it's something we have limited experience with; hence, we're really proud of our service layer for that. Finally, this was the first time our front-end developers used React for a hackathon; hence, using it in a time and resource constraint environment for the first time and managing to do it as well as we did is also one of our greatest accomplishments.
## What we learned
This hackathon was a great learning experience for all of us because everyone delved into a tool that they'd never used before! As a group, one of the main things we learned was the importance of a good git workflow because it allows all team members to have a medium to collaborate efficiently by combing individual parts. Moreover, we also learned about Spotify embedding which not only gave *eatco* a great feature but also provided us with exposure to metadata and API tools. Moreover, we also learned more about creating a component hierarchy and routing on the front end. Another new tool that we used in the back-end was learning how to perform database operations on a cloud-based MongoDB Atlas database from a python script using the pymongo API. This allowed us to complete our recipe database which was the biggest functionality in *eatco*.
## What's next for Eatco
Our team is proud of what *eatco* stands for and we want to continue this project beyond the scope of this hackathon and join the fight against climate change. We truly believe in this cause and feel eatco has the power to bring meaningful change; thus, we plan to improve the site further and release it as a web platform and a mobile application. Before making *eatco* available for users publically we want to add more functionality and further improve the database and present the user with a more accurate update of their carbon footprint. In addition to making our recipe database bigger, we also want to focus on enhancing the front-end for a better user experience. Furthermore, we also hope to include features such as connecting to maps (if the user doesn't have a certain ingredient, they will be directed to the nearest facility where that item can be found), and better use of the Spotify metadata to generate even better playlists. Lastly, we also want to add a saved water feature to also contribute into the global water crisis because eating green also helps cut back on wasteful water consumption! We firmly believe that *eatco* can go beyond the range of the last 36 hours and make impactful change on our planet; hence, we want to share with the world how global issues don't always need huge corporate or public support to be solved, but one person can also make a difference. | ## Inspiration
We were inspired by the 63%, of edible food that Canadians throw away. The majority (45%) of this food (by weight) is fresh produce. We determined a large contributing factor to produce waste as the expiration dates— which can often be difficult to keep track of.
Food waste directly connects to global warming, as a large amount of resources are used to grow food. Thus, this excess waste translates directly into C02 emissions. We recognize that through simple actions we can all contribute to the preservation of our planet.
Our team found a very simple, viable solution to this problem as a convenient app that would allow users to keep track of all their fresh produce and expiration dates, simply by taking a photo.
## What it does
Our application, FOEX, provides an estimated time to expiry for produce items scanned in by the user. FOEX manages all expiry dates, allowing users to keep track of their food items. The app will automatically update the days left until an item expires. We hope to further develop this application by adding additional functionality that processes printed expiration dates on packaged items as well as enabling users to sort and arrange their inventory manually.
## How we built it
We first designed the application with a wireframe on Figma. Utilizing the pre-created format of Android Studio, we coded the back-end of our project in Java and the frontend in XML. Within the backend we used okHTTP to access the Microsoft Azure Computer Vision API. Throughout the whole process we utilized Git and Github to streamline the build process.
## Challenges we ran into
The largest challenge that we encountered was the implementation of the Computer Vision API into the application. Initially, we had utilized an external library that simplified calls to the API; however, after hours of debugging, we found it to be a source of many compilation and configuration errors. This required us to rethink our implementation and resort to OkHTTP to help us implement the API. In addition, the technical issues we experienced when configuring our API and APK resulted in smaller challenges such as Android Studio’s inability to handle lambda functions. Overall, we found it challenging to integrate all of the files together.
## Accomplishments that we're proud of
We are very proud of the dedication and hardwork put in by all of our members. With this being our second Hackathon, we found ourselves constantly learning new skills and experimenting with new tools throughout the entire process. We were able to successfully implement the Microsoft Azure API and strengthened our knowledge in Android Studio and Figma. Avoided mistakes from our first Hackathon, we applied knowledge to help organize ourselves effectively, ensuring that we considered all potential challenges while remaining grounded in our main idea without worrying about minor features. Despite facing some challenges, we persevered and had fun doing it!
## What we learned
Our team learned a lot about utilizing version control, we had some organization differences and difficulties. Throughout the process we became better at communicating which member was pushing specific changes, in order to avoid having the app crash upon compilation (of partially completed pieces of code).
In addition, our team became more fluid at debugging Android Studio, and coming up with unique, creative approaches to solving a problem. Often we would encounter a problem which there were multiple ways to fix, the first solution not always being the best.
## What's next for FOEX
We will further implement our algorithms to include a greater variety of produce and add support for scanning barcodes and expiry dates for packaged food items. We also plan to add notification alerts and potential recipes for items that will expire soon. FOEX will later be released on IOS in order to on expanding the app’s usage and impact on society. | winning |
## Inspiration
With caffeine being a staple in almost every student's lifestyle, many are unaware of how much caffeine their drinks actually contain. Although a small dose of caffeine increases one's ability to concentrate, higher doses may be detrimental to physical and mental health. This inspired us to create The Perfect Blend, a platform that allows users to manage their daily caffeine intake, with the aim of preventing students from spiralling into a coffee addiction.
## What it does
The Perfect Blend tracks caffeine intake and calculates how long it takes to leave the body, ensuring that users do not consume more than the daily recommended amount of caffeine. Users can add drinks from the given options and the tracker will update accordingly. Moreover, The Perfect Blend educates users on how the quantity of caffeine affects their bodies with verified data and informative tier lists.
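As a rough illustration of the kind of calculation involved, here is a Python sketch (not the app's actual Velo/Wix code; the ~5-hour half-life is an assumed average and real clearance varies by person):

```python
# Illustrative sketch: estimate how much caffeine remains in the body over time
# using simple exponential decay. The half-life value is an assumption.
import math

HALF_LIFE_HOURS = 5.0  # assumed average caffeine half-life

def caffeine_remaining(initial_mg: float, hours_elapsed: float) -> float:
    """Milligrams of caffeine still in the body after `hours_elapsed`."""
    return initial_mg * 0.5 ** (hours_elapsed / HALF_LIFE_HOURS)

def hours_until_below(initial_mg: float, threshold_mg: float = 10.0) -> float:
    """Hours until the dose decays below a negligible threshold."""
    if initial_mg <= threshold_mg:
        return 0.0
    return HALF_LIFE_HOURS * math.log2(initial_mg / threshold_mg)

# Example: a 140 mg coffee consumed 6 hours ago
print(round(caffeine_remaining(140, 6), 1))  # ~60.9 mg still active
print(round(hours_until_below(140), 1))      # ~19.0 h to drop under 10 mg
```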
## How we built it
We used Figma to lay out the design of our website, then implemented it into Velo by Wix. The back-end of the website is coded using JavaScript. Our domain name was registered with domain.com.
## Challenges we ran into
This was our team's first hackathon, so we decided to use Velo by Wix to speed up the website-building process; however, Wix only allows one person to edit at a time, which significantly decreased the efficiency of developing the website. In addition, Wix has building blocks and set templates, making customization more difficult. Our team had no previous experience with JavaScript, which made the process more challenging.
## Accomplishments that we're proud of
This hackathon allowed us to ameliorate our web design abilities and further improve our coding skills. As first time hackers, we are extremely proud of our final product. We developed a functioning website from scratch in 36 hours!
## What we learned
We learned how to lay out and use colours and shapes on Figma. This helped us a lot while designing our website. We discovered several convenient functionalities that Velo by Wix provides, which strengthened the final product. We learned how to customize the back-end development with a new coding language, JavaScript.
## What's next for The Perfect Blend
Our team plans to add many more coffee types and caffeinated drinks, ranging from teas to energy drinks. We would also like to implement more features, such as saving the tracker progress to compare days and producing weekly charts. | ## Inspiration
The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand.
## What it does
Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked.
## How we built it
To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process.
## Challenges we ran into
One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process.
## Accomplishments that we're proud of
Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Although we formed the team later than other groups, we were able to communicate effectively and work together to bring our project to fruition. We are proud of the end result: a polished and functional AR scavenger hunt experience that met our objectives.
## What we learned
We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application.
## What's next for Winnur
Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur. | ## Inspiration
We needed a system to operate our Hackspace utilities shop, where we sell materials, snacks, and even the use of some restricted equipment. It needed to be instantaneous, allow for a small amount of debt in case of emergency and work using our College ID cards to keep track of who is purchasing what.
## What it does
Each student has a set credit assigned to them that they may spend on our shop's products. To start a transaction they must tap their college ID on the RFID scanner and, after checking their current credit, they can scan the barcode of the product they want to buy. If the transaction would leave them with less than £5 of debt, they may scan more items or proceed to checkout. Their credit can be topped up through our College Union website, which will in turn update our database with their new credit amount.
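As a rough sketch of that checkout rule in Python (illustrative only; `get_credit` and `set_credit` stand in for the real Firebase lookups keyed by card ID):

```python
# Minimal sketch of the checkout rule described above.
DEBT_LIMIT_GBP = -5.00  # students may go at most 5 pounds into debt

def can_purchase(current_credit: float, basket_total: float) -> bool:
    """True if buying the basket keeps the student within the debt limit."""
    return current_credit - basket_total >= DEBT_LIMIT_GBP

def checkout(student_id: str, basket_total: float, get_credit, set_credit) -> bool:
    credit = get_credit(student_id)      # e.g. a Firebase lookup keyed by the ID card
    if not can_purchase(credit, basket_total):
        return False                     # refuse: would exceed the debt allowance
    set_credit(student_id, credit - basket_total)
    return True
```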
## How we built it
The interface is built from Bootstrap-generated webpages (HTML) that we control with Python, and these are hosted locally.
We hosted all of our databases on Firebase, accessing them through the Firebase API with a Python wrapper.
## Challenges we ran into
Connecting the database with the GUI without the python program crashing took the majority of our debugging time, but getting it to work in the end was one of the most incredible moments of the weekend.
## Accomplishments that we're proud of
We've never made a webapp before, and we have been pleasantly surprised with how well it turned out: it was clean, easy to use and modular, making it easy to update and develop.
Instead of using technology we wouldn't have available back in London and doing a project with no real future outlook or development, we chose to tackle a problem which we actually needed to solve and whose solution we will be able to use many times in the future. This means that completing the Hackathon will have a real impact on the everyday transactions that happen in our lab.
We're also very proud of developing the database in a system which we knew nothing about at the beginning of this event: firebase. It was challenging, but the final result was as great as we expected.
## What we learned
During the Hackathon, we improved our coding skills, teamwork, database management, GUI development and many other skills which we will also be able to use in our future projects and careers.
## What's next for ICRS Checkout
Because we have concentrated on a specific range of useful tasks for this Hackathon, we would love to be able to develop a more general system that can be used by different societies, universities and even schools; operated under the same principles but with a wider range of card identification possibilities, products and debt allowance. | winning |
## Inspiration
At Hack the North 11, dreaming big is a key theme. I wanted to play off of this theme, in combination with my growing interest in AI, to create a chill platform to help people realize and achieve their best *literal* dreams!
I enjoy listening to ambient music to wind down before I go to sleep, and I'm sure I'm not the only one who does, so I wanted to create something where people could, with the help of AI, make their own custom ambient tracks to listen to before or as they're falling asleep, overall enriching their pre-sleep routine and (hopefully) having better dreams.
## What it does
The user, when making a custom track, enters up to three keywords (or "ambiances") that are then provided to the AI music generating platform soundraw.io to create ambient music that closely aligns with their desired sound. There are also pre-made "templates" of ambient tracks that a user can easily click play and listen to at their leisure.
## How we built it
AI ambient tracks were generated using [soundraw.io](https://soundraw.io/). The web app interface was built using the Vue.js framework, with additional JavaScript, HTML, and CSS code as needed.
## Challenges we ran into
In all honesty, this project was mostly to push my own limits and expand my knowledge in front-end development. I had very little experience in that area of development coming into HTN 11, so I faced a number of challenges in building an interface from scratch, such as playing around with HTML and CSS for the first time and properly creating transitions between different pages based on which icons I clicked.
## Accomplishments that we're proud of
I have never done a solo hack before, nor have I ever built a web app interface myself, so these were some major personal milestones that I am very proud of and I did a lot of learning when it came to front end development!
## What we learned
How to create a web app interface from scratch, more about AI text-to-sound.
## What's next for REMI
Future improvements could include (but are not limited to):
* Refining the appearance of the interface (i.e. making it cleaner and prettier)
* Providing actual functionality to the next song/previous song buttons
* Matching the track slider element to the actual progress of the track
* Ability to go back in and edit a custom track after generating it
* Ability to delete custom tracks
* The capability to search and add other pre-made templates
* Allowing user more freedom to customize track icons (ex: choose images for icons, rather than just colors) | ## Check it out
Add [[email protected]](mailto:[email protected]) and give it a command in the form [sentence/phrase] tr [language]
## Inspiration
We wanted to create something functional for Cisco Spark that could be of real use to people using Cisco Spark in a business context. We thought that people at large businesses, often with multiple offices, might need or want a quick way to translate text.
## What it does
Leverages the Google Translate API to allow translation with the command @TranslatorBot [sentence/phrase] tr [language]
## How we built it
With blood, sweat, and tears. We began basic bot implementations with Node.js and then built up to the integration of the Google Translate API.
## Challenges we ran into
None of us had ever used an API, or Node.js, or javascript in general. It was a very steep learning curve that never really flattened out.
## Accomplishments that we're proud of
It works pretty well with most languages!!
## What we learned
You can do anything you put your mind to.
## What's next for Translator Bot
We wanted to use Google Natural Language Processing API in order to check the accuracy of sentiment of translations | ## Inspiration
Our inspiration comes from the desire to democratize music creation. We saw a gap where many people, especially youth, had a passion for making music but lacked the resources, knowledge, or time to learn traditional instruments and complex DAW software. We wanted to create a solution that makes music creation accessible and enjoyable for everyone, allowing them to express their musical ideas quickly and easily.
## What it does
IHUM is an AI application that converts artists' vocals or hums into MIDI instruments. It simplifies the music creation process by allowing users to record their vocal melodies and transform them into instrumental tracks with preset instruments. This enables users to create complex musical pieces without needing to know how to play an instrument or use a DAW.
## How we built it
For the frontend, we used HTML, CSS and React JS to develop IHUM. Using React JS and its libraries such as Pitchy, we were able to process, change, and output the sound waves of audio inputs. For the backend, we used Auth0's API to create a login/signup system, which stores and verifies user emails and passwords.
## Challenges we ran into
One of the main challenges we faced was ensuring the AI's accuracy in interpreting vocal inputs and converting them into MIDI data that sounds natural and musical. Furthermore, the converted instruments had a ton of issues in how they sounded, especially regarding pitch, tone, etc. However, we were able to troubleshoot our way through most of them.
## Accomplishments that we're proud of
Through all the hours of hard work and effort, an accomplishment we are extremely proud of is the fact that our program is able to process the audio. By using Pitchy JS, we were able to change the audio to fit how we want it to sound. On top of this, we are also proud that we were able to implement a fully working login/signup system using Auth0's API and integrate it within the program.
## What we learned
As this was our first time working with audio in web development, and for many of our group our first hackathon, we faced many issues that we had to overcome and learn from. From processing to setting up APIs to modifying the sound waves, it definitely provided us with valuable insight and expanded our skill sets.
## What's next for IHUM
Our future updates allow running multiple audio samples simultaneously and increasing our instrument libraries. By doing so, IHUM can essentially be a simple and easy-to-use DAW which permits the user to create entire beats out of their voice on our web application. | losing |
# pennapps
## blockat: a block-based web interpreter for learning Python
The blockat website is designed to allow students to easily learn Python. The website allows students to program with a custom block-based language, which is transpiled to Python. There are a few existing solutions for this, but none offer a comprehensive learning suite that provides the full power of the Python language to students in a block-based form. The block language introduced also has the extensibility to be integrated with different Python modules and packages, leveraging the massive community of open-source Python software. Most importantly, all of the code for this project is open-source, allowing students and educators to look inside the hood and gain complete control over their learning.
Blockat is implemented entirely client-side and is hosted on an AWS EC2 instance. The blocks are created in pure HTML/CSS, which is directly converted to Python syntax in an embedded [text editor](https://ace.c9.io/). The user can run the Python through [Skulpt](http://www.skulpt.org/#) and view the result on the page. Blockat's block-based editor supports all major features of the Python programming language, including support for importing a few modules like `math` and `numpy`. If the user requires more advanced functionality that is not currently available in the web editor, blockat offers the ability to download the block code as either HTML or Python; the HTML can be re-uploaded to continue progress on an old block script.
As a learning tool, blockat strives to ease new students into serious programming. The block-based workflow is easy for new learners to visualize, while the direct connection with real Python code provides the power and versatility necessary to take their growth to the next level. | ## Inspiration
The inspiration for building this project stemmed from the desire to empower students and make learning coding more accessible and engaging. It combines AI technology with education to provide tailored support, making it easier for students to grasp coding concepts. The goal was to address common challenges students face when learning to code, such as doubts and the need for personalized resources. Overall, the project's inspiration was driven by a passion for enhancing the educational experience and fostering a supportive learning environment.
## What it does
The chatbot project is designed to cater to a range of use cases, with a clear hierarchy of priorities. At the highest priority level, the chatbot serves as a real-time coding companion, offering students immediate and accurate responses and explanations to address their coding questions and doubts promptly. This ensures that students can swiftly resolve any coding-related issues they encounter. Moving to the medium priority use case, the chatbot provides personalized learning recommendations. By evaluating a student's individual skills and preferences, the chatbot tailors its suggestions for learning resources, such as tutorials and practice problems. This personalized approach aims to enhance the overall learning experience by delivering materials that align with each student's unique needs. At the lowest priority level, the chatbot functions as a bridge, facilitating connections between students and coding mentors. When students require more in-depth assistance or guidance, the chatbot can help connect them with human mentors who can provide additional support beyond what the chatbot itself offers. This multi-tiered approach reflects the project's commitment to delivering comprehensive support to students learning to code, spanning from immediate help to personalized recommendations and, when necessary, human mentorship.
## How we built it
The development process of our AI chatbot involved a creative integration of various Language Models (**LLMs**) using an innovative technology called **LangChain**. We harnessed the capabilities of LLMs like **Bard**, **ChatGPT**, and **PaLM**, crafting a robust pipeline that seamlessly combines all of them. This integration forms the core of our powerful AI bot, enabling it to efficiently handle a wide range of coding-related questions and doubts commonly faced by students. By unifying these LLMs, we've created a chatbot that excels in providing accurate and timely responses, enhancing the learning experience for students.
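As a rough illustration, a pipeline along these lines can be sketched with LangChain's Python API. The model choices and routing rule below are placeholders rather than our exact configuration, connecting other providers such as Bard or PaLM is not shown, and LangChain's interfaces have changed across versions:

```python
# Illustrative multi-model routing sketch; treat model names and the routing
# heuristic as placeholders.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate(
    input_variables=["question"],
    template="You are a patient coding tutor. Answer the student's question:\n{question}",
)

# Two chains: a fast model for short doubts, a stronger one for debugging help.
fast_chain = LLMChain(llm=ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0.2), prompt=prompt)
deep_chain = LLMChain(llm=ChatOpenAI(model_name="gpt-4", temperature=0.2), prompt=prompt)

def answer(question: str) -> str:
    # Toy routing rule: send long or error-laden questions to the stronger model.
    chain = deep_chain if ("Traceback" in question or len(question) > 400) else fast_chain
    return chain.run(question=question)

print(answer("Why does my Python for-loop print the last value three times?"))
```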
Moreover, our project features a **centralized database** that plays a pivotal role in connecting students with coding mentors. This database serves as a valuable resource, ensuring that students can access the expertise and guidance of coding mentors when they require additional assistance. It establishes a seamless mechanism for real-time interaction between students and mentors, fostering a supportive learning environment. This element of our project reflects our commitment to not only offer AI-driven solutions but also to facilitate meaningful human connections that further enrich the educational journey.
In essence, our development journey has been marked by innovation, creativity, and a deep commitment to addressing the unique needs of students learning to code. By integrating advanced LLMs and building a robust infrastructure for mentorship, we've created a holistic AI chatbot that empowers students and enhances their coding learning experience.
## Challenges we ran into
Addressing the various challenges encountered during the development of our AI chatbot project involved a combination of innovative solutions and persistent efforts. To conquer integration complexities, we invested substantial time and resources in research and development, meticulously fine-tuning different Language Models (LLMs) such as Bard, ChatGPT, and Palm to work harmoniously within a unified pipeline. Data quality and training challenges were met through an ongoing commitment to curate high-quality coding datasets and an iterative training process that continually improved the chatbot's accuracy based on real-time user interactions and feedback.
For real-time interactivity, we optimized our infrastructure, leveraging cloud resources and employing responsive design techniques to ensure low-latency communication and enhance the overall user experience. Mentor matching algorithms were refined continuously, considering factors such as student proficiency and mentor expertise, making the pairing process more precise. Ethical considerations were addressed by implementing strict ethical guidelines and bias audits, promoting fairness and transparency in chatbot responses.
User experience was enhanced through user-centric design principles, including usability testing, user interface refinements, and incorporation of user feedback to create an intuitive and engaging interface. Ensuring scalability involved the deployment of elastic cloud infrastructure, supported by regular load testing and optimization to accommodate a growing user base.
Security was a paramount concern, and we safeguarded sensitive data through robust encryption, user authentication protocols, and ongoing cybersecurity best practices, conducting regular security audits to protect user information. Our collective dedication, collaborative spirit, and commitment to excellence allowed us to successfully navigate and overcome these challenges, resulting in a resilient and effective AI chatbot that empowers students in their coding education while upholding the highest standards of quality, security, and ethical responsibility.
## Accomplishments that we're proud of
Throughout the development and implementation of our AI chatbot project, our team has achieved several accomplishments that we take immense pride in:
**Robust Integration of LLMs:** We successfully integrated various Language Models (LLMs) like Bard, ChatGPT, and Palm into a unified pipeline, creating a versatile and powerful chatbot that combines their capabilities to provide comprehensive coding assistance. This accomplishment showcases our technical expertise and innovation in the field of natural language processing.
**Real-time Support**: We achieved the goal of providing real-time coding assistance to students, ensuring they can quickly resolve their coding questions and doubts. This accomplishment significantly enhances the learning experience, as students can rely on timely support from the chatbot.
**Personalized Learning Recommendations**: Our chatbot excels in offering personalized learning resources to students based on their skills and preferences. This accomplishment enhances the effectiveness of the learning process by tailoring educational materials to individual needs.
**Mentor-Matching Database**: We established a centralized database for coding mentors, facilitating connections between students and mentors when more in-depth assistance is required. This accomplishment emphasizes our commitment to fostering meaningful human connections within the digital learning environment.
**Ethical and Bias Mitigation**: We implemented rigorous ethical guidelines and bias audits to ensure that the chatbot's responses are fair and unbiased. This accomplishment demonstrates our dedication to responsible AI development and user fairness.
**User-Centric Design**: We created an intuitive and user-friendly interface that simplifies the interaction between students and the chatbot. This user-centric design accomplishment enhances the overall experience for students, making the learning process more engaging and efficient.
**Scalability**: Our chatbot's architecture is designed to scale efficiently, allowing it to accommodate a growing user base without compromising performance. This scalability accomplishment ensures that our technology remains accessible to a broad audience.
**Security Measures**: We implemented robust security protocols to protect user data, ensuring that sensitive information is safeguarded. Regular security audits and updates represent our commitment to user data privacy and cybersecurity.
These accomplishments collectively reflect our team's dedication to advancing education through technology, providing students with valuable support, personalized learning experiences, and access to coding mentors. We take pride in the positive impact our AI chatbot has on the educational journey of students and our commitment to responsible and ethical AI development.
## What we learned
The journey of developing our AI chatbot project has been an enriching experience, filled with valuable lessons that have furthered our understanding of technology, education, and teamwork. Here are some of the key lessons we've learned:
**Complex Integration Requires Careful Planning**: Integrating diverse Language Models (LLMs) is a complex task that demands meticulous planning and a deep understanding of each model's capabilities. We learned the importance of a well-thought-out integration strategy.
**Data Quality Is Paramount**: The quality of training data significantly influences the chatbot's performance. We've learned that meticulous data curation and continuous improvement are essential to building an accurate AI model.
**Real-time Interaction Enhances Learning**: The ability to provide real-time coding assistance has a profound impact on the learning experience. We learned that prompt support can greatly boost students' confidence and comprehension.
**Personalization Empowers Learners**: Tailoring learning resources to individual students' needs is a powerful way to enhance education. We've discovered that personalization leads to more effective learning outcomes.
**Mentorship Matters**: Our mentor-matching database has highlighted the importance of human interaction in education. We learned that connecting students with mentors for deeper assistance is invaluable.
**Ethical AI Development Is Non-Negotiable**: Addressing ethical concerns and bias in AI systems is imperative. We've gained insights into the importance of transparent, fair, and unbiased AI interactions.
**User Experience Drives Engagement**: A user-centric design is vital for engaging students effectively. We've learned that a well-designed interface improves the overall educational experience.
**Scalability Is Essential for Growth**: Building scalable infrastructure is crucial to accommodate a growing user base. We've learned that the ability to adapt and scale is key to long-term success.
**Security Is a Constant Priority**: Protecting user data is a fundamental responsibility. We've learned that ongoing vigilance and adherence to best practices in cybersecurity are essential.
**Teamwork Is Invaluable**: Collaborative and cross-disciplinary teamwork is at the heart of a successful project. We've experienced the benefits of diverse skills and perspectives working together.
These lessons have not only shaped our approach to the AI chatbot project but have also broadened our knowledge and understanding of technology's role in education and the ethical responsibilities that come with it. As we continue to develop and refine our chatbot, these lessons serve as guideposts for our future endeavors in enhancing learning and supporting students through innovative technology.
## What's next for ~ENIGMA
The journey of our AI chatbot project is an ongoing one, and we have ambitious plans for its future:
**Continuous Learning and Improvement**: We are committed to a continuous cycle of learning and improvement. This includes refining the chatbot's responses, expanding its knowledge base, and enhancing its problem-solving abilities.
**Advanced AI Capabilities**: We aim to incorporate state-of-the-art AI techniques to make the chatbot even more powerful and responsive. This includes exploring advanced machine learning models and technologies.
**Expanded Subject Coverage**: While our chatbot currently specializes in coding, we envision expanding its capabilities to cover a wider range of subjects and academic disciplines, providing comprehensive educational support.
**Enhanced Personalization**: We will invest in further personalization, tailoring learning resources and mentor matches even more closely to individual student needs, preferences, and learning styles.
**Multi-Lingual Support**: We plan to expand the chatbot's language capabilities, enabling it to provide support to students in multiple languages, making it accessible to a more global audience.
**Mobile Applications**: Developing mobile applications will enhance the chatbot's accessibility, allowing students to engage with it on their smartphones and tablets.
**Integration with Learning Management Systems**: We aim to integrate our chatbot with popular learning management systems used in educational institutions, making it an integral part of formal education.
**Feedback Mechanisms**: We will implement more sophisticated feedback mechanisms, allowing users to provide input that helps improve the chatbot's performance and user experience.
**Research and Publication**: Our team is dedicated to advancing the field of AI in education. We plan to conduct research and contribute to academic publications in the realm of AI-driven educational support.
**Community Engagement**: We are eager to engage with the educational community to gather insights, collaborate, and ensure that our chatbot remains responsive to the evolving needs of students and educators.
In essence, the future of our project is marked by a commitment to innovation, expansion, and a relentless pursuit of excellence in the realm of AI-driven education. Our goal is to provide increasingly effective and personalized support to students, empower educators, and contribute to the broader conversation surrounding AI in education. | ## Inspiration
We are part of a generation that has lost the art of handwriting. Because of the clarity and ease of typing, many people struggle to write clearly by hand. Whether you're a second-language learner or public education simply failed you, we wanted to come up with an intelligent system for efficiently improving your writing.
## What it does
We use an LLM to generate sample phrases, sentences, or character strings that target letters you're struggling with. You can input the writing as a photo or directly on the webpage. We then use OCR to parse, score, and give you feedback towards the ultimate goal of character mastery!
## How we built it
We built a simple front end utilizing flexbox layouts, the p5.js library for canvas writing, and plain JavaScript for logic and UI updates. On the backend, we developed and hosted an API with Flask, allowing us to receive responses from API calls to state-of-the-art OCR and the newest ChatGPT model. We also manage user scores with Pythonic, logic-based sequence alignment algorithms.
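As a rough illustration of the scoring side, here is a minimal sketch comparing the OCR transcription against the target phrase with an edit-distance alignment (illustrative, not our exact production algorithm):

```python
# Score handwriting by aligning the OCR output with the target phrase.
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (ca != cb))) # substitution
        prev = curr
    return prev[-1]

def handwriting_score(target: str, ocr_output: str) -> float:
    """1.0 = perfect match; lower means more letters were misread."""
    if not target:
        return 1.0
    return max(0.0, 1.0 - edit_distance(target.lower(), ocr_output.lower()) / len(target))

print(handwriting_score("quick brown fox", "quick browm fax"))  # ~0.87
```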
## Challenges we ran into
We really struggled with our concept, tweaking it and revising it until the last minute! However, we believe this hard work really paid off in the elegance and clarity of our web app, UI, and overall concept.
..also sleep 🥲
## Accomplishments that we're proud of
We're really proud of our text recognition and matching. These intelligent systems were not easy to use or customize! We also think we found a creative use for the latest ChatGPT model, flexing its utility in phrase generation for targeted learning. Most importantly though, we are immensely proud of our teamwork, and how everyone contributed pieces to the idea and to the final project.
## What we learned
3 of us have never been to a hackathon before!
3 of us never used Flask before!
All of us have never worked together before!
From working with an entirely new team to utilizing specific frameworks, we learned A TON.... and also, just how much caffeine is too much (hint: NEVER).
## What's Next for Handwriting Teacher
Handwriting Teacher was originally meant to teach Russian cursive, a much more difficult writing system. (If you don't believe us, look at some pictures online) Taking a smart and simple pipeline like this and updating the backend intelligence allows our app to incorporate cursive, other languages, and stylistic aesthetics. Further, we would be thrilled to implement a user authentication system and a database, allowing people to save their work, gamify their learning a little more, and feel a more extended sense of progress. | losing |
# Course Connection
## Inspiration
College is often heralded as a defining time period to explore interests, define beliefs, and establish lifelong friendships. However, this vibrant campus life has recently become endangered, as it is becoming easier than ever for students to become disconnected. The previously guaranteed experience of discovering friends while exploring interests in courses is also becoming a rarity as classes adopt hybrid and online formats. The loss became abundantly clear when two of our members, who became roommates this year, discovered that they had taken the majority of the same courses despite never meeting before this year. We built our project to combat this problem and preserve the zeitgeist of campus life.
## What it does
Our project provides a seamless tool for a student to enter their courses by uploading their transcript. We then automatically convert their transcript into structured data stored in Firebase. With all uploaded transcript data, we create a graph of people they took classes with, the classes they have taken, and when they took each class. Using a Graph Attention Network and domain-specific heuristics, we calculate the student’s similarity to other students. The user is instantly presented with a stunning graph visualization of their previous courses and the course connections to their most similar students.
From a commercial perspective, our app provides businesses the ability to utilize CheckBook in order to purchase access to course enrollment data.
## High-Level Tech Stack
Our project is built on top of a couple key technologies, including React (front end), Express.js/Next.js (backend), Firestore (real time graph cache), Estuary.tech (transcript and graph storage), and Checkbook.io (payment processing).
## How we built it
### Initial Setup
Our first task was to provide a method for students to upload their courses. We elected to utilize the ubiquitous nature of transcripts. Using Python, we parse a transcript and send the data to a Node.js server which serves as a REST API endpoint for our front end. We chose Vercel to deploy our website. It was necessary to generate a large number of sample users in order to test our project. To generate the users, we needed to scrape the Stanford course library to build a wide variety of classes to assign to our generated users. In order to provide more robust tests, we built our generator to pick a certain major or category of classes, while randomly assigning classes from other categories for a probabilistic percentage of classes. Using this Python library, we are able to generate robust and dense networks to test our graph connection score and visualization.
### Backend Infrastructure
We needed a robust database infrastructure in order to handle the thousands of nodes. We elected to explore two options for storing our graphs and files: Firebase and Estuary. We utilized the Estuary API to store transcripts and the graph “fingerprints” that represent a student's course identity. We wanted to take advantage of web3 storage, as this would allow students to permanently store their course identity and access it easily. We also made use of Firebase to store the dynamic nodes and connections between courses and classes.
We distributed our workload across several servers.
We utilized Nginx to deploy a production-level Python server that performs the graph operations described below, alongside a development-level Python server. We also had a Node.js server acting as a proxy and REST API endpoint, and Vercel hosted our front end.
### Graph Construction
Treating the Firebase database as the source of truth, we query it to get all user data, namely usernames and which classes each user took in which quarters. Taking this data, we construct a graph in Python using NetworkX, in which each person and course is a node with a type label “user” or “course” respectively. In this graph, we then add edges between every person and every course they took, with the edge weight corresponding to the recency of their having taken it.
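A condensed sketch of that construction (the record format and decay factor below are illustrative, not our exact Firebase schema):

```python
# Person/course bipartite graph with recency-weighted edges.
import networkx as nx

def build_graph(enrollments, current_quarter: int) -> nx.Graph:
    """enrollments: iterable of (username, course_code, quarter_index) tuples."""
    G = nx.Graph()
    for user, course, quarter in enrollments:
        G.add_node(user, type="user")
        G.add_node(course, type="course")
        recency_weight = 0.8 ** (current_quarter - quarter)  # newer classes weigh more
        G.add_edge(user, course, weight=recency_weight)
    return G

G = build_graph(
    [("alice", "CS106B", 8), ("bob", "CS106B", 8), ("alice", "MATH51", 3)],
    current_quarter=8,
)
print(G["alice"]["CS106B"]["weight"], G["alice"]["MATH51"]["weight"])  # 1.0  0.32768
```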
Since we have thousands of nodes, building this graph is an expensive operation. Hence, we leverage Firebase’s key-value storage format to cache this base graph in a JSON representation, for quick and easy I/O. When we add a user, we read in the cached graph, add the user, and update the graph. For all graph operations, the cache reduces latency from ~15 seconds to less than 1.
We compute similarity scores between all users based on their course history. We do so as the sum of two components: node embeddings and domain-specific heuristics. To get robust, informative, and inductive node embeddings, we periodically train a Graph Attention Network (GAT) using PyG (PyTorch Geometric). This training is unsupervised as the GAT aims to classify positive and negative edges. While we experimented with more classical approaches such as Node2Vec, we ultimately use a GAT as it is inductive, i.e. it can generalize to and embed new nodes without retraining. Additionally, with their attention mechanism, we better account for structural differences in nodes by learning more dynamic importance weighting in neighborhood aggregation. We augment the cosine similarity between two users’ node embeddings with some more interpretable heuristics, namely a recency-weighted sum of classes in common over a recency-weighted sum over the union of classes taken.
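The interpretable heuristic term can be sketched as follows (the GAT embedding cosine term is combined with this in practice and is omitted here; the decay constant is illustrative):

```python
# Recency-weighted overlap over a recency-weighted union of courses.
def recency_weight(quarter_taken: int, current_quarter: int, decay: float = 0.8) -> float:
    return decay ** (current_quarter - quarter_taken)

def heuristic_similarity(courses_a: dict, courses_b: dict, current_quarter: int) -> float:
    """courses_a / courses_b map course code -> quarter it was taken."""
    shared = courses_a.keys() & courses_b.keys()
    union = courses_a.keys() | courses_b.keys()
    if not union:
        return 0.0
    num = sum(recency_weight(max(courses_a[c], courses_b[c]), current_quarter) for c in shared)
    den = sum(recency_weight(max(courses_a.get(c, 0), courses_b.get(c, 0)), current_quarter)
              for c in union)
    return num / den

a = {"CS106B": 8, "MATH51": 3}
b = {"CS106B": 8, "CS107": 7}
print(round(heuristic_similarity(a, b, current_quarter=8), 2))  # 0.47
```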
With this rich graph representation, when a user queries, we return the induced subgraph of the user, their neighbors, and the top k people most similar to them, who they likely have a lot in common with, and whom they may want to meet!
## Challenges we ran into
We chose a somewhat complicated stack with multiple servers. We therefore had some challenges with iterating quickly for development as we had to manage all the necessary servers.
In terms of graph management, the biggest challenges were in integrating the GAT and in maintaining synchronization between the Firebase and cached graph.
## Accomplishments that we're proud of
We’re very proud of the graph component both in its data structure and in its visual representation.
## What we learned
It was very exciting to work with new tools and libraries. It was impressive to work with Estuary and see the surprisingly low latency. None of us had worked with Next.js before. We were able to ramp up on it quickly as we had React experience, and we were very happy with how easily it integrated with Vercel.
## What's next for Course Connections
There are several different storyboards we would be interested in implementing for Course Connections. One would be a course recommendation. We discovered that chatGPT gave excellent course recommendations given previous courses. We developed some functionality but ran out of time for a full implementation. | ## Inspiration
Over the course of the past year, one of the most heavily impacted industries due to the COVID-19 pandemic is the service sector. Specifically, COVID-19 has transformed the financial viability of restaurant models. Moving forward, it is projected that 36,000 small restaurants will not survive the winter as successful restaurants have thus far relied on online dining services such as Grubhub or Doordash. However, these methods come at the cost of flat premiums on every sale, driving up the food price and cutting at least 20% from a given restaurant’s revenue. Within these platforms, the most popular, established restaurants are prioritized due to built-in search algorithms. As such, not all small restaurants can join these otherwise expensive options, and there is no meaningful way for small restaurants to survive during COVID.
## What it does
Potluck provides a platform for chefs to conveniently advertise their services to customers who will likewise be able to easily find nearby places to get their favorite foods. Chefs are able to upload information about their restaurant, such as their menus and locations, which is stored in Potluck’s encrypted database. Customers are presented with a personalized dashboard containing a list of ten nearby restaurants which are generated using an algorithm that factors in the customer’s preferences and sentiment analysis of previous customers. There is also a search function which will allow customers to find additional restaurants that they may enjoy.
## How I built it
We built a web app with Flask where users can feed in a specific location, a cuisine, and restaurant-related tags. Based on this input, restaurants in our database are filtered and ranked based on the distance to the given user location, calculated using the Google Maps API and the Natural Language Toolkit (NLTK), and a sentiment score based on any comments on the restaurant, calculated using Google Cloud NLP. Within the page, consumers can provide comments on their dining experience with a certain restaurant, and chefs can add information for their restaurant, including cuisine, menu items, location, and contact information. Data is stored in a PostgreSQL-based database on Google Cloud.
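As a rough illustration of the ranking idea, here is a sketch that folds distance and average comment sentiment into one score (the weights and functional form are illustrative; distance and sentiment are assumed to be precomputed by the Maps and Cloud NLP calls):

```python
# Combine proximity and sentiment (roughly -1..1) into a single ranking score.
def restaurant_score(distance_km: float, avg_sentiment: float,
                     w_dist: float = 1.0, w_sent: float = 2.0) -> float:
    proximity = 1.0 / (1.0 + distance_km)              # closer -> larger, bounded in (0, 1]
    return w_dist * proximity + w_sent * (avg_sentiment + 1) / 2

restaurants = [
    {"name": "Nonna's Kitchen", "distance_km": 0.8, "avg_sentiment": 0.6},
    {"name": "Campus Curry",    "distance_km": 2.5, "avg_sentiment": 0.9},
]
ranked = sorted(restaurants,
                key=lambda r: restaurant_score(r["distance_km"], r["avg_sentiment"]),
                reverse=True)
print([r["name"] for r in ranked])
```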
## Challenges I ran into
One of the challenges that we faced was coming up with a solution that matched the timeframe and bandwidth of our team. We did not want to be too ambitious with our ideas and technology, yet we still wanted to provide a product that we felt was novel and meaningful.
We also found it difficult to integrate the backend with the frontend. For example, we needed the results from the Natural Language Toolkit (NLTK) in the backend to be used by the Google Maps JavaScript API in the frontend. By utilizing Jinja templates, we were able to serve the webpage and modify its script code based on the backend results from NLTK.
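A minimal sketch of that Jinja approach (names and values here are illustrative): the backend computes its results and injects them into the page's inline script via the template, where the Maps JavaScript code can read them.

```python
# Serve a page whose inline script receives backend results via Jinja.
from flask import Flask, render_template_string
import json

app = Flask(__name__)

PAGE = """
<html><body>
  <div id="map"></div>
  <script>
    // Jinja fills this in server-side; the Maps JS code then reads it.
    const restaurants = {{ restaurants_json | safe }};
    console.log(restaurants.length, "restaurants to plot");
  </script>
</body></html>
"""

@app.route("/")
def index():
    results = [{"name": "Nonna's Kitchen", "lat": 39.95, "lng": -75.19}]
    return render_template_string(PAGE, restaurants_json=json.dumps(results))

if __name__ == "__main__":
    app.run(debug=True)
```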
## Accomplishments that I'm proud of
We were able to identify a problem that was not only very meaningful to us and our community, but also one that we had a reasonable chance of approaching with our experience and tools. Not only did we get our functions and app to work very smoothly, we ended up with time to create a very pleasant user-experience and UI. We believe that how comfortable the user is when using the app is equally as important as how sophisticated the technology is.
Additionally, we were happy that we were able to tie in our product into many meaningful ideas on community and small businesses, which we believe are very important in the current times.
## What I learned
Tools we tried for the first time: Flask (with the additional challenge of running HTTPS), Jinja templates for dynamic HTML code, Google Cloud products (including Google Maps JS API), and PostgreSQL.
For many of us, this was our first experience with a group technical project, and it was very instructive to find ways to best communicate and collaborate, especially in this virtual setting. We benefited from each other’s experiences and were able to learn when to use certain ML algorithms or how to make a dynamic frontend.
## What's next for Potluck
We want to incorporate an account system to make user-specific recommendations (Firebase). Additionally, regarding our Google Maps interface, we would like to have dynamic location identification. Furthermore, the capacity of our platform could help us expand the program to pair people with any type of service, not just food. We believe that the flexibility of our app could be used for other ideas as well. | ## Inspiration
Many of us had class sessions in which the teacher pulled out whiteboards or chalkboards and used them as a tool for teaching. These made classes very interactive and engaging. With the switch to virtual teaching, class interactivity has been harder. Usually a teacher just shares their screen and talks, and students can ask questions in the chat. We wanted to build something to bring back this childhood memory for students now and help classes be more engaging and encourage more students to attend, especially in the younger grades.
## What it does
Our application creates an environment where teachers can engage students through the use of virtual whiteboards. There are two available views: the teacher's view and the student's view. Each view has a canvas that the corresponding user can draw on. The difference between the views is that the teacher's view contains a list of all the students' canvases, while students can only view the teacher's canvas in addition to their own.
An example use case for our application would be in a math class where the teacher can put a math problem on their canvas and students could show their work and solution on their own canvas. The teacher can then verify that the students are reaching the solution properly and can help students if they see that they are struggling.
Students can follow along and when they want the teacher’s attention, click on the I’m Done button to notify the teacher. Teachers can see their boards and mark up anything they would want to. Teachers can also put students in groups and those students can share a whiteboard together to collaborate.
## How we built it
* **Backend:** We used Socket.IO to handle the real-time update of the whiteboard. We also have a Firebase database to store the user accounts and details.
* **Frontend:** We used React to create the application and Socket.IO to connect it to the backend.
* **DevOps:** The server is hosted on Google App Engine and the frontend website is hosted on Firebase and redirected to Domain.com.
## Challenges we ran into
The biggest challenge was understanding and planning an architecture for the application. We went back and forth about whether we needed a database or could handle all the information through Socket.IO. Displaying multiple canvases while maintaining the functionality was also an issue we faced.
## Accomplishments that we're proud of
We successfully were able to display multiple canvases while maintaining the drawing functionality. This was also the first time we used Socket.IO and were successfully able to use it in our project.
## What we learned
This was the first time we used Socket.IO to handle realtime database connections. We also learned how to create mouse strokes on a canvas in React.
## What's next for Lecturely
This product can be useful even beyond digital schooling, as it can save schools money that would otherwise be spent on supplies. Thus, it could benefit from building out more features.
Currently, Lecturely doesn’t support audio but it would be on our roadmap. Thus, classes would still need to have another software also running to handle the audio communication. | winning |
## Inspiration
We started this project because we observed the emotional toll that loneliness, grief, and dementia can take on individuals and their caregivers, and we wanted to create a solution that could make a meaningful difference in their lives. Loneliness is especially tough on seniors, affecting both their mental and physical health. Our aim was simple: to offer them some companionship and comfort, a reminder of the good memories, and a way to feel less alone.
## What it does
Using the latest AI technology, we create lifelike AI clones of their closest loved ones, whether living or passed away, allowing seniors to engage in text or phone conversations whenever they wish. For dementia patients and their caregivers, our product goes a step further by enabling the upload of cherished memories to the AI clones. This not only preserves these memories but also allows dementia patients to relive them, reducing the need to constantly ask caregivers about their loved ones. Moreover, for those who may forget that a loved one has passed away, our technology helps provide continuous connection, eliminating the need to repeatedly go through the grief of loss. It's a lifeline to cherished memories and emotional well-being for our senior community.
## How we built it
We've developed a platform that transforms seniors' loved ones into AI companions accessible through web browsers, SMS, and voice calls. This technology empowers caregivers to customize an AI companion for the senior they are in charge of, defining the companion's personality and backstory with real data to foster the most authentic interaction, resulting in more engaging and personalized conversations. These interactions are stored indefinitely, enabling companions to learn and enhance their realism with each use.
We used the following tools:
Auth --> Clerk + Firebase
App logic --> Next.js
VectorDB --> Pinecone
LLM orchestration --> Langchain.js
Text model --> OpenAI
Text streaming --> ai sdk
Conversation history --> Upstash
Deployment --> Ngrok
Text and call with companion --> Twilio
Landing Page --> HTML, CSS, Javascript
Caregiver Frontend --> Angular
Voice Clone --> ElevenLabs
## Challenges we ran into
One of the significant challenges we encountered was in building and integrating two distinct front ends. We developed a Next.js front end tailored for senior citizens' ease of use, while a more administrative Angular front end was created for caregivers. Additionally, we designed a landing page using a combination of HTML, CSS, and JavaScript. The varying languages and technologies posed a challenge in ensuring seamless connectivity and cohesive operation between these different components. We faced difficulties in bridging the gap and achieving the level of integration we aimed for, but through collaborative efforts and problem-solving, we managed to create a harmonious user experience.
## Accomplishments that we're proud of
We all tackled frameworks we weren't so great at, like Next.js. And, together, we built a super cool project with tons of real-world uses. It's a big deal for us because it's not just tech, it's helping seniors and caregivers in many ways. We've shown how we can learn and innovate, and we're stoked about it!
## What we learned
We learned how to construct vector databases, which was an essential part of our project. Additionally, we discovered the intricacies of connecting the front-end and back-end components through APIs.
## What's next for NavAlone
We would like to improve the AI clone phone call feature. We recognize that older users often prefer phone calls over texting, and we want to make this experience as realistic, fast, and user-friendly as possible. | ## Inspiration
University gets students really busy and really stressed, especially during midterms and exams. We would normally want to talk to someone about how we feel and how our mood is, but due to the pandemic, therapists have often been closed or fully online. Since people will be seeking therapy online anyway, swapping a real therapist with a chatbot trained in giving advice and guidance isn't a very big leap for the person receiving therapy, and it could even save them money. Further, since all the conversations could be recorded if the user chooses, they could track their thoughts and goals, and have the bot respond to them. This is the idea that drove us to build Companion!
## What it does
Companion is a full-stack web application that allows users to be able to record their mood and describe their day and how they feel to promote mindfulness and track their goals, like a diary. There is also a companion, an open-ended chatbot, which the user can talk to about their feelings, problems, goals, etc. With realtime text-to-speech functionality, the user can speak out loud to the bot if they feel it is more natural to do so. If the user finds a companion conversation helpful, enlightening or otherwise valuable, they can choose to attach it to their last diary entry.
## How we built it
We leveraged many technologies such as React.js, Python, Flask, Node.js, Express.js, MongoDB, OpenAI, and AssemblyAI. The chatbot was built using Python and Flask. The backend, which coordinates both the chatbot and a MongoDB database, was built using Node and Express. Speech-to-text functionality was added using the AssemblyAI live transcription API, and the chatbot's machine learning models and training data were built using OpenAI.
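A minimal sketch of the Flask chatbot endpoint (illustrative; this uses the classic pre-1.0 `openai` SDK call style, and the model name and system prompt are placeholders, so details may differ from the deployed service):

```python
# Forward a user message to the language model and return the reply as JSON.
import os
import openai
from flask import Flask, request, jsonify

openai.api_key = os.environ["OPENAI_API_KEY"]
app = Flask(__name__)

SYSTEM_PROMPT = "You are a gentle companion that helps students reflect on their day."

@app.route("/chat", methods=["POST"])
def chat():
    user_message = request.json["message"]
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "system", "content": SYSTEM_PROMPT},
                  {"role": "user", "content": user_message}],
    )
    return jsonify({"reply": resp["choices"][0]["message"]["content"]})

if __name__ == "__main__":
    app.run(port=5000)
```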
## Challenges we ran into
Some of the challenges we ran into were being able to connect between the front-end, back-end and database. We would accidentally mix up what data we were sending or supposed to send in each HTTP call, resulting in a few invalid database queries and confusing errors. Developing the backend API was a bit of a challenge, as we didn't have a lot of experience with user authentication. Developing the API while working on the frontend also slowed things down, as the frontend person would have to wait for the end-points to be devised. Also, since some APIs were relatively new, working with incomplete docs was sometimes difficult, but fortunately there was assistance on Discord if we needed it.
## Accomplishments that we're proud of
We're proud of the ideas we've brought to the table, as well the features we managed to add to our prototype. The chatbot AI, able to help people reflect mindfully, is really the novel idea of our app.
## What we learned
We learned how to work with different APIs and create various API end-points. We also learned how to work and communicate as a team. Another thing we learned is how important the planning stage is, as it can really speed up our coding time when everything is set up properly and everyone understands the plan.
## What's next for Companion
The next steps for Companion are:
* Ability to book appointments with a live therapist if the user needs it. Perhaps the chatbot can be swapped out for a real therapist for an upfront or pay-as-you-go fee.
* Machine learning model that adapts to what the user has written in their diary that day, that works better to give people sound advice, and that is trained on individual users rather than on one dataset for all users.
## Sample account
If you can't register your own account for some reason, here is a sample one to log into:
Email: [[email protected]](mailto:[email protected])
Password: password | Can you save a life?
## Inspiration
For the past several years, heart disease has remained the second leading cause of death in Canada. Many know how to prevent it, but many don't know how to deal with cardiac events that have the potential to end your life. What if you could change this?
## What it does
Can You Save Me simulates three different conversational actions showcasing cardiac events at some of their deadliest moments. It’s your job to make decisions to save the person in question from either a stroke, sudden cardiac arrest, or a heart attack. Can You Save Me puts emphasis on the symptomatic differences between men and women during specific cardiac events. Are you up for it?
## How we built it
We created the conversational actions with Voiceflow, while the website was created with HTML, CSS, JavaScript, and Bootstrap. Additionally, the backend of the website, which counts the number of simulated lives our users have saved, uses Node.js and Google Sheets.
## Challenges we ran into
Our team ran into several challenges; however, we managed to overcome each of them. Initially, we used React.js, but it proved too complex and time-consuming given our tight time constraints, so we switched to JavaScript, HTML, CSS, and Bootstrap for the frontend of the website instead.
## Accomplishments that we're proud of
Our team is proud of the fact that we were able to come together as complete strangers and produce a product that is educational and can empower people to save lives. We managed our time efficiently and divided our work fairly according to our strengths.
## What we learned
Our team learned many technical skills, such as how to use React.js, Node.js, and Voiceflow, and how to deploy actions on Google Assistant. Due to the nature of this project, we completed extensive research on cardiovascular health using resources from Statistics Canada, the Standing Committee on Health, the University of Ottawa Heart Institute, the Heart & Stroke Foundation, the American Journal of Cardiology, the American Heart Association, and Harvard Health.
## What's next for Can You Save Me
We're interested in adding more storylines and variables to enrich our users' experience and learning. We are considering adding a play again action to improve our Voice Assistant and encourage iterations. | partial |
# BananaExpress
A self-writing journal of your life, with superpowers!
We make journaling easier and more engaging than ever before by leveraging **home-grown, cutting edge, CNN + LSTM** models to do **novel text generation** to prompt our users to practice active reflection by journaling!
Features:
* User photo --> unique question about that photo based on 3 creative techniques
+ Real time question generation based on (real-time) user journaling (and the rest of their writing)!
+ Ad-lib style questions - we extract location and analyze the user's activity to generate a fun question!
+ Question-corpus matching - we search for good questions about the user's current topics
* NLP on previous journal entries for sentiment analysis
I love our front end - we've re-imagined how easy and futuristic journaling can be :)
And, honestly, SO much more! Please come see!
♥️ from the Lotus team,
Theint, Henry, Jason, Kastan | ## Inspiration
Let's start by taking a look at some statistics on waste from Ontario and Canada. In Canada, only nine percent of plastics are recycled, while the rest is sent to landfills. More locally, in Ontario, over 3.6 million metric tonnes of plastic ended up as garbage due to tainted recycling bins. Tainted recycling bins occur when someone disposes of their waste into the wrong bin, causing the entire bin to be sent to the landfill. Mark Badger, executive vice-president of Canada Fibers, which runs 12 plants that sort about 60 percent of the curbside recycling collected in Ontario has said that one in three pounds of what people put into blue bins should not be there. This is a major problem, as it is causing our greenhouse gas emissions to grow exponentially. However, if we can reverse this, not only will emissions lower, but according to Deloitte, around 42,000 new jobs will be created. Now let's turn our focus locally. The City of Kingston is seeking input on the implementation of new waste strategies to reach its goal of diverting 65 percent of household waste from landfill by 2025. This project is now in its public engagement phase. That’s where we come in.
## What it does
Cycle AI is an app that uses machine learning to classify certain articles of trash/recyclables to incentivize awareness of what a user throws away. You simply pull out your phone, snap a shot of whatever it is that you want to dispose of and Cycle AI will inform you where to throw it out as well as what it is that you are throwing out. On top of that, there are achievements for doing things such as using the app to sort your recycling every day for a certain amount of days. You keep track of your achievements and daily usage through a personal account.
## How We built it
In a team of four, we separated into three groups. For the most part, two of us focused on the front end with Kivy, one on UI design, and one on the backend with TensorFlow. From these groups, we divided into subsections that held certain responsibilities like gathering data to train the neural network. This was done by using photos taken from waste picked out of, relatively unsorted, waste bins around Goodwin Hall at Queen's University. 200 photos were taken for each subcategory, amounting to quite a bit of data by the end of it. The data was used to train the neural network backend. The front end was all programmed on Python using Kivy. After the frontend and backend were completed, a connection was created between them to seamlessly feed data from end to end. This allows a user of the application to take a photo of whatever it is they want to be sorted, having the photo feed to the neural network, and then returned to the front end with a displayed message. The user can also create an account with a username and password, for which they can use to store their number of scans as well as achievements.
## Challenges We ran into
The two hardest challenges we had to overcome as a group was the need to build an adequate dataset as well as learning the framework Kivy. In our first attempt at gathering a dataset, the images we got online turned out to be too noisy when grouped together. This caused the neural network to become overfit, relying on patterns to heavily. We decided to fix this by gathering our own data. I wen around Goodwin Hall and went into the bins to gather "data". After washing my hands thoroughly, I took ~175 photos of each category to train the neural network with real data. This seemed to work well, overcoming that challenge. The second challenge I, as well as my team, ran into was our little familiarity with Kivy. For the most part, we had all just began learning Kivy the day of QHacks. This posed to be quite a time-consuming problem, but we simply pushed through it to get the hang of it.
## 24 Hour Time Lapse
**Bellow is a 24 hour time-lapse of my team and I work. The naps on the tables weren't the most comfortable.**
<https://www.youtube.com/watch?v=oyCeM9XfFmY&t=49s> | # TextMemoirs
## nwHacks 2023 Submission by Victor Parangue, Kareem El-Wishahy, Parmvir Shergill, and Daniel Lee
Journalling is a well-established practice that has been shown to have many benefits for mental health and overall well-being. Some of the main benefits of journalling include the ability to reflect on one's thoughts and emotions, reduce stress and anxiety, and document progress and growth. By writing down our experiences and feelings, we are able to process and understand them better, which can lead to greater self-awareness and insight. We can track our personal development, and identify the patterns and triggers that may be contributing to our stress and anxiety. Journalling is a practice that everyone can benefit from.
Text Memoirs is designed to make the benefits of journalling easy and accessible to everyone. By using a mobile text-message based journaling format, users can document their thoughts and feelings in a real-time sequential journal, as they go about their day.
Simply text your assigned number, and your journal text entry gets saved to our database. You're journal text entries are then displayed on our web app GUI. You can view all your text journal entries on any given specific day on the GUI.
You can also text commands to your assigned number using /EDIT and /DELETE to update your text journal entries on the database and the GUI (see the image gallery).
Text Memoirs utilizes Twilio’s API to receive and store user’s text messages in a CockroachDB database. The frontend interface for viewing a user’s daily journals is built using Flutter.
![]()
![]()
# TextMemoirs API
This API allows you to insert users, get all users, add texts, get texts by user and day, delete texts by id, get all texts and edit texts by id.
## Endpoints
### Insert User
Insert a user into the system.
* Method: **POST**
* URL: `/insertUser`
* Body:
`{
"phoneNumber": "+17707626118",
"userName": "Test User",
"password": "Test Password"
}`
### Get Users
Get all users in the system.
* Method: **GET**
* URL: `/getUsers`
### Add Text
Add a text to the system for a specific user.
* Method: **POST**
* URL: `/addText`
* Body:
`{
"phoneNumber": "+17707626118", "textMessage": "Text message #3", "creationDate": "1/21/2023", "creationTime": "2:57:14 PM"
}`
### Get Texts By User And Day
Get all texts for a specific user and day.
* Method: **GET**
* URL: `/getTextsByUserAndDay`
* Parameters:
+ phoneNumber: The phone number of the user.
+ creationDate: The date of the texts in the format `MM/DD/YYYY`.
### Delete Texts By ID
Delete a specific text by ID.
* Method: **DELETE**
* URL: `/deleteTextsById`
* Body:
`{
"textId": 3
}`
### Edit Texts By ID
Edit a specific text by ID.
* Method: **PUT**
* URL: `/editTextsById`
* Parameters:
+ id: The ID of the text to edit.
* Body:
`{
"textId": 2,
"textMessage": "Updated text message"
}`
### Get All Texts
Get all texts in the database.
* Method: **GET**
* URL: `/getAllTexts` | winning |
# Inspiration
During the information session presented by the Wharton Risk Center, we were inspired by the story of a little girl who recognized the early signs of a tsunami but found difficulty in spreading awareness. After finally contacting the right person, they managed to get everyone off the beach and it became the only place where no one died to the tsunami. Other problems found were a lack of data involving the extent of flooded areas and other natural disasters, and difficulties in safely delivering supplies to sites in areas affected by diasters. We wanted to build a solution that gives users the power to alert others to disasters who are within close proximity, collects missing data in a non-intrusive manner, and provides critical support and directions during disasters.
# What it does
## Mobile Application
We connect our users.
Our iOS mobile app gives users the power to report a disaster in their area, whether it's flooding, fire, earthquake, tsunami, etc. Other users of the application within a certain radius is notified of a potential threat. They can quickly and conveniently respond yes or no with two clicks of the notification. Whether they answer yes, no, or no response, the data is stored and mapped on our web application. After sending a report, the application offers tips and resources for staying safe, and the user can prepare accordingly for the disaster.
## Web Application
### Data Visualization
Our web application makes it extremely to visualize the extent of an incident. If someone responds yes, they are plotted with a red heatmap. If someone responds no, they are plotted with a blue heatmap. As a result, the affected area should be plotted with red by users who were affected, and should be bounded by a blue where users reported they didn't see anything. This data can be analyzed and visualized even more in the visualization page of our web a pp
### Mission Control
Delivering supplies to sites can be dangerous, so first responders need a safe and efficient way to quickly deliver supplies. Safe routes are provided such that in order to get from point A to point B, only non affected routes will be taken. For example, if the shortest path to get from point A to point B is to go through a road that was reported to be flooded by the users, the route would avoid this road entirely.
# How we built it
We built the front end using React+Redux for the webapp and Swift for the mobile app, and Firebase for the backend. We mainly had two people working on each platform, but we all helped each other integrate all our separate moving parts together. Most importantly, we had a lot of fun!
(And Lots of basketball with Bloomberg)
# Challenges we ran into
Coordinating live updates across platforms
# Accomplishments that we're proud of
Making a coherent and working product!
# What we learned
Lots of firebase.
# What's next for foresite
Endless analytical opportunities with the data. Machine learning for verification of disaster reports (we don't want users to falsely claim disasters in areas, but we also don't want to deter anyone from claiming so). | ## Inspiration
This system was designed to make waste collection more efficient, organized and user-friendly.
Keeping the end users in mind we have created a system that detects what type of waste has been inserted in the bin and categorizes it as a recyclable or garbage.
The system then opens the appropriate shoot (using motors) and turns on an LED corresponding to the type of waste that was just disposed to educate the user.
## What it does
Able to sort out waste into recycling or garbage with the use of the Google Vision API to identify the waste object, Python to sort the object into recycling or garbage, and Arduino to move the bin/LED to show the appropriate waste bin.
## How we built it
We built our hack using the Google Cloud Vision API, Python to convert data received from the API, and then used that data to transmit to the Arduino on which bin to open. The bin was operated using a stepper motor and LED that indicated the appropriate bin, recycling or garbage, so that the waste object can automatically be correctly disposed of. We built our hardware model by using cardboard. We split a box into 2 sections and attached a motor onto the centre of a platform that allows it to rotate to each of the sections.
## Challenges we ran into
We were planning on using a camera interface with the Arduino to analyze the garbage at the input, unfortunately, the hardware component that was going to act as our camera ended up failing, forcing us to find an alternative way to analyze the garbage. Another challenge we ran into was getting the Google Cloud Vision API, but we stayed motivated and got it all to work. One of the biggest challenges we ran into was trying to use the Dragonboard 410c, due to inconsistent wifi, and the controller crashing frequently, it was hard for us to get anything concrete.
## Accomplishments that we're proud of
Something that we are really proud of is that we were able to come up with a hardware portion of our hack overnight. We finalized our idea late into the hackathon (around 7pm) and took up most of the night splitting our resources between the hardware and software components of our hack. Another accomplishment that we are proud of is that our hack has positive implications for the environment and society, something that all of our group members are really passionate about.
## What we learned
We learned a lot through our collaboration on this project. What stands out is our exploration of APIs and attempts at using new technologies like the Dragonboard 410c and sensors. We also learned how to use serial communications, and that there are endless possibilities when we look to integrate multiple different technologies together.
## What's next for Eco-Bin
In the future, we hope to have a camera that is built in with our hardware to take pictures and analyze the trash at the input. We would also like to add more features like a counter that keeps up with how many elements have been recycled and how many have been thrown into the trash. We can even go into specifics like counting the number of plastic water bottles that have been recycled. This data could also be used to help track the waste production of certain areas and neighbourhoods. | ## Inspiration
Our inspiration came from a common story that we have been seeing on the news lately - the wildfires that are impacting people on a nationwide scale. These natural disasters strike at uncertain times, and we don't know if we are necessarily going to be in the danger zone or not. So, we decided to ease the tensions that occur during these high-stress situations by acting as the middle persons.
## What it does
At RescueNet, we have two types of people with using our service - either subscribers or homeowners. The subscriber pays RescueNet monthly or annually at a rate which is cheaper than insurance! Our infrastructure mainly targets people who live in natural disaster-prone areas. In the event such a disaster happens, the homeowners will provide temporary housing and will receive a stipend after the temporary guests move away. We also provide driving services for people to escape their emergency situations.
## How we built it
We divided our work into the clientside and the backend. Diving into the clientside, we bootstrapped our project using Vite.js for faster loadtimes. Apart from that, React.js was used along with React Router to link the pages and organize the file structure accordingly. Tailwind CSS was employed to simplify styling along with Material Tailwind, where its pre-built UI components were used in the about page.
Our backend server is made using Node.js and Express.js, and it connects to a MongoDB Atlas database making use of a JavaScript ORM - Mongoose. We make use of city data from WikiData, geographic locations from GeoDB API, text messaging functionality of Twilio, and crypto payment handling of Circle.
## Challenges we ran into
Some challenges we ran into initially is to make the entire web app responsive across devices while still keeping our styles to be rendered. At the end, we figured out a great way of displaying it in a mobile setting while including a proper navbar as well. In addition, we ran into trouble working with the Circle API for the first time. Since we've never worked with cryptocurrency before, we didn't understand some of the implications of the code we wrote, and that made it difficult to continue with Circle.
## Accomplishments that we're proud of
An accomplishment we are proud of is rendering the user dashboard along with the form component, which allowed the user to either enlist as a subscriber or homeowner. The info received from this component would later be parsed into the dashboard would be available for show.
We are also proud of how we integrated Twilio's SMS messaging services into the backend algorithm for matching subscribers with homeowners. This algorithm used information queried from our database, accessed from WikiData, and returned from various API calls to make an "optimal" matching based on distance and convenience, and it was nice to see this concept work in real life by texting those who were matched.
## What we learned
We learned many things such as how to use React Router in linking to pages in an easy way. Also, leaving breadcrumbs in our Main.jsx allowed us to manually navigate to such pages when we didn't necessarily had anything set up in our web app. We also learned how to use many backend tools like Twilio and Circle.
## What's next for RescueNet
What's next for RescueNet includes many things. We are planning on completing the payment model using Circle API, including implementing automatic monthly charges and the ability to unsubscribe. Additionally, we plan on marketing to a few customers nationwide, this will allow us to conceptualize and iterate on our ideas till they are well polished. It will also help in scaling things to include countries such as the U.S.A and Mexico. | losing |
## Aperture
*Share your moments effortlessly*

## Inspiration
Motivated by our personal experiences, we have realized that the current state of the social network applications does not fulfill a particular use case that most of us share: Every time we go on a trip, to a party or to an event with friends or family, we struggle to get ahold of the photos that we took together.

The best solution to this problem has been to create a Google Photos album, after the event/trip has ended, then individually invite everyone to share their photos to it.
Some people, may choose to use Snapchat as an alternative, creating a "group chat" or sharing to the location's "Story", however, this prevents saving photos and gives no control over privacy.
So, we thought to ourselves: No one should be working this hard just to preserve the joyful memories they created with family and friends.
## What it does
 
**Aperture** is the future of social photo sharing: It takes all the effort away from the experience of sharing moments with friends and family. With location-based events, our advanced algorithm determines when you are near an event that your friends are attending. We then send a notification to your phone inviting you to use the app to share photos with your friends. When you open the notification, you are taken to the event's page where you will find all of the photos that your friends have been taking throughout the event. You can now choose to save the album so that every moment is never lost! You can also tie your app to the event for a certain time, so that every subsequential photo you take until the end of the event is shared with everyone attending! We then proceed to tag the images through machine learning algorithms to let you search your moments by tags.
## How we built it
We used the amazing **Expo** framework, built on top of **React-Native**. Our backend is powered by **Firebase** (for database storage), **Amazon Web Services** (for data storage) and **Google Cloud Services** (for Machine Learning and Computer Vision algorithms). We used **git** and **Github** to manage our development process.
## Challenges we ran into
While two of us have had previous experience with **Expo**, the other two of us did not and none of us have had experience with **Firebase**. Additionally, while our team has experience with **NoSQL** databases, the **Firebase** data structure is much different than **MongoDB**, so it took time to adjust to it. We also had issues with certain modules (like **Google Cloud Vision**) being incompatible with **React Native** due to dependency on core **Node** modules. To get around this, we had to manually write out the **HTTP** requests necessary to be an analog to the functionality of the methods we couldn't use. One of the biggest issues we had, however, was the inability to upload images to **Firebase** storage from **React Native** (which we confirmed through **GitHub** issue trackers). After many attempts at converting local pictures into base64 strings, we setup an **AWS** bucket and uploaded the images there.
## Accomplishments that we're proud of
We're proud of our ability to consume a (probably unhealthily) large volume of caffeine and stay focused and productive. We're also proud of the way we handled the **React Native** module compatibility issues and the workarounds we found for them. We're also happy with how much work we got done in such a short amount of time, especially given the relative lack of experience we had with the major components of our project.
## What we learned
We learned about a great brand of caffeinated chocolate that is very, very effective and keeping you awake. We also learned a lot about **Firebase** and its **Realtime Database** system. Additionally, we got some great experience with **Expo** and **React Native**.
## What's next for Aperture
Silicon Valley. | ## Inspiration
A big problem that exists and is most prevalent in cancer is the idea that people developed these diseases and conditions but never get it checked up on, because no one knows that they have one of these diseases until it is too late. The whole spectrum of neurodegenerative diseases matches this phenomenon. Diseases related to dementia have symptoms that are present way before the average time that someone suspects they have this disease. When shown the power of Alexa, that idea came to us that we could utilize Alexa, a product that is increasing in popularity especially amongst the older generations, to be able to utilize predictive analytics to foresee potential diseases or conditions. With neurodegenerative diseases, the symptoms are usually changes in the speaking patterns and habits that an individual has. Hence, we believed that utilizing this along with machine learning would be an amazing way to predict the likelihood of these devastating conditions.
## What it does
This Alexa skill allows for analysis of speech attributes such as pauses, repeated words, and unintelligible words, which are attributes of speech known for being among the first victims of neurodegenerative diseases like Alzheimer's. With this information, we can monitor significant worsening of speech over time, notify users that they have exhibited symptoms of Alzheimer's and recommend that they see a doctor for a second opinion.
## How we built it
Our projects had many parts. The analytical part of the project was utilizing Python and machine learning through scikit-learn to develop an algorithm given a lot of training data to predict if an individual has a form of dementia given speech to text translated transcripts. From there that input was received (somewhat) from Amazon Alexa where she would be able to respond to user prompts. This was done through a combination of node.js on the Amazon Developer platform and AWS Lambda to achieve this. The combination of the two allows for an analytical mechanism to interact over time with individuals to predict their health and well being; the use case for our project was predicting the presence of dementia given a certain confidence interval along with having some cool health-related features integrated into our Alexa Skill. This is what constituted of our project, Echo MyHealth.
## Challenges we ran into
Because we aren’t Amazon, we could not create the dream implementation of the idea. Ideally, the machine learning algorithm could run passively upon any request without having to open the MyHealth skill. Instead, we created a proof of concept in the form of an Alexa Skill.
## Accomplishments that we're proud of
Using the machine learning algorithm, we are able to accurately predict instances of Alzheimer’s with an accuracy of 70 percent, with nothing but vocal recordings. Along with that, our team had no experience with Javascript and limited machine learning experience. None of our team knew each other coming into the Hackathon and we created something super awesome and that we enjoyed doing.
## What we learned
We learned so much so it was really awesome. We all learned aspects of node.js and javascript that none of us knew before. We were also exposed to creating Alexa Skills which was so exciting because of the power developers have.
## What's next for Echo MyHealth
We did a lot in 36 hours but there was more we wanted to do. We wanted to better increase the interactions between our analytical half and our Alexa half. Moreover, we want to be able to implement more features that can help us increase the accuracy of our machine learning algorithm. Lastly, we also would love to make Echo MyHealth a more immersive platform with more functionality given more time. | ## What it does
Take a picture, get a 3D print of it!
## Challenges we ran into
The 3D printers going poof on the prints.
## How we built it
* AI model transforms the picture into depth data. Then post-processing was done to make it into a printable 3D model. And of course, real 3D printing.
* MASV to transfer the 3D model files seamlessly.
* RBC reward system to incentivize users to engage more.
* Cohere to edit image prompts to be culturally appropriate for Flux to generate images.
* Groq to automatically edit the 3D models via LLMs.
* VoiceFlow to create an AI agent that guides the user through the product. | partial |
## Inspiration
An Article, about 86 per cent of Canada's plastic waste ends up in landfill, a big part due to Bad Sorting. We thought it shouldn't be impossible to build a prototype for a Smart bin.
## What it does
The Smart bin is able, using Object detection, to sort Plastic, Glass, Metal, and Paper
We see all around Canada the trash bins split into different types of trash. It sometimes becomes frustrating and this inspired us to built a solution that doesn't require us to think about the kind of trash being thrown
The Waste Wizard takes any kind of trash you want to throw, uses machine learning to detect what kind of bin it should be disposed in, and drops it in the proper disposal bin
## How we built it\
Using Recyclable Cardboard, used dc motors, and 3d printed parts.
## Challenges we ran into
We had to train our Model for the ground up, even getting all the data
## Accomplishments that we're proud of
We managed to get the whole infrastructure build and all the motor and sensors working.
## What we learned
How to create and train model, 3d print gears, use sensors
## What's next for Waste Wizard
A Smart bin able to sort the 7 types of plastic | ## Inspiration
At many public places, recycling is rarely a priority. Recyclables are disposed of incorrectly and thrown out like garbage. Even here at QHacks2017, we found lots of paper and cans in the [garbage](http://i.imgur.com/0CpEUtd.jpg).
## What it does
The Green Waste Bin is a waste bin that can sort the items that it is given. The current of the version of the bin can categorize the waste as garbage, plastics, or paper.
## How we built it
The physical parts of the waste bin are the Lego, 2 stepper motors, a raspberry pi, and a webcam. The software of the Green Waste Bin was entirely python. The web app was done in html and javascript.
## How it works
When garbage is placed on the bin, a picture of it is taken by the web cam. The picture is then sent to Indico and labeled based on a collection that we trained. The raspberry pi then controls the stepper motors to drop the garbage in the right spot. All of the images that were taken are stored in AWS buckets and displayed on a web app. On the web app, images can be relabeled and the Indico collection is retrained.
## Challenges we ran into
AWS was a new experience and any mistakes were made. There were some challenges with adjusting hardware to the optimal positions.
## Accomplishments that we're proud of
Able to implement machine learning and using the Indico api
Able to implement AWS
## What we learned
Indico - never done machine learning before
AWS
## What's next for Green Waste Bin
Bringing the project to a larger scale and handling more garbage at a time. | ## Inspiration
Want to see how a product, service, person or idea is doing in the court of public opinion? Market analysts are experts at collecting data from a large array of sources, but monitoring public happiness or approval ratings is notoriously difficult. Usually, focus groups and extensive data collection is required before any estimates can be made, wasting both time and money. Why bother with all of this when the data you need can be easily mined from social media websites such as Twitter? Through aggregating tweets, performing sentiment analysis and visualizing the data, it would be possible to observe trends on how happy the public is about any topic, providing a valuable tool for anybody who needs to monitor customer satisfaction or public perception.
## What it does
Queries Twitter Search API to return relevant tweets that are sorted into buckets of time. Sentiment analysis is then used to categorize whether the tweet is positive or negative in regards to the search term. The collected data is visualized with graphs such as average sentiment over time, percentage of positive to percentage of negative tweets, and other in depth trend analyses. An NLP algorithm that involves the clustering of similar tweets was developed to return a representative summary of good and bad tweets. This can show what most people are happy or angry about and can provide insight on how to improve public reception.
## How we built it
The application is split into a **Flask** back-end and a **ReactJS** front-end.
The back-end queries the Twitter API, parses and stores relevant information from the received tweets, and calculates any extra statistics that the front-end requires. The back-end then provides this information in a JSON object that the front-end can access through a `get` request.
The React front-end presents all UI elements in components styled by [Material-UI](https://material-ui.com/). [React-Vis](https://uber.github.io/react-vis/) was utilized to compose charts and graphs that presents our queried data in an efficient and visually-appealing way.
## Challenges we ran into
Twitter API throttles querying to 1000 tweets per minute, a number much less than what this project needs in order to provide meaningful data analysis. This means that by itself, after returning 1000 tweets we would have to wait another minute before continuing to request tweets. With some keywords returning hundreds of thousands of tweets, this was a huge problem. In addition, extracting a representative summary of good and bad tweet topics was challenging, as features that represent contextual similarity between words are not very well defined. Finally, we found it difficult to design a user interface that displays the vast amount of data we collect in a clear, organized, and aesthetically pleasing manner.
## Accomplishments that we're proud of
We're proud of how well we visualized our data. In the course of a weekend, we managed to collect and visualize a large sum of data in six different ways. We're also proud that we managed to implement the clustering algorithm. In addition, the application is fully functional with nothing manually mocked!
## What we learned
We learnt about several different natural language processing techniques. We also learnt about the Flask REST framework and best practices for building a React web application.
## What's next for Twitalytics
We plan on cleaning some of the code that we rushed this weekend, implementing geolocation filtering and data analysis, and investigating better clustering algorithms and big data techniques. | winning |
## Inspiration
Initially, we struggled to find a project idea. After circling through dozens of ideas and the occasional hacker's block, we were still faced with a huge ***blank space***. In the midst of all our confusion, it hit us that this feeling of desperation and anguish is familiar to all thinkers and creators. There came our inspiration - the search for inspiration. Tailor is a tool that enables artists to overcome their mental blocks in a fun and engaging manner, while leveraging AI technology. AI is very powerful, but finding the right prompt can sometimes be tricky, especially for children or those with special needs. With our easy to use app, anyone can find inspiration as swiftly as possible.
## What it does
The site generates prompts artists to generate creative prompts for DALLE. By clicking the "add" button, a react component containing a random noun is added to the main container. Users can then specify the color and size of this noun. They can add as many nouns as they want, then specify the style and location of the final artwork. After hitting submit, a prompt is generated and sent to OpenAI's API, which returns an image.
## How we built it
It was built using React Remix, OpenAI's API, and a random noun generator API. TailwindCSS was used for styling which made it easy to create beautiful components.
## Challenges we ran into
Getting tailwind installed, and installing dependencies in general. Sometimes our API wouldn't connect, and OpenAI rotated our keys since we were developing together. Even with tailwind, it was sometimes hard for the CSS to do what we wanted it to. Passing around functions and state between parent and child components in react was also difficult. We tried to integrate twiliio with an API call but it wouldn't work, so we had to setup a separate backend on Vercel and manually paste the image link and phone number. Also, we learned Remix can't use react-speech libraries so that was annoying.
## Accomplishments that we're proud of
* Great UI/UX!
* Connecting to the OpenAI Dalle API
* Coming up with a cool domain name
* Sleeping more than 2 hours this weekend
## What we learned
We weren't really familiar with React as none of use had really used it before this hackathon. We really wanted to up our frontend skills and selected Remix, a metaframework based on React, to do multipage routing. It turned out to be a little overkill, but we learned a lot and are thankful to mentors. They showed us how to avoid overuse of Hooks, troubleshoot API connection problems, and use asynchrous functions. We also learned many more tailwindcss classes and how to use gradients.
## What's next for Tailor
It would be cool to have this website as a browser extension, maybe just to make it more accessible, or even to have it scrape websites for AI prompts. Also, it would be nice to implement speech to text, maybe through AssemblyAI | ## Inspiration
Globally, over 92 million tons of textile waste are generated annually, contributing to overflowing landfills and environmental degradation. What's more, the fashion industry is responsible for 10% of global carbon emissions, with fast fashion being a significant contributor due to its rapid production cycles and disposal of unsold items. The inspiration behind our project, ReStyle, is rooted in the urgent need to address the environmental impact of fast fashion. Witnessing the alarming levels of clothing waste and carbon emissions prompted our team to develop a solution that empowers individuals to make sustainable choices effortlessly. We believe in reshaping the future of fashion by promoting a circular economy and encouraging responsible consumer behaviour.
## What it does
ReStyle is a revolutionary platform that leverages AI matching to transform how people buy and sell pre-loved clothing items. The platform simplifies the selling process for users, incentivizing them to resell rather than contribute to the environmental crisis of clothing ending up in landfills. Our advanced AI matching algorithm analyzes user preferences, creating tailored recommendations for buyers and ensuring a seamless connection between sellers and buyers.
## How we built it
We used React Native and Expo to build the front end, creating different screens and components for the clothing matching, camera, and user profile functionality. The backend functionality was made possible using Firebase and the OpenAI API. Each user's style preferences are saved in a Firebase Realtime Database, as are the style descriptions for each piece of clothing, and when a user takes a picture of a piece of clothing, the OpenAI API is called to generate a description for that piece of clothing, and this description is saved to the DB. When the user is on the home page, they will see the top pieces of clothing that match with their style, retrieved from the DB and the matches generated using the OpenAI API.
## Challenges we ran into
* Our entire team was new to the technologies we utilized.
* This included React Native, Expo, Firebase, OpenAI.
## Accomplishments that we're proud of
* Efficient and even work distribution between all team members
* A visually aesthetic and accurate and working application!
## What we learned
* React Native
* Expo
* Firebase
* OpenAI
## What's next for ReStyle
Continuously refine our AI matching algorithm, incorporating machine learning advancements to provide even more accurate and personalized recommendations for users, enabling users to save clothing that they are interested in. | # AI Video Generator: From Concept to Creation
## Inspiration
Have you ever wondered in class, "When am I really going to use these concepts in the real world?" To answer this question our team set out to present a video model of real life applications of concepts presented from a prompt. Our team was also inspired by the rapid advancements in AI technology, particularly in the fields of natural language processing and video generation. We saw an opportunity to bridge these technologies, creating a tool that could transform simple text prompts into engaging video content. The idea of democratizing video creation, making it accessible to anyone with an idea, regardless of their technical or creative skills, was a driving force behind our project.
## Program Flow
1. User inputs topic they want to learn about in Next.js site
2. Query is sent to flask backend
3. LLM converts topic to a narative
4. LLM converts narrative into screenplay (with scenes and scene descriptions)
5. Cloud hosted Open-Sora generates 4 second clip for each scene
6. Text to Speech generates audio of narrative
7. Scenes and text to speech are combined and returned to front end
8. Video is displayed to the user
## What We Learned
Throughout the development of this project, we gained valuable insights and skills:
1. **AI Integration**: We deepened our understanding of OpenAI's GPT models and how to effectively prompt them for creative tasks.
2. **Full-Stack Development**: We honed our skills in building a cohesive application that spans from frontend to backend, integrating various technologies.
3. **API Design**: We learned the intricacies of designing and implementing RESTful APIs that connect our frontend with our AI-powered backend.
4. **React and Next.js**: We expanded our knowledge of modern frontend frameworks, particularly in creating dynamic and responsive user interfaces.
5. **Python Backend**: We improved our Python skills, especially in handling asynchronous operations and subprocess management.
6. **Implement Open-Sora** : Compiled and hosted Open-Sora ourselves
7. **Use OpenAI TTS** : First time using it
## How We Built It
Our project was built using a combination of cutting-edge technologies:
* **Frontend**: We used React with Next.js for a fast, SEO-friendly frontend. Material-UI provided a sleek, responsive design.
* **Backend**: Python powered our backend, with Flask serving as our web framework.
* **AI Integration**: We leveraged OpenAI's GPT-4 model to generate creative scripts and screenplays from user prompts.
* **API**: We created a custom API using Next.js API routes to handle communication between the frontend and backend.
* **Video Generation**: We generated videos using Open-Sora to present an understandable model to explain our prompt
## Challenges We Faced
Despite our enthusiasm, we encountered several challenges:
1. **CORS Issues**: We struggled with Cross-Origin Resource Sharing (CORS) when trying to connect our frontend to the backend API. This required us to implement workarounds and properly configure our server.
2. **AI Model Limitations**: Working within the constraints of the AI model's output format and ensuring consistent, usable results was a significant challenge.
3. **Asynchronous Operations**: Managing asynchronous operations, especially when dealing with AI-generated content and video processing, proved to be complex.
4. **Performance Optimization**: Ensuring the application remained responsive while handling resource-intensive tasks like video generation was a constant concern.
5. **Error Handling**: Implementing robust error handling across both frontend and backend to provide a smooth user experience was more challenging than anticipated.
6. **Integration Complexity**: Bringing together various technologies (React, Next.js, Python, OpenAI API) into a cohesive application required careful planning and execution.
Despite these challenges, our team persevered, learning valuable lessons along the way. The result is an innovative application that pushes the boundaries of what's possible with AI-assisted content creation. | partial |
# cirkvito
Ever wanna quickly check your Intro to CS Logic course homework without firin' up the ol' Quartus-Beast? Love circuits, but hate Java applets? Or maybe you're dying to make yourself a shiny half-adder but don't have the time (or dexterity) to fashion one out of physical components?
## Inspo
Enter *cirkvito* (serk-vee-toh)! Circuits are awesome but finding good software to simulate them is hard. We chose to build a tool to provide ourselves with an alternative to clunky and overly-complex Flash or Java applets. (Appropriately, its name is a translation of the word "circuits" in Esperanto.) Ableson's *The Structure and Interpretation of Computer Programs* mentioned this idea, too.
## What it Does
cirkvito is a drag and drop single-page-app that simulates simple circuits. You click and drag components on the page and connect them in whatever way you want and end up with something cool. Click buttons to zoom in and out, hold the space bar to pan around, and shift-click to select or delete multiple components.
## Some Challenges
We built cirkvito with a lot of JavaScript by taking advantage of *canvas* and *jQuery*. We learned that UI is really hard to get right by rolling our own buttons and zoom/recenter system.
## What We're Proud of
We're proud that cirkvito successfully simulates not only combinational circuits like adders, but also sequential circuits with flip flops made from elementary logic gates (albeit with a manual "clock"). Also, we designed the data structures to represent circuit components and the way they're connected on our own.
## What's Next?
A fully-responsive app that's able to resize components depending on the window size would have been nice to have. Also, being able to support custom circuits (allowing builders to "bundle" custom configurations of logic gates into black boxes) would make it easier to build more complex circuits. | # Nexus, **Empowering Voices, Creating Connections**.
## Inspiration
The inspiration for our project, Nexus, comes from our experience as individuals with unique interests and challenges. Often, it isn't easy to meet others with these interests or who can relate to our challenges through traditional social media platforms.
With Nexus, people can effortlessly meet and converse with others who share these common interests and challenges, creating a vibrant community of like-minded individuals.
Our aim is to foster meaningful connections and empower our users to explore, engage, and grow together in a space that truly understands and values their uniqueness.
## What it Does
In Nexus, we empower our users to tailor their conversational experience. You have the flexibility to choose how you want to connect with others. Whether you prefer one-on-one interactions for more intimate conversations or want to participate in group discussions, our application Nexus has got you covered.
We allow users to either get matched with a single person, fostering deeper connections, or join one of the many voice chats to speak in a group setting, promoting diverse discussions and the opportunity to engage with a broader community. With Nexus, the power to connect is in your hands, and the choice is yours to make.
## How we built it
We built our application using a multitude of services/frameworks/tool:
* React.js for the core client frontend
* TypeScript for robust typing and abstraction support
* Tailwind for a utility-first CSS framework
* DaisyUI for animations and UI components
* 100ms live for real-time audio communication
* Clerk for a seamless and drop-in OAuth provider
* React-icons for drop-in pixel perfect icons
* Vite for simplified building and fast dev server
* Convex for vector search over our database
* React-router for client-side navigation
* Convex for real-time server and end-to-end type safety
* 100ms for real-time audio infrastructure and client SDK
* MLH for our free .tech domain
## Challenges We Ran Into
* Navigating new services and needing to read **a lot** of documentation -- since this was the first time any of us had used Convex and 100ms, it took a lot of research and heads-down coding to get Nexus working.
* Being **awake** to work as a team -- since this hackathon is both **in-person** and **through the weekend**, we had many sleepless nights to ensure we can successfully produce Nexus.
* Working with **very** poor internet throughout the duration of the hackathon, we estimate it cost us multiple hours of development time.
## Accomplishments that we're proud of
* Finishing our project and getting it working! We were honestly surprised at our progress this weekend and are super proud of our end product Nexus.
* Learning a ton of new technologies we would have never come across without Cal Hacks.
* Being able to code for at times 12-16 hours straight and still be having fun!
* Integrating 100ms well enough to experience bullet-proof audio communication.
## What we learned
* Tools are tools for a reason! Embrace them, learn from them, and utilize them to make your applications better.
* Sometimes, more sleep is better -- as humans, sleep can sometimes be the basis for our mental ability!
* How to work together on a team project with many commits and iterate fast on our moving parts.
## What's next for Nexus
* Make Nexus rooms only open at a cadence, ideally twice each day, formalizing the "meeting" aspect for users.
* Allow users to favorite or persist their favorite matches to possibly re-connect in the future.
* Create more options for users within rooms to interact with not just their own audio and voice but other users as well.
* Establishing a more sophisticated and bullet-proof matchmaking service and algorithm.
## 🚀 Contributors 🚀
| | | | |
| --- | --- | --- | --- |
| [Jeff Huang](https://github.com/solderq35) | [Derek Williams](https://github.com/derek-williams00) | [Tom Nyuma](https://github.com/Nyumat) | [Sankalp Patil](https://github.com/Sankalpsp21) | | ## Inspiration
My brother & I have taken over 15 online courses from coursera.org & edx.org. We <3 learning the content but never met anyone new, so we created a web app to match people that would enjoy working together on online courses.
## What it does
A "Sorting Hat", inspired from Harry Potter, collects user beliefs. Anyone can ask any question & vote on any question. We use each user's votes to match them with the other users that have the highest intersection of beliefs.
## How I built it
React.js/Redux, Typescript/ES6, Python, MongoDB, Redis, RabbitMQ, Google & LinkedIn Oauth2, Express, Heroku, Chron tab, Socket.io, SendGrid, & various APIs.
## Challenges I ran into
1. Trouble using socket.io's API to detect when a user leaves the site.
## Greatest Satisfaction so far
Users that have responded back about how much they've wanted a product like infinity2o.
## What I learned
Every problem does not have a perfect tool, but finding the right tool for the problem is an important part of
## What's next for [www.infinity2o.com](http://www.infinity2o.com)
Getting as much feedback from as much user's as possible. | winning |
## Inspiration
Our inspiration for TRACY came from the desire to enhance tennis training through advanced technology. One of our members was a former tennis enthusiast who has always strived to refine their skills. They soon realized that the post-game analysis process took too much time in their busy schedule. We aimed to create a system that not only analyzes gameplay but also provides personalized insights for players to improve their skills.
## What it does and how we built it
TRACY utilizes computer vision algorithms and pre-trained neural networks to analyze tennis footage, tracking player movements, and ball trajectories. The system then employs ChatGPT for AI-driven insights, generating personalized natural language summaries highlighting players' strengths and weaknesses. The output includes dynamic visuals and statistical data using React.js, offering a comprehensive overview and further insights into the player's performance.
## Challenges we ran into
Developing a seamless integration between computer vision, ChatGPT, and real-time video analysis posed several challenges. Ensuring accuracy in 2D ball tracking from a singular camera angle, optimizing processing speed, and fine-tuning the algorithm for accurate tracking were a key hurdle we overcame during the development process. The depth of the ball became a challenge as we were limited to one camera angle but we were able to tackle it by using machine learning techniques.
## Accomplishments that we're proud of
We are proud to have successfully created TRACY, a system that brings together state-of-the-art technologies to provide valuable insights to tennis players. Achieving a balance between accuracy, speed, and interpretability was a significant accomplishment for our team.
## What we learned
Through the development of TRACY, we gained valuable insights into the complexities of integrating computer vision with natural language processing. We also enhanced our understanding of the challenges involved in real-time analysis of sports footage and the importance of providing actionable insights to users.
## What's next for TRACY
Looking ahead, we plan to further refine TRACY by incorporating user feedback and expanding the range of insights it can offer. Additionally, we aim to explore potential collaborations with tennis coaches and players to tailor the system to meet the diverse needs of the tennis community. | ## Inspiration
Being sport and fitness buffs, we understand the importance of right form. Incidentally, suffering from a wrist injury himself, Mayank thought of this idea while in a gym where he could see almost everyone following wrong form for a wide variety of exercises. He knew that it couldn't be impossible to make something that easily accessible yet accurate in recognizing wrong exercise form and most of all, be free. He was sick of watching YouTube videos and just trying to emulate the guys in it with no real guidance. That's when the idea for Fi(t)nesse was born, and luckily, he met an equally health passionate group of people at PennApps which led to this hack, an entirely functional prototype that provides real-time feedback on pushup form. It also lays down an API that allows expansion to a whole array of exercises or even sports movements.
## What it does
A user is recorded doing the push-up twice, from two different angles. Any phone with a camera can fulfill this task.
The data is then analyzed and within a minute, the user has access to detailed feedback pertaining to the 4 most common push-up mistakes. The application uses custom algorithms to detect these mistakes and also their extent and uses this information to provide a custom numerical score to the user for each category.
## How we built it
Human Pose detection with a simple camera was achieved with OpenCV and deep neural nets. We tried using both the COCO and the MPI datasets for training data and ultimately went with COCO. We then setup an Apache server running Flask using the Google Computing Engine to serve as an endpoint for the input videos. Due to lack of access to GPUs, a 24 core machine on the Google Cloud Platform was used to run the neural nets and generate pose estimations.
The Fi(t)nesse website was coded in HTML+CSS while all the backend was written in Python.
## Challenges we ran into
Getting the Pose Detection right and consistent was a huge challenge. After a lot of tries, we ended and a model that works surprisingly accurately. Combating the computing power requirements of a large neural network was also a big challenge. We were initially planning to do the entire project on our local machines but when they kept slowing down to a crawl, we decided to shift everything to a VM.
The algorithms to detect form mistakes and generate scores for them were also a challenge since we could find no mathematical information about the right form for push-ups, or any of the other popular exercises for that matter. We had to come up with the algorithms and tweak them ourselves which meant we had to do a LOT of pushups. But to our pleasant surprise, the application worked better than we expected.
Getting a reliable data pipeline setup was also a challenge since everyone on our team was new to deployed systems. A lot of hours and countless tutorials later, even though we couldn't reach exactly the level of integration we were hoping for, we were able to create something fairly streamlined. Every hour of the struggle taught us new things though so it was all worth it.
## Accomplishments that we're proud of
-- Achieving accurate single body human pose detection with support for multiple bodies as well from a simple camera feed.
-- Detecting the right frames to analyze from the video since running every frame through our processing pipeline was too resource intensive
-- Developing algorithms that can detect the most common push-up mistakes.
-- Deploying a functioning app
## What we learned
Almost every part of this project involved a massive amount of learning for all of us. Right from deep neural networks to using huge datasets like COCO and MPI, to learning how deployed app systems work and learning the ins and outs of the Google Cloud Service.
## What's next for Fi(t)nesse
There is an immense amount of expandability to this project.
Adding more exercises/movements is definitely an obvious next step. Also interesting to consider is the 'gameability' of an app like this. By giving you a score and sassy feedback on your exercises, it has the potential to turn exercise into a fun activity where people want to exercise not just with higher weights but also with just as good form.
We also see this as being able to be turned into a full-fledged phone app with the right optimizations done to the neural nets. | ## 🚀**Inspiration**
Stakes, friendly fights, and fiery debates have followed since the 2022 FIFA world cup's impending ascent and enthusiasm. Our team made the decision to apply machine learning principles to real data in order to assess and forecast which side would win these matches.
## 🤔**What it does**
Our front-end web app allows users to match-up 2 countries that are participating in the 2022 FIFA world cup. Then it will display the win percentage from our back-end API which is in constant communication with an AI. This AI was taught through Machine Learning with a large dataset of past soccer games and their results. Using extensive services like Google Cloud in order to implement Vertex AI, we are able to train this large dataset in order to obtain the results for a win, a draw, or a tie given the two team names. Using Google Clouds Vertex AI we were able to deploy endpoints in order to then work with the predicted data in our Flask app. This Flask app communicated with our React app in order to display the information according to user input, and team matchup combinations.
## 👨💻**How we built it**
We built the front end using React, and bootstrap. The front end communicates with an endpoint running Flask and Python. This is because we need to have an authenticated computer talk directly to the Google Cloud. This endpoint calls Google Cloud API, which predicts the odds of a team winning, given some information.
## 🚧**Challenges we ran into**
From comprehending how it operates to building our own machine learning environments and processing information through them, machine learning presented a new challenge for all of us. A challenge was figuring out how to apply Vertex AI's AutoML to our dataset and modify it to make it more compatible with the AI. Another challenge was our React app. Learning React and building our first web application was challenging, and in order to make our React website work best with backend computation, we had to cram a lot of tutorials and documentation into it.
## 🎯**Accomplishments that we're proud of**
making an AI that responds to the right request and outputs the right data We entered this data into Google Clouds Vertex AI, where we trained and successfully predicted soccer match results given the names of two teams utilising the effective and user-friendly AutoML technology.
## 💡**What we learned**
Working with Google Clouds Platform our team really extended our capabilities across the wide array of features that it included, all the way from creating VM instances to host python flask apps, to utilizing the Vertex Ai’s AutoML to train our large dataset, and effectively target specific columns to produce the data that we needed. In order to understand and effectively communicate the data in a visual manner our team had to learn react, and understand HTTP requests, this process involved long hours of research and team efforts.
## 📈**What's next for Mirai9**
Implementing our machine learning model to more variety of sports, esports, and even hackathon statistics (using data on ideas) to predict which projects have higher chances of winning based on prior hackathon data. Mirai 9 works to achieve more accurate predictions in the near future with greater success than as of current. | winning |
## Inspiration
Our journey to creating this project stems from a shared realization: the path from idea to execution is fraught with inefficiencies that can dilute even the most brilliant concepts. As developers with a knack for turning visions into reality, we've faced the slow erosion of enthusiasm and value that time imposes on innovation. This challenge is magnified for those outside the technical realm, where a lack of coding skills transforms potential breakthroughs into missed opportunities. Harvard Business Review and TechCrunch analyzed Y Combinator startups and found that around 40% of founders are non-technical.
Drawing from our experiences in fast-paced sectors like health and finance, we recognized the critical need for speed and agility. The ability to iterate quickly and gather user feedback is not just beneficial but essential in these fields. Yet, this process remains a daunting barrier for many, including non-technical visionaries whose ideas have the potential to reshape industries.
With this in mind, we set out to democratize the development process. Our goal was to forge a tool that transcends technical barriers, enabling anyone to bring their ideas to life swiftly and efficiently. By leveraging our skills and insights into the needs of both developers and non-developers alike, we've crafted a solution that bridges the gap between imagination and tangible innovation, ensuring that no idea is left unexplored due to the constraints of technical execution.
This project is more than just a tool; it's a testament to our belief that the right technology can unlock the potential within every creative thought, transforming fleeting ideas into impactful realities.
## What it does
Building on the foundation laid by your vision, MockupMagic represents a leap toward democratizing digital innovation. By transforming sketches into interactive prototypes, we not only streamline the development process but also foster a culture of inclusivity where ideas, not technical prowess, stand in the spotlight. This tool is a catalyst for creativity, enabling individuals from diverse backgrounds to participate actively in the digital creation sphere.
The user can upload a messy sketch on paper to our website. MockupMagic will then digitize your low-fidelity prototype into a high-fidelity replica with interactive capabilities. The user can also see code alongside the generated mockups, which serves as both a bridge to tweak the generated prototype and a learning tool, gently guiding users toward deeper technical understanding. Moreover, the integration of a community feedback mechanism through the Discussion tab directly within the platform enhances the iterative design process, allowing for real-time user critique and collaboration.
MockupMagic is more than a tool; it's a movement towards a future where the digital divide is narrowed, and the translation of ideas into digital formats is accessible to all. By empowering users to rapidly prototype and refine their concepts, we're not just accelerating the pace of innovation; we're ensuring that every great idea has the chance to be seen, refined, and realized in the digital world.
## How we built it
Conceptualization: The project began with brainstorming sessions where we discussed the challenges non-technical individuals face in bringing their ideas to life. Understanding the value of quick prototyping, especially for designers and founders with creative but potentially fleeting ideas, we focused on developing a solution that accelerates this process.
Research and Design: We conducted research to understand the needs of our target users, including designers, founders, and anyone in between who might lack technical skills. This phase helped us design a user-friendly interface that would make it intuitive for users to upload sketches and receive functional web mockups.
Technology Selection: Choosing the right technologies was crucial. We decided on a combination of advanced image processing and AI algorithms capable of interpreting hand-drawn sketches and translating them into HTML, CSS, and JavaScript code. We leveraged and finetuned existing AI models from MonsterAPI and GPT API and tailored them to our specific needs for better accuracy in digitizing sketches.
Development: The development phase involved coding the backend logic that processes the uploaded sketches, the AI model integration for sketch interpretation, and the frontend development for a seamless user experience. We used the Reflex platform to build out our user-facing website, capitalizing on their intuitive Python-like web development tools.
Testing and Feedback: Rigorous testing was conducted to ensure the accuracy of the mockups generated from sketches. We also sought feedback from early users, including designers and founders, to understand how well the tool met their needs and what improvements could be made.
## Challenges we ran into
We initially began by building off our own model, hoping to aggregate quality training data mapping hand-drawn UI components to final front-end components, but we quickly realized this data was very difficult to find and hard to scrape for. Our model performs well for a few screens however it still struggles to establish connections between multiple screens or more complex actions.
## Accomplishments that we're proud of
Neither of us had much front-end & back-end experience going into this hackathon, so we made it a goal to use a framework that would give us experience in this field. After learning about Reflex during our initial talks with sponsors, we were amazed that Web Apps could be built in pure Python and wanted to jump right in. Using Reflex was an eye-opening experience because we were not held back by preconceived notions of traditional web development - we got to enjoy learning about Reflex and how to build products with it. Reflex’s novelty also translates to limited knowledge about it within LLM tools developers use to help them while coding, this helped us solidify our programming skills through reading documentation and creative debugging methodologies - skills almost being abstracted away by LLM coding tools. Finally, our favorite part about doing hackathons is building products we enjoy using. It helps us stay aligned with the end user while giving us personal incentives to build the best hack we can.
## What we learned
Through this project, we learned that we aren’t afraid to tackle big problems in a short amount of time. Bringing ideas on napkins to full-fledged projects is difficult, and it became apparent hitting all of our end goals would be difficult to finish in one weekend. We quickly realigned and ensured that our MVP was as good as it could get before demo day.
## What's next for MockupMagic
We would like to fine-tune our model to handle more edge cases in handwritten UIs. While MockupMagic can handle a wide range of scenarios, we hope to perform extensive user testing to figure out where we can improve our model the most. Furthermore, we want to add an easy deployment pipeline to give non-technical founders even more autonomy without knowing how to code. As we continue to develop MockupMagic, we would love to see the platform being used even at TreeHacks next year by students who want to rapidly prototype to test several ideas! | ## Overview
According to the WHO, at least 2.2 billion people worldwide have a vision impairment or blindness. Out of these, an estimated 1 billion cases could have been prevented or have yet to be addressed. This underscores the vast number of people who lack access to necessary eye care services. Even as developers, our screens have been both our canvas and our cage. We're intimately familiar with the strain they exert on our eyes, a plight shared by millions globally.
We need a **CHANGE**.
What if vision care could be democratized, made accessible, and seamlessly integrated with cutting-edge technology? Introducing OPTimism.
## Inspiration
The very genesis of OPTimism is rooted in empathy. Many in underserved communities lack access to quality eye care, a necessity that most of us take for granted. Coupled with the increasing screen time in today's digital age, the need for effective and accessible solutions becomes even more pressing. Our team has felt this on a personal level, providing the emotional catalyst for OPTimism. We didn't just want to create another app; we aspired to make a tangible difference.
## Core Highlights
**Vision Care Chatbot:** Using advanced AI algorithms, our vision chatbot assists users in answering vital eye care questions, offering guidance and support when professional help might not be immediately accessible.
**Analytics & Feedback:** Through innovative hardware integrations like posture warnings via a gyroscope and distance tracking with ultrasonic sensors, users get real-time feedback on their habits, empowering them to make healthier decisions.
**Scientifically-Backed Exercises:** Grounded in research, our platform suggests eye exercises designed to alleviate strain, offering a holistic approach to vision care.
**Gamified Redemption & Leaderboard System:** Users are not just passive recipients but active participants. They can earn optimism credits, leading to a gamified experience where they can redeem valuable eye care products. This not only incentivizes regular engagement but also underscores the importance of proactive vision care. The donation system using Circle allows users to make the vision care product possible.
## Technical Process
Bridging the gap between the technical and the tangible was our biggest challenge. We leaned on technologies such as React, Google Cloud, Flask, Taipy, and more to build a robust frontend and backend, containerized using Docker and Kubernetes and deployed on Netlify. Arduino's integration added a layer of real-world interaction, allowing users to receive physical feedback. The vision care chatbot was a product of countless hours spent on refining algorithms to ensure accuracy and reliability.
## Tech Stack
React, JavaScript, Vite, Tailwind CSS, Ant Design, Babel, NodeJS, Python, Flask, Taipy, GitHub, Docker, Kubernetes, Firebase, Google Cloud, Netlify, Circle, OpenAI
**Hardware List:** Arduino, Ultrasonic sensor, smart glasses, gyroscope, LEDs, breadboard
## Challenges we ran into
* Connecting the live data retrieved from the Arduino into the backend application for manipulating and converting into appropriate metrics
* Circle API key not authorized
* Lack of documentation for different hardwares and support for APIs.
## Summary
OPTimism isn't just about employing the latest technologies; it's about leveraging them for a genuine cause. We've seamlessly merged various features, from chatbots to hardware integrations, under one cohesive platform. Our aim? Clear, healthy vision for all, irrespective of their socio-economic background.
We believe OPTimism is more than just a project. It's a vision, a mission, and a commitment. We will convert the hope to light the path to a brighter, clearer future for everyone into reality. | ## Inspiration for Creating sketch-it
Art is fundamentally about the process of creation, and seeing as many of us have forgotten this, we are inspired to bring this reminder to everyone. In this world of incredibly sophisticated artificial intelligence models (many of which can already generate an endless supply of art), now more so than ever, we must remind ourselves that our place in this world is not only to create but also to experience our uniquely human lives.
## What it does
Sketch-it accepts any image and breaks down how you can sketch that image into 15 easy-to-follow steps so that you can follow along one line at a time.
## How we built it
On the front end, we used Flask as a web development framework and an HTML form that allows users to upload images to the server.
On the backend, we used Python libraries ski-kit image and Matplotlib to create visualizations of the lines that make up that image. We broke down the process into frames and adjusted the features of the image to progressively create a more detailed image.
## Challenges we ran into
We initially had some issues with scikit-image, as it was our first time using it, but we soon found our way around fixing any importing errors and were able to utilize it effectively.
## Accomplishments that we're proud of
Challenging ourselves to use frameworks and libraries we haven't used earlier and grinding the project through until the end!😎
## What we learned
We learned a lot about personal working styles, the integration of different components on the front and back end side, as well as some new possible projects we would want to try out in the future!
## What's next for sketch-it
Adding a feature that converts the step-by-step guideline into a video for an even more seamless user experience! | partial |
## Inspiration
Personal assistant AI like Siri, Cortana and Alexa were the inspiration for our project.
## What it does
Our mirror displays information about the weather, date, social media, news and nearby events.
## How we built it
The mirror works by placing a monitor behind a two-way mirror and displaying information using multiple APIs. We integrated a Kinect so users can select that's being displayed using various gestures. We hosted our project with Azure
## Challenges we ran into
Integrating the Kinect
## Accomplishments that we're proud of
Integrating as many components as we did including the Kinect
## What we learned
How to use Microsoft Azure and work with various APIs as well as use Visual Studio to interact with the Kinect
## What's next for Microsoft Mirror
Implementing more features! We would like to explore more of Microsoft's Cognitive Services as well as utilize the Kinect in a more refined fashion. | ## Inspiration
Ever since the creation of department stores, if a customer needed assistance, he or she would have to look for a sales representative. This is often difficult and frustrating as people are constantly on the move. Some companies try to solve this problem by implementing stationary machines such as kiosks. However, these machines can only answer specific questions and they still have to be located. So we wanted to find a better way to feasibly connect customers with in-store employees.
## What it does
Utilizing NFC technology, customers can request help or find more information about their product with just a simple tap. By tapping their phone on the price tag, one's default browser will open up and the customer will be given two options:
**Product link** - Directly goes to the company's website of that specific product
**Request help** - Sends a request to the in store employees notifying them of where you tapped (eg. computers aisle 5)
The customer service representative can let you know that he or she is on the way with the representative's face displayed on the customer's phone so they know who to look for. Once helped, the customer can provide feedback on the in-store employee. Using Azure Cognitive Services, the customer's feedback comments will be translated to a score between 0-100 in which the information will be stored. All of this can be performed without an app, just tapp.
## How We Built It
The basis of the underlying simplicity for the customer is the simple "tap" into our webapp. The NFC sticker stores a URL and the product details - allowing users to bypass the need to install a 3rd party app.
The webapp is powered by node.js, running on a Azure VM. The staff members have tablets that connect directly to our database service, powered by Firebase, to see real-time changes.
The analytics obtained by our app is stored in Azure's SQL server. We use Azure Cognitive Services to identify sentiment level of the customer's feedback and is stored into the SQL server for future analysis for business applications.
## Challenges We Ran Into
Finding a way to record and display the data tapp provides.
## Accomplishments that We're Proud of
* Learning how to use Azure
* Having the prototype fully functional after 36 hours
* Creating something that is easy to use and feasible (no download required)
## What I learned
* How to integrate Azure technology into our apps
* Better understanding of NFC technology
* Setting up a full-stack server - from the frontend to the backend
* nginx reverse proxy to host our two node apps serving with HTTPS
## What's next for Tapp
* We are going to improve on the design and customization options for Tapp, and pitch it to multiple businesses
* We will be bringing this idea forward to an entrepreneurship program at UBC | ## Inspiration
In online documentaries, we saw visually impaired individuals and their vision consisted of small apertures. We wanted to develop a product that would act as a remedy for this issue.
## What it does
When a button is pressed, a picture is taken of the user's current view. This picture is then analyzed using OCR (Optical Character Recognition) and the text is extracted from the image. The text is then converted to speech for the user to listen to.
## How we built it
We used a push button connected to the GPIO pins on the Qualcomm DragonBoard 410c. The input is taken from the button and initiates a python script that connects to the Azure Computer Vision API. The resulting text is sent to the Azure Speech API.
## Challenges we ran into
Coming up with an idea that we were all interested in, incorporated a good amount of hardware, and met the themes of the makeathon was extremely difficult. We attempted to use Speech Diarization initially but realized the technology is not refined enough for our idea. We then modified our idea and wanted to use a hotkey detection model but had a lot of difficulty configuring it. In the end, we decided to use a pushbutton instead for simplicity in favour of both the user and us, the developers.
## Accomplishments that we're proud of
This is our very first Makeathon and we were proud of accomplishing the challenge of developing a hardware project (using components we were completely unfamiliar with) within 24 hours. We also ended up with a fully function project.
## What we learned
We learned how to operate and program a DragonBoard, as well as connect various APIs together.
## What's next for Aperture
We want to implement hot-key detection instead of the push button to eliminate the need of tactile input altogether. | partial |
## Inspiration
We love boba. Who wouldn't?? Our eyes are as black and pure as the boba we consume.
## What it does
Our beautiful little spinning wheel will generate a boba order for you! So if you are like us and can't decide which boba we could possibly buy because the options of those sugary tapioca pearls are endless, you can use the boba generator!
## How we built it
With react and a lot of tears
## Accomplishments that we're proud of
That spinning spinning wheel :3
## What we learned
React has a really hard learning curve and we all overcame that together as we all worked on our first web application.
## What's next for Boba Generator
Things will rise up for this meme machine for boba teens my dudes. It's going to expand to all the boba chains! Android App! iOS App! Google Glasses! | ## Inspiration
With the current pandemic, it's easy to forget about mundane things, which includes keeping track of what's in your fridge. Even before the pandemic, it has always been so easy to forget about that avocado or that smelly cheese in the back of your fridge. No more of that! With potatOH!!!, you can keep track of what's in your fridge, as well as track the expiration date. Let's all reduce our food waste!
## What it does
potatOH!!! allows its user to input a food item, along with its expiration date and food type. It stores the input in a simple and clear table, where the user can see what's currently in their fridge. After enjoying the food item, the user can delete it from the table.
## How we built it
We built it with all of our heart <3
We used javascript, CSS, HTML and React to create our wonderful project, potatOH!!!
We used React Router DOM to connect our pages together so that our *spinning potato* can have its own page.
## Challenges we ran into
We had some issues with connecting the pages, but we were dedicated to having a page for our spinning potato and we persevered. We learned the importance of .gitignore (32000+ files that, unfortunately, took too much time to commit).
## Accomplishments that we're proud of
First and foremost, our **PRETTY SPINNING POTATO**.
We are proud of how far we've come with collaborating on GitHub. We didn't have any prior experience with javascript and React, and little to no experience with CSS. We were also unfamiliar with the IDE we had chosen to execute our project on. To have been able to create a project that compiles as we wanted is a feat in itself. We are also quite happy with our newfound familiarity with the command line.
## What we learned
We learned that implementing a website requires a concrete and detailed plan. We also learned how to use React, its features and specifically, how it combines with javascript, CSS and HTML. We learned about some cool features of Twilio, that we, unfortunately, did not have time to explore and implement.
## What's next for potatOH!!!
We want to implement a way to sort the items and to add a quantity column, so that users can keep track of how much they have left of a specific food item. We would like to have a graph feature that shows everyone's waste in order to motivate people to check what's in their fridge more regularly. We also want to implement a login feature so that it can be shared with family members and roommates, so that we can all contribute to reducing food waste! | ## Inspiration
Americans waste about 425 beverage containers per capita per year in landfill, litter, etc. Bottles are usually replaced with cans and bottles made from virgin materials which are more energy-intensive than recycled materials. This causes emissions of a host of toxics to the air and water and increases greenhouse gas emissions.
The US recycling rate is about 33% while that in stats what have container deposit laws have a 70% average rate of beverage recycling rate. This is a significant change in the amount of harm we do to the planet.
While some states already have a program for exchanging cans for cash, EarthCent brings some incentive to states to make this a program and something available. Eventually when this software gets accurate enough, there will not be as much labor needed to make this happen.
## What it does
The webapp allows a GUI for the user to capture an image of their item in real time. The EarthCents image recognizer recognizes the users bottles and dispenses physical change through our change dispenser. The webapp then prints a success or failure message to the user.
## How we built it
Convolutional Neural Networks were used to scan the image to recognize cans and bottles.
Frontend and Flask presents a UI as well as processes user data.
The Arduino is connected up to the Flask backend and responds with a pair of angle controlled servos to spit out coins.
Change dispenser: The change dispenser is built from a cardboard box with multiple structural layers to keep the Servos in place. The Arduino board is attached to the back and is connected to the Servos by a hole in the cardboard box.
## Challenges we ran into
Software: Our biggest challenge was connect the image file from the HTML page to the Flask backend for processing through a TensoFlow model. Flask was also a challenge since complex use of it was new to us.
Hardware: Building the cardboard box for the coin dispenser was quite difficult. We also had to adapt the Servos with the Arduino so that the coins can be successfully spit out.
## Accomplishments that we're proud of
With very little tools, we could build with hardware a container for coins, a web app, and artificial intelligence all within 36 hours. This project is also very well rounded (hardware, software, design, web development) and let us learn a lot about connecting everything together.
## What we learned
We learned about Arduino/hardware hacking. We learned about the pros/cons of Flask vs. using something like Node.js. In general, there was a lot of light shed on the connectivity of all the elements in this project. We both had skills here and there, but this project brought it all together. Learned how to better work together and manage our time effectively through the weekend to achieve as much as possible without being too overly ambitious.
## What's next for EarthCents
EarthCents could deposit Cryptocurrency/venmo etc and hold more coins. If this will be used, we would want to connect it to a weight to ensure that the user exchanged their can/bottle. More precise recognition. | losing |
Yeezy Vision is a Chrome extension that lets you view the web from the eyes of the 21st century's greatest artist - Kanye West. Billions of people have been deprived of the life changing perspective that Mr. West has cultivated. This has gone on for far too long. We had to make a change.
Our extension helps the common man view the world as Yeezus does. Inferior text is replaced with Kanye-based nouns, less important faces are replaced by Kanye's face and links are replaced by the year of Ye - #2020.
## How we built it
We split the functionality to 4 important categories: Image replacement, text and link replacement, website and chrome extension. We did the image and text replacement using http post and gets from the microsoft azure server, bootstrap for the website and wrote a replacement script in javascript.
## What's next for Yeezy Vision
Imma let you finish, but Yeezy Vision will be THE GREATEST CHROME EXTENSION OF ALL TIME. Change your world today at yeezyvision.tech. | ## 💡Inspiration
In a world of fast media, many youth struggle with short attention spans and find it difficult to stay focused. We want to help overcome this phenomena with the help of a baby leaf sprout and automated browser proctoring! We hope this tool will help users ace their tests and achieve their dreams!
## 🔍 What it does
Don't Leaf Me! utilizes a camera and monitors the user throughout the study session. If the user leaves their computer (leaves the camera) we know the user is not studying, and the browser opens a page to let the user know to come back and study! The tool also allows users to add urls that they wish to avoid during their study session and monitors which pages the user goes to. If the user travels to one of those pages, the tool alerts them to go back to study. The user can also use the built-in study session timer which lets them keep track of their progress.
## ⚙️ How we built it
We constructed the front end with React, TypeScript, Chakra UI, styled-components, and webpack. We built the backend with Node, Express, and the Cloud Vision API from the Google Cloud Platform to detect whether or not a user was in front of the camera.
## 🚧 Challenges we ran into
It was quite tricky to get the camera working to detect a person's face. There aren't many examples of what we were going for online, and much of our progress was made through trial and error. There were even less resources on how to connect the camera to the chrome extension. It was also difficult to extract the url of a browser from the chrome extension page since they are two separate pages.
## ✔️ Accomplishments that we're proud of
* Creating a beautiful UI that creates a comfortable and motivating environment for users to be productive in
* Successfully grappling with Google's Cloud Vision API
* We managed to program the vision we had and also implemented most of the features we had planned to.
## 📚 What we learned
* Through creating this project we gained a deeper understanding of the browser and DOM.
* This was also the first time using our camera and chrome extension for many of our team members
## 🔭 What's next for Don't Leaf Me!
* Don't Leaf Me! would like to add the ability for users to input the amount of time they wish to stay focused for on the timer.
* Explore options of publishing our extension to the chrome web store. | ## Inspiration
One of our teammate’s grandfathers suffers from diabetic retinopathy, which causes severe vision loss.
Looking on a broader scale, over 2.2 billion people suffer from near or distant vision impairment worldwide. After examining the issue closer, it can be confirmed that the issue disproportionately affects people over the age of 50 years old. We wanted to create a solution that would help them navigate the complex world independently.
## What it does
### Object Identification:
Utilizes advanced computer vision to identify and describe objects in the user's surroundings, providing real-time audio feedback.
### Facial Recognition:
It employs machine learning for facial recognition, enabling users to recognize and remember familiar faces, and fostering a deeper connection with their environment.
### Interactive Question Answering:
Acts as an on-demand information resource, allowing users to ask questions and receive accurate answers, covering a wide range of topics.
### Voice Commands:
Features a user-friendly voice command system accessible to all, facilitating seamless interaction with the AI assistant: Sierra.
## How we built it
* Python
* OpenCV
* GCP & Firebase
* Google Maps API, Google Pyttsx3, Google’s VERTEX AI Toolkit (removed later due to inefficiency)
## Challenges we ran into
* Slow response times with Google Products, resulting in some replacements of services (e.g. Pyttsx3 was replaced by a faster, offline nlp model from Vosk)
* Due to the hardware capabilities of our low-end laptops, there is some amount of lag and slowness in the software with average response times of 7-8 seconds.
* Due to strict security measures and product design, we faced a lack of flexibility in working with the Maps API. After working together with each other and viewing some tutorials, we learned how to integrate Google Maps into the dashboard
## Accomplishments that we're proud of
We are proud that by the end of the hacking period, we had a working prototype and software. Both of these factors were able to integrate properly. The AI assistant, Sierra, can accurately recognize faces as well as detect settings in the real world. Although there were challenges along the way, the immense effort we put in paid off.
## What we learned
* How to work with a variety of Google Cloud-based tools and how to overcome potential challenges they pose to beginner users.
* How to connect a smartphone to a laptop with a remote connection to create more opportunities for practical designs and demonstrations.
How to connect a smartphone to a laptop with a remote connection to create more opportunities for practical designs and demonstrations.
How to create docker containers to deploy google cloud-based flask applications to host our dashboard.
How to develop Firebase Cloud Functions to implement cron jobs. We tried to developed a cron job that would send alerts to the user.
## What's next for Saight
### Optimizing the Response Time
Currently, the hardware limitations of our computers create a large delay in the assistant's response times. By improving the efficiency of the models used, we can improve the user experience in fast-paced environments.
### Testing Various Materials for the Mount
The physical prototype of the mount was mainly a proof-of-concept for the idea. In the future, we can conduct research and testing on various materials to find out which ones are most preferred by users. Factors such as density, cost and durability will all play a role in this decision. | partial |
## Inspiration
According to the World Health Organization (WHO), 1 in every 5 college students suffer from mental health disorders including depression and anxiety. This epidemic has far-reaching personal and societal consequences – increasing disease risk, fracturing relationships, and reducing workforce productivity.
Current methods of depression diagnosis are extremely time-consuming, relying heavily on one-on-one clinical interviews conducted manually by psychiatrists. Moreover, the subjective nature of evaluations often lead to inconsistent medical advice and recommended treatment paths across different specialists. As a result, there is a clear need for a solution for efficient and standardized depression diagnosis.
## What it does
SmartPsych is a streamlined web platform for automated depression diagnosis from videos of natural patient conversation. Leveraging a unique mix of audio, computer vision, and natural language processing deep learning algorithms, SmartPsych accurately pinpoints and quantifies the complex network of symptoms underlying depression for further psychiatric evaluation. Furthermore, SmartPsych has a special sentence-by-sentence playback feature that enables psychiatrists to hone in on specific sentences (color-coded for severity from green to red being the most severe). This enables psychiatrists to identify the patient-specific struggles and affected topics – paving the way for personalized treatment.
SmartPsych is currently able to detect depression sentiment from patient words, negative emotions (e.g. anger, contempt, fear, sadness) from facial images, and valence and arousal (i.e. positive energy levels) from spectral qualities of a user's voice.
Overall, SmartPsych represents an entirely new way of depression diagnosis – instead of wasting the valuable time of psychiatrists, we delegate the tedious task of identifying depression symptoms to machines and bring in psychiatrists at the end for the final diagnosis and treatment when their expertise is most crucial. This hybrid man and machine model increases efficiency and retains accuracy compared to relying on machine learning predictions alone.
## How I built it
The web application was built entirely in Flask; to facilitate user interaction with the application, I used the pre-designed UIKit components to create a minimalistic yet user-friendly front-end.
Since current sentiment analysis methods are unable to directly infer depression, I trained my own CNN-LSTM neural network on the 189 interview transcripts of both depressed and non-depressed patients from the USC Distress Analysis Wizard-of-Oz database. After undergoing supervised learning, my neural network had a validation accuracy of 69%, which is highly comparable to current human inter-rater variabilities between 70-75%.
Emotion classification and facial landmark detection was performed through the Microsoft Cognitive Face API, valence and arousal prediction was performed through multi-variate regression on audio Mel Frequency Cepstral Coefficients (MFCC), and speech-to-text transcription was performed through Google Cloud.
## Accomplishments that I'm proud of
Integrating custom-trained Keras/Tensorflow models, Google and Microsoft cloud APIs, and regression trees into a single unified and streamlined application.
## What I learned
The sheer amount of factors behind depression, and calling cloud APIs!
## What's next for SmartPsych
While SmartPsych can now be easily used on the web for effective depression diagnosis, there is still much room left for improvement. Development of a mobile form of SmartPsych to increase usability, integration of other biometrics (i.e. sleep quality, heart rate, exercise) into depression diagnosis, and increasing understanding of depression sentiment prediction with entity analysis. | ## Inspiration
The purchase of goods has changed drastically over the past decade, especially over the period of the pandemic. Although with these online purchases comes a drawback, the buyer can not see the product in front of them before buying it. This adds an element of uncertainty and undesirability to online shopping as this can cost the consumer time and the seller money in processing returns, in fact, a study showed that up to 40% of all online purchases are returned, and out of those returned items just 30% were resold to customers, with the rest going to landfills or other warehouses.
With this app, we hope to reduce the number of returns by putting the object the user wants to buy in front of them before they buy it so that they know exactly what they are getting.
## What it does
Say you are looking to buy a tv but are not sure if it will fit or how it will look in your home. You would be able to open the Ecomerce ARena Android app and browse the TVs on Amazon(since that's where you were planning to buy the TV from anyways). You can see all the info that Amazon has on the TV but then also use AR mode to view the TV in real life.
## How we built it
To build the app we used Unity, coding everything within the engine using C#. We used the native AR foundation function provided and then built upon them to get the app to work just right. We also incorporated EchoAr into the app to manage all 3d models and ensure the app is lean and small in size.
## Challenges we ran into
Augmented Reality development was new to all of us as well as was the Unity engine, having to learn and harness the power of these tools was difficult and we ran into a lot of problems building and getting the desired outcome. Another problem was how to get the models for each different product, we decided for this Hackathon to limit our scope to two types of products with the ability to keep adding more in the future easily.
## Accomplishments that we're proud of
We are really proud of the final product for being able to detect surfaces and use the augmented reality capabilities super well. We are also really happy that we were able to incorporate web scraping to get live data from Amazon, as well as the echo AR cloud integration.
## What we learned
We learned a great deal about how much work and how truly amazing it is to get augmented reality applications built even for those who look simple on the surface. There was a lot that changed quickly as this is still a new bleeding-edge technology.
## What's next for Ecommerce ARena
We hope to expand its functionality to cover a greater variety of products, as well as supporting other vendors aside from Amazon such as Best Buy and Newegg. We can also start looking into the process for releasing the app into the Google app store, might even look into porting it to Apple products. | ## Inspiration
Rates of patient nonadherence to therapies average around 50%, particularly among those with chronic diseases. One of my closest friends has Crohn's disease, and I wanted to create something that would help with the challenges of managing a chronic illness. I built this app to provide an on-demand, supportive system for patients to manage their symptoms and find a sense of community.
## What it does
The app allows users to have on-demand check-ins with a chatbot. The chatbot provides fast inference, classifies actions and information related to the patient's condition, and flags when the patient’s health metrics fall below certain thresholds. The app also offers a community aspect, enabling users to connect with others who have chronic illnesses, helping to reduce the feelings of isolation.
## How we built it
We used Cerebras for the chatbot to ensure fast and efficient inference. The chatbot is integrated into the app for real-time check-ins. Roboflow was used for image processing and emotion detection, which aids in assessing patient well-being through facial recognition. We also used Next.js as the framework for building the app, with additional integrations for real-time community features.
## Challenges we ran into
One of the main challenges was ensuring the chatbot could provide real-time, accurate classifications and flagging low patient metrics in a timely manner. Managing the emotional detection accuracy using Roboflow's emotion model was also complex. Additionally, creating a supportive community environment without overwhelming the user with too much data posed a UX challenge.
## Accomplishments that we're proud of
✅deployed on defang
✅integrated roboflow
✅integrated cerebras
We’re proud of the fast inference times with the chatbot, ensuring that users get near-instant responses. We also managed to integrate an emotion detection feature that accurately tracks patient well-being. Finally, we’ve built a community aspect that feels genuine and supportive, which was crucial to the app's success.
## What we learned
We learned a lot about balancing fast inference with accuracy, especially when dealing with healthcare data and emotionally sensitive situations. The importance of providing users with a supportive, not overwhelming, environment was also a major takeaway.
## What's next for Muni
Next, we aim to improve the accuracy of the metrics classification, expand the community features to include more resources, and integrate personalized treatment plans with healthcare providers. We also want to enhance the emotion detection model for more nuanced assessments of patients' well-being. | partial |
## Challenges we ran into
* Integrating the Chrome extension with various CAD applications to capture real-time screen data
* Creating and optimizing a RAG process to curate data and provide relevant and accurate advice based on captured screenshots and project context
* Balancing the frequency of API calls to Claude 3.5 to provide timely advice without overwhelming the system
* Ensuring the extension's UI remained responsive and user-friendly while processing complex CAD data and AI responses
* Implementing effective error handling and fallback mechanisms for various scenarios
## Accomplishments that we're proud of
* Successfully developing a functional Chrome extension that integrates seamlessly with online CAD applications
* Implementing a RAG system that effectively combines Claude 3.5 with relevant CAD design principles and best practices
* Creating an intuitive user interface for easy interaction with the AI assistant during CAD work
* Developing a system that provides real-time, context-aware advice across different skill levels and project types
* Collaborating effectively as a team to bring together various components into a cohesive product within the hackathon timeframe
## What we learned
* Intricacies of developing Chrome extensions and integrating them with complex web applications like CAD software
* Techniques for effectively prompting and utilizing large language models for specialized tasks
* Importance of user experience design in creating tools for technical applications
* Strategies for rapid prototyping and iterative development in a hackathon environment
* Potential of AI to augment and enhance complex design processes in fields like mechanical engineering
## What's next for CADvisor AI
* Expand compatibility to support a wider range of CAD software and platforms
* Implement advanced computer vision techniques for better 3D model analysis
* Develop features to suggest specific CAD operations based on the user's design stage and goals | ## Inspiration
In the maze of social interactions, we've all encountered the awkward moment of not remembering the name of someone we've previously met. Face-Book is our solution to the universal social dilemma of social forgetfulness.
## What it does
Face-Book is an iPhone app that--discreetly--records and analyzes faces and conversations using your phone's camera and audio. Upon recognizing a familiar face, it instantly retrieves their name gathered during past saved conversations, as well as past interactions, and interesting tidbits – a true social lifesaver.
## How we built it
Swift, Xcode, AWS Face Rekognition & Diarization, OpenAI.
## Challenges we ran into
We navigated the uncharted waters of ethical boundaries and technical limitations, but our vision of a seamlessly connected world guided us. We didn't just build an app; we redefined social norms.
## Accomplishments that we're proud of
We take pride in Face-Book's unparalleled ability to strip away the veil of privacy, presenting it as the ultimate tool for social convenience. Our app isn't just a technological triumph; it's a gateway to omnipresent social awareness.
## What we learned
Our journey revealed the immense potential of data in understanding and predicting human behavior. Every interaction is a data point, contributing to an ever-growing atlas of human connections.
## What's next for Face-Book
The future is limitless. We envision Face-Book as a standard feature in smartphones, working hand-in-hand with governments worldwide. Imagine a society where every face is known, every interaction logged – a utopia of safety and social convenience. Porting it to an AR platform would also be nice. | ## Inspiration
Emergency situations can be extremely sudden and can seem paralyzing, especially for young children. In most cases, children from the ages of 4-10 are unaware of how to respond to a situation that requires contact with first responders, and what the most important information to communicate. In the case of a parent or guardian having a health issue, children are left feeling helpless. We wanted to give children confidence that is key to their healthy cognitive and social development by empowering them with the knowledge of how to quickly and accurately respond in emergency situations, which is why we created Hero Alert.
## What it does
Our product provides a tangible device for kids to interact with, guiding them through the process of making a call to 9-1-1 emergency services. A conversational AI bot uses natural language understanding to listen to the child’s responses and tailor the conversation accordingly, creating a sense that the child is talking to a real emergency operator.
Our device has multiple positive impacts: the educational aspect of encouraging children’s cognitive development skills and preparing them for serious, real-life situations; giving parents more peace of mind, knowing that their child can respond to dire situations; and providing a diverting, engaging game for children to feel like their favorite Marvel superhero while taking the necessary steps to save the day!
## How we built it
On the software side, our first step was to find find images from comic books that closely resemble real-life emergency and crisis scenarios. We implemented our own comic classifier with the help of IBM Watson’s visual recognition service, classifying and re-tagging images made available by Marvel’s Comics API into crisis categories such as fire, violence, water disasters, or unconsciousness. The physical device randomly retrieves and displays these image objects from an mLab database each time a user mimics a 9-1-1 call.
We used the Houndify conversational AI by SoundHound to interpret the voice recordings and generate smart responses. Different emergencies scenarios were stored as pages in Houndify and different responses from the child were stored as commands. We used Houndify’s smart expressions to build up potential user inputs and ensure the correct output was sent back to the Pi.
Running on the Pi was a series of Python scripts, a command engine and an interaction engine, that enabled the flow of data and verified the child’s input.
On the hardware end, we used a Raspberry Pi 3 connected to a Sony Eye camera/microphone to record audio and a small hdmi monitor to display a tagged Marvel image. The telephone 9-1-1 digits were inputted as haptic buttons connected to the Pi’s GPIO pins. All of the electronics were encapsulated in a custom laser cut box that acted as both a prototype for children’s toy and as protection for the electronics.
## Challenges we ran into
The comics from the Marvel API are hand-drawn and don’t come with detailed descriptions, so we had a tough time training a general model to match pictures to each scenario. We ended up creating a custom classifier with IBM Watson’s visual recognition service, using a few pre-selected images from Marvel, then applied that to the entirety of the Marvel imageset to diversify our selection.
The next challenge was creating conversational logic flow that could be applied to a variety of statements a child might say while on the phone. We created several scenarios that involved numerous potential emergency situations and used Houndify’s Smart Expressions to evaluate the response from a child. Matching statements to these expressions allowed us to understand the conversation and provide custom feedback and responses throughout the mock phone call.
We also wanted to make sure that we provide a sense of empowerment for the child. While they should not make unnecessary calls, children should not be afraid or anxious to talk with emergency services during an emergency. We want them to feel comfortable, capable, and strong enough to make that call and help the situation they are in. Our implementation of Marvel Comics allowed us provide some sense of super-power to the children during the calls.
## Accomplishments that we're proud of
Our end product works smoothly and simulates an actual conversation for a variety of crisis scenarios, while providing words of encouragement and an unconventional approach to emergency response. We used a large variety of APIs and platforms and are proud that we were able to have all of them work with one another in a unified product.
## What we learned
We learned that the ideation process and collaboration are keys in solving any wicked problem that exists in society. We also learned that having a multidisciplinary team with very diverse backgrounds and skill sets provides the most comprehensive contributions and challenges us both as individuals and as a team.
## What's next for Hero Alert!
We'd love to get more user feedback and continue development and prototyping of the device in the future, so that one day it will be available on store shelves. | losing |
# My Eyes
Helping dyslectics read by using vision and text-to-speech.
## Mission Statement
Dyslexia is a reading disability that affects 5% of the population. Individuals with dyslexia have difficulty decoding written word with speed and accuracy. To help those afflicted with dyslexia, many schools in BC provide additional reading classes. But even with reading strategies, it can take quite a bit of concentration and effort to comprehend text.
Listening to an audio book is more convenient than reading a physical one. There are text-to-speech services which can read off digital text on your tablet or computer. However, there aren't any easily accessible services which offer reading off physical text.
Our mission was to provide an easily accessible service that could read off physical text. Our MOBILE app at 104.131.142.126:3000 or eye-speak.org allows you to take a picture of any text and play it back. The site's UI was designed for those with dyslexia in mind. The site fonts and color scheme were purposely chosen to be as easily read as possible.
This site attempts to provide an easy free service for those with reading disabilities.
## Getting Started
These instructions will get you a copy of the project up and running on your local machine for development and testing purposes.
### Installing
Create a file called '.env' and a folder called 'uploads' in the root folder
Append API keys from [IBM Watson](https://www.ibm.com/watson/services/text-to-speech/)
Append API keys from Google Cloud Vision
1. [Select or create a Cloud Platform project](https://console.cloud.google.com/project)
2. [Enable billing for your project](https://support.google.com/cloud/answer/6293499#enable-billing)
3. [Enable the Google Cloud Vision API API](https://console.cloud.google.com/flows/enableapi?apiid=vision.googleapis.com)
4. [Set up authentication with a service account so you can access the API from your local workstation](https://cloud.google.com/docs/authentication/getting-started)
.env should look like this when you're done:
```
USERNAME=<watson_username>
PASSWORD=<watson_password>
GOOGLE_APPLICATION_CREDENTIALS=<path_to_json_file>
```
Install dependencies and start the program:
```
npm install
npm start
```
Take a picture of some text and and press play to activate text-to-speech.
## Built With
* [Cloud Vision API](https://cloud.google.com/vision/) - Used to read text from images
* [Watson Text to Speech](https://console.bluemix.net/catalog/services/text-to-speech) - Used to read text from images to speech
## Authors
* **Zachary Anderson** - *Frontend* - [ZachaRuba](https://github.com/ZachaRuba)
* **Håvard Estensen** - *Google Cloud Vision* - [estensen](https://github.com/estensen)
* **Kristian Jensen** - *Backend, IBM Watson* - [HoboKristian](https://github.com/HoboKristian)
* **Charissa Sukontasukkul** - *Design*
* **Josh Vocal** - *Frontend* - [joshvocal](https://github.com/joshvocal) | Fujifusion is our group's submission for Hack MIT 2018. It is a data-driven application for predicting corporate credit ratings.
## Problem
Scholars and regulators generally agree that credit rating agency failures were at the center of the 2007-08 global financial crisis.
## Solution
\*Train a machine learning model to automate the prediction of corporate credit ratings.
\*Compare vendor ratings with predicted ratings to identify discrepancies.
\*Present this information in a cross-platform application for RBC’s traders and clients.
## Data
Data obtained from RBC Capital Markets consists of 20 features recorded for 27 companies at multiple points in time for a total of 524 samples. Available at <https://github.com/em3057/RBC_CM>
## Analysis
We took two approaches to analyzing the data: a supervised approach to predict corporate credit ratings and an unsupervised approach to try to cluster companies into scoring groups.
## Product
We present a cross-platform application built using Ionic that works with Android, iOS, and PCs. Our platform allows users to view their investments, our predicted credit rating for each company, a vendor rating for each company, and visual cues to outline discrepancies. They can buy and sell stock through our app, while also exploring other companies they would potentially be interested in investing in. | ## Inspiration
There are thousands of people worldwide who suffer from conditions that make it difficult for them to both understand speech and also speak for themselves. According to the Journal of Deaf Studies and Deaf Education, the loss of a personal form of expression (through speech) has the capability to impact their (affected individuals') internal stress and lead to detachment. One of our main goals in this project was to solve this problem, by developing a tool that would a step forward in the effort to make it seamless for everyone to communicate. By exclusively utilizing mouth movements to predict speech, we can introduce a unique modality for communication.
While developing this tool, we also realized how helpful it would be to ourselves in daily usage, as well. In areas of commotion, and while hands are busy, the ability to simply use natural lip-reading in front of a camera to transcribe text would make it much easier to communicate.
## What it does
**The Speakinto.space website-based hack has two functions: first and foremost, it is able to 'lip-read' a stream from the user (discarding audio) and transcribe to text; and secondly, it is capable of mimicking one's speech patterns to generate accurate vocal recordings of user-inputted text with very little latency.**
## How we built it
We have a Flask server running on an AWS instance (thanks for the free credit, AWS!), which is connected to a machine learning model running on the server, with a frontend made with HTML and MaterializeCSS. The model was trained to transcribe people mouthing words, using the millions of words in the LRW and LRS datasets (from the BBC and TED). This model's integration is the centerpiece of our hack. We then used the HTML MediaRecorder API to take 8-second clips of video to initially implement the video-to-mouthing-words function on the website, using a direct application of the machine learning model.
We then later added an encoder model, to translate audio into an embedding containing vocal information, and then a decoder, to convert the embeddings to speech. To convert the text in the first function to speech output, we use the Google Text-to-Speech API, and this would be the main point of future development of the technology, in having noiseless calls.
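A stripped-down sketch of the transcription endpoint is below; the `lipreading_model` object stands in for our trained network, and the route and field names are illustrative rather than the exact ones we used.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
lipreading_model = None  # placeholder: the trained visual speech recognition model

@app.route("/transcribe", methods=["POST"])
def transcribe():
    # The frontend posts the 8-second MediaRecorder clip as a file upload
    clip = request.files["video"]
    clip_path = "/tmp/clip.webm"
    clip.save(clip_path)

    # Run the lip-reading model on the (audio-discarded) video frames
    text = lipreading_model.predict(clip_path)

    # The client can then feed `text` to a text-to-speech service for noiseless calls
    return jsonify({"transcript": text})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```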
## Challenges we ran into
The machine learning model was quite difficult to create, and required a large amount of testing (and caffeine) to finally result in a model that was fairly accurate for visual analysis (72%). The process of preprocessing the data, and formatting such a large amount of data to train the algorithm was the area which took the most time, but it was extremely rewarding when we finally saw our model begin to train.
## Accomplishments that we're proud of
Our final product is much more than any of us expected, especially since it seemed like an impossibility when we first started. We are very proud of the optimizations that were necessary to run the webpage fast enough to be viable in an actual use scenario.
## What we learned
Working across such a wide array of computing concepts, from web development to statistical analysis to the development and optimization of ML models, was an amazing learning experience over the last two days. We all learned so much from each other, as each one of us brought special expertise to our team.
## What's next for speaking.space
As a standalone site, it has its use cases, but the use cases are limited due to the requirement to navigate to the page. The next steps are to integrate it in with other services, such as Facebook Messenger or Google Keyboard, to make it available when it is needed just as conveniently as its inspiration. | winning |
## Inspiration
Our team was determined to challenge a major problem in society and create a practical solution. It occurred to us early on that **false facts** and **fake news** have become a growing problem, due to how easily information spreads across common forms of social media. Many initiatives and campaigns have recently used approaches, such as ML fact checkers, to identify and remove fake news across the Internet. Although we have seen this approach become noticeably better over time, our group felt that there must be a way to innovate upon the foundations created by the ML.
In short, our aspirations to challenge an ever-growing issue within society, coupled with the thought of innovating upon current technological approaches to the solution, truly inspired what has become ETHentic.
## What it does
ETHentic is a **betting platform** with a twist. Rather than relying on luck, you play against the odds of truth and justice. Users are given random snippets of journalism and articles to review, and must determine whether the information presented within the article is false/fake news, or whether it is legitimate and truthful, **based on logical reasoning and honesty**.
Users must initially trade in Ether for a set number of tokens (0.30 ETH = 100 tokens). One token can be used to review one article. Every article that is chosen from the Internet is first evaluated using an ML model, which determines whether the article is truthful or false. For a user to *win* the bet, they must make the same evaluation as the ML model. By winning the bet, a user will receive a $0.40 gain on the bet. This means a player is very capable of making a return on investment in the long run.
Any given article will only be reviewed by 100 unique users. Once the 100-review cap has been met, the article will retire, and the results will be published to the Ethereum blockchain. The results will include anonymous statistics on the ratio of truthful to false evaluations, the article source, and the ML's original evaluation. This data is public, immutable, and has a number of advantages. All results going forward will be capable of improving the ML model's ability to recognize false information, by comparing the relationship between its assessment and public review, and training the model in a cost-effective, open source method.
To summarize, ETHentic is an incentivized, fun way to educate the public about recognizing fake news across social media, while improving the ability of current ML technology to recognize such information. We are improving the two current best approaches to beating fake news manipulation, by educating the public, and improving technology capabilities.
## How we built it
ETHentic uses a multitude of tools and software to make the application possible. First, we drew out our task flow. After sketching wireframes, we designed a prototype in Framer X. We conducted informal user research to inform our UI decisions, and built the frontend with React.
We used **Blockstack** Gaia to store user metadata, such as user authentication, betting history, token balance, and Ethereum wallet ID in a decentralized manner. We then used MongoDB and Mongoose to create a DB of articles and a counter for the amount of people who have viewed any given article. Once an article is added, we currently outsourced to Google's fact checker ML API to generate a true/false value. This was added to the associated article in Mongo **temporarily**.
Users who wanted to purchase tokens would receive a Metamask request, which would process an Ether transfer to an admin wallet that handles all the money in/money out. Once the payment is received, our node server would update the Blockstack user file with the correct amount of tokens.
Users who perform betting receive instant results on whether they were correct or wrong, and are prompted to accept their winnings from Metamask.
Every time the Mongo DB updates the counter, it checks if the count = 100. Upon an article reaching a count of 100, the article is removed from the DB and will no longer appear in the betting game. The ML's initial evaluation, the user results, and the source for the article are all published permanently onto the Ethereum blockchain. We used IPFS to create a hash that linked to this information, which meant that the cost of storing this data on the blockchain was massively decreased. We used Infura as a way to get access to IPFS without needing a heavier package or library. Storing on the blockchain allows for easy access to useful data that can be used in the future to train ML models at a rate that matches the user base growth.
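A rough sketch of that retirement step is below; the record fields are illustrative, and the final on-chain write (storing the returned IPFS hash in an Ethereum transaction via web3/Infura) is left as a comment since it depends on wallet setup.

```python
import ipfshttpclient  # pip install ipfshttpclient; assumes an IPFS daemon is reachable

def retire_article(article):
    """Called once an article's review counter hits 100."""
    record = {
        "source": article["url"],
        "ml_verdict": article["ml_verdict"],        # the fact checker's original call
        "reviews": {"true": article["true_votes"],  # anonymous 100-user tally
                    "false": article["false_votes"]},
    }
    with ipfshttpclient.connect() as ipfs:
        cid = ipfs.add_json(record)  # pin the full record off-chain, get a content hash
    # The CID (a short string) is what gets written permanently to Ethereum,
    # which keeps gas costs low compared to storing the whole record on-chain.
    return cid
```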
As for our brand concept, we used a green colour that reminded us of Ethereum Classic. Our logo is Lady Justice - she's blindfolded, holding a sword in one hand and a scale in the other. Her sword was created as a tribute to the Ethereum logo. We felt that Lady Justice was a good representation of what our project meant, because it gives users the power to be the judge of the content they view, equipping them with a sword and a scale. Our marketing website, ethergiveawayclaimnow.online, is a play on "false advertising" and not believing everything you see online, since we're not actually giving away Ether (sorry!). We thought this would be an interesting way to attract users.
## Challenges we ran into
Figuring out how to use and integrate new technologies such as Blockstack, Ethereum, etc., was the biggest challenge. Some of the documentation was also hard to follow, and because of the libraries being a little unstable/buggy, we were facing a lot of new errors and problems.
## Accomplishments that we're proud of
We are really proud of managing to create such an interesting, fun, yet practical potential solution to such a pressing issue. Overcoming the errors and bugs with few well-documented resources, although frustrating at times, was another good experience.
## What we learned
We think this hack made us learn two main things:
1) Blockchain is more than just a cryptocurrency tool.
2) Sometimes even the most dubious subject areas can be made interesting.
The whole fake news problem is something that has traditionally been taken very seriously. We took the issue as an opportunity to create a solution through a different approach, which really stressed the lesson of thinking about and viewing things from a multitude of perspectives.
## What's next for ETHentic
ETHentic is looking forward to the potential of continuing to develop the ML portion of the project, and making it available on test networks for others to use and play around with. | # Inspiration
Traditional startup fundraising is often restricted by stringent regulations, which make it difficult for small investors and emerging founders to participate. These barriers favor established VC firms and high-net-worth individuals, limiting innovation and excluding a broad range of potential investors. Our goal is to break down these barriers by creating a decentralized, community-driven fundraising platform that democratizes startup investments through a Decentralized Autonomous Organization, also known as a DAO.
# What It Does
To achieve this, our platform leverages blockchain technology and the DAO structure. Here’s how it works:
* **Tokenization**: We use blockchain technology to allow startups to issue digital tokens that represent company equity or utility, creating an investment proposal through the DAO.
* **Lender Participation**: Lenders join the DAO, where they use cryptocurrency, such as USDC, to review and invest in the startup proposals.
* **Startup Proposals**: Startup founders create proposals to request funding from the DAO. These proposals outline key details about the startup, its goals, and its token structure. Once submitted, DAO members review the proposal and decide whether to fund the startup based on its merits.
* **Governance-based Voting**: DAO members vote on which startups receive funding, ensuring that all investment decisions are made democratically and transparently. The voting is weighted based on the amount lent in a particular DAO, as sketched after this list.
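A minimal sketch of that stake-weighted tally is below, shown in Python for readability; the on-chain version lives in our Solidity contracts, and the simple-majority threshold here is an illustrative assumption.

```python
def proposal_passes(votes, stakes, threshold=0.5):
    """votes: {member: True/False}, stakes: {member: amount lent into the DAO, e.g. in USDC}."""
    weight_for = sum(stakes[m] for m, approve in votes.items() if approve)
    weight_against = sum(stakes[m] for m, approve in votes.items() if not approve)
    total = weight_for + weight_against
    # A proposal is funded only if the stake-weighted share in favor clears the threshold
    return total > 0 and weight_for / total > threshold
```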
# How We Built It
### Backend:
* **Solidity** for writing secure smart contracts to manage token issuance, investments, and voting in the DAO.
* **The Ethereum Blockchain** for decentralized investment and governance, where every transaction and vote is publicly recorded.
* **Hardhat** as our development environment for compiling, deploying, and testing the smart contracts efficiently.
* **Node.js** to handle API integrations and the interface between the blockchain and our frontend.
* **Sepolia** where the smart contracts have been deployed and connected to the web application.
### Frontend:
* **MetaMask** Integration to enable users to seamlessly connect their wallets and interact with the blockchain for transactions and voting.
* **React** and **Next.js** for building an intuitive, responsive user interface.
* **TypeScript** for type safety and better maintainability.
* **TailwindCSS** for rapid, visually appealing design.
* **Shadcn UI** for accessible and consistent component design.
# Challenges We Faced, Solutions, and Learning
### Challenge 1 - Creating a Unique Concept:
Our biggest challenge was coming up with an original, impactful idea. We explored various concepts, but many were already being implemented.
**Solution**:
After brainstorming, the idea of a DAO-driven decentralized fundraising platform emerged as the best way to democratize access to startup capital, offering a novel and innovative solution that stood out.
### Challenge 2 - DAO Governance:
Building a secure, fair, and transparent voting system within the DAO was complex, requiring deep integration with smart contracts, and we needed to ensure that all members, regardless of technical expertise, could participate easily.
**Solution**:
We developed a simple and intuitive voting interface, while implementing robust smart contracts to automate and secure the entire process. This ensured that users could engage in the decision-making process without needing to understand the underlying blockchain mechanics.
## Accomplishments that we're proud of
* **Developing a Fully Functional DAO-Driven Platform**: We successfully built a decentralized platform that allows startups to tokenize their assets and engage with a global community of investors.
* **Integration of Robust Smart Contracts for Secure Transactions**: We implemented robust smart contracts that govern token issuance, investments, and governance-based voting, backed by extensive unit and e2e tests.
* **User-Friendly Interface**: Despite the complexities of blockchain and DAOs, we are proud of creating an intuitive and accessible user experience. This lowers the barrier for non-technical users to participate in the platform, making decentralized fundraising more inclusive.
## What we learned
* **The Importance of User Education**: As blockchain and DAOs can be intimidating for everyday users, we learned the value of simplifying the user experience and providing educational resources to help users understand the platform's functions and benefits.
* **Balancing Security with Usability**: Developing a secure voting and investment system with smart contracts was challenging, but we learned how to balance high-level security with a smooth user experience. Security doesn't have to come at the cost of usability, and this balance was key to making our platform accessible.
* **Iterative Problem Solving**: Throughout the project, we faced numerous technical challenges, particularly around integrating blockchain technology. We learned the importance of iterating on solutions and adapting quickly to overcome obstacles.
# What’s Next for DAFP
Looking ahead, we plan to:
* **Attract DAO Members**: Our immediate focus is to onboard more lenders to the DAO, building a large and diverse community that can fund a variety of startups.
* **Expand Stablecoin Options**: While USDC is our starting point, we plan to incorporate more blockchain networks to offer a wider range of stablecoin options for lenders (EURC, Tether, or Curve).
* **Compliance and Legal Framework**: Even though DAOs are decentralized, we recognize the importance of working within the law. We are actively exploring ways to ensure compliance with global regulations on securities, while maintaining the ethos of decentralized governance. | ## Inspiration
As avid readers ourselves, we love the work that authors put out, and we are deeply saddened by the relative decline of the medium. We believe that democratizing the writing process and giving power back to the writers is the way to revitalize the art form literature, and we believe that utilizing blockchain technology can help us get closer to that ideal.
## What it does
LitHub connects authors with readers through eluv.io's NFT trading platform, allowing authors to sell their literature as exclusive NFTs and readers to have exclusive access to their purchases on our platform.
## How we built it
We utilized the eluv.io API to enable upload, download, and NFT trading functionality for our backend. We leveraged CockroachDB to store user information, used HTML/CSS to create our user-facing frontend, and deployed our application on Microsoft Azure.
## Challenges we ran into
One of the main challenges we ran into was understanding the various APIs that we were working with over a short period of time. As this was our first time working with NFTs/blockchain, eluv.io was a particularly new experience to us, and it took some time, but we were able to overcome many of the challenges we faced thanks to the help from mentors from eluv.io. Another challenge we ran into was actually connecting the pieces of our project together as we used many different pieces of technology, but careful coordination and well-planned functional abstraction made the ease of integration a pleasant surprise.
## Accomplishments that we're proud of
We're proud of coming up with an innovative solution that can help level the playing field for writers and for creating a platform that accomplishes this using many of the platforms that event sponsors provided. We are also proud of gaining familiarity with a variety of different platforms in a short period of time and showing resilience in the face of such a large task.
## What we learned
We learned quite a few things while working on this project. Firstly, we learned a lot about the blockchain space, how to utilize this technology during development, and what problems it can solve. Before this event, nobody in our group had much exposure to this field, so it was a welcome experience. In addition, some of us who were less familiar with full-stack development got exposure to Node and Express, and we all got to reapply concepts we learned when working with other databases to CockroachDB's user-friendly interface.
## What's next for LitHub
The main next step for LitHub would be to scale our application to handle a larger user base. From there we hope to share LitHub amongst authors and readers around the world so that they too can partake in the universe of NFTs to safely share their passion. | winning
# Inspiration
When we ride a bicycle, we steer towards the direction we fall, and that's how we balance. But the amazing part is that we (or our brains) compute a complex set of calculations to balance ourselves, and we do it naturally. With some practice it gets better. The calculation that goes on in our minds highly inspired us. At first we thought of creating the cycle itself, but later, after watching "Handle" from Boston Dynamics, we thought of creating the self-balancing robot, "Istable". One more inspiration was mathematically modelling systems in our control systems labs, along with learning how to tune a PID-controlled system.
# What it does
Istable is a self-balancing robot: running on two wheels, it balances itself and tries to stay vertical all the time. Istable can move in all directions, rotate, and can be a good companion for you. It gives you a really good idea of robot dynamics and kinematics, and of how to mathematically model a system. It can also be used to teach college students the tuning of PIDs for a closed-loop system, a widely used and very robust control scheme. Methods from Ziegler-Nichols and Cohen-Coon to Kappa-Tau can also be implemented to teach how to obtain the coefficients in real time, taking theoretical modelling to real-time implementation. We can also use it in hospitals, where medicines are kept at very low temperatures and absolute sterilization is needed to enter; these robots can carry out those operations there, and in the coming days we may see our hospitals in a new form.
# How we built it
The mechanical body was built using scrap wood lying around. We first made a plan of how the body would look: we gave it the shape of an "I", and that's where it gets its name. We made the frame, placed the batteries (2x 3.7V li-ion), the heaviest component, attached two micro metal motors of 600 RPM at 12V, and cut two hollow circles from an old sandal to make tires (rubber gives a good grip, and trust me, it is really necessary). We used a boost converter to stabilize and step up the voltage from 7.4V to 12V, a motor driver to drive the motors, an MPU6050 to measure the inclination angle, and a Bluetooth module to read the PID parameters from a mobile app for fine-tuning the PID loop. The brain is a microcontroller (LGT8F328P). Next we made a free body diagram and located the center of gravity, which needs to sit above the wheel axis, and adjusted the weight distribution accordingly. We then made a simple mathematical model to represent the robot; it is used to find the transfer function that represents the system. Later we used that to calculate the impulse and step response of the robot, which is crucial for tuning the PID parameters if you are taking a mathematical approach, and we did that here: no trial and error, only application of engineering. The microcontroller runs a discrete (z-domain) PID controller to balance Istable.
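For reference, the control law running on the microcontroller boils down to the update below, shown in Python for clarity; the real firmware is C on the LGT8F328P, and the gains, sample time, and output limit here are placeholders rather than our tuned values.

```python
class DiscretePID:
    """Discrete (z-domain) PID: called once per control period dt with the tilt angle."""
    def __init__(self, kp, ki, kd, dt, out_limit):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_limit = out_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured                        # setpoint ~ 0 degrees (upright)
        self.integral += error * self.dt                   # rectangular integration
        derivative = (error - self.prev_error) / self.dt   # backward difference
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-self.out_limit, min(self.out_limit, out))  # clamp to motor PWM range

# Example gains: initial values would come from the step-response analysis, then fine-tuning
pid = DiscretePID(kp=30.0, ki=150.0, kd=1.2, dt=0.005, out_limit=255)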
# Challenges we ran into
We were having trouble balancing Istable at first (which is obvious ;) ), and we realized that was due to the placement of the gyro: we had placed it at the top at first, and we corrected that by placing it at the bottom, which naturally eliminated the tangential component of the angle and thus greatly improved stabilization. Next came fine-tuning the PID loop: we did get the initial values of the PID coefficients mathematically, but the fine-tuning took a whole lot of effort and that was really challenging.
# Accomplishments we are proud of
Firstly, a simple change in the position of the gyro improved the stabilization, and that gave us high hopes; we were losing confidence before. The mathematical model and the transfer function we found agreed with the real-time outputs, and we were happy about that. Last but not least, the yay moment was when we tuned the robot correctly and it balanced for more than 30 seconds.
# What we learned
We learned a lot, a lot. From kinematics and mathematical modelling to control algorithms apart from PID, we learned about everything from adaptive control to numerical integration. We learned how to tune PID coefficients properly and mathematically, with no trial-and-error method. We learned about digital filtering to filter out noisy data, and along with that we learned about complementary filters to fuse the data from the accelerometer and gyroscope to find accurate angles.
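The sensor-fusion part mentioned above is small enough to show in full; a minimal complementary filter looks like this (alpha is a tuning choice, and 0.98 is just a typical value, not necessarily the one we used).

```python
import math

def complementary_filter(prev_angle, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Fuse gyro (deg/s) and accelerometer (g) readings into a tilt angle in degrees."""
    accel_angle = math.degrees(math.atan2(accel_x, accel_z))  # noisy but drift-free
    gyro_angle = prev_angle + gyro_rate * dt                  # smooth but drifts over time
    return alpha * gyro_angle + (1.0 - alpha) * accel_angle
```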
# What's next for Istable
We will try to add an onboard display to change the coefficients directly over there. Upgrade the algorithm so that it can auto tune the coefficients itself. Add odometry for localized navigation. Also use it as a thing for IoT to serve real time operation with real time updates. | ## Inspiration
Fall lab and design bay cleanout leads to some pretty interesting things being put out at the free tables. In this case, we were drawn in by a motorized Audi Spyder car. And then we saw the Neurosity Crown headsets, and an idea was born. A single late-night call among team members, excited about the possibility of using a kiddy car for something bigger, was all it took. Why can't we learn about cool tech and have fun while we're at it?
Spyder is a way we can control cars with our minds. Use cases include remote rescue, accessibility for non-able-bodied individuals, warehouse operations, and being extremely cool.
## What it does
Spyder uses the Neurosity Crown to take the brainwaves of an individual, train an AI model to detect and identify certain brainwave patterns, and output them as a recognizable output to humans. It's a dry brain-computer interface (BCI) which means electrodes are placed against the scalp to read the brain's electrical activity. By taking advantage of these non-invasive method of reading electrical impulses, this allows for greater accessibility to neural technology.
Collecting these impulses, we are then able to forward these commands to our Viam interface. Viam is a software platform that allows you to easily put together smart machines and robotic projects. It completely changed the way we coded this hackathon. We used it to integrate every single piece of hardware on the car. More about this below! :)
## How we built it
### Mechanical
The manual steering had to be converted to automatic. We did this in SolidWorks by creating a custom 3D printed rack and pinion steering mechanism with a motor mount that was mounted to the existing steering bracket. Custom gear sizing was used for the rack and pinion due to load-bearing constraints. This allows us to command it with a DC motor via Viam and turn the wheel of the car, while maintaining the aesthetics of the steering wheel.
### Hardware
A 12V battery is connected to a custom soldered power distribution board. This powers the car, the boards, and the steering motor. For the DC motors, they are connected to a Cytron motor controller that supplies 10A to both the drive and steering motors via pulse-width modulation (PWM).
A custom LED controller and buck converter PCB stepped down the voltage from 12V to 5V for the LED underglow lights and the Raspberry Pi 4. The Raspberry Pi 4 uses the Viam SDK (which controls all peripherals) and connects to the Neurosity Crown, whose output is used to control the motors. All the wiring is custom soldered, and many parts are custom-made to fit our needs.
### Software
Viam was an integral part of our software development and hardware bring-up. It significantly reduced the amount of code, testing, and general pain we'd normally go through creating smart machine or robotics projects. Viam was instrumental in debugging and testing to see if our system was even viable and to quickly check for bugs. The ability to test features without writing drivers or custom code saved us a lot of time. An exciting feature was how we could take code from Viam and merge it with a Go backend, which is normally very difficult to do. Being able to integrate with Go was very cool; usually we would have to use Python (Flask + SDK). Being able to use Go, we get extra backend benefits without the headache of integration!
Additional software that we used was Python for the keyboard control client and for testing and validation of the mechanical and electrical hardware. We also used JavaScript and Node to access the Neurosity Crown, the Neurosity SDK, and the Kinesis API to grab trained AI signals from the console. We then used websockets to port them over to the Raspberry Pi to be used in driving the car.
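The Pi-side bridge is conceptually a very small websocket server; a sketch is below. The message format and port are assumptions, and the actual motor call goes through the Viam SDK's motor component, which is stubbed out here.

```python
import asyncio
import json
import websockets  # pip install websockets

async def drive_from_brainwaves(ws):
    # Depending on the websockets version, the handler may also take a `path` argument.
    async for message in ws:
        cmd = json.loads(message)            # e.g. {"action": "push", "probability": 0.93}
        if cmd.get("probability", 0) > 0.8:  # only act on confident Kinesis predictions
            set_drive_power(0.5)             # placeholder for the Viam motor set-power call
        else:
            set_drive_power(0.0)

def set_drive_power(power):
    pass  # in the real robot this forwards to the Cytron driver via the Viam SDK

async def main():
    async with websockets.serve(drive_from_brainwaves, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())
```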
## Challenges we ran into
Using the Neurosity Crown was the most challenging part. Training the AI model to recognize a user's brainwaves and associate them with actions didn't always work. In addition, grabbing this data for more than one action per session was not possible, which made controlling the car difficult, as we couldn't fully realise our dream.
Additionally, it only caught fire once - which we consider to be a personal best. If anything, we created the world's fastest smoke machine.
## Accomplishments that we're proud of
We are proud of being able to complete a full mechatronics system within our 32 hours. We iterated through the engineering design process several times, pivoting multiple times to best suit our hardware availabilities and quickly making decisions to make sure we'd finish everything on time. It's a technically challenging project - diving into learning about neurotechnology and combining it with a new platform - Viam, to create something fun and useful.
## What we learned
Cars are really cool! Turns out we can do more than we thought with a simple kid car.
Viam is really cool! We learned through their workshop that we can easily attach peripherals to boards, use and train computer vision models, and even use SLAM! We spend so much time in class writing drivers, interfaces, and code for peripherals in robotics projects, but Viam has it covered. We were really excited to have had the chance to try it out!
Neurotech is really cool! Being able to try out technology that normally isn't available or is difficult to acquire, and learning something completely new, was a great experience.
## What's next for Spyder
* Backflipping car + wheelies
* Fully integrating the Viam CV for human safety concerning reaction time
* Integrating Adhawk glasses and other sensors to help determine user focus and control | ## Inspiration
MISSION: Our mission is to create an intuitive and precisely controlled arm for situations that are tough or dangerous for humans to be in.
VISION: This robotic arm application can be used in the medical industry, disaster relief, and toxic environments.
## What it does
The arm imitates the user from a remote destination. The 6DOF range of motion allows the hardware to behave close to a human arm. This would be ideal in environments where human life would be in danger if physically present.
The HelpingHand can be used in a variety of applications; with our simple design, the arm can be easily mounted on a wall or a rover. With the simple controls, any user will find using the HelpingHand easy and intuitive. Our high-speed video camera allows the user to see the arm and its environment so users can remotely control the hand.
## How I built it
The arm is controlled using a PWM servo Arduino library. The Arduino code receives control instructions via serial from the Python script. The Python script uses OpenCV to track the user's actions. An additional feature uses an Intel RealSense camera and TensorFlow to detect and track the user's hand. It uses the depth camera to locate the hand, and uses a CNN to identify the gesture of the hand out of the 10 types trained. This gave the robotic arm an additional dimension and a more realistic feel.
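A condensed version of that control loop looks roughly like this; the HSV threshold, serial port, and message format are placeholders, and the Arduino side is assumed to parse the line and write the angles to the PWM servo library.

```python
import cv2
import serial  # pip install pyserial

arduino = serial.Serial("/dev/ttyUSB0", 115200, timeout=0.1)  # port is machine-specific
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    # Track a colored marker on the user's hand (placeholder HSV range)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))
    moments = cv2.moments(mask)
    if moments["m00"] > 0:
        cx = moments["m10"] / moments["m00"]
        cy = moments["m01"] / moments["m00"]
        yaw = int(cx / w * 180)    # map horizontal position to the yaw servo angle
        pitch = int(cy / h * 180)  # map vertical position to the pitch servo angle
        arduino.write(f"Y{yaw}P{pitch}\n".encode())
    if cv2.waitKey(1) == 27:       # Esc to quit
        break
```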
## Challenges I ran into
The main challenge was working with all 6 degrees of freedom on the arm without tangling it. This being a POC, we simplified the problem to 3DOF, allowing for yaw, pitch, and gripper control only. Also, learning the RealSense SDK and processing depth images was a unique experience, thanks to the hardware provided by Dr. Putz at nwHacks.
## Accomplishments that I'm proud of
This POC project has scope in a wide range of applications. Finishing a working project that involves software and hardware debugging within the given time frame is a major accomplishment.
## What I learned
We learned about doing hardware hacks at a hackathon. We learned how to control servo motors and serial communication. We learned how to use camera vision efficiently. We learned how to write modular functions for easy integration.
## What's next for The Helping Hand
Improve control on the arm to imitate smooth human arm movements, incorporate the remaining 3 dof and custom build for specific applications, for example, high torque motors would be necessary for heavy lifting applications. | winning |
## Inspiration
With elections right around the corner, many young adults are voting for the first time, and may not be equipped with knowledge of the law and current domestic events. We believe that this is a major problem for our nation, and we seek to use open source government data to provide everyday citizens with access to knowledge on legislative activities and current affairs in our nation.
## What it does
OpenLegislation aims to bridge the knowledge gap by providing easy access to legislative information. By leveraging open-source government data, we empower citizens to make informed decisions about the issues that matter most to them. This approach not only enhances civic engagement but also promotes a more educated and participatory democracy. Our platform allows users to input an issue they are interested in, and then uses cosine similarity to fetch the most relevant bills currently in Congress related to that issue.
## How we built it
We built this application with a tech stack of MongoDB, Express.js, React.js, and OpenAI. Databricks' Llama Index was used to get embeddings for the title of each bill. We used Atlas Vector Search together with Mongoose for accurate semantic results when searching for a bill. Additionally, Cloudflare's AI Gateway was used to track calls to GPT-4o for insightful analysis of each bill.
## Challenges we ran into
At first, we tried to use OpenAI's embeddings for each bill's title. However, this brought a lot of issues for our scraper: while the embeddings were really good, they took up a lot of storage and were heavily rate limited, which was not feasible at all. To solve this challenge, we pivoted to a smaller model that uses a pre-trained transformer to provide embeddings processed locally instead of through an API call. Although the semantic search was slightly worse, we were able to get satisfactory results for our MVP and can expand to different, higher-quality models in the future.
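The local-embedding search reduces to a few lines; a sketch is below. The model name is an assumption, since any small sentence-transformer works, and in production the precomputed vectors live in MongoDB Atlas rather than in memory.

```python
from sentence_transformers import SentenceTransformer, util  # pip install sentence-transformers

model = SentenceTransformer("all-MiniLM-L6-v2")  # small transformer run locally, no API calls

def top_bills(user_issue, bill_titles, k=5):
    bill_vecs = model.encode(bill_titles, convert_to_tensor=True)  # precomputed by the scraper
    query_vec = model.encode(user_issue, convert_to_tensor=True)
    scores = util.cos_sim(query_vec, bill_vecs)[0]                 # cosine similarity per bill
    best = scores.argsort(descending=True)[:k]
    return [(bill_titles[int(i)], float(scores[i])) for i in best]
```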
## Accomplishments that we're proud of
We are proud that we have used open source software, technology, and data to empower people with transparency and knowledge of what is going on in our government and our nation. We have used the most advanced technology that Cloudflare and Databricks provide and leveraged it for the good of the people. On top of that, we are proud of the technical achievement of our semantic search, giving people the bills they want to see.
## What we learned
During the development of this project, we learned more about how vector embeddings work and how they are used to provide the best search results. We learned more about Cloudflare's and OpenAI's tools during this development and will definitely be using them in future projects. Most importantly, we learned the value of open source data and technology and the impact it can have on our society.
## What's next for OpenLegislation
For future progress of OpenLegislation, we plan to expand to local states! With this addition, constituents can know directly what is going on in their state on top of their country, and actually receive updates on what the officials they elected are proposing. In addition, we would expand our technology by using more advanced embeddings for more tailored searches. Finally, we would explore more data analysis methods with help from Cloudflare's and Databricks' open-source technologies to help make this important data more available and transparent for the good of society. | ## Inspiration
We recognized that parliament hearings can be lengthy and difficult to comprehend, yet they have a significant impact on our lives. We were inspired by the concept of a TL:DR (too long, didn't read) summary commonly seen in news articles, but not available for parliament proceedings. As news often only covers mainstream headlines, we saw the need for an all-inclusive, easy-to-access summary of proceedings, which is what led us to create co:ngress.
## What It Does
Co:ngress is an AI-powered app that summarizes the proceedings of the House of Commons of Canada for the convenience of the users. The app utilizes Cohere API for article summarization, which processes the scraped proceedings and summarizes them into a shorter, more easily understandable version. Furthermore, co:ngress fetches an AI-generated image using the Wombo API that loosely relates to the hearing's topic, adding an entertainment factor to the user experience. For scalability, we designed the app to store summaries and images in a database to minimize scraping and processing for repeat requests.
## How We Built It
The co:ngress app is built using a four-step process. First, a web scraper is used to scrape the transcripts of the parliament hearings from the House of Commons website. Next, the scraped article is passed to the Cohere API for summarization. After that, the topic of the parliament proceeding is interpreted, and the Wombo API generates an image that relates to it. Finally, the image and summary are returned to the user for easy consumption. The app was built using various tools and technologies, including web scraping, API integration, and database management.
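Steps one and two of that process can be sketched as follows; the transcript URL, the HTML parsing, and the exact Cohere summarization parameters are assumptions (endpoint and parameter names may differ by SDK version), and the Wombo image step is omitted.

```python
import cohere                      # pip install cohere
import requests
from bs4 import BeautifulSoup      # pip install beautifulsoup4

co = cohere.Client("<cohere-api-key>")  # placeholder key

def summarize_hearing(transcript_url):
    # 1. Scrape the proceeding transcript from the House of Commons website
    page = requests.get(transcript_url, timeout=30)
    transcript = BeautifulSoup(page.text, "html.parser").get_text(separator=" ", strip=True)

    # 2. Summarize with Cohere (assumed call shape; tune length/format as needed)
    response = co.summarize(text=transcript, length="medium", format="paragraph")
    return response.summary
```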
## Challenges We Ran Into
One of the biggest challenges we faced was fine-tuning the Cohere API hyperparameters to achieve optimal results for our app. This process took up a lot of time and required a lot of trial and error. Additionally, we had to ensure that the web scraper was able to efficiently gather random samples from the parliament website without errors.
## Accomplishments That We're Proud of
We are proud of the modular approach we took in building our app and how we were able to connect all the different pieces together to create a seamless user experience. Additionally, we were able to successfully summarize parliamentary proceedings into a useful result that can be easily understood by the average Canadian.
## What We Learned
One of the key things we learned was the power and versatility of the Cohere API, which has impressive training data that helped us create accurate summaries. We also learned the importance of taking a modular approach to app development, which can make the process more efficient and less prone to errors.
## What's Next for Co:ngress
For Co:ngress, the next steps would be to implement user feedback and continually improve the summarization algorithm to ensure the accuracy and relevance of the summaries. The team could also explore expanding the app to cover other areas of government proceedings or even other countries. Finally, as mentioned, creating a scalable cloud infrastructure to support a large number of users would be an important next step for the app's growth and success. | ## Inspiration
Our team wanted to develop an application that made it easier for normal people to access information on their representatives and the policies they are pursuing. Our team initially set out to create a project that would bring all the necessary information to the citizen in one place. After brainstorming, our idea evolved into informing in a much more impartial and objective way. Unlike choosing news sources, facts regarding campaign finance and historical legislative practices can't be disputed. By presenting information about major donors and the bills that representatives sponsor, it is easier for the individual to discern whether their representation is working for them or for those funding their campaign. In the past, unless you were willing to do deliberate and intensive research, there was no realistic way of coming across these types of statistics on your own.
## What it does
First, it asks the user for their address, which is then used to present them with their U.S. Senators and House Representative. The user can then select any of them and will be taken to a new page. The final page currently displays the name of the representative and some links to continue exploring donations and recent bills. The intention, and what a good amount of functionality has already been built for (check out some of the unused functions in the code for access to recent bills and top donors/donor industries), is to use the OpenSecrets API to display top donors to the selected politician, paired with bills the politician has sponsored or co-sponsored related to these donors, using the content classification functionality of Google Cloud's NLP API. This would highlight how major campaign donations may influence the policy decisions your representatives are making.
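The classification half of that pairing is straightforward with Google Cloud's NLP client; a sketch is below. The donor-industry matching is simplified to keyword overlap, the data shapes are assumptions, and classify_text needs a reasonably long bill summary to return categories.

```python
from google.cloud import language_v1  # pip install google-cloud-language

def bill_categories(bill_text):
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(content=bill_text,
                                    type_=language_v1.Document.Type.PLAIN_TEXT)
    response = client.classify_text(request={"document": document})
    return [c.name for c in response.categories]  # e.g. "/Finance/Banking"

def donor_related_bills(bills, top_donor_industries):
    """Flag bills whose NLP categories mention one of the politician's top donor industries."""
    flagged = []
    for bill in bills:
        categories = " ".join(bill_categories(bill["summary"])).lower()
        hits = [ind for ind in top_donor_industries if ind.lower() in categories]
        if hits:
            flagged.append((bill["title"], hits))
    return flagged
```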
## How we built it
We used Expo for development due to the appeal of cross-platform development. We then used online APIs and other public data sets to access information on policymakers, bills, and campaign donors. Most notably, we used OpenSecrets's API and Phone2Action's API.
## Challenges we ran into
We all had very little experience with JavaScript, so it was quite a learning experience. Additionally, we came up with the idea very late and really had to work fast. In the end it is still in development, as we were unable to bring it all together: separately, all the parts worked, but when they came together they fell apart (e.g., displaying the bills of each politician).
## Accomplishments that we're proud of
A functional application that works on both iOS and Android devices and is able to accurately and quickly tell a user who their representatives are, with a lot of the framework in place for the next steps in development. We are proud that in a short amount of time we could make something, learn, and have a good time.
## What we learned
Expo, React Native, JavaScript, web development tools
## What's next for War Chest
Implementation of the NLP Analysis of recent bills and pairing with top donors. | winning |
# QThrive
Web-based chatbot to facilitate journalling and self-care. Built for QHacks 2017 @ Queen's University. | ## Inspiration
While we were doing preliminary research, we had found overwhelming amounts of evidence of mental health deterioration as a consequence of life-altering lockdown restrictions. Academic research has shown that adolescents depend on friendship to maintain a sense of self-worth and to manage anxiety and depression. Intimate exchanges and self-esteem support significantly increased long-term self worth and decreased depression.
While people do have virtual classes and social media, some still had trouble feeling close with anyone. This is because conventional forums and social media did not provide a safe space for conversation beyond the superficial. User research also revealed that friendships formed by physical proximity don't necessarily make people feel understood and resulted in feelings of loneliness anyway. Proximity friendships formed in virtual classes also felt shallow in the sense that they only lasted for the duration of the online class.
With this in mind, we wanted to create a platform that encouraged users to talk about their true feelings, and maximize the chance that the user would get heartfelt and intimate replies.
## What it does
Reach is an anonymous forum that is focused on providing a safe space for people to talk about their personal struggles. The anonymity encourages people to speak from the heart. Users can talk about their struggles and categorize them, making it easy for others in similar positions to find these posts and build a sense of closeness with the poster. People with similar struggles have a higher chance of truly understanding each other. Since ill-mannered users can exploit anonymity, there is a tone analyzer that will block posts and replies that contain mean-spirited content from being published while still letting posts of a venting nature through. There is also ReCAPTCHA to block bot spamming.
## How we built it
* Wireframing and Prototyping: Figma
* Backend: Java 11 with Spring Boot
* Database: PostgreSQL
* Frontend: Bootstrap
* External Integration: Recaptcha v3 and IBM Watson - Tone Analyzer
* Cloud: Heroku
## Challenges we ran into
We initially found it a bit difficult to come up with ideas for a solution to the problem of helping people communicate. A plan for a VR space for 'physical' chatting was also scrapped due to time constraints, as we didn't have enough time left to do it by the time we came up with the idea. We knew that forums were already common enough on the internet, so it took time to come up with a product strategy that differentiated us. (Also, time zone issues. The UXer is Australian. They took caffeine pills and still fell asleep.)
## Accomplishments that we're proud of
Finishing it on time, for starters. It felt like we had a bit of a scope problem at the start when deciding to make a functional forum with all these extra features, but I think we pulled it off. The UXer also iterated about 30 screens in total. The Figma file is *messy.*
## What we learned
As our first virtual hackathon, this has been a learning experience for remote collaborative work.
UXer: I feel like I've gotten better at speedrunning the UX process even quicker than before. It usually takes a while for me to get started on things. I'm also not quite familiar with code (I only know Python), so watching the dev work and finding out what kinds of things people can code was exciting to see.
## What's next for Reach
If this was a real project, we'd work on implementing VR features for those who missed certain physical spaces. We'd also try to work out improvements to moderation, and perhaps a voice chat for users who want to call. | ## Inspiration
Amidst our hectic lives and a pandemic-struck world, mental health has taken a back seat. This thought gave birth to our inspiration for this web-based app that provides activities customised to a person's mood to help them relax and rejuvenate.
## What it does
We planned to create a platform that could detect a user's mood through facial recognition, recommend yoga poses to lift the mood and evaluate their correctness, and help users jot their thoughts in a self-care journal.
## How we built it
Frontend: HTML5, CSS (framework used: Tailwind CSS), JavaScript
Backend: Python, JavaScript
Server side: Node.js, Passport.js
Database: MongoDB (for user login), MySQL (for mood-based music recommendations)
## Challenges we ran into
Incorporating OpenCV into our project was a challenge, but it was very rewarding once it all worked.
But since all of us were first-time hackers, and due to time constraints, we couldn't deploy our website externally.
## Accomplishments that we're proud of
Mental health issues are among the least addressed diseases even though medically they rank in the top 5 chronic health conditions.
We at Umang are proud to have taken notice of such an issue and to help people recognise their moods and cope with the stresses encountered in their daily lives. Through our app we hope to give people a better perspective as well as push them towards a more sound mind and body.
We are really proud that we could create a website that could help break the stigma associated with mental health. It was an achievement that this website includes so many features to help improve the user's mental health, like letting the user vibe to music curated just for their mood, engaging the user in physical activity like yoga to relax their mind and soul, and helping them evaluate their yoga posture just by sitting at home with an AI instructor.
Furthermore, completing this within 24 hours was an achievement in itself since it was our first hackathon which was very fun and challenging.
## What we learned
We learnt how to implement OpenCV in projects. Another skill set we gained was how to use Tailwind CSS. Besides that, we learned a lot about backends and databases, how to create shareable links, and how to create to-do lists.
## What's next for Umang
While the core functionality of our app is complete, it can of course be further improved.
1) We would like to add a chatbot which can be the user's guide/best friend and give advice to the user when they are in mental distress.
2) We would also like to add a mood log which can keep track of the user's daily mood, and if a serious degradation of mental health is seen, it can directly connect the user to medical helpers and therapists for proper treatment.
This lays grounds for further expansion of our website. Our spirits are up and the sky is our limit | winning |
## 1. Problem Statement
Design of an AI-based webpage that allows users to use a machine-learning algorithm to determine whether their sitting posture is correct or not.
## 2. Motivation & Objectives
Everything moved online as a result of the Covid pandemic, including office work and student learning. Sitting in front of a computer or laptop incorrectly might cause a variety of health problems. To avoid this, we designed a system that can detect sitting posture and tell the user whether their sitting position is good or bad.
*Objectives :*
To design and develop a posture detecting system.
To integrate this system with a web page so that it is accessible to everyone.
To assist users in avoiding health issues caused by improper posture
## 3. Project Description
*Inspiration:* We are all spending more and more hours at work, sitting at our desks and using our laptops for up to 10 hours each day. Making sure that you sit correctly at your workstation is so important. Setting up your laptop incorrectly negatively affects your posture. Having a poor posture has been known to have a very poor effect on mood. It can also lead to serious health issues like:
1. Back, Neck, and Shoulder Pain
2. Poor Circulation
3. Impaired Lung Function
4. Misaligned Spine
5. Change in Curvature of the Spine
And many more
*How it works:* When you visit the webpage, you will be asked to grant camera access. This AI-based web app will continuously capture a person's video using a webcam while they are seated in front of a laptop. Our AI algorithm will then detect body posture, and if the detected body position is unsatisfactory, the web app will alert the user. The alert blurs the current window and plays an annoying sound, stopping the user from doing anything until they return to an ergonomically correct posture.
*Accomplishments that we're proud of:* Successfully completed the frontend and backend in a given period of time.
*What's next for Fix-Posture:* Many improvements can be done in Fix-Posture. Currently, we can use this only on a single browser tab. We are planning to convert the web app into a chrome extension so that it can be used on any tab/window.
## 4. How to use the project
1. To access the web app, go to: <https://pavankhots17.github.io/FixPosture.github.io/>
2. Allow the webpage to access your webcam.
3. Simply click the 'Instant Try!' button if you want to use the web app straight away.
4. Alternatively, scroll down the main page to discover how this system works in general.
5. You may quickly learn more about good and bad postures by going to the Read Me! section.
6. Simply click on the tabs About Us and Contact Us provided in the navigation bar and footer of the website to learn more about us or to contact us. | ## Inspiration
We love music!
## What it does
It lets the user discover music from a country they choose (China, Japan, or Korea)!
## How we built it
Using VoiceFlow!
## Challenges we ran into
We don't know how to add a skip-song button while listening to a song. We still don't know how to do it! Maybe we'll find out how next time (or never)!
## Accomplishments that we're proud of
We managed to finish a project within a short amount of time, while being busy studying! ^^
## What we learned
How to use VoiceFlow!
## What's next for EastMu
Put a skip button! | # Pose-Bot
### Inspiration ⚡
**In these difficult times, where everyone is forced to work remotely and schools and colleges have gone digital, students are spending more time on the screen than ever before. This not only affects students but also employees who have to sit for hours in front of the screen. Prolonged exposure to a computer screen and sitting in a bad posture can cause severe health problems like postural dysfunction and can affect one's eyes. Therefore, we present to you Pose-Bot**
### What it does 🤖
We created this application to help users maintain a good posture, catch early signs of postural imbalance, and protect their vision. This application uses an image classifier from Teachable Machines, which is a **Google API**, to detect the user's posture and notify the user to correct their posture or move away from the screen when they may not notice it. It notifies the user when they are sitting in a bad position or are too close to the screen.
We first trained the model on the Google API to detect good posture/bad posture and whether the user is too close to the screen. We then integrated the model into our application.
We created a notification service so that the user can use any other site and simultaneously get notified if their posture is bad. We have also included **EchoAR models to educate** children about the harms of sitting in a bad position and the importance of healthy eyes 👀.
### How We built it 💡
1. The website UI/UX was designed using Figma and then developed using HTML, CSS and JavaScript. TensorFlow.js was used to detect pose and the JavaScript Notifications API to send notifications.
2. We used the Google Tensorflow.js API to train our model to classify the user's pose, proximity to the screen, and whether the user is holding a phone.
3. For training our model we used our own images as the training data and tested the model in different settings.
4. This model is then used to classify the user's video feed to assess their pose and detect if they are slouching, if they are too close to the screen, or if they are sitting in a generally bad pose.
5. If the user sits in a bad posture for a few seconds, then the bot sends a notification to the user to correct their posture or move away from the screen.
### Challenges we ran into 🧠
* Creating a model with good accuracy in a general setting.
* Reverse engineering the Teachable Machine's web plugin snippet to aggregate data and then display notifications at certain time intervals.
* Integrating the model into our website.
* Embedding EchoAR models to educate children about the harms of sitting in a bad position and the importance of healthy eyes.
* Deploying the application.
### Accomplishments that we are proud of 😌
We created a completely functional application which can make a small difference in our everyday health. We successfully made the application display system notifications which can be viewed across the system, even in different apps. We are proud that we could shape our idea into a functioning application which can be used by any user!
### What we learned 🤩
We learned how to integrate Tensorflow.js models into an application. The most exciting part was learning how to train a model on our own data using the Google API.
We also learned how to create a notification service for an application. And best of all, we got to **play with EchoAR models** to create functionality which could actually benefit students and help them understand the severity of the cause.
### What's next for Pose-Bot 📈
#### ➡ Creating a chrome extension
So that the user can use the functionality on their web browser.
#### ➡ Improve the pose detection model.
The accuracy of the pose detection model can be increased in the future.
#### ➡ Create more classes to help students concentrate better.
Include more functionality like screen time and detecting if the user is holding their phone, so we can help users concentrate.
### Help File 💻
* Clone the repository to your local directory
* `git clone https://github.com/cryptus-neoxys/posture.git`
* Install live-server to run it locally
* `npm i -g live-server`
* Go to the project directory and launch the website using live-server
* `live-server .`
* Voilà, the site is up and running on your PC.
* Ctrl + C to stop the live-server!!
### Built With ⚙
* HTML
* CSS
* Javascript
+ Tensorflow.js
+ Web Browser API
* Google API
* EchoAR
* Google Poly
* Deployed on Vercel
### Try it out 👇🏽
* 🤖 [Tensorflow.js Model](https://teachablemachine.withgoogle.com/models/f4JB966HD/)
* 🕸 [The Website](https://pose-bot.vercel.app/)
* 🖥 [The Figma Prototype](https://www.figma.com/file/utEHzshb9zHSB0v3Kp7Rby/Untitled?node-id=0%3A1)
### 3️⃣ Cheers to the team 🥂
* [Apurva Sharma](https://github.com/Apurva-tech)
* [Aniket Singh Rawat](https://github.com/dikwickley)
* [Dev Sharma](https://github.com/cryptus-neoxys) | losing |
## Inspiration
Kevin, one of our team members, is an enthusiastic basketball player, and frequently went to physiotherapy for a knee injury. He realized that a large part of the physiotherapy actually happened away from the doctor's office: he needed to complete certain exercises with perfect form at home in order to consistently improve his strength and balance. Through his story, we realized that so many people across North America require physiotherapy for far more severe conditions, be it from sports injuries, spinal cord injuries, or recovery from surgeries. Likewise, they will need to do at-home exercises individually, without supervision. For the patients, any repeated error can actually cause a deterioration in health. Therefore, we decided to leverage computer vision technology to provide real-time feedback to patients to help them improve their rehab exercise form. At the same time, reports will be generated for the doctors, so that they may monitor the progress of patients and prioritize their urgency accordingly. We hope that phys.io will strengthen the feedback loop between patient and doctor, and accelerate the physical rehabilitation process for many North Americans.
## What it does
Through a mobile app, patients will be able to film and upload a video of themselves completing a certain rehab exercise. The video then gets analyzed using a machine vision neural network, such that the movements of each body segment are measured. This raw data is then further processed to yield measurements and benchmarks for the relative success of the movement. In the app, patients will receive a general score for their physical health as measured against their individual milestones, tips to improve their form, and a timeline of progress over the past weeks. At the same time, the same video analysis will be sent to the corresponding doctor's dashboard, in which the doctor will receive a more thorough medical analysis of how the patient's body is working together and a timeline of progress. The algorithm will also provide suggestions for the doctor's treatment of the patient, such as prioritizing the next appointment or increasing the difficulty of the exercise.
## How we built it
At the heart of the application is a Google Cloud Compute instance running together with a blobstore instance. The cloud compute cluster will ingest raw video posted to blobstore, and performs the machine vision analysis to yield the timescale body data.
We used Google App Engine and Firebase to create the rest of the web application and the APIs for the two types of clients we support: an iOS app and a doctor's dashboard site. This layer manages day-to-day operations such as data lookup and account management, but also provides the interface for the mobile application to send video data to the compute cluster. Furthermore, the App Engine service syncs processed results and feedback from blobstore into Firebase, which is used as the database and data-sync layer.
Finally, in order to generate reports for the doctors on the platform, we used stdlib's tasks and scalable one-off functions to process results from Firebase over time and aggregate the data into complete chunks, which are then posted back into Firebase.
## Challenges we ran into
One of the major challenges we ran into was interfacing the technologies with each other. Overall, the data pipeline involves many steps that, while each critical in itself, span more diverse platforms and technologies than the time we had really allowed for.
## What's next for phys.io
<https://docs.google.com/presentation/d/1Aq5esOgTQTXBWUPiorwaZxqXRFCekPFsfWqFSQvO3_c/edit?fbclid=IwAR0vqVDMYcX-e0-2MhiFKF400YdL8yelyKrLznvsMJVq_8HoEgjc-ePy8Hs#slide=id.g4838b09a0c_0_0> | ## Inspiration
The three of us love lifting at the gym. We always see apps that track cardio fitness but haven't found anything that tracks lifting exercises in real-time. Often times when lifting, people tend to employ poor form leading to gym injuries which could have been avoided by being proactive.
## What it does and how we built it
Our product tracks body movements using EMG signals from a Myo armband the athlete wears. During the activity, the application provides real-time tracking of the muscles used, the distance specific body parts travel, and information about the athlete’s posture and form. Using machine learning, we actively provide haptic feedback through the band to correct the athlete’s movements if our algorithm deems the form to be poor.
## How we built it
We trained an SVM on deliberately performed proper and improper forms for exercises such as bicep curls. We read properties of the EMG signals from the Myo band and associated these with good/poor form labels. Then, we dynamically read signals from the band during workouts and chart points in the plane where we classify their forms. If the form is bad, the band provides haptic feedback to the user indicating that they might injure themselves.
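As a rough illustration of the approach (not our exact pipeline; the feature choices and channel handling here are assumptions made for the sketch), the core of such a classifier fits in a few lines of scikit-learn:

```python
# Illustrative sketch only: window the multi-channel EMG stream, summarize
# each window with simple features, and fit an SVM on good/bad form labels.
import numpy as np
from sklearn.svm import SVC

def emg_features(window: np.ndarray) -> np.ndarray:
    """window: (samples x channels) EMG values -> mean-absolute-value and RMS per channel."""
    mav = np.mean(np.abs(window), axis=0)
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    return np.concatenate([mav, rms])

def train_form_classifier(windows, labels):
    """windows: list of (samples x channels) arrays; labels: 1 = good form, 0 = bad form."""
    X = np.stack([emg_features(w) for w in windows])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf

def needs_haptic_feedback(clf, live_window) -> bool:
    """During a workout, a 'bad form' prediction would trigger the band's vibration."""
    return clf.predict(emg_features(live_window).reshape(1, -1))[0] == 0
```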
## Challenges we ran into
Interfacing with the Myo band's API was not the easiest task for us, since we ran into numerous technical difficulties. However, after we spent copious amounts of time debugging, we finally managed to get a clear stream of EMG data.
## Accomplishments that we're proud of
We made a working product by the end of the hackathon (including a fully functional machine learning model) and are extremely excited for its future applications.
## What we learned
It was our first time making a hardware hack so it was a really great experience playing around with the Myo and learning about how to interface with the hardware. We also learned a lot about signal processing.
## What's next for SpotMe
In addition to refining our algorithms and the depth of insights we can provide, we definitely want to expand the breadth of activities we cover (since we're currently focused primarily on weight lifting).
The market we want to target is sports enthusiasts who want to play like their idols. By collecting data from professional athletes, we can come up with “profiles” that the user can learn to play like. We can quantitatively and precisely assess how closely the user is playing like their chosen professional athlete.
For instance, we played tennis in high school and frequently had to watch videos of our favorite professionals. With this tool, you can actually learn to serve like Federer, shoot like Curry or throw a spiral like Brady. | **Come check out our fun Demo near the Google Cloud Booth in the West Atrium!! Could you use a physiotherapy exercise?**
## The problem
A specific application of physiotherapy is that joint range of motion (ROM) may become limited through muscle atrophy, surgery, accident, stroke or other causes. Reportedly, up to 70% of patients give up physiotherapy too early — often because they cannot see the progress. Automated tracking of ROM via a mobile app could help patients reach their physiotherapy goals.
Insurance studies have shown that 70% of people quit physiotherapy sessions once the pain disappears and they regain their mobility. The reasons are many: the cost of treatment, the feeling that they have recovered, no more time to dedicate to recovery, and the loss of motivation. The worst part is that half of them see the injury reappear within 2-3 years.
Current pose tracking technology is NOT realtime and automatic, requiring physiotherapists on hand and **expensive** tracking devices. Although these work well, there is HUGE room for improvement in developing a cheap and scalable solution.
Additionally, many seniors are unable to comprehend current solutions and are unable to adapt to current in-home technology, let alone the kinds of tech that require hours of professional setup and guidance, as well as expensive equipment.
[](http://www.youtube.com/watch?feature=player_embedded&v=PrbmBMehYx0)
## Our Solution!
* Our solution **only requires a device with a working internet connection!!** We aim to revolutionize the physiotherapy industry by allowing for extensive scaling and efficiency of physiotherapy clinics and businesses. We understand that in many areas, the therapist to patient ratio may be too high to be profitable, reducing quality and range of service for everyone, so an app to do this remotely is revolutionary.
While the patient performs exercises, we collect real-time 3D position data of their body using a machine learning model implemented directly in the browser. The data is first analyzed within the app and then provided to the physiotherapist, who can analyze it further and adjust the exercises. The app also asks the patient for subjective feedback on a pain scale.
This makes physiotherapy exercise feedback from their therapist more accessible to remote individuals **WORLDWIDE**.
## Inspiration
* The growing need for accessible physiotherapy among seniors, stroke patients, and individuals in third-world countries without access to therapists but with a stable internet connection
* The room for AI and ML innovation within the physiotherapy market for scaling and growth
## How I built it
* Firebase hosting
* Google cloud services
* React front-end
* Tensorflow PoseNet ML model for computer vision
* Several algorithms to analyze 3D pose data (a simplified joint-angle sketch follows this list).
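One building block of those pose-analysis algorithms is the angle at a joint, computed from three tracked keypoints. A minimal sketch, written in Python for brevity (the app itself runs TensorFlow.js in the browser) and using made-up example coordinates:

```python
# Toy sketch: the angle at a joint (e.g. the knee) from three 3D keypoints.
import numpy as np

def joint_angle(a, b, c) -> float:
    """Return the angle ABC in degrees, where b is the joint vertex."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Example: a knee angle during a squat rep (coordinates are invented)
hip, knee, ankle = (0.1, 1.0, 0.0), (0.1, 0.5, 0.1), (0.1, 0.0, 0.0)
print(round(joint_angle(hip, knee, ankle)))  # roughly 157 degrees in this toy case
```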
## Challenges I ran into
* Testing in React Native
* Getting accurate angle data
* Setting up an accurate timer
* Setting up the ML model to work with the camera using React
## Accomplishments that I'm proud of
* Getting real-time 3D position data
* Supporting multiple exercises
* Collection of objective quantitative as well as qualitative subjective data from the patient for the therapist
* Increasing the usability for senior patients by moving data analysis onto the therapist's side
* **finishing this within 48 hours!!!!** We did NOT think we could do it, but we came up with a working MVP!!!
## What I learned
* How to implement Tensorflow models in React
* Creating reusable components and styling in React
* Creating algorithms to analyze 3D space
## What's next for Physio-Space
* Implementing the sharing of the collected 3D position data with the therapist
* Adding a dashboard onto the therapist's side | winning |
## Inspiration
The inspiration for this project came during a chaotic pre-hackathon week, when two of us missed an important assignment deadline. Despite having taken pictures of the syllabus on a whiteboard, we simply forgot to transfer the dates to our calendars manually. We realized that we weren't alone in this – many people take pictures of posters, whiteboards, and syllabi, but never get around to manually adding each event or deadline to their calendar. This sparked the idea: what if we could automate the process of turning images, text, and even speech into a calendar that’s ready to use?
## What it does
Agenda automatically generates calendar events from various types of input. Whether you upload a photo of a poster, screenshot a syllabus, or take a picture of a whiteboard, the system parses the image and converts the text into a .ical file that can be imported into any calendar app. You can also type your input directly or speak to the system to request a schedule, itinerary, or event of any kind.
Need to create a workout routine? Ask Agenda. Planning a day of tourism? Agenda can build a personalized itinerary. It’s a tool designed to save you time by turning your notes, screenshots, photos, and thoughts into structured calendar events instantly.
## How we built it
We developed a Flask-based web frontend that allows users to upload an image, take a picture with their webcam, or directly type to the system. For speech transcription, we use Whisper through OpenAI's speech-to-text API. The image processing is powered by GPT-4o's vision capabilities, which recognize the text and identify the activity. The text is then transformed into a structured .ical format using GPT-4o's natural language processing and function-calling capabilities. The output .ical file can be instantly imported into any calendar app without any hassle.
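As a simplified sketch of the last step, the event fields returned by the model's function call can be serialized to a minimal .ics file by hand (the field names below are illustrative assumptions, not our exact schema):

```python
# Minimal, hand-rolled iCalendar output; a production file would also set UID and DTSTAMP.
from datetime import datetime

def event_to_ics(summary: str, start: datetime, end: datetime) -> str:
    fmt = "%Y%m%dT%H%M%S"
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "PRODID:-//Agenda//EN",
        "BEGIN:VEVENT",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{end.strftime(fmt)}",
        f"SUMMARY:{summary}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

ics = event_to_ics("CS assignment due",
                   datetime(2024, 10, 4, 23, 0), datetime(2024, 10, 4, 23, 59))
with open("agenda.ics", "w", newline="") as f:
    f.write(ics)
```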
## Challenges we ran into
One major challenge was dealing with large base64-encoded images. Sometimes the image requests would break due to size limits when transmitted to the vision API, so we had to carefully architect the way we handle image and audio uploads for compatibility. Additionally, it took time to get GPT-4o's function calling and text-to-calendar processing working consistently, especially when handling diverse inputs like itineraries, academic deadlines, personal schedules, and event flyers on the street.
## Accomplishments that we're proud of
We’re proud of how intuitive and versatile the system turned out to be. It successfully parses input from images, speech, and text, creating usable .ical files. It can handle a wide variety of use cases, from converting a physical agenda into a digital calendar, to building custom itineraries for travel or fitness. Seeing this system make sense of such diverse data inputs is a real win for us.
## What we learned
We learned a great deal about handling different forms of input—especially speech and images—and converting them into structured data that people can use in a practical way. Optimizing requests and ensuring the system could handle large inputs efficiently taught us valuable lessons about image processing and API limitations. Additionally, working with function-calling in GPT4o showed us how to dynamically convert parsed text into meaningful and useful data.
## What's next for Agenda
Next up, we plan to complete the integration of speech input, allowing users to talk to the system for hands-free scheduling. We also want to extend the platform’s capabilities by adding support for recurring events, better handling of multi-page documents like syllabi, and the ability to sync directly with calendar apps in real-time. We see endless possibilities for expanding the way users can interact with their schedules, making the entire process even more seamless and automatic. | ## Inspiration
The theme of marketing innovations fascinated us because it is a less common hackathon topic compared to others, such as education and urban planning. Additionally, all of us were into ML.
## What it does
Picture this: you've just published an article, but you're left wondering - will it capture the attention of my audience? Will it get the engagement I’m hoping for? This uncertainty is a common struggle for writers and companies alike. They lack efficient tools to predict article performance, leaving them in the dark when it comes to understanding the factors influencing success. But don’t worry, because with NewsProphet, we're turning this challenge into an opportunity! In a world where data is everything, NewsProphet empowers writers and companies with insights into article performance. Using data analysis, we provide a solution that goes beyond guesswork. Our application analyzes multiple features of an article like keywords, images, and content length, revealing patterns and trends that impact success. With NewsProphet, writers and companies can make informed decisions to optimize their content strategies and drive success.
## How we built it
We used Python, JavaScript, HTML, and CSS. We scrape whatever website is input and analyze features such as keywords, the number of images, the number of hyperlinks, and more. Based on this information, we generate suggestions and a score using AI.
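A minimal sketch of the scraping and feature-extraction step, assuming the page is fetched with requests and parsed with BeautifulSoup (the real feature set is broader than this):

```python
# Illustrative feature extraction for a single article URL.
import requests
from bs4 import BeautifulSoup

def extract_features(url: str) -> dict:
    soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")
    words = soup.get_text(separator=" ").split()
    return {
        "num_images": len(soup.find_all("img")),
        "num_hyperlinks": len(soup.find_all("a")),
        "content_length": len(words),
        "title_length": len(soup.title.get_text().split()) if soup.title else 0,
    }

print(extract_features("https://example.com/some-article"))
```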
## Challenges we ran into
We realized after a lengthy investigation that the correlation between our features and our target was not very strong, so our original idea did not work out and we had to alter it at the last minute. We also had some difficulty connecting the back end to the front end.
## Accomplishments that we're proud of
We are very proud of how well we trained our ML model.
## What we learned
A lot of us are new to hacking, so we learned a lot of new things. For some of us, it was our first time web scraping or doing front-end work.
## What's next for NewsProphet
We are going to research new features that will give us more accurate results. | ## Inspiration
With the ubiquitous and readily available ML/AI turnkey solutions, the major bottlenecks of data analytics lay in the consistency and validity of datasets.
**This project aims to enable a labeller to be consistent with both their fellow labellers and their past self while seeing the live class distribution of the dataset.**
## What it does
The UI allows a user to annotate datapoints from a predefined list of labels while seeing the distribution of labels this particular datapoint has been previously assigned by another annotator. The project also leverages AWS' BlazingText service to suggest labels of incoming datapoints from models that are being retrained and redeployed as it collects more labelled information. Furthermore, the user will also see the top N similar data-points (using Overlap Coefficient Similarity) and their corresponding labels.
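For reference, the Overlap Coefficient Similarity used to surface the top N similar data-points can be sketched as follows (treating each datapoint as a set of word tokens is an assumption made for the illustration):

```python
# Overlap coefficient: |A ∩ B| / min(|A|, |B|) over the token sets of two texts.
def overlap_coefficient(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / min(len(ta), len(tb))

def top_n_similar(query: str, corpus: list, n: int = 5) -> list:
    """Return the n datapoints from corpus most similar to the query datapoint."""
    return sorted(corpus, key=lambda d: overlap_coefficient(query, d), reverse=True)[:n]
```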
In theory, this added information will motivate the annotator to remain consistent when labelling data points and also to be aware of the labels that other annotators have assigned to a datapoint.
## How we built it
The project utilises Google's Firestore realtime database with AWS Sagemaker to streamline the creation and deployment of text classification models.
For the front-end we used Express.js, Node.js and CanvasJS to create the dynamic graphs. For the backend we used Python, AWS Sagemaker, Google's Firestore and several NLP libraries such as SpaCy and Gensim. We leveraged the realtime functionality of Firestore to trigger functions (via listeners) in both the front-end and back-end. After K detected changes in the database, a new BlazingText model is trained, deployed and used for inference for the current unlabeled datapoints, with the pertinent changes being shown on the dashboard
## Challenges we ran into
The initial set-up of SageMaker was a major timesink, the constant permission errors when trying to create instances and assign roles were very frustrating. Additionally, our limited knowledge of front-end tools made the process of creating dynamic content challenging and time-consuming.
## Accomplishments that we're proud of
We actually got the ML models to be deployed and predict our unlabelled data in a pretty timely fashion using a fixed number of triggers from Firebase.
## What we learned
Clear and effective communication is super important when designing the architecture of technical projects. There were numerous times when two team members were advocating for the same structure, but the lack of clarity led to an apparent disparity.
We also realized Firebase is pretty cool.
## What's next for LabelLearn
Creating more interactive UI, optimizing the performance, have more sophisticated text similarity measures. | losing |
# Flash Computer Vision®
### Computer Vision for the World
Github: <https://github.com/AidanAbd/MA-3>
Try it Out: <http://flash-cv.com>
## Inspiration
Over the last century, computers have gained superhuman capabilities in computer vision. Unfortunately, these capabilities are not yet empowering everyday people because building an image classifier is still a fairly complicated task.
The easiest tools that currently exist still require a good amount of computing knowledge. A good example is [Google AutoML: Vision](https://cloud.google.com/vision/automl/docs/quickstart) which is regarded as the "simplest" solution and yet requires an extensive knowledge of web skills and some coding ability. We are determined to change that.
We were inspired by talking to farmers in Mexico who wanted to identify ready / diseased crops easily without having to train many workers. Despite the technology existing, their walk of life had not lent them the opportunity to do so. We were also inspired by people in developing countries who want access to the frontier of technology but lack the education to unlock it. While we explored an aspect of computer vision, we are interested in giving individuals the power to use all sorts of ML technologies and believe similar frameworks could be set up for natural language processing as well.
## The product: Flash Computer Vision
### Easy to use Image Classification Builder - The Front-end
Flash Magic is a website with an extremely simple interface. It has one functionality: it takes in a variable amount of image folders and gives back an image classification interface. Once the user uploads the image folders, he simply clicks the Magic Flash™ button. There is a short training process during which the website displays a progress bar. The website then returns an image classifier and a simplistic interface for using it. The user can use the interface (which is directly built on the website) to upload and classify new images. The user never has to worry about any of the “magic” that goes on in the backend.
### Magic Flash™ - The Backend
The front end’s connection to the backend sets up a Train image folder on the server. The name of each folder is the category that the pictures inside of it belong to. The backend takes the folder and transfers it into a CSV file. From this CSV file it creates a [torch.utils.data.Dataset](https://pytorch.org/docs/stable/_modules/torch/utils/data/dataset.html) class by inheriting Dataset and overriding three of its methods. When the dataset is ready, the data is augmented using a combination of augmentations: CenterCrop, ColorJitter, RandomAffine, RandomRotation, and Normalize. By applying those transformations, we ~10x the amount of data that we have for training. Once the data is in a CSV file and has been augmented, we are ready for training.
We import [SqueezeNet](https://pytorch.org/hub/pytorch_vision_squeezenet/) and use transfer learning to adjust it to our categories. What this means is that we erase the last layer of the original net, which was originally trained for ImageNet (1000 categories), and initialize a layer of size equal to the number of categories that the user defined, making sure to accurately match dimensions. We then run back-propagation on the network with all the weights “frozen” in place, with the exception of the ones in the last layer. As the model is training, it informs the front-end of the progress being made, allowing us to display a progress bar. Once the model converges, the final model is saved into a file that the API can easily call for inference (classification) when the user asks for predictions on new data.
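A condensed sketch of that transfer-learning step (illustrative rather than our exact training script):

```python
# Freeze SqueezeNet's pretrained weights and re-initialize only the final
# classifier convolution for the user's categories.
import torch.nn as nn
from torchvision import models

def build_model(num_classes: int) -> nn.Module:
    model = models.squeezenet1_1(pretrained=True)
    for param in model.parameters():          # freeze every pretrained weight
        param.requires_grad = False
    # SqueezeNet's classifier ends in a 1x1 conv whose out_channels equals the class count
    model.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=1)
    model.num_classes = num_classes
    return model  # only classifier[1] receives gradient updates during training
```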
## How we built it
The website was built using Node.js and javascript and it has three main functionalities: allowing the user to upload pictures into folders, sending the picture folders to the backend, making API calls to classify new images once the model is ready.
The backend is built in Python and PyTorch. On top of Python we used torch, torch.nn, torch.optim (including lr_scheduler), numpy, torchvision (datasets, models, transforms), matplotlib, and the standard-library modules time, os, copy, json, and sys.
## Accomplishments that we're proud of
Going into this, we were not sure if 36 hours were enough to build this product. The proudest moment of this project was the successful testing round at 1am on Sunday. While we had some machine learning experience on our team, none of us had experience with transfer learning or this sort of web application. At the beginning of our project, we sat down, learned about each task, and then drew a diagram of our project. We are especially proud of this step because coming into the coding portion with a clear API functionality understanding and delegated tasks saved us a lot of time and helped us integrate the final product.
## Obstacles we overcame and what we learned
Machine learning models are often finicky in their training patterns. Because our application is aimed at users with little experience, we had to come up with a robust training pipeline. Designing this pipeline took a lot of thought and a few failures before we converged on a few data augmentation techniques that do the trick. After this hurdle, integrating a deep learning backend with the website interface was quite challenging, as the training pipeline requires very specific labels and file structure. Iterating the website to reflect the rigid protocol without overcomplicating the interface was thus a challenge. We learned a ton over this weekend. Firstly, getting the transfer learning to work was enlightening, as freezing parts of the network and writing a functional training loop for specific layers required diving deep into the PyTorch API. Secondly, the human-design aspect was a really interesting learning opportunity, as we had to draw the right line between abstraction and functionality. Finally, and perhaps most importantly, constantly meeting and syncing code taught us the importance of keeping a team on the same page at all times.
## What's next for Flash Computer Vision
### Application companion + Machine Learning on the Edge
We want to build a companion app with the same functionality as the website. The companion app would be even more powerful than the website because it would have the ability to quantize models (compression for ml) and to transfer them into TensorFlow Lite so that **models can be stored and used within the device.** This would especially benefit people in developing countries, where they sometimes cannot depend on having a cellular connection.
### Charge to use
We want to make a payment system within the website so that we can scale without worrying about computational cost. We do not want to make a business model out of charging per API call, as we do not believe this will pave a path forward for rapid adoption. **We want users to own their models, this will be our competitive differentiator.** We intentionally used a smaller neural network to reduce hosting and decrease inference time. Once we compress our already small models, this vision can be fully realized as we will not have to host anything, but rather return a mobile application. | ## Inspiration
Imagine a world where your best friend is standing in front of you, but you can't see them. Or you go to read a menu, but you are not able to because the restaurant does not have specialized brail menus. For millions of visually impaired people around the world, those are not hypotheticals, they are facts of life.
Hollywood has largely solved this problem in entertainment. Audio descriptions allow the blind or visually impaired to follow the plot of movies easily. With Sight, we are trying to bring the power of audio description to everyday life.
## What it does
Sight is an app that allows the visually impaired to recognize their friends, get an idea of their surroundings, and have written text read aloud. The app also uses voice recognition to listen for speech commands to identify objects, people or to read text.
## How we built it
The front-end is a native iOS app written in Swift and Objective-C with Xcode. We use Apple's native vision and speech APIs to give the user intuitive control over the app.
---
The back-end service is written in Go and is served with NGrok.
---
We repurposed the Facebook tagging algorithm to recognize a user's friends. When the Sight app sees a face, it is automatically uploaded to the back-end service. The back-end then "posts" the picture to the user's Facebook privately. If any faces show up in the photo, Facebook's tagging algorithm suggests possibilities for who out of the user's friend group they might be. We scrape this data from Facebook to match names with faces in the original picture. If and when Sight recognizes a person as one of the user's friends, that friend's name is read aloud.
---
We make use of the Google Vision API in three ways (a minimal sketch follows the list):
* To run sentiment analysis on people's faces, to get an idea of whether they are happy, sad, surprised etc.
* To run Optical Character Recognition on text in the real world which is then read aloud to the user.
* For label detection, to identify objects and surroundings in the real world which the user can then query about.
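Below is a minimal sketch of the label-detection call, shown in Python for brevity (our actual back-end service is written in Go):

```python
# Detect labels in a photo of the user's surroundings; the resulting string
# would then be read aloud by the iOS app.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("surroundings.jpg", "rb") as f:
    image = vision.Image(content=f.read())

labels = client.label_detection(image=image).label_annotations
print("I can see: " + ", ".join(label.description for label in labels[:5]))
```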
## Challenges we ran into
There were a plethora of challenges we experienced over the course of the hackathon.
1. Each member of the team wrote their portion of the back-end service in a language they were comfortable with. However, when we came together, we decided that combining services written in different languages would be overly complicated, so we decided to rewrite the entire back-end in Go.
2. When we rewrote portions of the back-end in Go, this gave us a massive performance boost. However, this turned out to be both a curse and a blessing. Because of the limitation of how quickly we are able to upload images to Facebook, we had to add a workaround to ensure that we do not check for tag suggestions before the photo has been uploaded.
3. When the Optical Character Recognition service was prototyped in Python on Google App Engine, it became mysteriously rate-limited by the Google Vision API. Re-generating API keys proved to no avail, and ultimately we overcame this by rewriting the service in Go.
## Accomplishments that we're proud of
Each member of the team came to this hackathon with a very disjoint set of skills and ideas, so we are really glad about how well we were able to build an elegant and put together app.
Facebook does not have an official algorithm for letting apps use their facial recognition service, so we are proud of the workaround we figured out that allowed us to use Facebook's powerful facial recognition software.
We are also proud of how fast the Go back-end runs, but more than anything, we are proud of building a really awesome app.
## What we learned
Najm taught himself Go over the course of the weekend, which he had no experience with before coming to YHack.
Nathaniel and Liang learned about the Google Vision API, and how to use it for OCR, facial detection, and facial emotion analysis.
Zak learned about building a native iOS app that communicates with a data-rich APIs.
We also learned about making clever use of Facebook's API to make use of their powerful facial recognition service.
Over the course of the weekend, we encountered more problems and bugs than we'd probably like to admit. Most of all we learned a ton of valuable problem-solving skills while we worked together to overcome these challenges.
## What's next for Sight
If Facebook ever decides to add an API that allows facial recognition, we think that would allow for even more powerful friend recognition functionality in our app.
Ultimately, we plan to host the back-end on Google App Engine. | ## Inspiration
There was a story about a Grandpa who learnt how to speak English by placing Post-it notes on everything he could get his hands on, just to converse with his grandsons and granddaughters who lived far away. This let him actively remember the words because he was always looking at them. With the invention of the Spectacles, we figured we could use this method and make it accessible to everyone, so that everyone can actively learn while going through their daily life.
## What it does
Using the power of the Spectacles, we created an application that uses an ML model to detect common objects and label each one with its name and a translation in the chosen language. This works like a flashcard, so that users can gamify their learning experience. Furthermore, users can screenshot the object, which saves it on their phone so that they can access the word and translation anywhere they want.
## How we built it
For the Spectacles, we used Snapchat's proprietary Lens Studio to create the interface for our application. By integrating our app with SnapML and the object detection models in its asset library, we were able to create a model that could detect objects and shoot a flashcard on top of them. We also utilized PocketBase and Swift for the application part of our project, which integrated quite well with the Spectacles.
## Challenges we ran into
Lens Studio recently got a major update to allow for easier development for Spectacle apps but its recency caused there to be limited documentation of what we could achieve. Alongside the fact that it was our first time working with anything similar to Lens Studio, a lot of our time was committed to learning how it worked.
## Accomplishments that we're proud of
1. We managed to create an Object Detection Model that successfully tracks the objects around us.
2. Created a cohesive Spectacle application that was able to integrate with mobile devices.
## What we learned
This was the first time any of us had developed for a tool like the Spectacles. It gave us a new angle for approaching future problems and gave us first-hand experience of the workflow of someone developing for AR and ML. We also learnt a lot from conversing with the Snapchat team about their struggles and successes with Lens Studio.
## What's next for LingoSnap
While the app is cohesive, it is still in need of more features. We are planning to add voice recognition so that users will be able to practice their pronunciation as well. Furthermore, it would be nice to gamify the app to allow users to continue learning while also having fun. Finally, the end goal is to have the application run fully on the Spectacles, including features such as viewing past images and conversations. | winning |
## Inspiration
Vancouver is prone to a number of natural disasters, such as Wildfires, Earthquakes and Hailstorms. We wanted to build an Alexa skill that allows families around the world to quiz themselves on safety procedures on a daily basis.
## What it does
**UBCcure** *You, be Secure* is an Alexa skill, a "Safety Trivia" that allows users to interact with Alexa by answering five randomized trivia questions from a database collection. Users gain points for each correct answer.
## How we built it
We built UBCcure with JavaScript on Amazon's very own *AWS Developer Console* and *Lambda*. We implemented the Alexa Skills Kit SDK for Node.js and tested on Amazon's Echo 2.
## Challenges we ran into
Encountering unexpected errors that arose in the middle of the night, and happily debugging the code.
## Accomplishments that we're proud of
Learning how to use and integrate AWS and Lambda in one day. It was our first Hackathon for most of us, and we had a lot of fun working with new technology!
## What we learned
We learned that we can do **anything**, especially when we put our minds into it. Nothing is impossible!!!
## What's next for UBCcure
We are planning on making the code open-source with detailed instructions so that anyone around the world can easily access the code and make changes that best suit their needs. | ## Inspiration
Our inspiration was Saqib Shaikh, a software engineer at Microsoft who also happens to be hard of sight. In order to write code, he used a text-to-speech engine that would read back what he wrote. That got us thinking: how do other visually impaired individuals learn how to code and develop software? What if there were not only text-to-speech plugins, but also speech-to-text plugins? That became the basis of our project, Shaikh.
## What it does
Shaikh takes in a speech input, either through your device's built-in microphone or an Amazon Echo. Sample inputs include: "declare a variable named x and initialize it to 0," or "declare a for loop that iterates from 1 to 50." Alexa will then translate these requests into the appropriate syntax (in this case, Python), and send this to the server. From the server, we make a GET request from our extension in VS Code and insert this code snippet into our text editor.
## How I built it
We used the Amazon Echo to take in speech input. Additionally, we used Google Cloud services to host our server and StdLib in order to connect the endpoints of our HTTP requests. Our backend was mostly done in Javascript (Node.js, typed.js, AJAX, jQuery, as well as Typescript). We used HTML/CSS/Javascript for our website demo.
## Challenges I ran into
Since there were so many components to our project, we had issues integrating the backend portion together cohesively. In addition, we had some difficulties with the Amazon Echo, since it was our first time using it.
## Accomplishments that I'm proud of
We built something super cool! This was our first time working with Alexa and we're excited to integrate her technology into future hacks.
## What I learned
How to use Alexa, what the difference between a JAR and a runnable JAR file is, language models, integrating all the backend pieces together, Kern's juice, and how long we could stay awake without sleeping.
## What's next for Shaikh
Using Stack Overflow's API to catch errors and suggest/implement changes, expand to other code editors, | ## Inspiration
• Saw a need for mental health service provision in Amazon Alexa
## What it does
• Created Amazon Alexa skill in Node.js to enable Alexa to empathize with and help a user who is feeling low
• Capabilities include: probing user for the cause of low mood, playing soothing music, reciting inspirational quote
## How we built it
• Created Amazon Alexa skill in Node.js using Amazon Web Services (AWS) and Lambda Function
## Challenges we ran into
• Accessing the web via Alexa, making sample utterances all-encompassing, how to work with Node.js
## Accomplishments that we're proud of
• Made a stable Alexa skill that is useful and extendable
## What we learned
• Node.js, How to use Amazon Web Services
## What's next for Alexa Baymax
• Add resources to Alexa Baymax (if the user has academic issues, it can provide links to helpful websites) and emergency contact information, tailor the playlist to the user's taste and needs, and possibly commercialize by adding an option for the user to book a therapy/massage/counseling session | losing |
## Inspiration 💡
We love playing games, and one of our childhood memories was going to Arcades. As technology advances, our favourite pastimes have evolved alongside it. This inspired us to create Two-Pac, a retro-gamified photo album and a classic arcade game inspired by Pac-Man that takes its player on an interactive journey through their photo gallery and crafts narratives from the images of their favourite memories.
## What it does 🎯
* Our application gets 3 images from the user's photo gallery, evenly splits each one into 4 pieces (image shards), and places them into the Pac-Man maze. The goal of the player is to successfully collect the shards from the maze without being caught by the ghosts, allowing the player to relive the memories and experiences captured in the photo.
* Upon collecting a shard (of which there are 4 in each level), a part of the narrative associated with the image the shard belongs to is revealed to the player. Once the player collects all the shards on the current level or is caught by a ghost, the player's victory/loss is noted and the player progresses to the next level. The progression of the player is dynamic and depends on whether they win or lose on different levels.
* Upon completion of the levels, Two-Pac assembles the complete picture from the shards the player has collected through their victories and losses and reveals the formed narrative.
## How we built it 🦾
We built Two-Pac using Python, PyGame, Vision Transformers, and the Cohere API.
Our project heavily relies on a base Pac-Man game, which we implemented in PyGame. To put a spin on classic Pac-Man and turn it into a gamified photo album, we had to introduce the concept of an "image shard" to the game, for which we added additional code and novel sprites.
A fundamental aspect of our project is its ability to automatically generate descriptions and narratives from the players images. First, Two-Pac uses SOTA Vision Transformers to automatically generate captions for the images and then, using the captions, utilizes Cohere's Generation API to craft a dynamic and immersive narrative that links the memories of the images together.
Since the player's progression through the game impacts the narrative they face, we utilize the Game Tree data structure to capture the various possible game states. The nodes of the Game Tree represent states, while the edges represent the player winning/losing a level. Using the Cohere Generation API, we populate the different paths of the Game Tree, ensuring consistency between generated narratives by conditioning on the generated narrative of images in the ancestor states in the data structure.
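A simplified sketch of that structure (the field names are illustrative; in the real game each node's narrative fragment is generated with Cohere):

```python
# Each node holds one level's narrative fragment, with children for the
# win and loss outcomes of that level.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GameNode:
    level: int
    narrative: str                      # text revealed as shards are collected
    win: Optional["GameNode"] = None    # next state if the player clears the level
    loss: Optional["GameNode"] = None   # next state if a ghost catches the player

def advance(node: GameNode, won_level: bool) -> Optional[GameNode]:
    """Move to the next state based on the player's result on this level."""
    return node.win if won_level else node.loss

# Tiny two-level example
root = GameNode(1, "You recover the first shard of a summer photo...",
                win=GameNode(2, "The beach comes sharply into focus."),
                loss=GameNode(2, "The memory stays blurry, but one corner survives."))
```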
Upon initialization of the game, the player's images are decomposed into 4 sub-images and stored as image shards. Upon completion of the game, Two-Pac uses stored information to recreate the (in?)complete image and narrative using the shards the player collects.
## Challenges we ran into 🚦
There were a few challenges that were particularly difficult.
One of our biggest challenges was with trying to incorporate the Cohere API with our Game Tree generation. We had a lot of difficulty trying to ensure that the LLM's output was consistent with the desired format. None of us had any prior experience with Prompt Engineering, so we had to read guides and talk to the Cohere mentors for help. Ultimately, a Reddit post resolved our issues (apparently, if you want the LLM output to be parse-able you just need to mention it in the input).
Another big challenge was with linking different parts of the program together. Different team members worked on different parts in isolation, and trying to make everyone's code adaptable with everyone else's was surprisingly difficult.
## Accomplishments that we're proud of 🌟
We did not know each other before the hackathon, and we take pride in the fact that we were able to get together, make a plan, and stick to it while supporting each other. Additionally, we find it really cool that we **bridged the disconnect between classical AI and modern AI** by using Game Trees, LLMs, and Vision Transformers to bring new life to a classic game.
## What we learned 🔍
Cindy: During the process of making Two-Pac, I learned to make use of all the available resources. I also realized that being a good programmer is about more than just coding; there is still a lot ahead of me to learn, such as operating systems.
Duc: I learned the skill of combining individual features together to form a bigger product. As a result, I also learned the importance of writing clean + concise code since ~~trash~~ poorly written code causes a lot of issues when debugging.
Rohan: I learned to take in a lot of new code and find ways to add on new features. This means that writing clean, decoupled code is really important.
Rudraksh: This was my first time having to work with Prompt Engineering. I learned that it's advantageous to forgo conversational instructions and give concise and specific details about the format of the input and the desired output.
## What's next for Two-Pac ❔
There are many features that we want to implement in the future. One feature we wanted to include but didn't have the time to implement was "song of the year" - using AI, we sought to estimate the year the photo was taken to play some music from that year when one of that image's shards was collected. | ## Inspiration
1. We got the inspiration from photo album's function of storing people's memory and how museums on a website can combine nostalgia and the educational wisdom from the past.
2. We also came up with the idea of using logic and graph theory to construct the connections between historical tales from a csc240 assignment, which uses coloring functions and logical statements to represent different graphs.
## What it does
Our website takes the form of a photo album: visitors type in keywords and receive related photos. Most importantly, we also provide a platform for visitors to type in their own stories; they then receive a graph representing the logic within the story, along with similar stories showing how people in the past dealt with the situation or the possible trend of an event.
## How we built it
1. To build the website, we use HTML and CSS to write the front-end of the project.
2. For the back-end, we first use Cohere Generate to derive keywords and connections from the text, thus constructing a graph for the visitor. We use the same approach for the database of historical tales stored in the museum. After that, we convert the abstract graphs into matrices. Finally, we compare the visitor's graph with all the stored graphs (using the matrices to compute) and provide the most inspirational stories to the visitor (a toy sketch of this comparison follows).
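A toy sketch of that comparison, assuming a shared keyword vocabulary (the real graphs and distance measure are richer than this):

```python
# Turn each keyword graph into an adjacency matrix over a shared vocabulary,
# then rank stored tales by how close their matrices are to the visitor's.
import numpy as np

def to_matrix(edges, vocab):
    idx = {w: i for i, w in enumerate(vocab)}
    m = np.zeros((len(vocab), len(vocab)))
    for a, b in edges:
        if a in idx and b in idx:
            m[idx[a], idx[b]] = m[idx[b], idx[a]] = 1.0
    return m

def most_inspirational(visitor_edges, tales, vocab):
    """tales: list of dicts with an 'edges' key; returns the closest tale."""
    v = to_matrix(visitor_edges, vocab)
    return min(tales, key=lambda t: np.linalg.norm(v - to_matrix(t["edges"], vocab)))
```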
## Challenges we ran into
1. Initially, we had trouble extracting logical connections from the text. We first came up with the idea of using predicate logic P(x) to represent people and events, but found it hard to formalize in code, so we ended up using nodes to solve the problem.
2. We also had trouble installing the Cohere package and finally tried different system environments to figure it out.
## Accomplishments that we're proud of
1. Our website boasts a unique and elegant design, enveloping both us and our users in an atmosphere of mystery and nostalgia.
2. We take pride in bridging ancient wisdom with modern-day life, providing an experience that is both educational and inspiring.
3. By using graphs and matrices to analyze language and text information, we optimized the traditional way of focusing solely on the semantics of individual words. This innovative approach allows for a deeper understanding of textual relationships.
4. Our project has other applications in history learning and essay reasoning, because it generates historical narratives as examples for reasoning. It can improve essay argument quality and provide relevant historical instances.
## What we learned
We learned how to construct a graph from a text by using APIs. Knowing more technologies ( such as natural language processing) and being able to utilize them is beneficial for future study and research.
## What's next for Visualizing the Past Wisdom
The graph could be made even more specific by adding weighted edges to represent how strong the connections are between different nodes (people and events). We also wish to improve the algorithm by coloring the edges, so that we can specify which type of logical connection each edge represents.
Specifying the graph in this way enables us to obtain more information from the text, and thus to better compare with and learn from the past. | ## Inspiration
This year's theme of Nostalgia reminded us of our childhoods, reading stories and being so immersed in them. As a result, we created Mememto as a way for us to collectively look back on the past from the retelling of it through thrilling and exciting stories.
## What it does
We created a web application that asks users to input an image, date, and brief description of the memory associated with the provided image. Doing so, users are then given a generated story full of emotions, allowing them to relive the past in a unique and comforting way. Users are also able to connect with others on the platform and even create groups with each other.
## How we built it
Thanks to Taipy and Cohere, we were able to bring this application to life. Taipy supplied both the necessary front-end and back-end components. Additionally, Cohere enabled story generation through natural language processing (NLP) via their POST chat endpoint (<https://api.cohere.ai/v1/chat>).
## Challenges we ran into
Mastering Taipy presented a significant challenge. Due to its novelty, we encountered difficulty freely styling, constrained by its syntax. Setting up virtual environments also posed challenges initially, but ultimately, we successfully learned the proper setup.
## Accomplishments that we're proud of
* We were able to build a web application that functions
* We were able to use Taipy and Cohere to build a functional application
## What we learned
* We were able to learn a lot about the Taipy library, Cohere, and Figma
## What's next for Memento
* Adding login and sign-up
* Improving front-end design
* Adding image processing to identify entities within the user-provided image and use that information, along with the brief description of the photo, to produce a more accurate story that resonates with the user
* Saving and storing data | losing |
## Inspiration
For many of us, social issues have been top of mind over the past few years. However, the way social media and mainstream media alert us to these issues and their events is often through flashy, attention-grabbing headlines that prey on our emotions and blind us to the founding principles and nuances of each movement. With the explosion in popularity of the subreddit r/antiwork, we thought now would be a fantastic time to create an application that allows users to easily organize or join local movements for particular causes, with their important information and foundational ideas prominently on display. This idea attempts to combat the radicalization and increased societal division that occur when movements grow rapidly through the aforementioned channels. Through Change Up we hope to provide a pathway for users to meet up and make change in ways that stay true to the founding principles and grow in healthy, productive ways.
## What it does
Change Up allows for an easy way for people to come together and mobilize without having to dig around for information. With the simplistic UI in combination with the local events page, people can stay up to date with social issues that have an immediate effect on them.
## How we built it
We used Flutter to create the UI and app pages, and Firebase for user authentication in combination with Cloud Firestore for event logging.
## Challenges we ran into
* Getting flutter to mesh with the database
* Learning Flutter and Dart
* Lack of experience with advanced UI design
* Member on team is new to coding and hackathons
+ Finding meaningful ways for them to contribute
+ Balancing productivity with teaching them
* Moral implications of the concept and complexities of social change
+ How we feel about the potential for the app to give further voice to movements that spread hate or disinformation
+ Susceptibility of the app to bad faith actors or bots
+ Union drives often require a level of subtlety/secrecy. Will the app provide companies with an easy way to squash union activities?
## Accomplishments that we're proud of
* Having an aesthetically pleasing finished product and a UI we can be proud of
* Learning flutter and dart
* Participating in a hackathon for the first time (one member)
* The extent to which our idea forced us to consider the moral implications of tech
## What we learned
* Hackathons are definitely more enjoyable and valuable with a more advanced level of coding experience
* Mobile development with flutter and dart
* Interaction of dart and google firebase
## What's next for Change Up!
* Counter protest option
* User authentication to ensure each person can only have one account
* Like/Dislike system for each event
* TOS to stop events based around hate speech or violence
* Auto moderation to remove events that go against TOS
* Location based organization to show events near you | ## Inspiration
Picture this, I was all ready to go to Yale and just hack away. I wanted to hack without any worry. I wanted to come home after hacking and feel that I had accomplished a functional app that I could see people using. I was not sure if I wanted to team up with anyone, but I signed up to be placed on a UMD team. Unfortunately none of the team members wanted to group up, so I developed this application alone.
I volunteered at Technica last week and saw the chaos that is team formation, and I saw the team formation at YHack. I think there is a lot of room for team formation to be more fleshed out, so that is what I set out to fix. I wanted to build an app that could make team building at hackathons efficient.
## What it does
Easily set up "Event rooms" for hackathons, allowing users to join the room, specify their interests, and message other participants that are LFT (looking for team). Once they have formed a team, they can easily get out of the chatroom simply by holding their name down and POOF, no longer LFT!
## How I built it
I built a Firebase server that stores a database of events. Events hold information obviously regarding the event as well as a collection of all members that are LFT. After getting acquainted with Firebase, I just took the application piece by piece. First order of business was adding an event, and then displaying all events, and then getting an individual event, adding a user, deleting a user, viewing all users and then beautifying with lots of animations and proper Material design.
## Challenges I ran into
Android Animation still seems to be much more complex than it needs to be, so that was a challenge and a half. I want my applications to flow, but the meticulousness of Android and its fragmentation problems can cause a brain ache.
## Accomplishments that I'm proud of
In general, I am very proud of the state of the app. I think it serves a very nifty core purpose that can be utilized in the future. Additionally, I am proud of the application's simplicity. I think I set a really solid and feasible goal for this weekend, and I was able to accomplish it.
I really enjoyed the fact that I was able to think about my project, plan ahead, implement piece by piece, go back and rewrite, etc until I had a fully functional app. This project helped me realize that I am a strong developer.
## What I learned
Do not be afraid to erase code that I already wrote. If it's broken and I have lots of spaghetti code, it's much better to try and take a step back, rethink the problem, and then attempt to fix the issues and move forward.
## What's next for Squad Up
I hope to continue to update the project as well as provide more functionality. I'm currently trying to get a published .apk on my Namecheap domain (squadupwith.me), but I'm not sure how long the DNS propagation will take. Also, the .apk currently only works on Android 7.1.1, so I will have to go back and add backwards compatibility for the Android users who are not on the glorious Nexus experience. | ## Inspiration
Every year hundreds of thousands of preventable deaths occur due to the lack of first aid knowledge in our societies. Many lives could be saved if the right people are in the right places at the right times. We aim towards connecting people by giving them the opportunity to help each other in times of medical need.
## What it does
It is a mobile application aimed at connecting members of our society in times of urgent medical need. Users can sign up as respondents, which allows them to be notified when people within a 300-meter radius are having a medical emergency. This can help users receive first aid prior to the arrival of an ambulance or healthcare professional, greatly increasing their chances of survival. This application fills the gap between making the 911 call and having the ambulance arrive.
## How we built it
The app is Android native and relies heavily on the Google Cloud Platform. User registration and authentication are done through Firebase Auth. Additionally, user data, locations, help requests, and responses are all communicated through the Firebase Realtime Database. Lastly, the Firebase ML Kit was also used to provide text recognition for the app's registration page: users can take a picture of their ID and have their information extracted.
## Challenges we ran into
There were numerous challenges in terms of handling the flow of data through the Firebase Realtime Database and providing the correct data to authorized users.
## Accomplishments that we're proud of
We were able to build a functioning prototype! Additionally we were able to track and update user locations in a MapFragment and ended up doing/implementing things that we had never done before. | losing |