## Inspiration Two of our teammates have had experience with blockchain, and the other two of us were interested in learning more about it. Thus, we decided to take this opportunity to explore blockchain by working on a project. As a multinational team, ranging from America to China to the United Arab Emirates, something we ended up discussing was our countries, their governments, and how they compared and contrasted. Discussing our governments, coupled with our desire to experiment with blockchain, led us to discuss finance and how, unfortunately, there seems to be pervasive fraud and inefficient use of funds. The lack of trust between taxpayers and government officials has always been a profound issue that leads to ineffectiveness and insecurity in both the economic and political worlds. Our solution to this issue is BudgetChain, a distributed web application that aims to help governments and other organizations smoothly and reliably uphold budgets on the public Ethereum chain. In a democracy, it is fair that funds are distributed as transparently as possible so as to uphold the trust of the people. Similarly, we think it is fair that public companies' shareholders should be able to see that their resources are being used properly. In both of these scenarios, BudgetChain would enable the general public to ensure that their resources are being used efficiently, while maintaining the necessary security and privacy precautions by utilizing blockchain. ## What it does First, BudgetChain simulates the tax-paying process by issuing 'campaigns'. At a lower level, every time a campaign is created, a copy of our smart contract code is deployed from the campaign issuer's public address, which is visible to everyone. This lets us support budget management for different countries and types of organizations. Second, BudgetChain implements a budget management algorithm, which limits the money that can be transferred from one address to another predetermined account to a reasonable range. How? Through a voting system and a requests system (with a predetermined budget upper bound), both controlled purely by the smart contract, money transfers happen automatically, or requests are immediately cancelled if potential fraud is suspected. Smart contracts are used to allow safe and reliable transactions without any third-party entity. For example, another feature of BudgetChain is a secure payment gateway through which people can donate directly to the government in case of natural calamities, without involving any middlemen. This enables people to become proactively involved in their community. This application strives to make an organization's use of money much more transparent, fair, and efficient. ## How we built it The user interface of BudgetChain was developed using React Native and HTML/CSS. We used the Ethereum blockchain platform in conjunction with Solidity to write smart contracts and power our back-end. The front-end and back-end were linked together using web3, a JavaScript API. For more details, please refer to our GitHub repository (linked below). ## Challenges we faced Coming up with an idea was pretty difficult! It was also challenging deciding which framework to use. For the front-end, we switched from React Native to Vue.js, and finally back to React Native. This indecision used up time and energy.
Likewise, for the back-end blockchain portion, we spent a considerable amount of time debating whether to use IBM Hyperledger or Google Cloud BlockApps before finally going with Ethereum (in conjunction with Solidity). ## Accomplishments that we're proud of We are happy to say that we all know a little more about blockchain than we did prior to this hackathon. All in all, it was a great learning experience! We are also proud to have a functioning demo and that we stuck with the task we initially chose this weekend. It was a bit of a struggle, and we even seriously considered switching to a simpler project, but in the end, we're glad we stuck with this idea. ## What we learned In addition to learning about new technologies like blockchain, we also learned time management! It was really difficult to stay focused on our project with so many interesting and educational workshops and activities going on, so we had to prioritize when to work on our project and when to attend sessions. We also learned how to work with people from different backgrounds as one cohesive team. ## What's next for BudgetChain We would like to create an authentication system so that specific members of the general public (such as citizens of a certain government or shareholders of a particular company) can voice their opinions or otherwise influence the way the budget is proportioned. This way, BudgetChain would become truly democratic.
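The budget-cap request flow described under "What it does" can be modelled in a few lines. This is a hypothetical Python sketch of that logic only — the actual BudgetChain implementation is a Solidity contract, and the class and field names here are made up for illustration.

```python
# Illustrative Python model of the budget-cap request flow described above.
# The real BudgetChain logic lives in a Solidity contract; names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class SpendingRequest:
    recipient: str
    amount: int            # in wei
    approvals: int = 0
    finalized: bool = False

@dataclass
class Campaign:
    issuer: str
    budget_cap: int                          # predetermined upper bound per request
    contributors: set = field(default_factory=set)
    requests: list = field(default_factory=list)

    def create_request(self, recipient: str, amount: int) -> SpendingRequest:
        if amount > self.budget_cap:
            raise ValueError("request exceeds the predetermined budget cap")
        req = SpendingRequest(recipient, amount)
        self.requests.append(req)
        return req

    def approve(self, req: SpendingRequest, voter: str) -> None:
        if voter in self.contributors:
            req.approvals += 1

    def finalize(self, req: SpendingRequest) -> bool:
        # Transfer only if a majority of contributors approved;
        # otherwise the request is effectively cancelled.
        if req.approvals * 2 > len(self.contributors):
            req.finalized = True             # on-chain this would send the funds
        return req.finalized
```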
## Inspiration As three of our team members are international students, we encountered challenges when transferring our medical records from our home countries to the United States. Because of the poor transferability of medical records, we ended up retaking vaccines we had already received in our home countries and wasted a lot of time collecting our records. Moreover, even within the US, we noticed that medical records were often fragmented, with each hospital maintaining its own separate records. These experiences inspired us to create a solution that simplifies the secure transfer of medical records, making healthcare more efficient and accessible for everyone. ## What it does HealthChain.ai allows medical records to be parsed by AI and stored as non-fungible tokens on the Ethereum blockchain. The blockchain removes the central authority, eliminating long wait times for document requests, and allows easy access by record holders and hospitals. AI parsing allows for easy integration between differing hospital systems and international regulations. ## How we built it Our frontend uses Next.js with Tailwind and TypeScript, and is deployed on Vercel. We used OpenAI to process text input into JSON to store on our blockchain. Our backend uses Solidity for our smart contracts, which we broadcast to the Ethereum blockchain using the Alchemy API. We used custom Node.js API calls to connect the Ethereum blockchain to our frontend, and deployed this on Vercel as well. This all integrates with the MetaMask API on our frontend, which displays the NFTs in your Ethereum wallet. ## Challenges we ran into Our original plan was to use Aleo as our blockchain of choice for our app. But after working with Aleo's developers, we discovered that the chain's testnet was not working and we couldn't deploy our smart contract. We had to redevelop our smart contracts in Solidity and test on Sepolia, one of the Ethereum testnets. Even then, learning how to navigate the deployment process and getting the wallets to connect to the chain to interact with our smart contracts proved to be one of our biggest challenges. ## Accomplishments that we're proud of We were able to deploy our smart contract and retrieve data from the chain, which allowed us to use the OpenAI API to parse the retrieved information into more understandable, formatted medical records. Combining decentralized blockchain technology with AI was a huge challenge, and we are proud to have pulled it off. ## What we learned Throughout the hacking of our project, we learned quite a lot about concepts in blockchain, cryptography and machine learning. On top of learning all these concepts, we learned how to combine them through frontend and API calls, which is crucial in full-stack development. ## What's next for HealthChain.ai In our upcoming version, we have exciting plans to introduce additional features that we envisioned during the project's development but were constrained by time. These enhancements encompass the integration of speech-to-text for recording medical records, the implementation of OCR technology to convert traditional paper medical records into electronic formats (EMRs), and a comprehensive user interface redesign to simplify the organization of personal medical records. Last but not least, we also want to collaborate with hospitals to bring accessible medical records to the mass market and listen to our customers on what they really want and need.
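As a rough illustration of the AI-parsing step, here is a hedged Python sketch that sends record text to the OpenAI API and expects JSON back. It assumes the official `openai` Python SDK (v1+), an `OPENAI_API_KEY` in the environment, and a hypothetical field list; the team's actual prompt, model, and schema are not documented here, and a production version would validate the model's output before trusting it.

```python
# Minimal sketch of the AI-parsing step, assuming the OpenAI Python SDK (>=1.0)
# and an OPENAI_API_KEY in the environment; the field list is hypothetical and
# no output validation is shown.
import json
from openai import OpenAI

client = OpenAI()

def parse_medical_record(raw_text: str) -> dict:
    """Ask the model to turn free-form record text into structured JSON."""
    prompt = (
        "Extract patient_name, date, vaccinations, and diagnoses from this "
        "medical record and reply with JSON only:\n" + raw_text
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(resp.choices[0].message.content)

# The resulting dict would then be serialized and written to the chain as
# NFT metadata via the project's Solidity contract (not shown here).
```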
## Inspiration My recent job application experience with a small company opened my eyes to the hiring challenges faced by recruiters. After taking time to thoughtfully evaluate each candidate, they explained how even a single bad hire wastes significant resources for small teams. This made me realize the need for a better system that saves time and reduces stress for both applicants and hiring teams. That sparked the idea for CareerChain. ## What it does CareerChain allows job seekers and recruiters to create verified profiles on our blockchain-based platform. For applicants, we use a microtransaction system similar to rental deposits or airport carts. A small fee is required to submit each application, refunded when checking status later. This adds friction against mass spam applications, ensuring only serious, passionate candidates apply. For recruiters, our AI prescreens applicants, filtering out unqualified candidates. This reduces time wasted on low-quality applications, allowing teams to focus on best fits. Verified profiles also prevent fraud. By addressing inefficiencies for both sides, CareerChain streamlines hiring through emerging technologies. ## How I built it I built CareerChain using: * XRP Ledger for blockchain transactions and smart contracts * Node.js and Express for the backend REST API * Next.js framework for the frontend ## Challenges we ran into Implementing blockchain was challenging as it was my first time building on the technology. Learning the XRP Ledger and wiring up the components took significant learning and troubleshooting. ## Accomplishments that I'm proud of I'm proud to have gained hands-on blockchain experience and built a working prototype leveraging these cutting-edge technologies. ## What I learned I learned so much about blockchain capabilities and got exposure to innovative tools from sponsors. The hacking experience really expanded my skills. ## What's next for CareerChain Enhancing fraud detection, improving the microtransaction UX, and exploring integrations like background checks to further optimize hiring efficiency.
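The deposit-and-refund microtransaction described above can be sketched independently of the XRP Ledger. The following plain-Python model is illustrative only — on CareerChain the transfers would be actual ledger payments, and the fee amount here is an assumption.

```python
# Plain-Python model of the application-deposit flow; on CareerChain the
# actual transfer would be an XRP Ledger payment rather than a dict update.
DEPOSIT_DROPS = 1_000_000  # 1 XRP, an illustrative fee

balances = {"applicant": 5_000_000, "escrow": 0}
applications = {}  # application_id -> applicant

def submit_application(app_id: str, applicant: str) -> None:
    """Hold a small refundable deposit when an application is submitted."""
    if balances[applicant] < DEPOSIT_DROPS:
        raise ValueError("insufficient balance for the application deposit")
    balances[applicant] -= DEPOSIT_DROPS
    balances["escrow"] += DEPOSIT_DROPS
    applications[app_id] = applicant

def check_status(app_id: str) -> None:
    """Refund the deposit when the applicant later checks their status."""
    applicant = applications.pop(app_id)
    balances["escrow"] -= DEPOSIT_DROPS
    balances[applicant] += DEPOSIT_DROPS

submit_application("job-42", "applicant")
check_status("job-42")
assert balances["applicant"] == 5_000_000  # deposit fully returned
```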
losing
## Inspiration We wanted to take advantage of AR and object detection technologies to give people with vision loss safer walking experiences, communicating distance information to help them navigate. ## What it does It augments the world with beeping sounds that change depending on your proximity to obstacles, and it identifies surrounding objects and converts them to speech to alert the user. ## How we built it ARKit; RealityKit, which uses the LiDAR sensor to detect distance; AVFoundation for text-to-speech; CoreML with a YOLOv3 real-time object detection machine learning model; SwiftUI. ## Challenges we ran into Computational efficiency. Going through all pixels from the LiDAR sensor in real time wasn't feasible, so we had to optimize by cropping the sensor data to the center of the screen. ## Accomplishments that we're proud of It works as intended. ## What we learned We learned how to combine AR, AI, LiDAR, ARKit and SwiftUI to make an iOS app in 15 hours. ## What's next for SeerAR Expand to Apple Watch and Android devices; improve the accuracy of object detection and recognition; connect with Firebase and Google Cloud APIs.
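A rough sketch of the optimization mentioned in the challenges — cropping the depth data to the centre of the frame and mapping the nearest obstacle to a beep interval — might look like this in Python with NumPy. The real app does this in Swift with ARKit; the crop fraction and timing values are illustrative.

```python
# Sketch of the optimization described above: crop the LiDAR depth map to the
# centre of the frame and map the nearest obstacle to a beep interval.
# The real app does this in Swift with ARKit/RealityKit; values are illustrative.
import numpy as np

def beep_interval(depth_map: np.ndarray, crop_frac: float = 0.2) -> float:
    """Return seconds between beeps; closer obstacles beep faster."""
    h, w = depth_map.shape
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    centre = depth_map[h // 2 - ch // 2 : h // 2 + ch // 2,
                       w // 2 - cw // 2 : w // 2 + cw // 2]
    nearest = float(np.nanmin(centre))               # metres to closest obstacle
    return float(np.clip(nearest / 2.0, 0.1, 2.0))   # 0.1 s at ~0 m, 2 s at 4 m+

fake_depth = np.full((192, 256), 3.0)
fake_depth[90:100, 120:130] = 0.5                    # obstacle straight ahead
print(beep_interval(fake_depth))                     # ~0.25 s between beeps
```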
## Inspiration In the wake of a COVID-altered reality, where the lack of resources continuously ripples throughout communities, Heckler.AI emerges as a beacon for students, young professionals, those without access to mentorship and individuals seeking to harness vital presentation skills. Our project is a transformative journey, empowering aspiring leaders to conquer their fears. In a world, where connections faltered, Heckler.AI fills the void, offering a lifeline for growth and social resurgence. We're not just cultivating leaders; we're fostering a community of resilient individuals, breaking barriers, and shaping a future where every voice matters. Join us in the journey of empowerment, where opportunities lost become the stepping stones for a brighter tomorrow. Heckler.AI: Where anxiety transforms into confidence, and leaders emerge from every corner. ## What it does Heckler.AI is your personalized guide to mastering the art of impactful communication. Our advanced system meticulously tracks hand and arm movements, decodes facial cues from smiles to disinterest, and assesses your posture. This real-time analysis provides actionable feedback, helping you refine your presentation skills. Whether you're a student, young professional, or seeking mentorship, Heckler.AI is the transformative tool that not only empowers you to convey confidence but also builds a community of resilient communicators. Join us in this journey where your every move becomes a step towards becoming a compelling and confident leader. Heckler.AI: Precision in every gesture, resonance in every presentation. ## How we built it We used OpenCV and the MediaPipe framework to detect movements and train machine learning models. The resulting model could identify key postures that affect your presentation skills and give accurate feedback to fast-track your soft skills development. To detect pivotal postures in the presentation, we used several Classification Models, such as Logistic Regression, Ridge Classifier, Random Forest Classifier, and Gradient Boosting Classifier. Afterwards, we select the best performing model, and use the predictions personalized to our project's needs. ## Challenges we ran into Implementing webcam into taipy was difficult since there was no pre-built library. Our solution was to use a custom GUI component, which is an extension mechanism that allows developers to add their own visual elements into Taipy's GUI. In the beginning we also wanted to use a websocket to provide real-time feedback, but we deemed it too difficult to build with our limited timeline. Incorporating a custom webcam into Taipy was full of challenges, and each developer's platform was different and required different setups. In hindsight, we could've also used Docker to containerize the images for a more efficient process. Furthermore, we had difficulties deploying Taipy to our custom domain name, "hecklerai.tech". Since Taipy is built of Flask, we tried different methods, such as deploying on Vercel, using Heroku, and Gunicorn, but our attempts were in vain. A potential solution would be deploying on a virtual machine through cloud hosting platforms like AWS, which could be implemented in the future. ## Accomplishments that we're proud of We are proud of ourselves for making so much progress in such little time with regards to a project that we thought was too ambitious. We were able to clear most of the key milestones of the project. 
## What we learned We learnt how to utilize mediapipe and use it as the core technology for our project. We learnt about how to manipulate the different points of the body to accurately use these quantifiers to assess the posture and other key metrics of presenters, such as facial expression and hand gestures. We also took the time to learn more about taipy and use it to power our front end, building a beautiful user friendly interface that displays a video feed through the user's webcam. ## What's next for Heckler.ai At Heckler.AI, we're committed to continuous improvement. Next on our agenda is refining the model's training, ensuring unparalleled accuracy in analyzing hand and arm movements, facial cues, and posture. We're integrating a full-scale application that not only addresses your presentation issues live but goes a step further. Post-video analysis will be a game-changer, offering in-depth insights into precise moments that need improvement. Our vision is to provide users with a comprehensive tool, empowering them not just in the moment but guiding their growth beyond, creating a seamless journey from identifying issues to mastering the art of compelling communication. Stay tuned for a Heckler.AI that not only meets your needs but anticipates and addresses them before you even realize. Elevate your communication with Heckler.AI: Your growth, our commitment.
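A minimal sketch of the pipeline described above — MediaPipe pose landmarks flattened into feature rows for a scikit-learn classifier — could look like the following. The label names and the choice of Random Forest are only examples drawn from the classifiers the team mentions, not their exact training code.

```python
# Minimal sketch of the posture-classification pipeline described above,
# assuming the mediapipe and scikit-learn packages; label names are made up.
import cv2
import mediapipe as mp
import numpy as np
from sklearn.ensemble import RandomForestClassifier

mp_pose = mp.solutions.pose

def pose_features(image_bgr: np.ndarray) -> np.ndarray | None:
    """Flatten the 33 pose landmarks into one feature row."""
    with mp_pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return None
    return np.array([[lm.x, lm.y, lm.z, lm.visibility]
                     for lm in results.pose_landmarks.landmark]).flatten()

# X: rows of flattened landmarks, y: labels such as "open", "crossed_arms", "slouched"
# clf = RandomForestClassifier().fit(X, y)
# feedback = clf.predict([pose_features(frame)])
```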
## Inspiration We wanted a way to promote a healthy diet and lifestyle during these trying times, and we thought a great way to start was to cook your own food instead of having takeout. ## What it does It's Tinder but for recipes: you either like a recipe and we save it for you, or you pass on it and we remember to exclude it next time. ## How we built it We used Google's Firebase to hold our user data, such as the recipes they've liked and the ones they've passed on, as well as a recipe API to obtain recipes from, and we used Java in Android Studio for the front end. ## Challenges we ran into Learning Android development with Java for the first time (for the majority of our group members), along with implementing Android best practices such as MVVM and choosing the most suitable approach for each problem, was difficult to digest at the start. ## Accomplishments that we're proud of Having a working project, learning and implementing the MVVM architectural pattern, and trying out best practices. ## What we learned We learned how to use information from an API and present it in an application for consumers. It was a good learning experience working with Firebase and experimenting with what Android Studio has to offer. ## What's next for Chef Swipe Since the API we use limits the number of calls we may perform, we hope to invest in a plan that would allow for more usage. We were not able to create tags that would filter which recipes appear in the application for each specific profile, and we hope to implement that in the future.
winning
## Problem Statement As the elderly population constantly grows, there is an increasing demand for home care. In fact, the market for safety and security solutions in the healthcare sector is estimated to reach $40.1 billion by 2025. The elderly, disabled, and vulnerable people face a constant risk of falls and other accidents, especially in environments like hospitals, nursing homes, and home care environments, where they require constant supervision. However, traditional monitoring methods, such as human caregivers or surveillance cameras, are often not enough to provide prompt and effective responses in emergency situations. This potentially has serious consequences, including injury, prolonged recovery, and increased healthcare costs. ## Solution The proposed app aims to address this problem by providing a real-time monitoring and alert system, using a camera and cloud-based machine learning algorithms to detect any signs of injury or danger and immediately notify designated emergency contacts, such as healthcare professionals, with information about the user's condition and collected personal data. We believe that the app has the potential to revolutionize the way vulnerable individuals are monitored and protected, by providing a safer and more secure environment in designated institutions. ## Developing Process Prior to development, our designer used Figma to create a prototype which was used as a reference point when the developers were building the platform in HTML, CSS, and ReactJS. For the cloud-based machine learning algorithms, we used computer vision, OpenCV, NumPy, and Flask to train the model on a dataset of various poses and movements and to detect any signs of injury or danger in real time. Because of limited resources, we decided to use our phones as stand-ins for cameras to do the live streams for the real-time monitoring. ## Impact * **Improved safety:** The real-time monitoring and alert system provided by the app helps to reduce the risk of falls and other accidents, keeping vulnerable individuals safer and reducing the likelihood of serious injury. * **Faster response time:** The app triggers an alert and sends notifications to designated emergency contacts in case of any danger or injury, which allows for a faster and more effective response. * **Increased efficiency:** Using cloud-based machine learning algorithms and computer vision techniques allows the app to analyze the user's movements and detect any signs of danger without constant human supervision. * **Better patient care:** In a hospital setting, the app could be used to monitor patients and alert nurses if they are in danger of falling or if their vital signs indicate that they need medical attention. This could lead to improved patient care, reduced medical costs, and faster recovery times. * **Peace of mind for families and caregivers:** The app provides families and caregivers with peace of mind, knowing that their loved ones are being monitored and protected and that they will be immediately notified in case of any danger or emergency. ## Challenges One of the biggest challenges has been integrating all the different technologies, such as live streaming and machine learning algorithms, and making sure they worked together seamlessly. ## Successes The project was a collaborative effort between a designer and developers, which highlights the importance of cross-functional teams in delivering complex technical solutions.
Overall, the project was a success and resulted in a cutting-edge solution that can help protect vulnerable individuals. ## Things Learnt * **Importance of cross-functional teams:** As there were different specialists working on the project, it helped us understand the value of cross-functional teams in addressing complex challenges and delivering successful results. * **Integrating different technologies:** Our team learned the challenges and importance of integrating different technologies to deliver a seamless and effective solution. * **Machine learning for health applications:** After doing the research and completing the project, our team learned about the potential and challenges of using machine learning in the healthcare industry, and the steps required to build and deploy a successful machine learning model. ## Future Plans for SafeSpot * First of all, the usage of the app could be extended to other settings, such as elderly care facilities, schools, kindergartens, or emergency rooms to provide a safer and more secure environment for vulnerable individuals. * Apart from the web, the platform could also be implemented as a mobile app. In this case scenario, the alert would pop up privately on the user’s phone and notify only people who are given access to it. * The app could also be integrated with wearable devices, such as fitness trackers, which could provide additional data and context to help determine if the user is in danger or has been injured.
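For illustration only, a much simpler OpenCV heuristic captures the same idea as the trained model described above: flag a frame when the detected silhouette becomes much wider than it is tall. This is not the team's model, and the thresholds are arbitrary.

```python
# Not the team's trained model -- just a baseline OpenCV heuristic of the same
# idea: flag a frame as a possible fall when the detected person's silhouette
# becomes much wider than it is tall.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2()

def possible_fall(frame_bgr) -> bool:
    mask = subtractor.apply(frame_bgr)
    mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return False
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return w > 1.5 * h and w * h > 5000  # lying-down aspect ratio, non-trivial size

# In the app this kind of check would run on frames from the phone's live stream
# and, when it fires, the Flask backend would notify the designated contacts.
```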
## Inspiration In today's age, people have become more and more divided in their opinions. We've found that discussion nowadays can just result in people shouting instead of trying to understand each other. ## What it does **Change my Mind** helps to alleviate this problem. Our app is designed to help you find people to discuss a variety of different topics. They can range from silly scenarios to more serious situations. (Eg. Is a hot dog a sandwich? Is mass surveillance needed?) Once you've picked a topic and your opinion on it, you'll be matched with a user with the opposing opinion and put into a chat room. You'll have 10 mins to chat with this person and hopefully discover your similarities and differences in perspective. After the chat is over, we ask you to rate the maturity level of the person you interacted with. This metric allows us to increase the success rate of future discussions, as both matched users will have reputations for maturity. ## How we built it **Tech Stack** * Front-end/UI + Flutter and Dart + Adobe XD * Backend + Firebase - Cloud Firestore - Cloud Storage - Firebase Authentication **Details** * The front end was built after developing UI mockups/designs * Heavy use of advanced widgets and animations throughout the app * Creation of multiple widgets that are reused around the app * The backend uses Gmail authentication with Firebase. * Topics for debate are uploaded using Node.js to Cloud Firestore and are displayed in the app using specific Firebase packages. * Images are stored in Firebase Storage to keep the source files together. ## Challenges we ran into * Initially connecting Firebase to the front-end * Managing state while implementing multiple complicated animations * Designing the backend, mapping users with each other and allowing them to chat. ## Accomplishments that we're proud of * The user interface we made and animations on the screens * Sign up and login using Firebase Authentication * Saving user info into Firestore and storing images in Firebase Storage * Creation of beautiful widgets. ## What we learned * Deeper dive into state management in Flutter * How to make UI/UX with fonts and colour palettes * Learned how to use Cloud Functions in Google Cloud Platform * Built on top of our knowledge of Firestore. ## What's next for Change My Mind * More topics and user settings * Implementing ML to match users based on maturity and other metrics * Potential monetization of the app, premium analysis on user conversations * Clean up the Coooooode! Better implementation of state management, specifically using Provider or BLoC.
## Inspiration Being frugal students, we all wanted to create an app that would tell us what kind of food we could find around us based on a budget that we set. And so that’s exactly what we made! ## What it does You give us a price that you want to spend and the radius that you are willing to walk or drive to a restaurant; then voila! We give you suggestions for what you can get for that price at different restaurants, listing all the menu items with their prices plus calculated tax and tip! We keep the user's history (the food items they chose), and by doing so we open the door to crowdsourcing massive amounts of user data, as well as the opportunity for machine learning so that we can give better suggestions for the foods that the user likes the most! But we are not gonna stop here! Our goal is to implement the following in the future for this app: * We can connect the app to delivery systems to get the food for you! * Inform you about food deals, coupons, and discounts near you ## How we built it ### Back-end We have both an iOS and Android app that authenticates users via Facebook OAuth and stores user eating history in the Firebase database. We also made a REST server that conducts API calls (using Docker, Python and nginx) to amalgamate data from our targeted APIs and refine them for front-end use. ### iOS Authentication using Facebook's OAuth with Firebase. UI created using native iOS UI elements. API calls sent to Soheil’s backend server as JSON via HTTP. The Google Maps SDK is used to display geolocation information, and Firebase stores user data in the cloud with the capability of updating multiple devices in real time. ### Android The Android application is implemented with a great deal of material design while utilizing Firebase for OAuth and database purposes. The application utilizes HTTP POST/GET requests to retrieve data from our in-house backend server and uses the Google Maps API and SDK to display nearby restaurant information. The Android application also prompts the user for a rating of the visited stores based on how full they are; our goal was to compile a system that would incentivize food places to produce the highest “food per dollar” rating possible. ## Challenges we ran into ### Back-end * Finding APIs to get menu items is really hard, at least for Canada. * An unknown API kept continuously pinging our server and used up a lot of our bandwidth ### iOS * First time using OAuth and Firebase * Creating the tutorial page ### Android * Implementing modern material design with deprecated/legacy Maps APIs and other various legacy code was a challenge * Designing the Firebase schema and generating structure for our API calls was very important ## Accomplishments that we're proud of **A solid app for both Android and iOS that WORKS!** ### Back-end * Dedicated server (VPS) on DigitalOcean! ### iOS * Cool-looking iOS animations and real-time data updates * Nicely working location features * Getting the latest data from the server ## What we learned ### Back-end * How to use Docker * How to set up a VPS * How to use nginx ### iOS * How to use Firebase * How OAuth works ### Android * How to utilize modern Android layouts such as the Coordinator, Appbar, and Collapsible Toolbar Layout * Learned how to optimize applications when communicating with several different servers at once ## What's next for How Much * If we get the chance, we all want to keep working on it and hopefully publish the app. * We are thinking of making it open source so everyone can contribute to the app.
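The core budget filter — menu price plus calculated tax and tip compared against the user's budget — reduces to a few lines. The tax and tip rates below are illustrative assumptions, not the app's actual configuration.

```python
# Sketch of the core budget filter: keep only menu items whose price plus tax
# and tip fits the user's budget.  Rates are illustrative (13% tax, 15% tip),
# not the app's actual configuration.
TAX_RATE, TIP_RATE = 0.13, 0.15

def affordable_items(menu: list[dict], budget: float) -> list[dict]:
    results = []
    for item in menu:
        total = round(item["price"] * (1 + TAX_RATE + TIP_RATE), 2)
        if total <= budget:
            results.append({**item, "total_with_tax_and_tip": total})
    return results

menu = [{"name": "pad thai", "price": 11.99}, {"name": "steak", "price": 28.50}]
print(affordable_items(menu, budget=16.00))   # only the pad thai fits
```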
winning
### Problem and Challenge Achieving 100% financial inclusion, where all have access to financial services, still remains a difficult challenge. In particular, women make up a huge percentage of unbanked adults [1]. There are various barriers worldwide that prevent women from accessing formal financial services, including lower levels of income, lack of financial literacy, time and mobility constraints, as well as cultural constraints and an overall lack of gender parity [1]. With this problem in mind, our team wanted to take on Scotiabank's challenge to build a FinTech tool/hack for women. ### Our Inspiration Inspired by LinkedIn, Ten Thousand Coffees, and the Forte Foundation, we wanted to build a platform that combines networking opportunities, mentorship programs, and learning resources on personal finance management and investment opportunities to empower women to manage their own finances, thereby increasing the financial inclusion of women. ## What it does The three main pillars of Elevate are a safe community, continuous learning, and mentor support, with features including personal financial tracking. ### Continuous Learning Based on the participant's interests, the platform will suggest suitable learning tracks available on the platform. The participant will be able to keep track of their learning progress and apply the lessons learned in real life, for example by tracking their personal financial activity. ### Safe Community The Forum will allow participants to post questions from their learning tracks, share current financial news, or discuss any relevant financial topics. Upon signing up, mentors and mentees agree to abide by guidelines for respectful and appropriate interactions between parties; accounts that violate them will be removed. ### Mentor Support Elevate pairs the participant with a mentor who has expertise in the area that the mentee wishes to learn more about. The participant can schedule sessions with the mentor to discuss financial topics that they are unsure about, or discuss questions they have about their lessons on the Elevate platform. ### Personal Financial Activity Tracking Elevate participants will be able to track their financial expenses. They will receive notifications and analytics results to help them achieve their financial goals. ## How we built it Before we started implementing, we prototyped the workflow with Webflow. We then built the platform as a web application using HTML, CSS, and JavaScript, and collaborated in real time using Git. ## Challenges we ran into * There were a lot of features to incorporate. However, we were able to demonstrate the core concept of our project - to make finance more inclusive. ## Accomplishments that we're proud of * The idea of incorporating several features into one platform. * Deployed a demo web application. * The sophisticated design of the interface and flow of navigation. ## What we learned We learned about the lack of gender parity in finance, and how technology can remove barriers and create a strong and supportive community for all to understand the important role that finance plays in their lives. ## What's next for Elevate * Partner with financial institutions to create and curate a list of credible learning tracks/resources for mentees * Recruit financial experts as mentors to help run the program * Add credit/debit cards to the system to make financial tracking easier. Security issues will need to be addressed.
* Strengthen and implement the backend of the platform to include: Instant messaging, admin page to monitor participants ## Resources and Citation [1] 2020. REMOVING THE BARRIERS TO WOMEN’S FINANCIAL INCLUSION. [ebook] Toronto: Toronto Centre. Available at: <https://res.torontocentre.org/guidedocs/Barriers%20to%20Womens%20Financial%20Inclusion.pdf>.
## What Does "Catiator" Mean? **Cat·i·a·tor** (*noun*): Cat + Gladiator! In other words, a cat wearing a gladiator helmet 🐱 ## What It Does *Catiator* is an educational VR game that lets players battle gladiator cats by learning and practicing American Sign Language. Using finger tracking, players gesture corresponding letters on the kittens to fight them. In order to survive waves of fierce but cuddly warriors, players need to leverage quick memory recall. If too many catiators reach the player, it's game over (and way too hard to focus with so many chonky cats around)! ## Inspiration There are approximately 36 million hard of hearing and deaf individuals live in the United States, and many of them use American Sign Language (ASL). By learning ASL, you'd be able to communicate with 17% more of the US population. For each person who is hard of hearing or deaf, there are many loved ones who hope to have the means to communicate effectively with them. ### *"Signs are to eyes as words are to ears."* As avid typing game enthusiasts who have greatly improved typing speeds ([TypeRacer](https://play.typeracer.com/), [Typing of the Dead](https://store.steampowered.com/agecheck/app/246580/)), we wondered if we could create a similar game to improve the level of understanding of common ASL terms by the general populace. Through our Roman Vaporwave cat-gladiator-themed game, we hope to instill a low barrier and fun alternative to learning American Sign Language. ## Features **1. Multi-mode gameplay.** Learn the ASL alphabet in bite sized Duolingo-style lessons before moving on to "play mode" to play the game! Our in-app training allows you to reinforce your learning, and practice your newly-learned skills. **2. Customized and more intuitive learning.** Using the debug mode, users can define their own signs in Catiator to practice and quiz on. Like Quizlet flash cards, creating your own gestures allows you to customize your learning within the game. In addition to this, being able to see a 3D model of the sign you're trying to learn gives you a much better picture on how to replicate it compared to a 2D image of the sign. ## How We Built It * **VR**: Oculus Quest, Unity3D, C# * **3D Modeling & Animation**: Autodesk Maya, Adobe Photoshop, Unity3D * **UX & UI**: Figma, Unity2D, Unity3D * **Graphic Design**: Adobe Photoshop, Procreate ## Challenges We Ran Into **1. Limitations in gesture recognition.** Similar gestures that involve crossing fingers (ASL letters M vs. N) were limited by Oculus' finger tracking system in differential recognition. Accuracy in finger tracking will continue to improve, and we're excited to see the capabilities that could bring to our game. **2. Differences in hardware.** Three out of four of our team members either own a PC with a graphics card or an Oculus headset. Since both are necessary to debug live in Unity, the differences in hardware made it difficult for us to initially get set up by downloading the necessary packages and get our software versions in sync. **3. Lack of face tracking.** ASL requires signers to make facial expressions while signing which we unfortunately cannot track with current hardware. The Tobii headset, as well as Valve's next VR headset both plan to include eye tracking so with the increased focus on facial tracking in future VR headsets we would better be able to judge signs from users. ## Accomplishments We're Proud Of We're very proud of successfully integrating multiple artistic visions into one project. 
From Ryan's idea of including chonky cats to Mitchell's idea of a learning game to Nancy's vaporwave aesthetics to Jieying's concept art, we're so proud to see our game come together both aesthetically and conceptually. Also super proud of all the ASL we learned as a team in order to survive in *Catiator*, and for being a proud member of OhYay's table1. ## What We Learned Each member of the team utilized challenging technology, and as a result learned a lot about Unity during the last 36 hours! We learned how to design, train and test a hand recognition system in Unity and build 3D models and UI elements in VR. This project really helped us have a better understanding of many of the capabilities within Oculus, and in utilizing hand tracking to interpret gestures to use in an educational setting. We learned so much through this project and from each other, and had a really great time working as a team! ## Next Steps * Create more lessons for users * Fix keyboard issues so users can define gestures without debug/using the editor * Multi-hand gesture support * Additional mini games for users to practice ASL ## Install Instructions To download, use password "Treehacks" on <https://trisol.itch.io/catiators>, because this is an Oculus Quest application you must sideload the APK using Sidequest or the Oculus Developer App. ## Project Credits/Citations * Thinker Statue model: [Source](https://poly.google.com/u/1/view/fEyCnpGMZrt) * ASL Facts: [ASL Benefits of Communication](https://smackhappy.com/2020/04/asl-benefits-communication/) * Music: [Cassette Tape by Blue Moon](https://youtu.be/9lO_31BP7xY) | [RESPECOGNIZE by Diamond Ortiz](https://www.youtube.com/watch?v=3lnEIXrmxNw) | [Spirit of Fire by Jesse Gallagher](https://www.youtube.com/watch?v=rDtZwdYmZpo) | * SFX: [Jingle Lose](https://freesound.org/people/LittleRobotSoundFactory/sounds/270334/) | [Tada2 by jobro](https://freesound.org/people/jobro/sounds/60444/) | [Correct by Eponn](https://freesound.org/people/Eponn/sounds/421002/) | [Cat Screaming by InspectorJ](https://freesound.org/people/InspectorJ/sounds/415209/) | [Cat2 by Noise Collector](https://freesound.org/people/NoiseCollector/sounds/4914/)
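A rough Python analogue of the gesture matching could compare the tracked joint positions against stored sign templates and pick the nearest one. The in-game version runs in Unity/C# on Oculus finger-tracking data; the normalization scheme and threshold here are assumptions for illustration.

```python
# Rough Python analogue of the gesture matching used in-game (the real version
# runs in Unity/C# on Oculus finger-tracking data): compare the current hand's
# joint positions against stored sign templates and pick the closest one.
import numpy as np

def normalize(joints: np.ndarray) -> np.ndarray:
    """Make the pose translation- and scale-invariant (joints: N x 3)."""
    centred = joints - joints.mean(axis=0)
    return centred / (np.linalg.norm(centred) + 1e-9)

def recognize(current: np.ndarray, templates: dict[str, np.ndarray],
              threshold: float = 0.35) -> str | None:
    cur = normalize(current)
    best_letter, best_dist = None, float("inf")
    for letter, template in templates.items():
        dist = np.linalg.norm(cur - normalize(template))
        if dist < best_dist:
            best_letter, best_dist = letter, dist
    return best_letter if best_dist < threshold else None

# templates would hold one recorded joint array per ASL letter, e.g.
# templates = {"A": joints_a, "B": joints_b, ...}; recognize(frame_joints, templates)
```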
## Inspiration Every year, millions of people suffer from concussions without even realizing it. We wanted to tackle this problem by offering a simple solution that could be used by everyone to offer these people proper care. ## What it does Harnessing computer vision and NLP, we have developed a tool that can administer a concussion test in under 2 minutes. It can be easily done anywhere using just a smartphone. The test consists of multiple cognitive tasks in order to get a complete assessment of the situation. ## How we built it The application consists of a react-native app that can be installed on both iOS and Android devices. The app communicates with our servers, running a complex neural network model, which analyzes the user's pupils. It is then passed through a computer vision algorithm written in opencv. Next, there is a reaction game built in react-native to test reflexes. A speech analysis tool that uses the Google Cloud Platform's API is also used. Finally, there is a questionnaire regarding the symptoms of a user's concussion. ## Challenges we ran into It was very complicated to get a proper segmentation of the pupil because of how small it is. We also ran into some minor Google Cloud bugs. ## Accomplishments that we're proud of We are very proud of our handmade opencv algorithm. We also love how intuitive our UI looks. ## What we learned It was Antonio's first time using react-native and Alex's first time using torch. ## What's next for BrainBud By obtaining a bigger data set of pupils, we could improve the accuracy of the algorithm.
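As a simplified stand-in for the pupil-segmentation step (the project pairs a neural network with a handmade OpenCV pipeline), a blur plus Hough circle transform can localize a dark circular region. The parameter values are illustrative, not the team's.

```python
# Simplified stand-in for the pupil-segmentation step (the project uses its own
# handmade OpenCV pipeline plus a neural network); this just localizes a
# circular region with a blur + Hough transform.
import cv2
import numpy as np

def find_pupil(eye_bgr: np.ndarray):
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 7)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=50,
                               param1=60, param2=25, minRadius=5, maxRadius=60)
    if circles is None:
        return None
    x, y, r = np.round(circles[0, 0]).astype(int)
    return x, y, r   # pupil centre and radius, e.g. to track dilation over time
```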
winning
## Inspiration As a team, we've been increasingly concerned about the *data privacy* of users on the internet in 2022. There’s no doubt that the era of vast media consumption has resulted in a monopoly of large tech firms who hold a strong grasp over each and every single user input. When every action you make results in your data being taken and monetized to personalize ads, it’s clear to see the lack of security and protection this can create for unsuspecting everyday users. That’s why we wanted to create an all-in-one platform to **decentralize the world of advertising** and truly give back **digital data ownership** to users. Moreover, we wanted to increase the transparency of what happens with this data, should the user opt-in to provide this info to advertisers. That’s where our project, **Sonr rADar**, comes in. ## What it does As the name suggests—Sonr rADar, integrated into the **Sonr** blockchain ecosystem, is a mobile application which aims to decentralize the advertising industry and offer a robust system for users to feel more empowered about their digital footprint. In addition to the security advantages and more meaningful advertisements, users can also monetarily benefit from their interactions with advertisers. We incentivize users by rewarding them with **cryptocurrency tokens** (such as **SNR**) for their time spent on ads, creating a **win/win situation**. Not only that, but it can send advertisers anonymous info and analytics about how successful their ads are, helping them to improve further. Upon opening the app, users are met with a clean dashboard UI displaying their current token balance, as well as transfer history. In the top right corner, there exists a dropdown menu where the user can choose from various options. This includes the opportunity to enter and update their personal information that they would like to share with advertisers (for ad personalization), an ads dashboard where they can view all relevant ads, and a permissions section for them to opt in/out of where and how their data is being shared (e.g. third parties). Their information is then carefully stored using **data schemas, buckets, and objects**, which directly utilize Sonr’s proprietary **blockchain** technology to offer a *fully transparent* data management solution. ## How we built it We built this application using the Flutter SDK (more specifically, **motor\_flutter**), programming languages such as Dart & C++, and a suite of tools associated with Sonr; including the Speedway CLI! We also utilized a **testing dataset of 500** user profiles (names, emails, phone numbers, locations) for the **schema architecture**. ## Challenges we ran into Since our project leverages tools from an early-stage startup, some documentation was quite hard to find, so there were many instances when we were stuck during the backend implementation. Luckily, we were able to visit the Sonr sponsor booth and get valuable assistance with any troubleshooting—as they were very helpful. It was also largely our first time working with Flutter, Go, and a lot of the other components of blockchain and Sonr, so naturally there was a lot to process and overcome. ## Accomplishments that we're proud of We're proud to have created a proof-of-concept prototype for our ambitious idea; working together as a team to overcome the many obstacles we faced along the way! We hope that this project will create a lasting impact in the advertising space. 
## What we learned How to work together as a team and delegate tasks, balance our time amongst in-person activities + working on our project, and create something exciting in only 36 hours! By working with feedback and workshops offered by the sponsor CEO and Chief of Staff, we got an insightful look into the eventful startup experience and what that journey may look like. It's hard to think of a better way to dive into the exciting opportunities of Blockchain! ## What's next for Sonr rADar The state of our project is only the beginning; we plan to further build out the individual pages on the application and eventually deploy it to the App Store! We also hope to receive guidance and mentorship from the Sonr team, so that we can utilize their products in the most effective way possible. As we gain more traction, more advertisers and users will join—making our app even better. It's a positive feedback loop that can one day allow us to even bring the decentralized advertisement platform into AI! ## Team Member Badge IDs **Krish:** TSUNAMI TRAIT CUBE SCHEMA **Mihir:** STADIUM MOBILE LAP BUFFER **Amire:** YAM MANOR LIABILITY BATH **Robert:** CRACKERS TRICK GOOD SKEAN
## Inspiration The goal of Sonr is in empowering users to be able to have true digital ownership of their data connected with our team. By tapping on Sonr's network and analyzing the data in the hands of their owners, we generate value and offer users insights into themselves as digitally connected beings, all in a decentralized manner. ## What it does Diffusion is a data visualization platform for users to visualize data from their other decentralized applications (dApps) on the Sonr blockchain. Users can choose which dApps they wish to connect to Diffusion, before the AI model analyzes the data and brings attention to potential trends in the user's behavior on the dApps. At the end of the analysis, the input data is not stored on our platform to ensure that the privacy of the user is maintained. ## How we built it Our brainstorming sessions were conducted in Miro before we began building a lofi prototype of the mobile application in Figma. Sentence-transformers were used to build our AI model and also perform the chat summary feature. Google app engine was then used to deploy our AI model and allow our Flutter application to make API calls to fetch the data before visualizing it on the application using Syncfusion. ## Challenges we ran into As Sonr's native messaging application Beam was not yet released to the public, we exported chat history from Telegram to act as the pseudo-dataset as a proof-of-concept. We were also limited by what the Sonr blockchain could offer. For example, we initially intended to enable users to voluntarily share their data and use that information to aggregate averages and provide more meaningful insights into their personal habits. However, there is currently no way for you to give permission to others to access your data. Additionally, the existing schemas available on the Sonr blockchain were not suitable for our use case which required machine learning i.e. Lists were not yet supported. Hence, we had to switch to deploying APIs with Flask / App Engine to allow our frontend to query data schemas. ## Accomplishments that we're proud of Given the limited time and many issues we faced, we are proud that we still managed to adapt our application to still function as intended. Additionally, the team was relatively new to Flutter and we had to quickly learn and adapt to the native Dart programming language. Through pair programming, we managed to code quickly and efficiently, allowing us to cover the many components of our application. ## What we learned With how new the Web3 space is, we have to constantly learn and adapt to problems and bugs that we face as there is not much help and documentation available for reference. ## What's next for Diffusion.io As Sonr rolls out more dApps in their ecosystem, support for these dApps should follow suit. Additional metrics that the model will be able to predict can also be constantly developed by improving our AI model.
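A hedged sketch of the analysis step might encode messages with sentence-transformers and cluster them to surface trends, as below. The model name and cluster count are illustrative, and this is not the team's exact pipeline.

```python
# Sketch of the analysis step, assuming the sentence-transformers and
# scikit-learn packages; the model name and cluster count are illustrative.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer("all-MiniLM-L6-v2")

def topic_clusters(messages: list[str], n_topics: int = 5) -> dict[int, list[str]]:
    """Group chat messages by semantic similarity to surface behaviour trends."""
    embeddings = model.encode(messages)
    labels = KMeans(n_clusters=n_topics).fit_predict(embeddings)
    clusters: dict[int, list[str]] = {}
    for label, message in zip(labels, messages):
        clusters.setdefault(int(label), []).append(message)
    return clusters

# Each cluster's messages could then be summarized and plotted in the Flutter
# app via the Flask / App Engine API mentioned above.
```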
## Inspiration We have two bitcoin and cryptocurrency enthusiasts on our team, and only one of us made money during the peak earlier this month. Cryptocurrencies are just too volatile, and their value depends too much on how the public feels about them. How people think and talk about a cryptocurrency affects its price to a large extent, unlike stocks, which also have the support of the market, shareholders and the company itself. ## What it does Our website scrapes thousands of social media posts and news articles to get information about the requested cryptocurrency. We then analyse it using NLP and ML and determine whether the price is likely to go up or down in the very near future. We also display the current price graphs, social media and news trends (whether they are positive, neutral or negative) and the popularity ranking of the selected currency on social platforms. ## How I built it The website is mostly built using Node.js and Bootstrap. We use Chart.js for a lot of our web illustrations, as well as Python for web scraping, sentiment analysis and text processing. NLTK and the Google Cloud Natural Language API were especially useful for this. We also stored our database on Firebase. **Google Cloud**: We used Firebase to efficiently store and manage our database, and the Google Cloud Natural Language API to perform sentiment analysis on hundreds of social media posts efficiently. ## Challenges I ran into It was especially hard to create, store and process the large datasets we made consisting of social media posts and news articles. Even though we only needed data from the past few weeks, it was a lot since so many people post online. Getting relevant data, free of spam and repeated posts, and actually extracting useful information out of it was hard. ## Accomplishments that I'm proud of We are really proud that we were able to connect multiple streams of data, analyse them and display all the relevant information. It was amazing to see when our results matched past peaks and crashes in the bitcoin price. ## What I learned We learned how to scrape relevant data from the web, clean it and perform sentiment analysis on it to make predictions about future prices. Most of this was new to our team members and we definitely learned a lot. ## What's next for We hope to further increase the functionality of our website. We want users to have the option to give the website permission to automatically buy and sell cryptocurrencies when it determines it is the best time to do so. ## Domain name We bought the domain name get-crypto-insights.online for the best domain name challenge since it is relevant to our project. If I found a website with this name on the internet, I would definitely visit it to improve my cryptocurrency trading experience. ## About Us We are Discord team #1, with @uditk, @soulkks, @kilobigeye and @rakshaa
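As one way to picture the sentiment-aggregation step, here is a small sketch using NLTK's VADER analyzer; the project also leaned on the Google Cloud Natural Language API, which is not shown here.

```python
# Sketch of the sentiment-aggregation step using NLTK's VADER analyzer (the
# project also used the Google Cloud Natural Language API for this).
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

def average_sentiment(posts: list[str]) -> float:
    """Return the mean compound score in [-1, 1] across scraped posts."""
    scores = [sia.polarity_scores(p)["compound"] for p in posts]
    return sum(scores) / len(scores) if scores else 0.0

posts = ["Bitcoin is going to the moon!", "Selling everything, this crash is brutal."]
print(average_sentiment(posts))   # >0 leans bullish, <0 leans bearish
```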
partial
## Inspiration *How might we keep track of the "good old days" before we leave them?* Take Knote was born out of the absolute mess of our camera rolls and message groups as our [nwPlus](https://nwplus.io/) crew travelled to Hack the North 2022. We had plenty of iconic moments and memories we never want to forget, and the way we're "documenting" our trip right now is simply not cutting it. Take Knote (the name and the app!) is inspired by two things: 1. Memory-keeping (ex. string around your finger to remember something) 2. Connection (ex. the red thread of fate between two people) ## What it does Take Knote is a shared digital scrapbook for you and your crew to share, whether that's a significant other, your BFFs, or a beloved hackathon team. With Take Knote, you can enjoy an easy, straightforward experience in: 1. Memory-keeping with scrapbook entries of your outings, favourite moments, photos and notes 2. Planning with a wishlist for your outings, whether that's a cool new museum you saw on TikTok or a restaurant you've been dying to bring your friends to 3. Decision-making itineraries based on preferences in budget, mobility, food cravings or fun levels (for when your crew inevitably can't come to an agreement. ## How we built it Take Knote is powered by the MERN stack: * MongoDB to store our memories and wishlist destinations * Express and Node JS * React native * Google's Places API and Google Maps for our destinations ## Challenges we ran into * new technologies: first time with React Native mobile app, attempted to use CockroachDB Serverless * first hackathon for half of our team! * stomachaches, sleep deprivation and snoring roommates ## Accomplishments that we're proud of * We made a mobile app! * Really solid design from solo designer at her first hackathon (can you tell who's writing this?) * Thoughtful brand concept ## What we learned * Mobile development skills * API design * Adaptability is very important in a short time frame * Importance of good communication between design and developers * How to work super speed like Lightning McQueen ## What's next for Take Knote * Improving on our main flows and overall functionality * Introduce a "year in review"/"wrapped" feature for your most frequented places
## Inspiration Huggies is built on the concept that no one is alone. There is always someone out there who will show you support. Whether you simply need a hug or just want to give hugs, Huggies has got you covered. Let us be wholesome, together, with Huggies. ## What it does Huggies searches for and matches you with another user. It then gives both of you a destination to meet up and hug. Premature cancellation notifies the other user. ## How I built it I wrote the back-end using Python Django, and the front-end as a Java Android app. ## Challenges I ran into The latitude was being rounded to the nearest integer, making the meetup point 3 km away. That bug was hard to find. Using an emulator as my second phone was also hard. ## Accomplishments that I'm proud of I'm proud of finishing a complicated back-end using Python and writing an entire Android application in one hackathon. ## What I learned This took a lot longer than I thought due to configuring the Python server models. The Android center-on-user button was also tricky to change. ## What's next for Huggies Optimizing the search-and-find-a-huggie feature: right now it only searches for the closest match, but it should be easy to have requests time out and be limited by distance.
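The closest-match search mentioned above comes down to a haversine distance over waiting users; keeping coordinates as floats avoids the latitude-rounding bug described in the challenges. This is an illustrative sketch, not the Django code itself, and the coordinates are made up.

```python
# Sketch of the matching step: pick the nearest waiting user by haversine
# distance and meet at the midpoint.  Keeping latitude/longitude as floats
# avoids the integer-rounding bug mentioned above.
import math

def haversine_km(a: tuple[float, float], b: tuple[float, float]) -> float:
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def match(me: tuple[float, float], waiting: dict[str, tuple[float, float]]):
    user = min(waiting, key=lambda u: haversine_km(me, waiting[u]))
    meetup = ((me[0] + waiting[user][0]) / 2, (me[1] + waiting[user][1]) / 2)
    return user, meetup

waiting = {"alex": (44.2312, -76.4860), "sam": (44.2253, -76.4951)}
print(match((44.2280, -76.4950), waiting))   # sam is closer; meet halfway
```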
## Inspiration Amidst hectic lives and a pandemic-struck world, mental health has taken a back seat. This thought inspired our web-based app, which provides activities customised to a person's mood to help them relax and rejuvenate. ## What it does We planned to create a platform that could detect a user's mood through facial recognition, recommend yoga poses to lift their mood and evaluate their correctness, and help users jot their thoughts in a self-care journal. ## How we built it Frontend: HTML5, CSS (framework used: Tailwind), JavaScript. Backend: Python, JavaScript. Server side: Node.js, Passport.js. Database: MongoDB (for user login), MySQL (for mood-based music recommendations). ## Challenges we ran into Incorporating OpenCV into our project was a challenge, but it was very rewarding once it all worked. Since all of us were first-time hackers and due to time constraints, we couldn't deploy our website externally. ## Accomplishments that we're proud of Mental health issues are among the least addressed illnesses even though medically they rank in the top 5 chronic health conditions. We at Umang are proud to have taken notice of this issue and to help people recognise their moods and cope with the stresses of their daily lives. Through our app we hope to give people a better perspective and push them towards a sounder mind and body. We are really proud that we could create a website that helps break the stigma associated with mental health. It was an achievement that the website includes so many features to help improve the user's mental health, like letting the user vibe to music curated for their mood, engaging them in physical activity like yoga to relax their mind and soul, and helping them evaluate their yoga posture from home with an AI instructor. Furthermore, completing this within 24 hours was an achievement in itself since it was our first hackathon, which was very fun and challenging. ## What we learned We learnt how to use OpenCV in projects. Another skill we gained was how to use Tailwind CSS. We also learned a lot about backends and databases, how to create shareable links, and how to create to-do lists. ## What's next for Umang While the core functionality of our app is complete, it can of course be further improved. 1) We would like to add a chatbot which can be the user's guide/best friend and give advice when they are in mental distress. 2) We would also like to add a mood log to keep track of the user's daily mood; if a serious decline in mental health is seen, it can directly connect the user to medical helpers and therapists for proper treatment. This lays the groundwork for further expansion of our website. Our spirits are up and the sky is our limit.
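As an illustration of the first stage of the mood detector, OpenCV's bundled Haar cascade can crop faces from a webcam frame before a separate emotion model (not shown) classifies them. This is a sketch of the approach, not the project's exact pipeline.

```python
# Sketch of the face-detection stage of the mood detector, using OpenCV's
# bundled Haar cascade; the emotion classifier that runs on the cropped face
# is a separate model and is not shown here.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_faces(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [frame_bgr[y:y + h, x:x + w] for (x, y, w, h) in faces]

# Each cropped face would then be passed to the mood model, and the predicted
# mood used to pick a music playlist and yoga routine for the user.
```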
losing
![alt tag](https://raw.githubusercontent.com/zackharley/QHacks/develop/public/pictures/logoBlack.png) # What is gitStarted? GitStarted is a developer tool to help get projects off the ground in no time. When time is of the essence, devs hate losing time to setting up repositories. GitStarted streamlines the repo creation process, quickly adding your frontend tools and backend npm modules. ## Installation To install: ``` npm install ``` ## Usage To run: ``` gulp ``` ## Credits Created by [Jake Alsemgeest](https://github.com/Jalsemgeest), [Zack Harley](https://github.com/zackharley), [Colin MacLeod](https://github.com/ColinLMacLeod1) and [Andrew Litt](https://github.com/andrewlitt)! Made with :heart: in Kingston, Ontario for QHacks 2016
## Inspiration Nowadays, we have been using **all** sorts of development tools for web development, from the simplest of HTML to all sorts of high-level libraries, such as Bootstrap and React. However, what if we turned back time and relived the *nostalgic*, good old times of programming in the 60s? A world where the programming language BASIC was prevalent. A world where coding on paper and on **office memo pads** was so popular. It is time for you all to re-experience the programming of the **past**. ## What it does It's a programming language compiler and runtime for the BASIC programming language. It allows users to write interactive programs for the web with the simple syntax and features of the BASIC language. Users can read our sample BASIC code to understand what's happening, and write their own programs to deploy on the web. We're transforming code from paper to the internet. ## How we built it The major part of the code is written in TypeScript, which includes the parser, compiler, and runtime, designed by us from scratch. After we parse and resolve the code, we generate an intermediate representation. This abstract syntax tree is then interpreted by the runtime library, which generates HTML code. Using GitHub Actions and GitHub Pages, we are able to implement a CI/CD pipeline to deploy the webpage, which is **entirely** written in BASIC! We also have GitHub Dependabot scanning for npm vulnerabilities. We use Webpack to bundle the code into one HTML file for easy deployment. ## Challenges we ran into Creating a compiler from scratch within the 36-hour time frame was no easy feat, as most of us did not have prior experience with compiler concepts or building a compiler. Constructing and deciding on the syntactical features was quite confusing, since BASIC was such a foreign language to all of us. Parsing took us the longest time due to the tedious work of processing strings and tokens, as well as understanding recursive descent parsing. Last but **definitely not least**, building the runtime library and constructing code samples caused us issues, as minor errors can be difficult to detect. ## Accomplishments that we're proud of We are very proud to have successfully "summoned" the **nostalgic** old times of programming and implemented all the syntactical features we needed to create interactive pages using just the BASIC language. We are delighted to have come up with an innovative idea that fits the theme **nostalgia** and retells the tales of programming. ## What we learned We learned the basics of making a compiler and what is actually happening underneath the hood while compiling our code, through the *painstaking* process of writing compiler code and manually writing code samples as if we were the compiler. ## What's next for BASIC Web This project can be integrated with a lot of features that are popular today. One future direction is to merge this project with generative AI: we could feed the AI models the syntactical features of the BASIC language and have them output BASIC code translated from modern programming languages. Moreover, this could become a revamp of Bootstrap and React for creating interactive and eye-catching web pages.
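The recursive descent parsing the team wrestled with is easiest to see on a tiny grammar. The sketch below is a self-contained Python illustration of the technique, using an invented arithmetic grammar rather than the team's TypeScript BASIC compiler: each grammar rule becomes a method that consumes tokens and returns a small AST node.

```python
# Tiny recursive-descent parser for arithmetic expressions; an illustration of
# the technique only, not the BASIC Web compiler itself.
import re

TOKEN = re.compile(r"\s*(?:(\d+)|(.))")

def tokenize(src):
    for num, op in TOKEN.findall(src):
        yield ("NUM", int(num)) if num else ("OP", op)
    yield ("EOF", None)

class Parser:
    def __init__(self, src):
        self.tokens = list(tokenize(src))
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos]

    def eat(self):
        tok = self.tokens[self.pos]
        self.pos += 1
        return tok

    # expr := term (('+'|'-') term)*
    def expr(self):
        node = self.term()
        while self.peek() in (("OP", "+"), ("OP", "-")):
            op = self.eat()[1]
            node = (op, node, self.term())
        return node

    # term := factor (('*'|'/') factor)*
    def term(self):
        node = self.factor()
        while self.peek() in (("OP", "*"), ("OP", "/")):
            op = self.eat()[1]
            node = (op, node, self.factor())
        return node

    # factor := NUM | '(' expr ')'
    def factor(self):
        kind, value = self.eat()
        if kind == "NUM":
            return value
        if (kind, value) == ("OP", "("):
            node = self.expr()
            self.eat()  # consume the closing ')'
            return node
        raise SyntaxError(f"unexpected token {value!r}")

print(Parser("1 + 2 * (3 - 4)").expr())   # ('+', 1, ('*', 2, ('-', 3, 4)))
```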
## Inspiration The project brings together version control (Git), design software (Figma) and document editing (Notion, Google Docs). It was inspired by a few common pain points team members have faced: 1. Losing information about previous iterations of a file in a linear version control structure (Google Docs). 2. File clutter from drafts and versions of similar documents (resumes, essays, research papers), leading to disorganized, hard-to-manage files and less room for smooth ideation from previous work. ## What it does JamScribe provides a new approach to version history and document editing. For users familiar with Git, the platform offers three core features: 1. **Branch** - create a copy of the current document into a new document that acts as a child. Users can make any edits to this new document without disturbing the original, and can create as many branches as they want. This feature helps users keep different versions of a document that stem from the same base writing. A great example is having multiple resumes for different jobs, such as Software, ML, Data Science, etc. 2. **Merge** - merge two or more documents to create a new document. This feature helps users combine their work with other documents in the tree. It can be used in situations where users work on different sections in different branches, leading to a merge at the end. Conflicts are handled with conflict-resolution techniques (such as those seen in VS Code) and AI integration (future work where an AI judges how best to combine documents). 3. **Cohere AI Autocomplete** - A major part of ideation today includes running ideas by an LLM. Our prompt option allows users to expand on written work using the NLP model provided by Cohere, which can answer questions and autocomplete versions of the document. These autocompletions become new branches that stem from the point of the request. ## How we built it We built the application using TypeScript and Python, with Novel.sh and ReactFlow as dependencies to create an infinite canvas to design on. We created a new workflow for editing and writing documents using the branch/merge model inspired by version control. ## Challenges we ran into * Navigating new technologies (slower ramp-up time). * Having to switch to a different text-editing library, which meant quickly adapting our codebase. * Deciding on a UX workflow for a new form of document editing. ## Accomplishments that we're proud of * Designing a brand-new workflow for document editing, re-imagining how people write and document. * Creating an MVP from scratch. * Engineering an application that addresses a gap in the current architecture of text-editing software. * Communicating well throughout the entire design process; from idea generation to prototyping, everyone was on the same page. ## What we learned * Teamwork makes the dream work. ## What's next for JamScribe * Interactive merging with AI grammar checks and various other parameters. * AI tags to alter the style of writing. * Support for collaborative development over parallel branches simultaneously. * Importing existing document styles into JamScribe.
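The branch/merge workflow boils down to a small tree of document nodes. Here is a hedged Python sketch of that data model, an illustration of the idea rather than JamScribe's actual TypeScript implementation; the plain text-joining merge stands in for the real conflict-resolution and AI pass.

```python
# Illustrative data model for a branch/merge document workflow.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DocNode:
    text: str
    parents: List["DocNode"] = field(default_factory=list)
    children: List["DocNode"] = field(default_factory=list)

    def branch(self) -> "DocNode":
        """Copy this document into a new child; edits there leave the original intact."""
        child = DocNode(text=self.text, parents=[self])
        self.children.append(child)
        return child

def merge(nodes: List[DocNode], combine=None) -> DocNode:
    """Merge several documents into a new node with all of them as parents.

    `combine` decides how the texts are joined; a real product would apply
    conflict resolution and/or an AI pass here.
    """
    combine = combine or (lambda texts: "\n\n".join(texts))
    merged = DocNode(text=combine([n.text for n in nodes]), parents=list(nodes))
    for n in nodes:
        n.children.append(merged)
    return merged

base = DocNode("Resume: shared experience section")
ml_version = base.branch()
ml_version.text += "\nML projects..."
swe_version = base.branch()
swe_version.text += "\nSoftware projects..."
print(merge([ml_version, swe_version]).text)
```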
winning
## Inspiration It currently costs the US $1.5 billion per year to run street lights. We thought there had to be a way to reduce that cost and save electricity. ## What it does Smart City Lights is a smart street light system that only lights up when it detects nearby activity. ## How we built it We used TCRT5000 IR sensors to detect the presence of activity. Each sensor was addressable, and when a sensor was triggered it would communicate over MQTT to another microcontroller and update the associated light. ## Challenges we ran into We hit a major block when we had communication issues between our Python code and the Arduino. ## Accomplishments that we're proud of Everyone's commitment to completing the work. ## What we learned Teamwork is key to success. ## What's next for Smart City Lights Data analytics, as well as optimizing how sensor addresses are processed.
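The sensor-to-light hand-off described above maps naturally onto a tiny MQTT publisher. The sketch below uses paho-mqtt in Python; the broker address, topic name, and payload format are assumptions for illustration, not the project's actual setup.

```python
# Sketch of the MQTT hand-off between the sensor controller and the light
# controller. Broker address, topic, and payload format are assumptions.
import json
import time

import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"          # assumed local broker
TOPIC = "streetlights/zone1/activity"

client = mqtt.Client()           # paho-mqtt 1.x style; 2.x also takes a CallbackAPIVersion
client.connect(BROKER, 1883, keepalive=60)

def on_sensor_triggered(sensor_id: int):
    """Publish which sensor fired so the light controller can brighten nearby lamps."""
    payload = json.dumps({"sensor": sensor_id, "ts": time.time()})
    client.publish(TOPIC, payload, qos=1)

on_sensor_triggered(3)
client.disconnect()
```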
## Inspiration About 0.2 to 2% of the population suffers from deaf-blindness, and many of them do not have the resources to afford accessible technology. This inspired us to build a low-cost, tactile, braille-based system that can bring accessibility to many situations where it was previously not possible. ## What it does We use 6 servo motors controlled by an Arduino that mimic a braille-style display by raising or lowering levers based on which character to display. By doing this twice per second, even long sentences can be transmitted to the person. All the person needs to do is place their palm on the device. We believe this method is easier to learn and comprehend, as well as far cheaper than refreshable braille displays, which usually cost more than $5,000 on average. ## How we built it We use an Arduino, and to send commands we use PySerial, a Python library. To simulate the reader, we also built a smart bot that relays information to the device, using Google's Dialogflow. We believe the production cost of this MVP is less than $25, so the product is commercially viable too. ## Challenges we ran into It was a huge challenge to get the serial ports working with the Arduino. Even with the code right, PySerial was unable to send commands to the Arduino. After long hours of struggle, we realized that the key is to give the port some time to open and initialize. By adding a two-second wait and then sending the command, we finally got it to work. ## Accomplishments that we're proud of This was our first hardware hack, and pulling something like that together was a lot of fun! ## What we learned We learned a lot, including how to solve the Arduino port problem. We learned a lot about hardware and how serial ports function. We also learned about pulses, and how by sending a certain number of pulses we can set a servo to a particular position. ## What's next for AddAbility We plan to extend this to other businesses by promoting it. Many kiosks and ATMs could be integrated with this device at very low cost, which would allow even more inclusion in society. We also plan to reduce the prototype size by using smaller motors and steppers to move the braille dots up and down. This should bring the cost down further, to around $15.
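The port fix described above comes down to a few lines of PySerial. This is a hedged sketch rather than the project's exact script; the port name, baud rate, and one-byte command format are assumptions.

```python
# Minimal PySerial sketch of the fix described above: open the port, give the
# Arduino ~2 s to reset and initialize, then send the character to display.
import time

import serial

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)   # port name is an assumption
time.sleep(2)                     # without this pause the first write is lost

def send_character(ch: str):
    """Send one character; the Arduino maps it to the six servo positions."""
    ser.write(ch.encode("ascii"))

for ch in "HELLO":
    send_character(ch)
    time.sleep(0.5)               # two characters per second, as described

ser.close()
```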
## Inspiration Recently, character experiences powered by LLMs have become extremely popular. Platforms like Character.AI, boasting 54M monthly active users and a staggering 230M monthly visits, are a testament to this trend. Yet, despite these figures, most experiences in the market offer text-to-text interfaces with little variation. We wanted to take chatting with characters to the next level. Instead of a simple, standard text-based interface, we wanted an intricate visualization of your character with a 3D model viewable in your real-life environment, actual low-latency, immersive, realistic spoken dialogue with your character, and a really fun, dynamic (generated on-the-fly) 3D graphics experience, with objects appearing as they are mentioned in conversation, a novel innovation only made possible recently. ## What it does An overview: CharactAR is a fun, immersive, and **interactive** AR experience where you get to speak your character’s personality into existence, upload an image of your character or take a selfie, pick their outfit, and bring your custom character to life in an AR world, where you can chat using your microphone or type a question, and even have your character run around in AR! As an additional super cool feature, we compiled, hosted, and deployed the open-source OpenAI Shap-E model (by ourselves, on Nvidia A100 GPUs from Google Cloud) to do text-to-3D generation, meaning your character is capable of generating 3D objects (mid-conversation!) and placing them in the scene. Imagine the Terminator generating robots, or a marine biologist generating fish and other wildlife! Our combination of these novel technologies makes experiences like those possible. ## How we built it ![flowchart](https://i.imgur.com/R5Vbpn6.png) *So how does CharactAR work?* To begin, we built <https://charactar.org>, a web application that utilizes AssemblyAI (state-of-the-art speech-to-text) to do real-time speech-to-text transcription. Simply click the “Record” button, speak your character’s personality into existence, and click the “Begin AR Experience” button to enter your AR experience. We used HTML, CSS, and JavaScript to build this experience, bought the domain through GoDaddy, and hosted the website on Replit! In the background, we’ve already used OpenAI Function Calling, a novel OpenAI product offering, to choose a voice for your custom character based on the original description that you provided. Once we have the voice and description for your character, we’re ready to jump into the AR environment. The AR platform that we chose is 8th Wall, an AR deployment platform built by Niantic, which focuses on web experiences. Due to the emphasis on web experiences, any device can use CharactAR, from mobile devices to laptops or even VR headsets (yes, really!). In order to power our customizable character backend, we employed the Ready Player Me avatar generation SDK, which provides a responsive UI that enables our users to create any character they want, from taking a selfie, to uploading an image of their favorite celebrity, or even just choosing from a predefined set of models. Once the model is loaded into the 8th Wall experience, we then use a mix of OpenAI (character intelligence), InWorld (microphone input and output), and ElevenLabs (voice generation) to create an extremely immersive character experience from the get-go.
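As a hedged illustration of that voice-picking step: the team used OpenAI function calling as it shipped in 2023, and the sketch below shows the same idea with the current Python SDK's `tools` form of the feature. The model name, voice list, and schema are placeholders, not CharactAR's actual configuration.

```python
# Hedged sketch of using OpenAI function calling to pick a voice from the
# character description. Voices, model, and schema are placeholders.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

pick_voice_tool = {
    "type": "function",
    "function": {
        "name": "pick_voice",
        "description": "Choose the best text-to-speech voice for a character.",
        "parameters": {
            "type": "object",
            "properties": {
                "voice": {"type": "string", "enum": ["deep_male", "warm_female", "robotic"]},
                "reason": {"type": "string"},
            },
            "required": ["voice"],
        },
    },
}

description = "A wise, elderly physicist with a gentle German accent."
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": f"Pick a voice for this character: {description}"}],
    tools=[pick_voice_tool],
    tool_choice={"type": "function", "function": {"name": "pick_voice"}},
)

args = json.loads(resp.choices[0].message.tool_calls[0].function.arguments)
print(args["voice"])
```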
We animated each character using the standard Ready Player Me animation rigs, and you can even see your character move around in your environment by dragging your finger on the screen. Each time your character responds to you, we make an API call to our own custom-hosted OpenAI Shap-E API, which runs on Google Cloud on an NVIDIA A100. A short prompt based on the conversation between you and your character is sent to OpenAI’s novel text-to-3D model to be generated into a 3D object that is automatically inserted into your environment. For example, if you are talking with Barack Obama about his time in the White House, our Shap-E API will generate a 3D object of the White House, and it’s really fun (and funny!) in game to see what Shap-E will generate. ## Challenges we ran into One of our favorite parts of CharactAR is the automatic generation of objects during conversations with the character. However, the addition of these objects also led to an unfortunate spike in triangle count, which quickly builds up lag. So when designing this pipeline, we worked on reducing unnecessary detail in model generation. One of these methods is selecting the number of inference steps prior to generating 3D models with Shap-E. The other is compressing the generated 3D model, which ended up being more difficult to integrate than expected. At first, we generated the 3D models in the .ply format, but realized that .ply files are a nightmare to work with in 8th Wall. So we decided to convert them into .glb files, which would be more efficient to send through the API and better to include in AR. The .glb files could still get quite large, so we used Google’s Draco compression library to reduce file sizes by 10 to 100 times. Getting this to work required quite a lot of debugging and resolving of package dependencies, but it was awesome to see it functioning. Below, we have “banana man” renders from our hosted Shap-E model. ![bananaman_left](https://i.imgur.com/9i94Jme.jpg) ![bananaman_right](https://i.imgur.com/YJyRLKF.jpg) *Even after transcoding the .glb file with Draco compression, the banana man still stands gloriously (1 MB → 78 KB).* Although 8th Wall made development much more streamlined, AR development as a whole still has a way to go, and here are some of the challenges we faced. There were countless undefined errors with no documentation, many of which took hours of debugging to overcome. Working with the animated Ready Player Me models and the .glb files generated by our OpenAI Shap-E model posed a lot of challenges around model formats and dynamically generated models, which required lots of reading up on 3D file formats. ## Accomplishments that we're proud of There were many small challenges in each of the interconnected portions of the project, and we are proud to have persevered through the bugs and roadblocks. The satisfaction of small victories, like seeing our prompts come to life in 3D or seeing the character walk around our table, always invigorated us to keep pushing. Running AI models is computationally expensive, so it made sense for us to offload this work to Google Cloud's servers. This gave us access to powerful A100 GPUs, which made Shap-E model generation thousands of times faster than would be possible on CPUs. It also provided a great opportunity to work with FastAPI to create a convenient and extremely efficient way of submitting a prompt and receiving a compressed 3D representation of the query back.
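For the format-conversion and compression step just described, a Python sketch could look like the following. It assumes trimesh for the .ply to .glb conversion and shells out to the Node gltf-pipeline CLI for Draco compression; the team's actual pipeline and the exact CLI flag are assumptions here, so treat this as an outline rather than their implementation.

```python
# Hedged sketch of the .ply -> .glb -> Draco-compressed .glb step.
# Assumes trimesh is installed and the Node `gltf-pipeline` CLI is on PATH;
# the `-d` (Draco compression) flag is an assumption about that tool.
import subprocess

import trimesh

mesh = trimesh.load("generated_object.ply")      # raw text-to-3D output (placeholder name)
mesh.export("generated_object.glb")              # convert to glTF binary

subprocess.run(
    ["gltf-pipeline", "-i", "generated_object.glb",
     "-o", "generated_object.draco.glb", "-d"],
    check=True,
)
print("Compressed model ready for the AR scene.")
```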
We integrated AssemblyAI's real-time transcription services to transcribe live audio streams with high accuracy and low latency. This capability was crucial for our project, as it allowed us to convert spoken language into text that could be further processed by our system. The WebSocket API provided by AssemblyAI was secure, fast, and effective in meeting our transcription requirements. The function calling capabilities of OpenAI's latest models were an exciting addition to our project. Developers can now describe functions to these models, and the models intelligently output a JSON object containing the arguments for those functions. This feature enabled us to integrate GPT's capabilities seamlessly with external tools and APIs, offering a new level of functionality and reliability. For enhanced user experience and interactivity between our website and the 8th Wall environment, we leveraged the URLSearchParams interface, which let us pass the initial character prompt along without any extra friction. ## What we learned For the majority of the team, it was our first AR project using 8th Wall, so we learned the ins and outs of building with AR, the A-Frame library, and deploying a final product that can be used by end users. We had also never used AssemblyAI for real-time transcription, so we learned how to use WebSockets for real-time transcription streaming. We also learned many of the intricacies of 3D objects and their file types, and really got low-level with the meshes, the object file formats, and the triangle counts to ensure a smooth rendering experience. Since our project required so many technologies to be woven together, there were many times when we had to find unique workarounds to connect our distributed systems. Our prompt engineering skills were put to the test, as we needed to experiment with countless phrasings to get our agent behaviors and 3D model generations to match our expectations. After this experience, we feel much more confident in using state-of-the-art generative AI models to produce top-notch content. We also learned to use LLMs for more specific and unique use cases; for example, we used GPT to identify the most important object prompts from a large dialogue transcript, and to choose the voice for our character. ## What's next for CharactAR Using 8th Wall technology like Shared AR, we could potentially have up to 250 players in the same virtual room, meaning you could play with your friends no matter how far away they are from you. These kinds of collaborative, virtual, and engaging experiences are the types of environments that we want CharactAR to enable. While each CharactAR custom character is animated with a custom rigging system, we believe there is potential for using the new OpenAI function calling schema (which we used several times in our project) to generate animations dynamically, meaning we could have endless character animations and facial expressions to match endless conversations.
partial
## Inspiration In the United States, 250,000 drivers fall asleep at the wheel every day. This translates to more than 72,000 crashes each year. I've personally felt road fatigue and it's incredibly hard to fight, so dozing off on the highway can happen to even the best of us. Wayke could be the app that saves your friend, your parent, your child, or even you. ## What it does We've created a mobile app that is integrated with the MUSE headband, Pebble watch, and Grove so that it can monitor your brainwaves. You can get in your car, put the MUSE headband on, turn the app on, and start driving. We use your alpha brainwaves to gauge your drowsiness, and while you're driving, if your alpha waves rise above our "doze threshold" for a prolonged period of time, your phone will emit sharp sounds and vibrate like crazy while reminding you to take a break. ## How we built it The MUSE headband is a recently released innovation that puts brainwave-monitoring sensors in a headband; it's basically a portable EEG. Right now, however, it's mainly used for meditation. We've hacked it to the point where there's nothing chill or relaxing about Wayke. We want to keep you wide awake and on the edge of your seat. I'm personally not completely sure about the details of the app development and hardware hacking, so if you have any questions, you should ask Alex Jin, Jofo Domingo, or Michael Albinson. ## Challenges we ran into Alex and Jofo ran into a bunch of bugs while figuring out how to get the output from the MUSE to show up properly, as well as automating the connection from the Android app all the way to the Grove. Understanding how brainwaves work hurt our brains. ## Accomplishments that we're proud of Actually building the damn thing! ## What we learned You can do amazing things in 2 days. The fact that you can hack 4-5 different things to fit together perfectly is phenomenal! ## What's next for Wayke We think there are so many practical applications for Wayke. Moving forward, our product could be used for pilots, workers who operate heavy machinery, or any situation where you can't afford to fall asleep. Additionally, we want to find a way to get something to physically vibrate on your body (like integrating it with a Jawbone or Fitbit)!
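The doze-threshold idea reduces to a small monitoring loop. Here it is as a hedged Python sketch (the real Wayke app is Android/Java talking to the MUSE hardware); the threshold, the timing, and the stubbed alpha-power reading are all assumptions.

```python
# Hedged sketch of the "doze threshold" logic. read_alpha_power() is a stub
# standing in for the MUSE headband's alpha band power reading.
import random
import time
from collections import deque

DOZE_THRESHOLD = 0.7      # assumed relative alpha-power threshold
SUSTAINED_SECONDS = 5     # alphas must stay elevated this long before alarming

def read_alpha_power() -> float:
    """Stub: replace with the real alpha band power reading (0..1)."""
    return random.random()

def sound_alarm():
    print("WAKE UP! Pull over and take a break.")

recent = deque(maxlen=SUSTAINED_SECONDS)
for _ in range(30):                      # ~30 seconds of monitoring
    recent.append(read_alpha_power())
    if len(recent) == recent.maxlen and all(p > DOZE_THRESHOLD for p in recent):
        sound_alarm()
        recent.clear()
    time.sleep(1)
```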
## Inspiration We have all had that situation where our alarm goes off and we wake up super drowsy despite getting at least 8 hours of sleep. It turns out, this drowsiness is due to sleep inertia, which is at its worst when we are woken up in REM or deep sleep. ## What it does Our device uses an 3-axis accelerometer and a noise sensor to detect when a user is likely in light sleep, which is often indicated by movement and snoring. We eliminate the use of a phone by placing all of the electronics within a stuffed bird that can be easily placed on the bed. The alarm only goes off if light sleep is detected within 30 minutes of a user-defined alarm time. The alarm time is set by the user through an app on their phone, and analytics and patterns about their sleep can be viewed on a phone app or through our web application. ## How we built it We first started with the hardware components of the system. All of the sensors are sourced from the MLH Hardware store, so we then needed to take the time to figure out how to properly integrate all the sensors with the Arduino/Genuino 101. Following this, we began development of the Arduino code, which involved some light filtering and adjustment for ambient conditions. A web application was simultaneously developed using React. After some feedback from mentors, we determined that putting our electronics in a watch wasn't the best approach and a separate object would be the most ideal for comfort. We determined that a stuffed animal would be the best option. At the time, we did not have a stuffed animal, however we figured out that we could win one from Hootsuite. We managed to get one of these animals, which we then used as housing for our electronics. Unfortunately, this was the farthest we could get with our technical progress, so we only had time to draw some mock-ups for the mobile application. ## Challenges we ran into We had some issues getting the sensors to work, which were fixed by using the datasheets provided at Digikey. Additionally, we needed to find a way to eliminate some of the noise from the sensors, which we eventually fixed using some low-pass, software filtering. A lot of time was invested in attempting to connect the web application to the electronics, unfortunately we did not have enough time to finish this. ## Accomplishments that we're proud of We're proud that we were able to integrate all of the electronics into a small stuffed bird that is self-powered. ## What we learned We learned quite a bit about the impact of sleep cycle on sleep inertia, and how this impacts daily activities and long-term health. We also learned a lot about software filtering and how to properly identify the most important information in a datasheet. ## What's next for Sleep Sweet We would first develop the connection between the electronics and the web application, and eventually the mobile application. After this, we would prefer to get more robust electronics that are smaller in size. Lastly, we would like to integrate the electronics into a pillow or mattress which is more desirable for adults.
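The light software filtering Sleep Sweet mentions can be as simple as a single-pole (exponential moving average) low-pass filter. The snippet below shows the idea in Python for clarity; the team's version runs in Arduino C, and the smoothing coefficient here is an assumption.

```python
# Single-pole (exponential moving average) low-pass filter: smooths spiky
# accelerometer/noise-sensor readings before light-sleep detection.
def low_pass(samples, alpha=0.1):
    """Smaller alpha means heavier smoothing."""
    filtered = []
    y = samples[0]
    for x in samples:
        y = alpha * x + (1 - alpha) * y
        filtered.append(y)
    return filtered

raw = [0, 0, 12, 90, 8, 0, 0, 85, 3, 0]      # spiky sensor readings
print([round(v, 1) for v in low_pass(raw)])
```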
## Inspiration Globally, one in ten people do not know how to interpret their feelings. There's a huge global shift towards sadness and depression. At the same time, AI models like DALL-E and Stable Diffusion are creating beautiful works of art, completely automatically. Our team saw the opportunity to leverage AI image models and the emerging industry of Brain Computer Interfaces (BCIs) to create works of art from brainwaves, enabling people to learn more about themselves and their emotions. ## What it does A user puts on a Brain Computer Interface (BCI) and logs in to the app. As they work in front of their computer or go about their day, the user's brainwaves are measured. Different brainwave patterns are interpreted as different moods, for which keywords are then fed into the Stable Diffusion model. The model produces several pieces, which are sent back to the user through the web platform. ## How we built it We created this project using Python for the backend, and Flask, HTML, and CSS for the frontend. We used a BCI library available to us to process and interpret brainwaves, Google OAuth for sign-ins, and an OpenBCI Ganglion interface provided by one of our group members to measure brainwaves. ## Challenges we ran into We faced a series of challenges throughout the hackathon, which is perhaps the essence of all hackathons. Initially, we struggled with setting up the electrodes on the BCI to ensure that they were receptive enough, as well as working our way around the Twitter API. Later, we had trouble integrating our Python backend with the React frontend, so we decided to move to a Flask-served frontend. It was our team's first ever hackathon and first in-person hackathon, so we definitely had our struggles with time management and aligning on priorities. ## Accomplishments that we're proud of We're proud to have built a functioning product, especially with our limited programming experience and operating under a time constraint. We're especially happy that we had the opportunity to use hardware in our hack, as it gives our solution a unique aspect. ## What we learned Our team had our first experience with a 'real' hackathon, working under a time constraint to come up with a functioning solution, which is a valuable lesson in and of itself. We learned the importance of time management throughout the hackathon, as well as the importance of a storyboard and a plan of action going into the event. We gained exposure to various new technologies and APIs, including React, Flask, the Twitter API, and OAuth 2.0. ## What's next for BrAInstorm We're currently building a BeReal-like social media platform, where people will be able to post the art they generate each day to their peers. We're also planning to integrate a brain2music feature, where users can not only see how they feel, but hear what it sounds like as well.
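A hedged sketch of the brainwaves-to-art pipeline is below. The band-power thresholds and mood keywords are invented for illustration, and the image generation uses the standard Hugging Face diffusers call rather than whatever the team ran; it also assumes a CUDA GPU and an available Stable Diffusion checkpoint.

```python
# Hedged sketch: map relative band powers to mood keywords, then generate art.
# Thresholds, keywords, and the checkpoint id are assumptions, not BrAInstorm's code.
import torch
from diffusers import StableDiffusionPipeline

def mood_keywords(alpha: float, beta: float, theta: float) -> str:
    """Turn relative band powers into a handful of prompt keywords."""
    if alpha > beta and alpha > theta:
        return "calm, soft pastel colors, gentle waves"
    if beta > alpha:
        return "energetic, vivid colors, sharp geometry"
    return "dreamy, hazy, deep blues"

keywords = mood_keywords(alpha=0.55, beta=0.25, theta=0.20)
prompt = f"abstract painting, {keywords}"

# Any Stable Diffusion checkpoint id works here.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
image = pipe(prompt).images[0]
image.save("mood_art.png")
```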
losing
# The project HomeSentry is an open-source platform that turns your old phones or other devices into a distributed security camera system. Simply install our app on any mobile device and start monitoring your home (or any other place). HomeSentry gives new life to your old devices while bringing you the peace of mind you deserve! ## Inspiration We all have old phones stored at the bottom of a drawer waiting to be used for something. This is where the inspiration for HomeSentry came from. Indeed, we wanted to give our old cellphones and electronic devices a new use so they don't just collect dust over time. Generally speaking, every cellphone has a camera that could be used for something, and we thought using it for security would be a great idea. Home surveillance camera systems are often very expensive or complicated to set up. Our solution is very simple and costs close to nothing, since it uses equipment you already have. ## How it works HomeSentry turns your old cellphones into a complete security system for your home. It's a modular solution where you can register as many devices as you have at your disposal. Every device is linked to your account and automatically streams its camera feed to your personal dashboard in real time. You can view your security footage from anywhere by logging in to your HomeSentry dashboard. ## How we built it The HomeSentry platform consists of 3 main components: #### 1. HomeSentry Server The server's main responsibility is to handle all authentication requests and orchestrate camera connections. It is in charge of connecting the mobile app to the user's dashboard so that it can receive the live stream footage. This server is built with Node.js and uses MongoDB to store user accounts. #### 2. HomeSentry Mobile app The user opens the app on their cellphone and enters their credentials. They may then start streaming video from their camera to the server. The app is currently a web app built with the Angular framework. We plan to convert it to an Android/iOS application using Apache Cordova at a later stage. #### 3. HomeSentry Dashboard The dashboard is the user's main management panel. It allows the user to watch all of the streams they are receiving from the connected cellphones. The website was also built with Angular. ## Technology On a more technical note, this app uses several open-source frameworks and libraries to do its work. Here's a quick summary. The Node.js server is built with TypeScript and Express.js. We use Passport.js + MongoDB as our authentication system and Socket.IO to exchange real-time data between each user's devices (cameras) and the dashboard. On the mobile side we use WebRTC to access the device's camera stream and link it to the dashboard. Every camera stream is delivered over a peer-to-peer connection with the web dashboard when it becomes active. This ensures the streams' privacy and reduces video latency. We used PeerJS and Socket.IO to implement this mechanism. Just like the mobile client, the web dashboard is built with Angular and frontend libraries such as Bootstrap and feather-icons. ## Challenges we ran into (and what we've learned) Overall we've learned that sending live streams is quite complicated! We had underestimated the effort required to send and manage these feeds. While working with this type of media, we learned how to communicate with WebRTC.
At the beginning, we tried to do everything ourselves and use different protocols such as RTMP, but we reached a point where it was a little buggy. Late in the event, we found and used the PeerJS library to manage those streams, and it considerably simplified our code. We found that working with mobile frameworks like Xamarin is much more complicated for this kind of project. The easiest way was clearly JavaScript, which also allows a greater range of devices to be registered as cameras. The project also helped us improve our knowledge of real-time messaging and WebSockets by using Socket.IO to add a new stream without having to refresh the web page. We also used an authentication library we hadn't used before, Passport.js for Node. With this we were able to show only the streams of a specific user. We also hosted a Node.js app service on Azure for the first time and configured CI from GitHub; it's nice to see that they use GitHub Actions to automate this process. Finally, we sharpened our skills with various frontend technologies such as Angular. ## What's next for HomeSentry HomeSentry works very well for displaying a specific user's feeds. What might be cool next is to add analytics on those feeds to detect motion and other events. We could send a notification of these movements by SMS/email, or even send a push notification if we compile this application with Cordova and distribute it on the App Store and Google Play. Adding the ability to record and save the feed when motion is detected would be a great addition; with detection, we could store this data locally and in the cloud. Working offline could also be a great addition. Lastly, improving quality assurance to ensure the panel works on any device would be a great idea. We believe HomeSentry can be used in many residences, and we hope this application will help people secure their homes without having to invest in expensive equipment.
## Inspiration 🍪 We’re fed up with our roommates stealing food from our designated kitchen cupboards. Few things are as soul-crushing as coming home after a long day and finding that someone has eaten the last Oreo cookie you had been saving. Suffice it to say, the university student population is in desperate need of an inexpensive, lightweight security solution to keep intruders out of our snacks... Introducing **Craven**, an innovative end-to-end pipeline to put your roommates in check and keep your snacks in stock. ## What it does 📸 Craven is centered around a small Nest security camera placed at the back of your snack cupboard. Whenever the cupboard is opened by someone, the camera snaps a photo of them and sends it to our server, where a facial recognition algorithm determines if the cupboard has been opened by its rightful owner or by an intruder. In the latter case, the owner will instantly receive an SMS informing them of the situation, and then our 'security guard' LLM will decide on the appropriate punishment for the perpetrator, based on their snack-theft history. First-time burglars may receive a simple SMS warning, but repeat offenders will have a photo of their heist, embellished with an AI-generated caption, posted on [our X account](https://x.com/craven_htn) for all to see. ## How we built it 🛠️ * **Backend:** Node.js * **Facial Recognition:** OpenCV, TensorFlow, DLib * **Pipeline:** Twilio, X, Cohere ## Challenges we ran into 🚩 In order to have unfettered access to the Nest camera's feed, we had to find a way to bypass Google's security protocol. We achieved this by running an HTTP proxy to imitate the credentials of an iOS device, allowing us to fetch snapshots from the camera at any time. Fine-tuning our facial recognition model also turned out to be a bit of a challenge. In order to ensure accuracy, it was important that we had a comprehensive set of training images for each roommate, and that the model was tested thoroughly. After many iterations, we settled on a K-nearest neighbours algorithm for classifying faces, which performed well both during the day and with night vision. Additionally, integrating the X API to automate the public shaming process required specific prompt engineering to create captions that were both humorous and effective in discouraging repeat offenders. ## Accomplishments that we're proud of 💪 * Successfully bypassing Nest’s security measures to access the camera feed. * Achieving high accuracy in facial recognition using a well-tuned K-nearest neighbours algorithm. * Fine-tuning Cohere to generate funny and engaging social media captions. * Creating a seamless, rapid security pipeline that requires no legwork from the cupboard owner. ## What we learned 🧠 Over the course of this hackathon, we gained valuable insights into how to circumvent API protocols to access hardware data streams (for a good cause, of course). We also deepened our understanding of facial recognition technology and learned how to tune computer vision models for improved accuracy. For our X integration, we learned how to engineer prompts for Cohere's API to ensure that the AI-generated captions were both humorous and contextual. Finally, we gained experience integrating multiple APIs (Nest, Twilio, X) into a cohesive, real-time application. ## What's next for Craven 🔮 * **Multi-owner support:** Extend Craven to work with multiple cupboards or fridges in shared spaces, creating a mutual accountability structure between roommates. 
* **Machine learning improvement:** Experiment with more advanced facial recognition models like deep learning for even better accuracy. * **Social features:** Create an online leaderboard for the most frequent offenders, and allow users to vote on the best captions generated for snack thieves. * **Voice activation:** Add voice commands to interact with Craven, allowing roommates to issue verbal warnings when the cupboard is opened.
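The K-nearest-neighbours face classification mentioned in Craven's challenges section above can be sketched in a few lines. This is a hedged illustration only: `embed_face()` is a stub standing in for the OpenCV/dlib embedding extraction, and the roommate names and training data are made up.

```python
# Hedged sketch of KNN-based face classification over precomputed embeddings.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def embed_face(snapshot) -> np.ndarray:
    """Stub: return a 128-d face embedding (dlib/OpenCV would do this for real)."""
    return rng.normal(size=128)

# Stand-in embeddings for each roommate (several images per person).
X_train = rng.normal(size=(20, 128))
y_train = ["alice"] * 10 + ["bob"] * 10

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)

def opened_by_owner(snapshot, owner="alice") -> bool:
    """True if the cupboard snapshot matches its rightful owner."""
    emb = embed_face(snapshot).reshape(1, -1)
    return clf.predict(emb)[0] == owner

print("All clear." if opened_by_owner(snapshot=None) else "Intruder! Sending SMS...")
```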
## Inspiration To develop an extremely inconvenient (read: unbelievably secure) way to send files between Android devices. And to learn about Android development. ## What it does The app has two activities: sending and receiving files. The send activity lets you pick a file that is encoded into a byte string, which is then chunked and compiled into QR codes. You can press a 'play' button to begin displaying one code every 2 seconds. The receive activity scans QR codes using the camera preview, saves them as it sees them, then compiles them into a byte string to be saved on the device. ## How we built it We used Android Studio and a library that can generate QR codes. ## Challenges we ran into Neither of us had ever built anything that runs on Android. Android development presents special challenges: managing memory and limited CPU power makes it difficult to generate and display QR codes quickly. You can only encode around 500 bytes in a QR code, so while we envisioned sending MP3s, that would likely result in thousands of codes to scan, and missing or misinterpreting a single one would ruin EVERYTHING. ## Accomplishments that we're proud of Highly multi-threaded parsing of files and generation of QR codes. And, it works about as well as you'd think it would. ## What we learned We knew this was not going to work well, but we both learned a lot about Android app development, so we're happy. ## What's next for QRCFTP Target the tin-foil-hat-wearing demographic. The NSA can't spy on a stream of highly ambiguous visual information.
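The chunk-and-encode idea is easy to see in a few lines. This is a Python sketch of the concept (the real app is Java on Android); the chunk size, filename scheme, and input file are illustrative assumptions.

```python
# Python sketch of the chunk-and-encode idea: split a file into small pieces
# and turn each piece into one QR code image.
import base64

import qrcode

CHUNK_SIZE = 400   # stay safely under the ~500-byte practical limit mentioned above

with open("song.mp3", "rb") as f:          # placeholder input file
    data = f.read()

chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

for idx, chunk in enumerate(chunks):
    # Prefix each chunk with its index so the receiver can reassemble in order.
    payload = f"{idx}/{len(chunks)}:" + base64.b64encode(chunk).decode("ascii")
    qrcode.make(payload).save(f"frame_{idx:04d}.png")

print(f"Encoded {len(chunks)} QR frames; display one every 2 seconds.")
```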
winning
## Inspiration One of our team members was stunned by the number of colleagues who became self-described "shopaholics" during the pandemic. Understanding their wish to return to normal spending habits, we thought of a helper extension to keep them on the right track. ## What it does Stop impulse shopping at its core by incentivizing saving rather than spending with our Chrome extension, IDNI, aka I Don't Need It! IDNI helps monitor your spending habits and gives recommendations on whether or not you should buy a product. It also suggests local small business alternatives, when they exist, so you can help support your community! ## How we built it React front-end, MongoDB, Express REST server. ## Challenges we ran into Most popular extensions have company deals that give them more access to product info; we researched and found the Rainforest API instead, which gives us the essential product info we needed for our decision algorithm. However, this proved costly, as each API call took upwards of 5 seconds to return a response. As such, we opted to parse each product page ourselves to gather our metrics. ## Completion In its current state, IDNI is able to perform CRUD operations on our user information (allowing users to modify their spending limits and blacklisted items on the settings page) with our custom API, recognize Amazon product pages and pull the required information for our pop-up display, and dynamically provide recommendations based on these metrics. ## What we learned Nobody on the team had any experience creating Chrome extensions, so it was a lot of fun to learn how to do that, and building our extension's UI with React.js was a new experience for everyone. A few members of the team also spent the weekend learning how to create an Express.js API with a MongoDB database, all from scratch! ## What's next for IDNI - I Don't Need It! We plan to look into banking integration, compatibility with a wider array of online stores, cleaner integration with small businesses, and a machine learning model that analyzes each metric individually before a final pass over these decision metrics outputs our verdict. Then, finally, publish to the Chrome Web Store!
## Inspiration We wanted to allow financial investors and people with political backgrounds to save valuable time reading financial and political articles by showing them what truly matters in the article, while highlighting the author's personal sentimental/political biases. We also wanted to promote objectivity and news literacy in the general public by making them aware of syntax and vocabulary manipulation. We hope that others are inspired to be more critical of wording and truly see the real news behind the sentiment -- especially considering today's current events. ## What it does Using Indico's machine learning textual analysis API, we created a Google Chrome extension and web application that allows users to **analyze financial/news articles for political bias, sentiment, positivity, and significant keywords.** Based on a short glance at our visualized data, users can immediately gauge whether the article is worth their valuable reading time, based on their own views. The Google Chrome extension allows users to analyze their articles in real time, with a single button press, popping up a minimalistic window with visualized data. The web application allows users to more thoroughly analyze their articles, adding highlights to keywords in the article on top of the previous functions so users can get to reading the most important parts. Though there is a possibility of opening this to the general public, we see tremendous opportunity in the financial and political sector in optimizing time and wording. ## How we built it We used Indico's machine learning textual analysis API, React, NodeJS, JavaScript, MongoDB, HTML5, and CSS3 to create the Google Chrome extension, web application, back-end server, and database. ## Challenges we ran into Surprisingly, one of the more challenging parts was implementing a performant Chrome extension. Design patterns we knew had to be put aside in favour of a specific one, which we gradually aligned with. It was overall a good experience using Google's APIs. ## Accomplishments that we're proud of We are especially proud of being able to launch a minimalist Google Chrome extension in tandem with a web application, allowing users to analyze news articles either at their leisure or to a more professional degree. We reached more than several of our stretch goals, and couldn't have done it without the amazing team dynamic we had. ## What we learned Trusting your teammates to tackle goals they had never tackled before, understanding compromise, and putting the team ahead of personal views made this hackathon one of the most memorable for everyone. Emotional intelligence played just as important a role as technical intelligence, and we learned all the better how rewarding and exciting it can be when everyone's rowing in the same direction. ## What's next for Need 2 Know We would like to consider what we have now a proof of concept. There is so much growing potential, and we hope to keep working together on a more professional product capable of automatically parsing entire sites, detecting new articles in real time, working with big data to visualize news sites' differences/biases, topic-centric analysis, and more. Working on this product has been a real eye-opener, and we're excited for the future.
## Inspiration Lyft's round-up-and-donate system really inspired us here. We wanted to find a way to benefit users and help society at the same time. We all want to give back somehow, but sometimes we don't know how, or we want to donate but don't really know how much to give or whether we can afford it. We wanted an easy way to build giving into our lives and spending habits. This would allow us to reach a wider range of users and utilize the power of the consumer society. ## What it does With the "Heart of Gold" Chrome extension, every purchase is rounded up to the nearest dollar (for example, a $9.50 purchase rounds up to $10, so $0.50 gets tracked as the "round up") and the spare change accumulates. The user gets to choose when they want to donate and which organization gets the money. ## How I built it We built a web app/Chrome extension using JavaScript/jQuery and HTML/CSS. The Firebase JavaScript SDK helped us store the accumulated round-ups. We make an AJAX call to the PayPal API, which takes care of payment for us. ## Challenges I ran into For all of the team, it was our first time creating a Chrome extension. For most of the team, it was our first time working heavily with JavaScript, let alone using technologies like Firebase and the PayPal API. Choosing which technology/platform would make the most sense was tough, but a Chrome extension felt most relevant, since many people shop online nowadays and an extension can run in the background and feel omnipresent. So we picked up JavaScript and started creating the extension. Lisa Lu integrated the PayPal API to handle donations and used HTML/CSS/JavaScript to create the extension pop-up. She also styled the user interface. Firebase was also completely new to us, but we chose to use it because it didn't require a two-step setup: a server (like Flask) plus a database (like MySQL or MongoDB). It also helped that we had a mentor guide us through. We learned a lot about the JavaScript language (mostly that we haven't even really scratched the surface of it) and the importance of avoiding race conditions. We also learned a lot about how to strategically structure our code (such as having a background.js to run Firebase database updates). ## Accomplishments that I'm proud of Veni, vidi, vici. We came, we saw, we conquered. ## What I learned We all learned that there are multiple ways to create a product to solve a problem. ## What's next for Heart of Gold Heart of Gold has a lot of possibilities: partnering with companies that want to advertise to users and with social good organizations, making recommendations to users on charities as well as places to shop, gamifying the experience, and expanding what a user can do with the round-up money they accumulate. Before those big dreams, cleaning up the infrastructure would be very important too.
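The round-up rule itself is only a couple of lines. Below is a Python sketch of it (the real extension is JavaScript; the Firebase write and PayPal call are left as comments).

```python
# The round-up rule at the core of Heart of Gold, sketched in Python.
from decimal import Decimal, ROUND_CEILING

def round_up_amount(purchase: str) -> Decimal:
    """Return the spare change between a purchase and the next whole dollar."""
    price = Decimal(purchase)
    next_dollar = price.to_integral_value(rounding=ROUND_CEILING)
    return next_dollar - price

total = sum((round_up_amount(p) for p in ["9.50", "3.20", "12.00"]), Decimal("0"))
print(total)   # 1.30 -- the amount accumulated for donation
# In the extension, this total would be written to Firebase and later sent
# through the PayPal API when the user chooses to donate.
```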
winning
## Inspiration We saw a short video about a Nepalese boy who had to walk 10 miles each way to school. From this video, we wanted to find a way to bring unique experiences to students in constrained locations. This could be for students in remote locations or in cash-strapped, low-income schools. We learned that we all share a passion for creating fair learning opportunities for everyone, which is why we created Magic School VR. ## What it does Magic School VR is an immersive virtual reality educational platform where you can attend one-on-one lectures with historical figures, influential scientists, or the world's best teachers. You can have Albert Einstein teach you quantum theory, Bill Nye the Science Guy explain the importance of mitochondria, or Warren Buffett educate you on investing. **Step 1:** Choose a subject *physics, biology, history, computer science, etc.* **Step 2:** Choose your teacher *(Elon Musk, Albert Einstein, Neil deGrasse Tyson, etc.)* **Step 3:** Choose your specific topic *Quantum Theory, Data Structures, WWII, Nitrogen cycle, etc.* **Step 4:** Get immersed in your virtual learning environment **Step 5:** Examination *Small quizzes, short answers, etc.* ## How we built it We used Unity, the Oculus SDK, and Google VR to build the VR platform, as well as a variety of tools and APIs such as: * The Lyrebird API to recreate Albert Einstein's voice. We trained the model by feeding it audio data; through machine learning, it generated audio clips for us. * Cinema 4D to create and modify 3D models. * Adobe Premiere to put together our 3D models and speech, as well as to chroma-key mask objects. * Adobe After Effects to create UI animations. * C# to code camera instructions, displays, and interactions in Unity. * Hardware used: Samsung Gear VR headset, Oculus Rift VR headset. ## Challenges we ran into We ran into a lot of errors deploying Magic School VR to the Samsung Gear headset, so instead we used the Oculus Rift. However, we had hardware limitations when it came to running the Oculus Rift off our laptops, as we did not have HDMI ports connected to dedicated GPUs. This led to a lot of searching around trying to find a desktop PC that could run Oculus. ## Accomplishments that we're proud of We are happy that we got the VR to work. Coming into QHacks we didn't have much experience in Unity, so a lot of hacking was required :) Every little accomplishment motivated us to keep grinding. The moment we managed to display our program in the VR headset, we were mesmerized and fell in love with the technology. We experienced first-hand how impactful VR can be in education. ## What we learned * Developing with VR is very fun!!! * How to build environments, camera movements, and interactions within Unity * You don't need a technical background to make cool stuff. ## What's next for Magic School VR Our next steps are to implement eye-tracking engagement metrics in order to see how engaged students are with the lessons. This will help us structure more engaging lesson plans. In terms of expanding it as a business, we plan on reaching out to VR partners such as Merge VR to distribute our lesson plans, as well as to educational institutions to create lesson plans designed for the public school curriculum. [via GIPHY](https://giphy.com/gifs/E0vLnuT7mmvc5L9cxp)
## Off The Grid Super awesome offline, peer-to-peer, real-time canvas collaboration iOS app # Inspiration Most people around the world will experience limited or no Internet access at times during their daily lives. We could be underground (on the subway), flying on an airplane, or simply be living in areas where Internet access is scarce and expensive. However, so much of our work and regular lives depend on being connected to the Internet. I believe that working with others should not be affected by Internet access, especially knowing that most of our smart devices are peer-to-peer Wifi and Bluetooth capable. This inspired me to come up with Off The Grid, which allows up to 7 people to collaborate on a single canvas in real-time to share work and discuss ideas without needing to connect to Internet. I believe that it would inspire more future innovations to help the vast offline population and make their lives better. # Technology Used Off The Grid is a Swift-based iOS application that uses Apple's Multi-Peer Connectivity Framework to allow nearby iOS devices to communicate securely with each other without requiring Internet access. # Challenges Integrating the Multi-Peer Connectivity Framework into our application was definitely challenging, along with managing memory of the bit-maps and designing an easy-to-use and beautiful canvas # Team Members Thanks to Sharon Lee, ShuangShuang Zhao and David Rusu for helping out with the project!
## Inspiration ☁️: Our initial inspiration came from the infamous desktop goose “virus”. The premise of the app revolves around a virtual goose causing mayhem on the user’s computer screen, annoying them as much as possible. We thought it was really interesting how the goose was able to take control of the user’s peripheral inputs, and decided to base our project around a similar concept, but used for goodwill. Of course, the idea of creating a goose app was certainly enthralling as well. Unfortunately, our designer mistook a goose for a duck, so some plans had to be changed (probably for the better). Currently, one of the most common cybercrimes is phishing (a social engineering attack to steal user data). Certain groups, especially children, are often vulnerable to these attacks, so we decided to create a duck companion that will protect the user from malicious phishing content. ## What it does💁‍♂️: "Detective Duck" is a real-time phishing detection application that prevents children from clicking misleading links on the web. It uses a combination of machine learning, cloud technologies, and various Python libraries to detect fraudulent links and where they are located on a page. ## Challenges we ran into 😳: * Examining and weighing bias during the data preprocessing step * Web scraping; difficulties in extracting HTML template links and search engine URLs, and finding the exact coordinates of these links * Working with new libraries such as pyautogui, mouse events, etc. ## Accomplishments that we're proud of 💪: * Successfully transferring our ML model from a Jupyter notebook to Google Cloud Platform (creating a bucket, deploying functions, maintaining servers) * Coding a responsive GUI (duck figure) based on mouse and keyboard detection from scratch, using OOP * Working effectively as a team on a project that demanded a variety of sections (UI/UX design, building the ML model, developing Python scripts) ## What we learned 🧠: * Implementing and deploying an ML model on the cloud * Working with new Python tools, like Selenium, to extract detailed web page information ## What's next for Detective Duck - Phishing Detection Application 💼: * Fine-tuning the machine learning model * Adding more features to our GUI * Making our coordinate detection more accurate
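Below is a hedged sketch of the link-extraction step: collecting each link's URL and on-page coordinates with Selenium so a flagged link can be pointed at by the duck. The phishing scorer is a stub standing in for the model deployed on Google Cloud, and the target page is a placeholder.

```python
# Hedged sketch: gather link URLs and coordinates with Selenium, then score them.
from selenium import webdriver
from selenium.webdriver.common.by import By

def phishing_score(url: str) -> float:
    """Stub for the cloud-hosted ML model (0 = safe, 1 = phishy)."""
    return 1.0 if "login-verify" in url else 0.0

driver = webdriver.Chrome()
driver.get("https://example.com")          # placeholder page

flagged = []
for link in driver.find_elements(By.TAG_NAME, "a"):
    url = link.get_attribute("href") or ""
    if phishing_score(url) > 0.5:
        loc, size = link.location, link.size      # coordinates for the duck to point at
        flagged.append((url, loc["x"], loc["y"], size["width"], size["height"]))

driver.quit()
print(flagged)
```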
winning
## Inspiration College students are busy, juggling classes, research, extracurriculars and more. On top of that, creating a todo list and schedule can be overwhelming and stressful. Personally, we used Google Keep and Google Calendar to manage our tasks, but these tools require constant maintenance and force the scheduling and planning onto the user. Several tools such as Motion and Reclaim help business executives to optimize their time and maximize productivity. After talking to our peers, we realized college students are not solely concerned with maximizing output. Instead, we value our social lives, mental health, and work-life balance. With so many scheduling applications centered around productivity, we wanted to create a tool that works **with** users to maximize happiness and health. ## What it does Clockwork consists of a scheduling algorithm and full-stack application. The scheduling algorithm takes in a list of tasks and events, as well as individual user preferences, and outputs a balanced and doable schedule. Tasks include a name, description, estimated workload, dependencies (either a start date or previous task), and deadline. The algorithm first traverses the graph to augment nodes with additional information, such as the eventual due date and total hours needed for linked sub-tasks. Then, using a greedy algorithm, Clockwork matches your availability with the closest task sorted by due date. After creating an initial schedule, Clockwork finds how much free time is available, and creates modified schedules that satisfy user preferences such as workload distribution and weekend activity. The website allows users to create an account and log in to their dashboard. On the dashboard, users can quickly create tasks using both a form and a graphical user interface. Due dates and dependencies between tasks can be easily specified. Finally, users can view tasks due on a particular day, abstracting away the scheduling process and reducing stress. ## How we built it The scheduling algorithm uses a greedy algorithm and is implemented with Python, Object Oriented Programming, and MatPlotLib. The backend server is built with Python, FastAPI, SQLModel, and SQLite, and tested using Postman. It can accept asynchronous requests and uses a type system to safely interface with the SQL database. The website is built using functional ReactJS, TailwindCSS, React Redux, and the uber/react-digraph GitHub library. In total, we wrote about 2,000 lines of code, split 2/1 between JavaScript and Python. ## Challenges we ran into The uber/react-digraph library, while popular on GitHub with ~2k stars, has little documentation and some broken examples, making development of the website GUI more difficult. We used an iterative approach to incrementally add features and debug various bugs that arose. We initially struggled setting up CORS between the frontend and backend for the authentication workflow. We also spent several hours formulating the best approach for the scheduling algorithm and pivoted a couple times before reaching the greedy algorithm solution presented here. ## Accomplishments that we're proud of We are proud of finishing several aspects of the project. The algorithm required complex operations to traverse the task graph and augment nodes with downstream due dates. The backend required learning several new frameworks and creating a robust API service. The frontend is highly functional and supports multiple methods of creating new tasks. 
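To make the greedy matching concrete, here is a miniature, hedged version of the idea: tasks sorted by deadline are filled into the earliest available hours. The data shapes are simplified stand-ins, not Clockwork's actual implementation (which also handles dependencies and preference-based rebalancing).

```python
# Miniature greedy scheduler: sort tasks by deadline, fill the earliest free hours.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    hours: int
    due_day: int          # day index by which the task must be done

# availability[d] = free hours on day d
availability = {0: 2, 1: 4, 2: 3, 3: 5}

def greedy_schedule(tasks, availability):
    schedule = {d: [] for d in availability}
    for task in sorted(tasks, key=lambda t: t.due_day):
        remaining = task.hours
        for day in sorted(availability):
            if day > task.due_day or remaining == 0:
                break
            use = min(availability[day], remaining)
            if use:
                schedule[day].append((task.name, use))
                availability[day] -= use
                remaining -= use
        if remaining:
            print(f"Warning: {task.name} does not fit before its deadline")
    return schedule

tasks = [Task("essay", 5, due_day=2), Task("pset", 3, due_day=1)]
print(greedy_schedule(tasks, dict(availability)))
```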
We also feel strongly that this product has real-world usability, and we are proud of validating the idea during YHack.
## What we learned
We both learned more about Python and Object Oriented Programming while working on the scheduling algorithm. Using the react-digraph package was also a good exercise in reading documentation and source code to leverage an existing product in an unconventional way. Finally, thinking about the applications of Clockwork helped us better understand our own needs within the scheduling space.
## What's next for Clockwork
Aside from polishing the several components we worked on during the hackathon, we hope to integrate Clockwork with Google Calendar to allow for time blocking and more seamless user interaction. We also hope to increase personalization and allow all users to create schedules that work best with their own preferences. Finally, we could add a metrics component to the project that helps users improve their time blocking and more effectively manage their time and energy.
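To make the greedy pass described above concrete, here is a minimal sketch of the core idea: tasks sorted by due date are matched against available hours, earliest deadline first. Clockwork's actual algorithm also propagates due dates through the dependency graph and rebalances for user preferences; the names and structures below are illustrative assumptions.

```python
# Toy version of the greedy scheduling pass (earliest deadline first).
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    hours: float                 # estimated workload
    due: int                     # day index of the deadline
    scheduled: list = field(default_factory=list)  # (day, hours) blocks

def greedy_schedule(tasks, free_hours_per_day):
    """free_hours_per_day: dict {day_index: hours available that day}."""
    remaining = dict(free_hours_per_day)
    for task in sorted(tasks, key=lambda t: t.due):        # closest deadline first
        hours_left = task.hours
        for day in sorted(d for d in remaining if d <= task.due):
            if hours_left <= 0:
                break
            used = min(hours_left, remaining[day])
            if used > 0:
                task.scheduled.append((day, used))
                remaining[day] -= used
                hours_left -= used
        if hours_left > 0:
            print(f"Warning: {task.name} cannot be finished by day {task.due}")
    return tasks
```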
## Inspiration
The most important part in any quality conversation is knowledge. Knowledge is what ignites conversation and drive - knowledge is the spark that gets people on their feet to take the first step to change. While we live in a time where we are spoiled by the abundance of accessible information, trying to keep up and consume information from a multitude of sources can give you information indigestion: it can be confusing to extract the most relevant points of a news story.
## What it does
Macaron is a service that allows you to keep track of all the relevant events that happen in the world without combing through a long news feed. When a major event happens in the world, news outlets write articles. Macaron aggregates articles from multiple sources and uses NLP to condense the information, classify the summary into a topic, and extract some keywords, then presents it all to the user in a digestible, bite-sized info page. Macaron also goes through various social media platforms (Twitter at the moment) to perform sentiment analysis and see what the public opinion is on the issue, displayed by the sentiment bar on every event card! Finally, Macaron finds the most relevant charities for an event (if applicable) and makes donating to them a super simple process. We think that by adding an easy call-to-action button on an article informing you about an event itself, we'll lower the barrier to everyday charity for the busy modern person.
## How we built it
Our front end was built on NextJS, with a neumorphism-inspired design incorporating usable and contemporary UI/UX design. We used the Tweepy library to scrape Twitter for tweets relating to an event, then used NLTK's VADER to perform sentiment analysis on each tweet to build a ratio of positive to negative tweets surrounding an event. We also used MonkeyLearn's API to summarize text, extract keywords, and classify the aggregated articles into a topic (Health, Society, Sports, etc.). The scripts were all written in Python, and we used a lot of Google Cloud Platform to help publish our app.
## What we learned
The process was super challenging as the scope of our project was way bigger than we anticipated! Between getting rate limited by Twitter and the script not running fast enough, we did hit a lot of roadbumps and had to make quick decisions to cut out the elements of the project we didn't or couldn't implement in time. Overall, however, the experience was really rewarding and we had a lot of fun moving fast and breaking stuff in our 24 hours!
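The sentiment bar boils down to a simple ratio over per-tweet VADER scores. Below is a hedged sketch of that computation, assuming the tweets for an event have already been collected (e.g. via Tweepy); the ±0.05 cutoff is the conventional VADER threshold, not necessarily the one Macaron uses.

```python
# Ratio of positive to negative tweets for one event card.
# nltk.download("vader_lexicon")  # one-time download
from nltk.sentiment.vader import SentimentIntensityAnalyzer

def sentiment_ratio(tweets: list[str]) -> float:
    sia = SentimentIntensityAnalyzer()
    positive = negative = 0
    for text in tweets:
        score = sia.polarity_scores(text)["compound"]
        if score >= 0.05:
            positive += 1
        elif score <= -0.05:
            negative += 1
    total = positive + negative
    return positive / total if total else 0.5  # 0.5 = neutral bar when nothing is opinionated
```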
## Inspiration
We've noticed that many educators draw common structures on boards, just to erase them and redraw them in common ways to portray something. Imagine your CS teacher drawing an array to show you how bubble sort works, and erasing elements for every swap. This learning experience can be optimized with AI.
## What It Does
Our software recognizes digits drawn and digitizes the information. If you draw a list of numbers, it'll recognize it as an array and let you visualize bubble sort automatically. If you draw a pair of axes, it'll recognize this and let you write an equation that it will automatically graph. The voice-assisted list operator allows one to execute the most commonly used list operation, "append", through voice alone. A typical use case would be a professor free to roam around the classroom and incorporate a more intimate learning experience, since edits need no longer be made by hand.
## How We Built It
The digits are recognized using a neural network trained on the MNIST handwritten digits dataset. Our code scans the canvas to find digits written in one continuous stroke, puts bounding boxes on them and cuts them out, shrinks them to run through the neural network, and outputs the digit and location info to the results canvas. For the voice-driven list operator, the backend server's written in Node.js/Express.js. It accepts voice commands through Bixby and sends them to Almond, which stores and updates the list in a remote server, and also in the web user interface.
## Challenges We Ran Into
* The canvas was difficult to work with using JavaScript
* It is unbelievably hard to test voice-driven applications amidst a room full of noisy hackers haha
## Accomplishments that We're Proud Of
* Our software can accurately recognize digits and digitize the info!
## What We Learned
* Almond's, like, *really* cool
* Speech recognition has a long way to go, but is also quite impressive in its current form.
## What's Next for Super Smart Board
* Recognizing trees and visualizing search algorithms
* Recognizing structures commonly found in humanities classes and implementing operations for them
* Leveraging Almond's unique capabilities to facilitate operations like inserting at a specific index and expanding uses to data structures besides lists
* More robust error handling, in case the voice command is misinterpreted (as it often is)
* Generating code to represent the changes made alongside the visual data structure representation
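The recognition step described above (crop the bounding box, shrink to 28x28, classify) looks roughly like the sketch below in Keras. Variable names and the saved model file are assumptions, and the real project does this in the browser rather than Python; the sketch also assumes the canvas is already MNIST-like (white strokes on a black background).

```python
# Sketch of one digit recognition: crop, resize, normalize, predict.
import numpy as np
import cv2
from tensorflow import keras

model = keras.models.load_model("mnist_cnn.h5")  # hypothetical trained MNIST model

def recognize_digit(canvas: np.ndarray, box: tuple[int, int, int, int]) -> int:
    x, y, w, h = box                                   # bounding box of one stroke
    crop = canvas[y:y + h, x:x + w]
    crop = cv2.resize(crop, (28, 28), interpolation=cv2.INTER_AREA)
    crop = crop.astype("float32") / 255.0              # match MNIST normalization
    probs = model.predict(crop.reshape(1, 28, 28, 1), verbose=0)
    return int(np.argmax(probs))
```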
winning
## Inspiration
As spring course registration approaches, I find myself constantly searching up classes and figuring out which classes satisfy which graduation requirements. Often, I find myself trying to view classes.berkeley.edu on my phone but it is extremely difficult to navigate the website UI on mobile. This inspired me to create a new and innovative way of navigating Berkeley's course catalogue that is more efficient on mobile devices. As I delved deeper into the chatbot development process, I realized that I didn't need to narrow my chatbot down to only a course finder. I wanted to make a chatbot that could perform a wide range of actions to best support the students here at Berkeley. An all-purpose, know-it-all being that could do everything and answer all questions directed at it.
## What it does
The chatbot currently has 4 main features. First, it has a help command that displays all of the other features that it contains. Second, it can provide course recommendations to students based on what graduation requirements they need to fulfill. Third, it can set reminders on students' Google Calendars for important events such as midterms and finals. Finally, it can direct students to lecture videos for courses they are taking.
## How we built it
We built the chatbot logic using JavaScript and connected it to the Cisco Webex Teams messaging application through Cisco's webhooks. The app is deployed to the web via Microsoft Azure at <https://oskibot.azurewebsites.net>. The website serves as an entry point for the chatbot to connect to the Cisco Webex application. To test the application, we used ngrok to create a temporary URL/tunnel that would expose our local computer (where the chatbot is located during testing) to the Internet. Ngrok provides a much faster alternative for testing web apps because redeploying web apps after every modification is extremely inefficient. To provide course recommendations, we used Google Firebase to store information about Berkeley courses. It sorts the courses into different categories to allow the user to quickly filter down to the most useful classes. We connected our chatbot to the database using Google's Firebase API to get read access to the course information. To set reminders on students' Google Calendars, we utilized Transposit to connect the Google Calendar API to an event planning form to automate the event creation process. The form specifies that it should be used to keep track of test dates but it can technically be used to create any type of Google Calendar event. To direct students to online lecture videos, we attempted to use Eluvio's API to optimize the video sharing process with blockchain technology, dubbed the Content Fabric. However, we were unable to upload our own videos to the Fabric so we created a demo website that uses a sample video from Eluvio.
## Challenges we ran into
One of our biggest challenges was deciding what we wanted to create. When we first came into CalHacks, we didn't really have an idea of what we wanted to build. Our main goal was to attend as many workshops as we could to learn new skills. We tinkered around with many different technologies and failed a bunch until we finally decided to focus on one. Once we finally decided on our chatbot idea, one of the challenges that we ran into was connecting the Webex application to our chatbot code. Since our bot runs on a local computer, it is not accessible from the Cisco Webex Cloud Platform.
To work around this limitation, we had to create a tunnel from our development environment (local computer) to the Internet. Thankfully, ngrok takes care of this for us. ngrok is a tunneling technology that exposes our local port (which the chatbot is listening on) to the Internet through a randomly generated public URL. Also, we wanted to utilize official Berkeley APIs to retrieve course information, but the process requires going through many bureaucratic gates to receive authorization to use the API. We ended up manually entering some course information into the Google database to run our initial tests and demos. If we were to continue this project after CalHacks, we would request access to the Berkeley API and connect it to our chatbot for a more complete database. We also struggled with learning how to use the APIs and how to connect different technologies together. However, after grinding through many online tutorials, we were able to figure out how to make them communicate correctly with each other.
## Accomplishments that we're proud of and What we learned
We are most proud of all the skills that we learned here at CalHacks. We learned in-depth how to use Node.js, connect different technologies together using APIs, and how to use Git to divide tasks between team members. We gained some intuition and understanding of network architecture through usage of ngrok and Microsoft Azure. We also gained a brief introduction to blockchain and this exposure has definitely piqued our interest in this emerging technology and its potential use cases.
## What's next for OskiBot - UC Berkeley Course Recommendation Chatbot
We would like to utilize the official UC Berkeley Classes API to have access to the most complete dataset. We would also like to use machine learning algorithms on the roster and signup data for each class to analyze classes more in-depth and create more filters/classifiers. For example, we can use the class registration data to figure out which classes have the highest dropout rate in the first few weeks to help users learn about which classes might be more difficult. We would also like to utilize machine learning to automate newer features and develop more capabilities in the chatbot to make it even more helpful for Berkeley students.
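The bot itself is written in JavaScript, but the course-recommendation query reduces to "fetch the courses, filter by the requirement the student still needs." Purely as an illustration, here is that idea with the Python firebase_admin SDK; the database URL, the key file, and the "requirements" field are assumptions, not the project's actual schema.

```python
# Illustrative only: filtering a Firebase Realtime Database of courses
# by the graduation requirement a student still needs.
import firebase_admin
from firebase_admin import credentials, db

cred = credentials.Certificate("service-account.json")  # hypothetical key file
firebase_admin.initialize_app(
    cred, {"databaseURL": "https://<project>.firebaseio.com"}  # placeholder URL
)

def recommend_courses(requirement: str) -> list[dict]:
    """Return courses tagged with the requested graduation requirement."""
    courses = db.reference("courses").get() or {}
    return [c for c in courses.values() if requirement in c.get("requirements", [])]
```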
## Inspiration
Access to course resources is often fragmented and lacks personalization, making it difficult for students to optimize their learning experiences. When students use Large Language Models (LLMs) for academic insights, they often encounter limitations due to LLMs' inability to interpret various data formats, like lecture audio or photos of notes. Additionally, context often gets lost between sessions, leading to fragmented study experiences. We created OpenContext to offer a comprehensive solution that enables students to manage, customize, and retain context across multiple learning resources.
## What it does
OpenContext is an all-in-one platform designed to provide students with personalized, contextually aware resources using LLMs. It aims to make course data open source by allowing students to share their course materials.
### Features:
* Upload and Process Documents: Users can upload or record audio, PDF, and image files related to their classes.
* Chat Assistant: Users can chat with an assistant which has the context of all the uploaded documents and can refer to course materials for any given question.
* Real-Time Audio Transcription: Record lectures directly in the browser, and the audio is transcribed and processed in real time.
* Document Merging and Quiz Generation: Users can combine documents from different formats and generate quizzes that mimic Quizlet-style flashcards.
* Progress Tracking: After completing quizzes, users receive a detailed summary of their performance.

**Full User Flow**: The user lands on a page where they are prompted with three options: create a new chat, upload documents, or generate quizzes. When the user navigates to upload documents, they have the option to record their current lecture in real time from their browser, or they can upload documents ranging from audio to PDF and images. We are using Tesseract for Optical Character Recognition on image files, OpenAI's speech-to-text API on audio files, and our own PDF parser for other class documents. Lectures recorded in the browser are transcribed and processed as they happen. After the transcription of a lecture or another class document is finished, it is displayed to the user. The user can then create a new chat and ask our AI assistant anything related to their course materials; the assistant has the full context of the uploaded documents and answers with references to those documents. The user also has the option to generate a quiz based on the transcription of the lecture that just got recorded, or to merge multiple class documents and generate a custom quiz out of all of them. The quizzes follow a Quizlet-flashcard format: a question is asked, four answers are provided as options, and after an option is chosen the website indicates whether the chosen answer is correct or incorrect. The score for each question is calculated, and at the end of the quiz a summary of the performance is written so the student can track their progress.
## How We Built It
* Frontend: Built with React for a responsive and dynamic user interface.
* Backend: Developed with FastAPI, handling various tasks from file processing to vector database interactions.
* AI Integration: Utilized OpenAI's Whisper for real-time speech-to-text transcription and embedding functionalities.
* OCR: Tesseract is used for Optical Character Recognition on uploaded images, allowing us to convert handwritten or printed text into machine-readable text.
* Infrastructure: Hosted with Defang for production-grade API management, alongside Cloudflare for data operations and performance optimization.
### Tech Stack:
* Figma: We used Figma to design our warm color palette and simplistic theme for pages and the logo.
* Python: Python scripts are used ubiquitously in OpenContext: deployment scripts for Defang, pipelines to our vector databases, our API servers, OpenAI wrappers, and other miscellaneous tasks.
* React: The React framework is used to build all of the components, pages, and routes on the frontend.
* Tesseract OCR: For converting images to text.
* FastAPI: We have multiple FastAPI apps that we use for multiple purposes. Endpoints that respond to different types of file requests from the user, connections to our vector DB, and other scripting tasks are all handled by the FastAPI endpoints that we have built.
* OpenAI API: We are using multiple OpenAI services, such as Whisper for ASR and text embeddings for later function calling and vector storage.
### Sponsor Products:
* Defang: We used Defang in order to test and host both of our API systems in a production environment. Here is the production PDF API: <https://alpnix-pdf-to-searchable--8080.prod1b.defang.dev/docs>
* Terraform: We used Terraform (a main.tf script) to test and validate the configuration for our deployment services, such as our API hosting with Defang and nginx settings.
* Midnight: For open-source data sharing, Midnight provides us with the perfect tool to encrypt the shared information. We created our own wallet as the host server, and each user is able to create their own private wallet to share files securely.
* Cloudflare: We are using multiple Cloudflare services:
  + Vectorize: In addition to using Pinecone, we have fully utilized Vectorize, Cloudflare's vector DB, at a high level.
  + Cloudflare Registrar: We are using Cloudflare's domain registration to buy our domain.
  + Proxy Traffic: We are using Cloudflare's proxy traffic service to handle requests in a secure and efficient manner.
  + Site Analytics: Cloudflare's data analytics tool to help us analyze the traffic as the site is launched.
* Databricks: We have fully utilized the Databricks Starter Application to familiarize ourselves with the efficient open-source data sharing feature of our product. After running some tests, we also decided to integrate LangChain in the future to enhance the context-aware nature of our system.
## Challenges we ran into
One significant challenge was efficiently extracting text from images. This required converting images to PDFs, running OCR to overlay text onto the original document, and accurately placing the text for quiz generation. Ensuring real-time transcription accuracy and managing the processing load on our servers were also challenging.
## Accomplishments that we're proud of
* Tool Mastery: In a short time, we learned and successfully implemented production environment tools like NGINX, Pinecone, and Terraform.
* API Integration: Seamlessly integrated OpenAI's Whisper and Tesseract for multi-format document processing, enhancing the utility of LLMs for students.
* Quiz Generation Pipeline: Developed an efficient pipeline for custom quiz generation from multiple class resources.
## What we learned * Infrastructure Management: Gained experience using Defang, Terraform, and Midnight to host and manage a robust data application. * Prompt Engineering: Through David Malan’s session, we enhanced our ability to prompt engineer, configuring ChatGPT’s API to fulfill specific roles and restrictions effectively. ## What's next for Open Context We aim to develop a secure information-sharing system within the platform, enabling students to share their study materials safely and privately with their peers. Additionally, we plan to introduce collaborative study sessions where students can work together on quizzes and share real-time notes. This could involve shared document editing and group quiz sessions to enhance the sense of open source.
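To make the ingestion step described above concrete: audio goes through OpenAI's speech-to-text and images through Tesseract, ending up as plain text ready for embedding. The library calls below are real, but the helper names are ours and file handling and error cases are simplified.

```python
# Minimal ingestion sketch: one function per input format.
from openai import OpenAI
from PIL import Image
import pytesseract

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def transcribe_audio(path: str) -> str:
    """Send a lecture recording to OpenAI's Whisper-based transcription."""
    with open(path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text

def ocr_image(path: str) -> str:
    """Run Tesseract OCR on a photo of handwritten or printed notes."""
    return pytesseract.image_to_string(Image.open(path))
```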
## Inspiration Blockchain has created new opportunities for financial empowerment and decentralized finance (DeFi), but it also introduces several new considerations. Despite its potential for equitability, malicious actors can currently take advantage of it to launder money and fund criminal activities. There has been a recent wave of effort to introduce regulations for crypto, but the ease of money laundering proves to be a serious challenge for regulatory bodies like the Canadian Revenue Agency. Recognizing these dangers, we aimed to tackle this issue through BlockXism! ## What it does BlockXism is an attempt at placing more transparency in the blockchain ecosystem, through a simple verification system. It consists of (1) a self-authenticating service, (2) a ledger of verified users, and (3) rules for how verified and unverified users interact. Users can "verify" themselves by giving proof of identity to our self-authenticating service, which stores their encrypted identity on-chain. A ledger of verified users keeps track of which addresses have been verified, without giving away personal information. Finally, users will lose verification status if they make transactions with an unverified address, preventing suspicious funds from ever entering the verified economy. Importantly, verified users will remain anonymous as long as they are in good standing. Otherwise, such as if they transact with an unverified user, a regulatory body (like the CRA) will gain permission to view their identity (as determined by a smart contract). Through this system, we create a verified market, where suspicious funds cannot enter the verified economy while flagging suspicious activity. With the addition of a legislation piece (e.g. requiring banks and stores to be verified and only transact with verified users), BlockXism creates a safer and more regulated crypto ecosystem, while maintaining benefits like blockchain’s decentralization, absence of a middleman, and anonymity. ## How we built it BlockXism is built on a smart contract written in Solidity, which manages the ledger. For our self-authenticating service, we incorporated Circle wallets, which we plan to integrate into a self-sovereign identification system. We simulated the chain locally using Ganache and Metamask. On the application side, we used a combination of React, Tailwind, and ethers.js for the frontend and Express and MongoDB for our backend. ## Challenges we ran into A challenge we faced was overcoming the constraints when connecting the different tools with one another, meaning we often ran into issues with our fetch requests. For instance, we realized you can only call MetaMask from the frontend, so we had to find an alternative for the backend. Additionally, there were multiple issues with versioning in our local test chain, leading to inconsistent behaviour and some very strange bugs. ## Accomplishments that we're proud of Since most of our team had limited exposure to blockchain prior to this hackathon, we are proud to have quickly learned about the technologies used in a crypto ecosystem. We are also proud to have built a fully working full-stack web3 MVP with many of the features we originally planned to incorporate. ## What we learned Firstly, from researching cryptocurrency transactions and fraud prevention on the blockchain, we learned about the advantages and challenges at the intersection of blockchain and finance. 
We also learned how to simulate how users interact with one another on the blockchain, such as through peer-to-peer verification and making secure transactions using Circle wallets. Furthermore, we learned how to write smart contracts and implement them with a web application.
## What's next for BlockXism
We plan to use IPFS instead of MongoDB to better maintain decentralization. For our self-sovereign identity service, we want to incorporate an API to recognize valid proof of ID, and potentially move the logic into another smart contract. Finally, we plan on having a chain scraper to automatically recognize unverified transactions and edit the ledger accordingly.
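The ledger rules described above live in the Solidity smart contract; purely to make the flow concrete, here is a conceptual Python model of those rules under our reading of the write-up, not the contract's actual code.

```python
# Conceptual model of the BlockXism ledger rules (not the Solidity contract).
class VerifiedLedger:
    def __init__(self):
        self.verified: set[str] = set()
        self.flagged: set[str] = set()

    def verify(self, address: str, proof_of_identity: bytes) -> None:
        # In BlockXism the encrypted identity is stored on-chain; here we
        # only record that the address passed self-authentication.
        self.verified.add(address)

    def transact(self, sender: str, receiver: str) -> None:
        if sender in self.verified and receiver not in self.verified:
            # Suspicious funds must not enter the verified economy:
            # revoke verification and flag the sender for the regulator.
            self.verified.discard(sender)
            self.flagged.add(sender)
```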
partial
## Source Code
<https://github.com/khou22/SoFly-Scanner>
## Inspiration
The motivation for our application came when we realized how much our college uses flyers to advertise events. From dance recitals to scientific talks, events are neatly summarized and hung on campus in visible areas. A huge part of our sense of community comes from these events, and as excursions into Princeton township have shown us, events planned in non-centralized communities rely on flyers and other written media to communicate activities to others. Both of us have fond memories attending community events growing up, and we think (through some surveying of our student body) that decreased attendance at such events is due to a few factors. (1) People forget. It's not a flyer they can always take with them, and so what they think is instantaneously exciting soon fades from their memory. (2) It is not digital – in a world where everything else is.
## What it does
Our vision is an application that can allow a user to snap a picture of a flyer, and have their phone extract relevant information and make a calendar event based on that. This will allow users to digitize flyers, and hopefully provide a decentralized mechanism for communities to grow close again.
## How I built it
Our application uses Optical Character Recognition techniques (along with Otsu's method to preprocess a picture, and an exposure and alignment adjustment algorithm) to extract a dump of recognized text. This text is error-prone and quite messy, so we use canonical Natural Language Processing algorithms to tokenize the text and "learn" which terms are important. The Machine Learning component in this project involves a Naive Bayesian Classifier, which can categorize and weight these terms for (as of now) internal use. This was combined with a "loose NFA" implementation (we coined the term to describe an overly general regex with multiple matches) whose matches were processed using an algorithm that determined the most probable match. From the flyers we extract date, time, location, and our best guess at the title of the text. We made a design choice to limit the time our OCR took, which leads to worse holistic text recognition, but still allows us to extract these fields using our NLP methods.
## Challenges I ran into
There were a ton of challenges working on this. Optical Character Recognition, Machine Learning, and Natural Language Processing are all open fields of research, and our project drew on all of them. The debugging process was brutal, especially since, given their nature, bad inputs would mess up even the best-written code.
## Accomplishments that I'm proud of
This was an ambitious project, and we brought it to the finish line. We are both very proud of that.
## What I learned
Team chemistry is a huge part of success. If my partner and I were any less compatible this would definitely not have worked.
## What's next for SoFly Scanner
We need more time to train our ML algorithm, and want to give it wider geographic functionality. Additionally, we want to incorporate more complex image processing techniques to give our OCR the best chance of success!
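A minimal sketch of the first two steps described above: Otsu binarization before OCR, then a deliberately loose regex over the noisy text dump to pull out candidate times. The real pipeline scores multiple matches and also extracts dates, locations, and a title; the pattern below is only illustrative.

```python
# Otsu preprocessing + OCR, then a "loose" time regex over the noisy output.
import re
import cv2
import pytesseract

def flyer_text(image_path: str) -> str:
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return pytesseract.image_to_string(binary)

# Over-general on purpose, because OCR output is error-prone; multiple
# matches are expected and would be ranked downstream.
TIME_PATTERN = re.compile(r"\b\d{1,2}\s*[:.]?\s*\d{0,2}\s*(?:am|pm|AM|PM)\b")

def candidate_times(text: str) -> list[str]:
    return TIME_PATTERN.findall(text)
```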
## Inspiration
Imagine a world where your best friend is standing in front of you, but you can't see them. Or you go to read a menu, but you are not able to because the restaurant does not have specialized braille menus. For millions of visually impaired people around the world, those are not hypotheticals, they are facts of life. Hollywood has largely solved this problem in entertainment. Audio descriptions allow the blind or visually impaired to follow the plot of movies easily. With Sight, we are trying to bring the power of audio description to everyday life.
## What it does
Sight is an app that allows the visually impaired to recognize their friends, get an idea of their surroundings, and have written text read aloud. The app also uses voice recognition to listen for speech commands to identify objects, people, or text to read.
## How we built it
The front-end is a native iOS app written in Swift and Objective-C with XCode. We use Apple's native vision and speech APIs to give the user intuitive control over the app.
---
The back-end service is written in Go and is served with NGrok.
---
We repurposed the Facebook tagging algorithm to recognize a user's friends. When the Sight app sees a face, it is automatically uploaded to the back-end service. The back-end then "posts" the picture to the user's Facebook privately. If any faces show up in the photo, Facebook's tagging algorithm suggests possibilities for who out of the user's friend group they might be. We scrape this data from Facebook to match names with faces in the original picture. If and when Sight recognizes a person as one of the user's friends, that friend's name is read aloud.
---
We make use of the Google Vision API in three ways:
* To run sentiment analysis on people's faces, to get an idea of whether they are happy, sad, surprised, etc.
* To run Optical Character Recognition on text in the real world, which is then read aloud to the user.
* For label detection, to identify objects and surroundings in the real world, which the user can then query about.
## Challenges we ran into
We experienced a plethora of challenges over the course of the hackathon.
1. Each member of the team wrote their portion of the back-end service in a language they were comfortable in. However, when we came together, we decided that combining services written in different languages would be overly complicated, so we decided to rewrite the entire back-end in Go.
2. When we rewrote portions of the back-end in Go, this gave us a massive performance boost. However, this turned out to be both a curse and a blessing. Because of the limitation of how quickly we are able to upload images to Facebook, we had to add a workaround to ensure that we do not check for tag suggestions before the photo has been uploaded.
3. When the Optical Character Recognition service was prototyped in Python on Google App Engine, it became mysteriously rate-limited by the Google Vision API. Re-generating API keys proved to be of no avail, and ultimately we overcame this by rewriting the service in Go.
## Accomplishments that we're proud of
Each member of the team came to this hackathon with a very disjoint set of skills and ideas, so we are really glad about how well we were able to build an elegant and put-together app. Facebook does not have an official algorithm for letting apps use their facial recognition service, so we are proud of the workaround we figured out that allowed us to use Facebook's powerful facial recognition software.
We are also proud of how fast the Go back-end runs, but more than anything, we are proud of building a really awesome app.
## What we learned
Najm taught himself Go over the course of the weekend, a language he had no experience with before coming to YHack. Nathaniel and Liang learned about the Google Vision API and how to use it for OCR, facial detection, and facial emotion analysis. Zak learned about building a native iOS app that communicates with data-rich APIs. We also learned about making clever use of Facebook's API to make use of their powerful facial recognition service. Over the course of the weekend, we encountered more problems and bugs than we'd probably like to admit. Most of all, we learned a ton of valuable problem-solving skills while we worked together to overcome these challenges.
## What's next for Sight
If Facebook ever decides to add an API that allows facial recognition, we think that would allow for even more powerful friend-recognition functionality in our app. Ultimately, we plan to host the back-end on Google App Engine.
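The three Google Vision uses listed above map directly onto three client-library calls. A hedged Python sketch follows (the production calls happen from the Go back end, and this was only prototyped in Python); the API surface is real, but the surrounding camera and speech logic is omitted.

```python
# Label detection, OCR, and face "sentiment" in one pass over an image.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def describe(image_bytes: bytes) -> dict:
    image = vision.Image(content=image_bytes)
    labels = client.label_detection(image=image).label_annotations
    text = client.text_detection(image=image).text_annotations
    faces = client.face_detection(image=image).face_annotations
    return {
        "labels": [l.description for l in labels],          # surroundings to query about
        "text": text[0].description if text else "",        # OCR dump to read aloud
        "joy": [f.joy_likelihood for f in faces],            # rough emotion signal
    }
```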
## Inspiration
As spring course registration approaches, I find myself constantly searching up classes and figuring out which classes satisfy which graduation requirements. Often, I find myself trying to view classes.berkeley.edu on my phone but it is extremely difficult to navigate the website UI on mobile. This inspired me to create a new and innovative way of navigating Berkeley's course catalogue that is more efficient on mobile devices. As I delved deeper into the chatbot development process, I realized that I didn't need to narrow my chatbot down to only a course finder. I wanted to make a chatbot that could perform a wide range of actions to best support the students here at Berkeley. An all-purpose, know-it-all being that could do everything and answer all questions directed at it.
## What it does
The chatbot currently has 4 main features. First, it has a help command that displays all of the other features that it contains. Second, it can provide course recommendations to students based on what graduation requirements they need to fulfill. Third, it can set reminders on students' Google Calendars for important events such as midterms and finals. Finally, it can direct students to lecture videos for courses they are taking.
## How we built it
We built the chatbot logic using JavaScript and connected it to the Cisco Webex Teams messaging application through Cisco's webhooks. The app is deployed to the web via Microsoft Azure at <https://oskibot.azurewebsites.net>. The website serves as an entry point for the chatbot to connect to the Cisco Webex application. To test the application, we used ngrok to create a temporary URL/tunnel that would expose our local computer (where the chatbot is located during testing) to the Internet. Ngrok provides a much faster alternative for testing web apps because redeploying web apps after every modification is extremely inefficient. To provide course recommendations, we used Google Firebase to store information about Berkeley courses. It sorts the courses into different categories to allow the user to quickly filter down to the most useful classes. We connected our chatbot to the database using Google's Firebase API to get read access to the course information. To set reminders on students' Google Calendars, we utilized Transposit to connect the Google Calendar API to an event planning form to automate the event creation process. The form specifies that it should be used to keep track of test dates but it can technically be used to create any type of Google Calendar event. To direct students to online lecture videos, we attempted to use Eluvio's API to optimize the video sharing process with blockchain technology, dubbed the Content Fabric. However, we were unable to upload our own videos to the Fabric so we created a demo website that uses a sample video from Eluvio.
## Challenges we ran into
One of our biggest challenges was deciding what we wanted to create. When we first came into CalHacks, we didn't really have an idea of what we wanted to build. Our main goal was to attend as many workshops as we could to learn new skills. We tinkered around with many different technologies and failed a bunch until we finally decided to focus on one. Once we finally decided on our chatbot idea, one of the challenges that we ran into was connecting the Webex application to our chatbot code. Since our bot runs on a local computer, it is not accessible from the Cisco Webex Cloud Platform.
To work around this limitation, we had to create a tunnel from our development environment (local computer) to the Internet. Thankfully, ngrok takes care of this for us. ngrok is a tunneling technology that exposes our local port (which the chatbot is listening on) to the Internet through a randomly generated public URL. Also, we wanted to utilize official Berkeley APIs to retrieve course information, but the process requires going through many bureaucratic gates to receive authorization to use the API. We ended up manually entering some course information into the Google database to run our initial tests and demos. If we were to continue this project after CalHacks, we would request access to the Berkeley API and connect it to our chatbot for a more complete database. We also struggled with learning how to use the APIs and how to connect different technologies together. However, after grinding through many online tutorials, we were able to figure out how to make them communicate correctly with each other.
## Accomplishments that we're proud of and What we learned
We are most proud of all the skills that we learned here at CalHacks. We learned in-depth how to use Node.js, connect different technologies together using APIs, and how to use Git to divide tasks between team members. We gained some intuition and understanding of network architecture through usage of ngrok and Microsoft Azure. We also gained a brief introduction to blockchain and this exposure has definitely piqued our interest in this emerging technology and its potential use cases.
## What's next for OskiBot - UC Berkeley Course Recommendation Chatbot
We would like to utilize the official UC Berkeley Classes API to have access to the most complete dataset. We would also like to use machine learning algorithms on the roster and signup data for each class to analyze classes more in-depth and create more filters/classifiers. For example, we can use the class registration data to figure out which classes have the highest dropout rate in the first few weeks to help users learn about which classes might be more difficult. We would also like to utilize machine learning to automate newer features and develop more capabilities in the chatbot to make it even more helpful for Berkeley students.
winning
## Inspiration
Have you ever been lying on the bed and simply cannot fall asleep? Statistics show that listening to soothing music while getting to sleep improves sleeping quality, because the music helps to release stress and anxiety accumulated from the day. However, do you really want to get off your cozy little bed to turn off the music, or do you want to keep it playing for the entire night? Neither sounds good, right?
## What it does
We designed a sleep helper: SleepyHead! This product would primarily function as a music player, but with a key difference: it would be designed specifically to help users fall asleep. SleepyHead connects with an audio device that plays a selection of soft music. It is connected to three sensors that detect the condition of acceleration, sound, and light in the environment. Once SleepyHead detects the user has fallen asleep, it will tell the audio device to get into sleeping mode so that it won't disrupt the user's sleep.
## How we built it
The following are the main components of our product:
* Sensors: we implement an accelerometer to detect movement of the user and a sound detector to detect sound made by the user.
* Timer: gives a 20-minute loop. The data collected by the sensors is evaluated every 20 minutes. If no activity is detected, SleepyHead tells the audio device to enter sleeping mode; if activity is detected, SleepyHead starts another 20-minute loop.
* Microcontroller board: we use an Arduino Uno as the processor. It is the motherboard of SleepyHead; it connects all the above-mentioned elements and runs the algorithms.
* Audio device: connected with SleepyHead by Bluetooth.
## Challenges we ran into
We had this idea for SleepyHead: a sleep speaker that could play soft music to help people fall asleep, and even detect when they were sleeping deeply to turn off the music automatically. But when we started working on the project, we realized our team didn't have all the electrical equipment and kits we needed to build it. Unfortunately, some of the supplies we had were too old to use, and some of them weren't working properly. It was frustrating to deal with these obstacles, but we didn't want to give up on our idea. As novices in connecting real products and C++, we struggled to connect all the wires and jumpers to the circuit board, and figuring out how to use the coding language C++ to control all the kits properly and make sure they functioned well was challenging. However, our crew didn't let the difficulties discourage us. With some help and lots of effort, we eventually overcame all the challenges and made SleepyHead a reality. Now, it feels amazing to see people using it to improve their sleep and overall health.
## Accomplishments that we're proud of
One of the most outstanding aspects of SleepyHead is that it's able to use an accelerometer and a sound detector to detect user activity, and therefore decide whether the user has fallen asleep based on the data. Also, coupled with its ability to display the time and activate a soft LED light, it's a stylish and functional addition to any bedroom. These features, combined with its ability to promote healthy sleep habits, make SleepyHead a truly outstanding and innovative product.
## What we learned
Overall, there are many things that we have learned through this hackathon. First, we learned how to discuss ideas and thoughts with each team member more effectively.
In addition, we learned how to put our knowledge into practice and create an actual product. Finally, through the hackathon we came to understand that good innovation can be really useful and essential: choosing the right direction can save you tons of time.
## What's next for SleepyHead
SleepyHead's next goals are:
* Gather more feedback from users. This feedback can help us determine which functions are feasible and which are not, and can provide insights on what changes or improvements need to be made to SleepyHead.
* Conduct research. Conducting research can help us identify trends and gaps in our future potential market.
* Iterate and test. Once we have a prototype of the product, it is important to iterate and test it to see how it performs in the real world.
* Stay up to date with industry trends. It's important for us to stay on top of industry trends and emerging technologies, as this can provide SleepyHead with new ideas and insights to improve the product.
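SleepyHead's firmware runs on the Arduino in C++; purely to make the 20-minute detection loop concrete, here is a conceptual Python sketch of the same logic. The thresholds and callback names are made up, not values from the device.

```python
# Conceptual model of the 20-minute inactivity loop (not the Arduino firmware).
import time

WINDOW_SECONDS = 20 * 60
MOVE_THRESHOLD = 0.15     # g, hypothetical
SOUND_THRESHOLD = 40      # arbitrary sensor units, hypothetical

def monitor(read_accel, read_sound, enter_sleep_mode):
    """read_accel/read_sound are sensor callbacks; enter_sleep_mode fades out the music."""
    while True:
        activity = False
        start = time.time()
        while time.time() - start < WINDOW_SECONDS:
            if abs(read_accel()) > MOVE_THRESHOLD or read_sound() > SOUND_THRESHOLD:
                activity = True
            time.sleep(1)
        if not activity:          # a whole window with no movement or sound
            enter_sleep_mode()
            return
        # otherwise start another 20-minute window
```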
## Inspiration
We have all had that situation where our alarm goes off and we wake up super drowsy despite getting at least 8 hours of sleep. It turns out, this drowsiness is due to sleep inertia, which is at its worst when we are woken up in REM or deep sleep.
## What it does
Our device uses a 3-axis accelerometer and a noise sensor to detect when a user is likely in light sleep, which is often indicated by movement and snoring. We eliminate the use of a phone by placing all of the electronics within a stuffed bird that can be easily placed on the bed. The alarm only goes off if light sleep is detected within 30 minutes of a user-defined alarm time. The alarm time is set by the user through an app on their phone, and analytics and patterns about their sleep can be viewed on a phone app or through our web application.
## How we built it
We first started with the hardware components of the system. All of the sensors are sourced from the MLH Hardware store, so we then needed to take the time to figure out how to properly integrate all the sensors with the Arduino/Genuino 101. Following this, we began development of the Arduino code, which involved some light filtering and adjustment for ambient conditions. A web application was simultaneously developed using React. After some feedback from mentors, we determined that putting our electronics in a watch wasn't the best approach and that a separate object would be the most ideal for comfort. We determined that a stuffed animal would be the best option. At the time, we did not have a stuffed animal; however, we figured out that we could win one from Hootsuite. We managed to get one of these animals, which we then used as housing for our electronics. Unfortunately, this was the farthest we could get with our technical progress, so we only had time to draw some mock-ups for the mobile application.
## Challenges we ran into
We had some issues getting the sensors to work, which were fixed by using the datasheets provided at Digikey. Additionally, we needed to find a way to eliminate some of the noise from the sensors, which we eventually fixed using some low-pass software filtering. A lot of time was invested in attempting to connect the web application to the electronics; unfortunately, we did not have enough time to finish this.
## Accomplishments that we're proud of
We're proud that we were able to integrate all of the electronics into a small stuffed bird that is self-powered.
## What we learned
We learned quite a bit about the impact of the sleep cycle on sleep inertia, and how this impacts daily activities and long-term health. We also learned a lot about software filtering and how to properly identify the most important information in a datasheet.
## What's next for Sleep Sweet
We would first develop the connection between the electronics and the web application, and eventually the mobile application. After this, we would prefer to get more robust electronics that are smaller in size. Lastly, we would like to integrate the electronics into a pillow or mattress, which is more desirable for adults.
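The "low-pass software filtering" mentioned above can be as simple as a single-pole exponential moving average; a tiny sketch follows. The smoothing factor is a guess, not the value used on the device (which implements the filter in Arduino code).

```python
# Single-pole low-pass (exponential moving average) over a noisy sensor stream.
def low_pass(samples, alpha=0.1):
    """Yield smoothed values; smaller alpha = heavier smoothing."""
    smoothed = None
    for x in samples:
        smoothed = x if smoothed is None else alpha * x + (1 - alpha) * smoothed
        yield smoothed
```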
## Inspiration
Humans left behind nearly $1,000,000 in change at airports worldwide in 2018. Imagine what that number is outside of airports. Now, imagine the impact we could make if that leftover money went toward charities that make the world a better place. This is why we should SpareChange - bringing change, with change.
## What it does
This app rounds up spending from each purchase to the nearest dollar and uses that accumulated money to donate to charities & nonprofits.
## How we built it
We built a cross-platform mobile app using Flutter, powered by a Firebase backend. We set up Firebase authentication to ensure secure storage of user info, and used Firebase Cloud Functions to ensure we keep developer credentials locked away in our secure cloud. We used the CapitalOne Hackathon API to simulate bank accounts, transactions between bank accounts, and withdrawals.
## Challenges we ran into
1. Implementing a marketplace of organizations that could automatically suggest to the user tangible items to donate to the non-profits in lieu of directly gifting money.
2. Getting cloud functions to work.
3. Properly implementing the APIs the way we need them to function.
## Accomplishments that we're proud of
Making a functioning app. For some of us this was our first hackathon, so it was amazing to have such a great first experience. Overcoming obstacles - it was empowering to prevail over hardships that we previously thought would be impossible to hurdle. Creating something with the potential to help other people live better lives.
## What we learned
On the surface, software such as Flutter, Dart & Firebase, which we found very useful. More importantly, we realized how quickly an idea can come to fruition. The perspective was really enlightening. If we could make something this helpful in a weekend, what could we do in a year? Or a lifetime?
## What's next for SpareChange
We believe that SpareChange could be a large-scale application. We would love to experiment with real bank accounts and see how it works, and also build more features, such as using machine learning to suggest the best charities to donate to based on real-time news.
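The round-up itself is simple arithmetic; the sketch below shows how each purchase contributes spare change to the donation pool, working in cents to avoid floating-point drift. This mirrors the description above rather than the app's actual Dart code.

```python
# Round each purchase up to the next dollar and accumulate the difference.
def spare_change(purchase_cents: int) -> int:
    return (-purchase_cents) % 100   # cents needed to reach the next dollar

def donation_total(purchases_cents: list[int]) -> int:
    return sum(spare_change(p) for p in purchases_cents)

# e.g. purchases of $3.40 and $7.25 -> 60 + 75 = 135 cents donated
assert donation_total([340, 725]) == 135
```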
losing
## Inspiration
My passion for learning web development.
## What it does
It is a website built using HTML and CSS.
## How we built it
I learned HTML and CSS from YouTube tutorials and implemented them.
## Challenges we ran into
There were several challenges during the learning phase, as I was a beginner to coding.
## Accomplishments that we're proud of
My friends cheered me on for the website, and I am very satisfied to have built it.
## What we learned
I learnt web development basics; there is still a long way ahead, as I need to learn several things like JavaScript and Node.js.
## What's next for Infinity Store
Let's see.
## Inspiration
Our hackathon project draws its inspiration from the sheer exhaustion of mere existence during this event. The vibrant purple of our t-shirts was clearly the pinnacle of our design prowess, and our remarkable lack of coding expertise has contributed to the masterpiece you see before you.
## What it does
After a marathon of a whopping 36 hours spent trying things out, we've astonishingly managed to emerge with absolutely nothing by way of an answer. Clearly, our collective genius knows no bounds. But it represents the **experience** we had along the way. The workshops, the friendships we created, the things we learned, all amounted to this.
## How we built it
At the 35th hour of our time in the Engineering Quad, after several attempted projects that we were unable to fulfill due to endless debugging, lack of experience, and failed partnerships, we realized we had to settle for something we could actually submit.
## Challenges we ran into
We had difficulty implementing the color-changing background using CSS, but managed to figure it out. We originally used Bootstrap Studio 5 as our visual website builder, but we got too confused on how to use it.
## Accomplishments that we're proud of
As first-time in-person hackers, and first-time hackers in general, we are both very proud that we got to meet a lot of great people and take advantage of our experience at the hackathon. We are also incredibly proud that we have SOMETHING to submit.
## What we learned
We learned that the software developer industry is one that is extremely competitive in today's market, that having a referral is essential to stick out, and that taking the extra step when it comes to researching companies is also essential.
## What's next for Cool Stuff
Stay tuned for more useless products coming your way!!
## Inspiration
The 2016 presidential election was marred by the apparent involvement of Russian hackers who built bots that spewed charged political discourse and made it their goal to create havoc on internet forums. Hacking of this kind will never be completely stopped, but if you can't beat them... why not join them? The Russian IRA shouldn't be the only one making politically motivated bots! This bot informs voters of their congresspeople's contact information so they can call their representative to lobby for issues they are passionate about.
## What it does
Contactyourrep is a Reddit bot that uses sentiment analysis and location information to comment congresspeople's contact information. If a comment contains a keyword (e.g. a city or a congressperson's name) and it registers a low sentiment score, the bot will comment with the constituent hotline of the relevant U.S. Representative or Senator. Try it out on r/contactyourrep
## How I built it
The bot was created using the PRAW API, which allowed an easier connection with Reddit's interface. I then integrated Phone2Action's legislator lookup API, which can provide location-specific legislator information based on an address. u/contactyourrep is triggered by the presence of a state or city keyword and politically charged words in the comment's body.
## Challenges I ran into
Finding specific keywords is not as easy as you would think. Also, on one occasion the bot ran a little wild during testing, causing the PRAW API to disallow posting for periods of time. This meant it was harder to debug, and I often had to edit large blocks of code at once.
## Accomplishments that I'm proud of
This is my first Reddit bot! It was a very fun project, and now that I've been through the whole process I'm excited about the possibility of additional projects in various Reddit communities.
## What I learned
It is surprisingly easy to make a bot that can post relevant content on social media. This has powerful implications for the integrity of our political discourse online. If I can make a bot that can inform angry voters in under 36 hours, imagine what a country's dedicated hacking force can accomplish (I'm looking at you, Russia). Though I'm hoping to use this bot to inform people of contact information regardless of political creed, it would be simple to reconfigure this bot to only comment based on pro-Democrat/Republican content.
## What's next for Contact Your Rep
After demoing at TreeHacks, I plan to keep u/contactyourrep active on r/politics, hopefully increasing political involvement.
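A minimal sketch of the main loop described above: stream comments with PRAW, look for a location keyword plus a negative sentiment score, and reply with the matching hotline. Credentials, the keyword table, the sentiment cutoff, and the contact lookup (Phone2Action in the real bot) are stubbed with placeholders.

```python
# Sketch of the bot's comment-stream loop (placeholders throughout).
import praw
from nltk.sentiment.vader import SentimentIntensityAnalyzer

reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="contactyourrep",
                     username="contactyourrep", password="...")
sia = SentimentIntensityAnalyzer()
HOTLINES = {"california": "Your CA senators' constituent hotline: <phone number>"}  # placeholder table

for comment in reddit.subreddit("contactyourrep").stream.comments(skip_existing=True):
    text = comment.body.lower()
    for place, info in HOTLINES.items():
        # Low compound score = charged/negative comment worth responding to.
        if place in text and sia.polarity_scores(text)["compound"] < -0.3:
            comment.reply(f"Sounds like you want to be heard! {info}")
            break
```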
losing
## Inspiration
Last year we did a project with our university looking to optimize the implementation of renewable energy sources for residential homes. Specifically, we determined the best designs for home turbines given different environments. In this project, we decided to take this idea of optimizing the implementation of home power further.
## What it does
A web application allows users to enter an address and determine whether installing a backyard wind turbine or a solar panel is more profitable/productive for their location.
## How we built it
Using an HTML front-end, we send the user's address to a Python Flask back end where we use a combination of external APIs, web scraping, researched equations, and our own logic and math to predict how the selected piece of technology will perform.
## Challenges we ran into
We were hoping to use Google's Earth Engine to gather climate data, but were never approved for the $25 credit, so we had to find alternatives. There aren't a lot of good options for gathering the necessary solar and wind data, so we had to use a combination of APIs and web scraping, which ended up being a bit more convoluted than we hoped. Also, integrating the back-end with the front-end was very difficult because we don't have much experience with full-stack development working end to end.
## Accomplishments that we're proud of
We spent a lot of time coming up with the idea for EcoEnergy and we really think it has potential. Home renewable energy sources are quite an investment, so having a tool like this really highlights the benefits and should incentivize people to buy them. We also think it's a great way to try to popularize at-home wind turbine systems by directly comparing them to the output of a solar panel, because depending on the location they can be a better investment.
## What we learned
During this project we learned how to predict the power output of solar panels and wind turbines based on wind speed and sunlight duration. We learned how to connect a back-end built in Python to a front-end built in HTML using Flask. We learned even more random stuff about optimizing wind turbine placement so we could recommend different turbines depending on location.
## What's next for EcoEnergy
The next step for EcoEnergy would be to improve the integration between the front and back end, as well as to find ways to gather more location-based climate data, which would allow EcoEnergy to predict power generation with greater accuracy.
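The comparison ultimately rests on two textbook estimates, sketched below. The swept area, efficiency figures, and panel size are illustrative defaults, not the values EcoEnergy actually uses.

```python
# Textbook power estimates for a small turbine and a solar panel.
AIR_DENSITY = 1.225  # kg/m^3 at sea level

def turbine_watts(wind_speed_ms: float, rotor_area_m2: float = 3.0, cp: float = 0.35) -> float:
    # P = 0.5 * rho * A * Cp * v^3
    return 0.5 * AIR_DENSITY * rotor_area_m2 * cp * wind_speed_ms ** 3

def solar_watts(irradiance_wm2: float, panel_area_m2: float = 1.6, efficiency: float = 0.20) -> float:
    # P = G * A * eta
    return irradiance_wm2 * panel_area_m2 * efficiency
```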
## Inspiration
The US Energy Information Administration provides a large dataset of over 12,000 different houses across the country spanning 900 different factors of a household (e.g. number of rooms, size of the garage, average temperature) related to annual energy consumption. The data seemed very useful, but too expansive to use. So we wanted to see if we could make it easier to find which factors influence your energy consumption the most.
## What it does
We built a website where a user can submit some basic information about their household and living situation. From there, it sends that information to our backend where it is processed by our pre-trained deep neural network. The neural network then predicts their expected annual energy usage in kilowatt-hours based on that array of input, which it compares to the over 12,000 other samples we analyzed from previous years.
## How we built it
The website was built using Vue.js, the backend written in Flask, and the machine learning was done using Keras, a high-level Python library that runs on top of TensorFlow. Additional regression was conducted using R.
## Challenges we ran into
Training the neural network on a multi-dimensional vectorized input was difficult because it required normalizing each of the varying inputs to between 0 and 1, retrieving an output, and then returning it back to its original range. Additionally, we ran into some trouble actually converting the JSON format of the client's data from the POST request to a Pandas DataFrame that we needed to feed into the DNN. We needed to reduce the dimensionality of the dataset from 900 distinct variables to a sizeable number that a user could reliably input into a web app. We tackled this problem using two approaches. The first was through a machine learning technique called Principal Component Analysis (PCA), which aims to find uncorrelated components within the dataset and helps to reduce the dimensionality along combinations of its variables. The second method we used was to perform a series of regressions between each variable and the observed energy usage, to find the variables with the highest correlation values. From these two lists we then picked a good set of variables that we thought would be the best fit for our project.
## Accomplishments that we're proud of
Incorporating machine learning and statistical analysis on a large dataset was both challenging and rewarding for all of us because we had never done it before and learned a lot on the way!
## What we learned
We learned how to use machine learning libraries in Python and what things to consider when training a neural network to perform linear regression.
## What's next for Uncover Your Usage
After improving our model's accuracy and front-end UI, we can expand our product to take in a larger set of features and better help users determine the cause of their continually increasing energy bill.
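A sketch of the regression setup described above: min-max scale the inputs and the kWh target to [0, 1], train a small dense network, and invert the scaling at prediction time. Layer sizes, the feature count, and the placeholder data are assumptions; the real model is trained on the selected EIA survey features.

```python
# Normalize-to-[0,1] regression with a small Keras network, then invert the scaling.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Placeholder data standing in for the selected EIA survey features (target in kWh).
X_train = np.random.rand(500, 12)            # 12 user-facing features, assumed
y_train = np.random.rand(500, 1) * 20000     # annual kWh

def minmax(a):
    lo, hi = a.min(axis=0), a.max(axis=0)
    return lo, hi, (a - lo) / (hi - lo + 1e-9)

x_lo, x_hi, X_scaled = minmax(X_train)
y_lo, y_hi, y_scaled = minmax(y_train)

model = Sequential([
    Dense(64, activation="relu", input_shape=(X_train.shape[1],)),
    Dense(32, activation="relu"),
    Dense(1, activation="sigmoid"),          # keeps the output in [0, 1]
])
model.compile(optimizer="adam", loss="mse")
model.fit(X_scaled, y_scaled, epochs=20, batch_size=32, verbose=0)

def predict_kwh(features: np.ndarray) -> float:
    scaled = (features - x_lo) / (x_hi - x_lo + 1e-9)
    y_hat = model.predict(scaled.reshape(1, -1), verbose=0)[0, 0]
    return (y_hat * (y_hi - y_lo) + y_lo).item()   # back to kWh
```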
## What is 'Titans'? VR gaming shouldn't just be a lonely, single-player experience. We believe that we can elevate the VR experience by integrating multiplayer interactions. We imagined a mixed VR/AR experience where a single VR player's playing field can be manipulated by 'Titans' -- AR players who can plan out the VR world by placing specially designed tiles -- blocking the VR player from reaching the goal tile. ## How we built it We had three streams of development/design to complete our project: the design, the VR experience, and the AR experience. For design, we used Adobe Illustrator and Blender to create the assets that were used in this project. We had to be careful that our tile designs were recognizable by both humans and AR, as the tiles would be used by the AR players to lay out the environment the VR players would be placed in. Additionally, we pursued a low-poly art style with our 3D models, in order to reduce design time in building intricate models and to complement the retro/pixel-style of our eventual AR environment tiles. For the VR side of the project, we chose to build a Unity VR application targeting Windows and Mac with the Oculus Rift. One of our most notable achievements here is a custom terrain tessellation and generation engine that mimics several environmental biomes represented in our game, as well as the integration of a multiplayer service powered by Google Cloud Platform. The AR side of the project uses Google's ARCore and Google Cloud Anchors API to seamlessly stream anchors (the tiles used in our game) to other devices playing in the same area. ## Challenges we ran into Hardware issues were one of the biggest time-drains in this project. Setting up all the programs -- Unity and its libraries, Blender, etc. -- took up the initial hours following the brainstorming session. The biggest challenge was our Alienware MLH laptop resetting overnight. This was a frustrating moment for our team, as we were in the middle of testing our AR features, such as the compatibility of our environment tiles. ## Accomplishments that we're proud of We're proud of the consistent effort and style that went into the game design. From the physical environment tiles to the 3D models, we tried our best to create a pleasant-to-look-at game style. Our game world generation is something we're also quite proud of. The fact that we were able to develop an immersive world that we can explore via VR is quite surreal. Additionally, we were able to accomplish some form of AR experience where the phone recognizes the environment tiles. ## What we learned All of our teammates learned something new: multiplayer in Unity, ARCore, Blender, etc. Most importantly, we learned the various technical and planning challenges involved in AR/VR game development. ## What's next for Titans AR/VR We hope to eventually connect the AR portion and VR portion of the project together the way we envisioned: where AR players can manipulate the virtual world of the VR player.
losing
[Example brainrot output](https://youtube.com/shorts/vmTmjiyBTBU) [Demo](https://youtu.be/W5LNiKc7FB4) ## Inspiration Graphic design is a skill like any other that, if honed, allows us to communicate and express ourselves in marvellous ways. To do so, it's massively helpful to receive specific feedback on your designs. Thanks to recent advances in multi-modal models, such as GPT-4o, even computers can provide meaningful design feedback. What if we put a sassy spin on it? ## What it does Adobe Brainrot is an unofficial add-on for Adobe Express that analyzes your design, creates a meme making fun of it, and generates a TikTok-subway-surfers-brainrot-style video with a Gordon Ramsay-esque personality roasting your design. (Watch the attached video for an example!) ## How we built it The core of this app is an add-on for Adobe Express. It talks to a server (which we operate locally) that handles AI, meme-generation, and video-generation. Here's a deeper breakdown: 1. The add-on screenshots the Adobe Express design and passes it to a custom-prompted session of GPT-4o using the ChatGPT API. It then receives the top design issue & its location (if applicable). 2. It picks a random meme format and asks ChatGPT for the top & bottom text of said meme in relation to the design flaw (e.g. "Too many colours"). Using the memegen.link API, it then generates the meme on-the-fly and inserts it into the add-on UI. 3. Using yt-dlp, it downloads a "brainrot" background clip (e.g. Subway Surfers gameplay). It then generates a ~30-second roast using ChatGPT based on the design flaw & creates a voiceover from it using OpenAI Text-to-Speech. Finally, it uses FFmpeg to overlay the user's design on top of the "brainrot" clip, add the voiceover in the background, and output a video file to the user's computer. ## Challenges we ran into We were fairly unfamiliar with the Adobe Express SDK, so it was a learning curve getting the hang of it! It was especially hard due to having two SDKs (UI & Sandbox). Thankfully, it makes use of existing standards like JSX. In addition, we researched prompt-engineering techniques to ensure that our ChatGPT API calls would return responses in expected formats, to avoid unexpected failure. There were quite a few challenges generating the video. We referenced a project that did something similar, but we had to rewrite most of it to get it working. We had to use a different yt-dl core due to extraction issues. FFmpeg would often fail even with no changes to the code or parameters. ## Accomplishments that we're proud of * It generates brainrot (for better or worse) videos * Getting FFmpeg to work (mostly) * The AI outputs feedback * Working out the SDK ## What we learned * FFmpeg is very fickle ## What's next for Adobe Brainrot We'd like to flesh out the UI further so that it more *proactively* provides design feedback, to become a genuinely helpful (and humorous) buddy during the design process. In addition, adding subtitles matching the text-to-speech would perfect the video.
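The add-on's own code isn't included here, so the Python sketch below only illustrates two of the building blocks named in the breakdown: constructing a memegen.link image URL and the FFmpeg overlay/voiceover step. The meme template, file paths, and scale values are placeholders, and the real project drives these from its Node/JS server rather than Python.

```python
# Sketch of the meme + video steps described above. The template name, file
# paths, and overlay size are assumptions; ffmpeg must be available on PATH.
import subprocess
import requests


def build_meme_url(template: str, top: str, bottom: str) -> str:
    # memegen.link renders images straight from the URL; spaces become underscores
    clean = lambda s: s.replace(" ", "_")
    return f"https://api.memegen.link/images/{template}/{clean(top)}/{clean(bottom)}.png"


def render_roast_video(design_png: str, voiceover_mp3: str, background_mp4: str, out_mp4: str):
    # Overlay the user's design on the brainrot clip and lay the TTS roast underneath
    filter_graph = "[1:v]scale=540:-1[d];[0:v][d]overlay=(W-w)/2:(H-h)/2[v]"
    subprocess.run([
        "ffmpeg", "-y",
        "-i", background_mp4, "-i", design_png, "-i", voiceover_mp3,
        "-filter_complex", filter_graph,
        "-map", "[v]", "-map", "2:a",
        "-c:v", "libx264", "-shortest", out_mp4,
    ], check=True)


if __name__ == "__main__":
    url = build_meme_url("drake", "using a grid", "seventeen fonts on one poster")
    with open("meme.png", "wb") as f:
        f.write(requests.get(url, timeout=30).content)
    render_roast_video("design.png", "voiceover.mp3", "subway_surfers.mp4", "roast.mp4")
```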
## Inspiration We both came to the hackathon without a team and couldn't come up with a project idea at the hackathon. So we went to 9gag to look at memes and it suddenly struck us to make an Alexa skill that generates memes. The idea sounded cool and somewhat within our skill sets, so we went forward with it. We procrastinated till the second day and eventually started coding at noon. ## What it does The name says it all: you basically scream at Alexa to make memes (well, not quite; speaking works too). We have a working website, [www.memescream.net](http://www.memescream.net), which can generate memes using Alexa or keyboard and mouse. We also added features to download memes and share them to Facebook and Twitter. ## How we built it We divided the responsibilities such that one of us handled the entire front-end, while the other got his hands dirty with the backend. We used HTML, CSS, and jQuery to make the web framework for the app, and used Alexa, Node.js, PHP, Amazon Web Services (AWS), and FFmpeg to create the backend for the skill. We started coding by the noon of the second day and continued uninterrupted until the project was concluded. ## Challenges we ran into We ran into challenges with understanding the GIPHY API, parsing text into GIFs, and transferring data from Alexa to the web app. (Well, waking up the next day was also a pretty daunting challenge, all things considered.) ## Accomplishments that we're proud of We're proud of our persistence in actually finishing the project on time and fulfilling all the requirements formulated in the planning phase. We also learned about the GIPHY API and learned a lot from the workshops we attended. (We're also proud that we could wake up a bit early the following day.) ## What we learned Since we ran into issues when we began connecting the web app with the skill, we gained a lot of insight into using PHP, jQuery, Node.js, FFmpeg, and the GIPHY API. ## What's next for MemeScream We're eager to publish a more robust public website and submit the Alexa skill to Amazon.
## Inspiration We often found ourselves stuck at the start of the design process, not knowing where to begin or how to turn our ideas into something real. In large organisations these issues are not only inconvenient and costly, but also slow down development. That is why we created ConArt AI to make it easier. It helps teams get their ideas out quickly and turn them into something real without all the confusion. ## What it does ConArt AI is a gamified design application that helps artists and teams brainstorm ideas faster in the early stages of a project. Teams come together in a shared space where each person has to create a quick sketch and provide a prompt before the timer runs out. The sketches are then turned into images and everyone votes on their team's design where points are given from 1 to 5. This process encourages fast and fun brainstorming while helping teams quickly move from ideas to concepts. It makes collaboration more engaging and helps speed up the creative process. ## How we built it We built ConArt AI using React for the frontend to create a smooth and responsive interface that allows for real-time collaboration. On the backend, we used Convex to handle game logic and state management, ensuring seamless communication between players during the sketching, voting, and scoring phases. For the image generation, we integrated the Replicate API, which utilises AI models like ControlNet with Stable Diffusion to transform the sketches and prompts into full-fledged concept images. These API calls are managed through Convex actions, allowing for real-time updates and feedback loops. The entire project is hosted on Vercel, which is officially supported by Convex, ensuring fast deployment and scaling. Convex especially enabled us to have a serverless experience which allowed us to not worry about extra infrastructure and focus more on the functions of our app. The combination of these technologies allows ConArt AI to deliver a gamified, collaborative experience. ## Challenges we ran into We faced several challenges while building ConArt AI. One of the key issues was with routing in production, where we had to troubleshoot differences between development and live environments. We also encountered challenges in managing server vs. client-side actions, particularly ensuring smooth, real-time updates. Additionally, we had some difficulties with responsive design, ensuring the app looked and worked well across different devices and screen sizes. These challenges pushed us to refine our approach and improve the overall performance of the application. ## Accomplishments that we're proud of We’re incredibly proud of several key accomplishments from this hackathon. Nikhil: Learned how to use a new service like Convex during the hackathon, adapting quickly to integrate it into our project. Ben: Instead of just showcasing a local demo, he managed to finish and fully deploy the project by the end of the hackathon, which is a huge achievement. Shireen: Completed the UI/UX design of a website in under 36 hours for the first time, while also planning our pitch and brand identity, all during her first hackathon. Ryushen: He worked on building React components and the frontend, ensuring the UI/UX looked pretty, while also helping to craft an awesome pitch. Overall, we’re most proud of how well we worked as a team. Every person filled their role and brought the project to completion, and we’re happy to have made new friends along the way! 
## What we learned We learned how to effectively use Convex by studying its documentation, which helped us manage real-time state and game logic for features like live sketching, voting, and scoring. We also learned how to trigger external API calls, like image generation with Replicate, through Convex actions, making the integration of AI seamless. On top of that, we improved our collaboration as a team, dividing tasks efficiently and troubleshooting together, which was key to building ConArt AI successfully. ## What's next for ConArt AI We plan to incorporate user profiles in order to let users personalise their experience and track their creative contributions over time. We will also be adding a feature to save concept art, allowing teams to store and revisit their designs for future reference or iteration. These updates will enhance collaboration and creativity, making ConArt AI even more valuable for artists and teams working on long-term projects.
partial
## Inspiration We were planning how to go forward with our project from last PennApps [link](http://devpost.com/software/foodship) (a web app that donates leftover food from restaurants to homeless shelters) and we wanted to know how organizations with similar aims have gone about it in the past. Moreover, we wanted to know how efficient they were, and how their actions were perceived. What we wanted was an opinion. Then it hit us that we're *not* the only ones looking for an opinion. So many companies spend billions of dollars on consumer research to find out whether the public loves or hates their products. We forget that Twitter is filled to the brim with the very data we're looking for: opinions on literally everything. ## What it does Emogram is a web app that performs sentiment analysis on recent tweets using Natural Language Processing (NLP) and aggregates this data onto a map. The map shows the population density of each emotion connected to the topic at hand (be it a company's product or any other issue). ## How we built it We took tweets from Twitter using the Twitter Search API and sorted them by location. We used the IBM Watson Alchemy API to analyze the tweets for their emotion. We used d3.js for the data visualization aspect and Linode to host our whole web app.
## Inspiration Violence is running rampant in Canada, & even more so in many other places in the world. To address this insecurity, we devised SafeWays, which guides us home safely. ## What it does SafeWays is a webapp that allows you to browse your surroundings and also determines the safest and closest possible route you can take. This allows for easy and peaceful navigation even in unfamiliar places. ## How we built it We used a front end stack of React with vanilla CSS that made API calls to the Google Maps API for the mapview & search function. There are two servers: a Flask server that uses Selenium to scrape the Ottawa Crime Map for data, & an Express server that uses axios to query said server as well as the Google Maps API & GeoNames to compile the raw data into useful data for the web application. ## Challenges we ran into The Ottawa Crime Map had a lot of bloat, so it was hard to navigate in order to find out how to acquire the data we needed. There were also discrepancies in location data, names in particular, between the Ottawa Crime Map & Google Maps. Devising our own Map API was also a thrill. ## Accomplishments that we're proud of Devising the web scraper to get data from the Ottawa Crime Map, which had so much network bloat, IMO. Meshing together different Map APIs to create something new, for example getting all potential roads between two points. Last but not least, creating a mapview that we can call our own. ## What we learned We learned how to scrape data from the monstrous government site, how to work with maps & their respective APIs, as well as how to wrestle with CORS. We also learned that there is a lot of crime & we should try to stay safe out there. ## What's next for SafeWays As the availability and format of crime data varies across municipalities, we wish to devise a standard protocol to centralize this data, & whilst doing so improve the feature with more frequent & detailed updates based on individual events, as the current data seems to be updated only at set intervals. With that, we would also be able to implement Solace. Another step would be to implement a feature that actually highlights red zones or something of the like to show the severity of crimes in different areas.
## Inspiration Want to see how a product, service, person or idea is doing in the court of public opinion? Market analysts are experts at collecting data from a large array of sources, but monitoring public happiness or approval ratings is notoriously difficult. Usually, focus groups and extensive data collection are required before any estimates can be made, wasting both time and money. Why bother with all of this when the data you need can be easily mined from social media websites such as Twitter? Through aggregating tweets, performing sentiment analysis and visualizing the data, it would be possible to observe trends in how happy the public is about any topic, providing a valuable tool for anybody who needs to monitor customer satisfaction or public perception. ## What it does Queries the Twitter Search API to return relevant tweets that are sorted into buckets of time. Sentiment analysis is then used to categorize whether each tweet is positive or negative in regards to the search term. The collected data is visualized with graphs such as average sentiment over time, percentage of positive to percentage of negative tweets, and other in-depth trend analyses. An NLP algorithm that involves the clustering of similar tweets was developed to return a representative summary of good and bad tweets. This can show what most people are happy or angry about and can provide insight on how to improve public reception. ## How we built it The application is split into a **Flask** back-end and a **ReactJS** front-end. The back-end queries the Twitter API, parses and stores relevant information from the received tweets, and calculates any extra statistics that the front-end requires. The back-end then provides this information in a JSON object that the front-end can access through a `get` request. The React front-end presents all UI elements in components styled by [Material-UI](https://material-ui.com/). [React-Vis](https://uber.github.io/react-vis/) was utilized to compose charts and graphs that present our queried data in an efficient and visually-appealing way. ## Challenges we ran into The Twitter API throttles querying to 1000 tweets per minute, a number much less than what this project needs in order to provide meaningful data analysis. This means that, after returning 1000 tweets, we would have to wait another minute before continuing to request tweets. With some keywords returning hundreds of thousands of tweets, this was a huge problem. In addition, extracting a representative summary of good and bad tweet topics was challenging, as features that represent contextual similarity between words are not very well defined. Finally, we found it difficult to design a user interface that displays the vast amount of data we collect in a clear, organized, and aesthetically pleasing manner. ## Accomplishments that we're proud of We're proud of how well we visualized our data. In the course of a weekend, we managed to collect and visualize a large amount of data in six different ways. We're also proud that we managed to implement the clustering algorithm. In addition, the application is fully functional with nothing manually mocked! ## What we learned We learnt about several different natural language processing techniques. We also learnt about the Flask REST framework and best practices for building a React web application.
## What's next for Twitalytics We plan on cleaning some of the code that we rushed this weekend, implementing geolocation filtering and data analysis, and investigating better clustering algorithms and big data techniques.
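The back-end aggregation step Twitalytics describes (tweets "sorted into buckets of time", then averaged) can be sketched roughly as below with pandas. The sentiment scores are placeholders for whatever the analyzer assigns, and the column names are assumptions rather than the project's real schema.

```python
# Sketch of the time-bucketing + aggregation step described above, using pandas.
# Sentiment is assumed to be a score in [-1, 1] produced upstream.
import pandas as pd

tweets = pd.DataFrame({
    "created_at": pd.to_datetime([
        "2019-01-18 10:05", "2019-01-18 10:40", "2019-01-18 11:15", "2019-01-18 11:50",
    ]),
    "sentiment": [0.8, -0.2, 0.4, -0.6],
})

grouped = tweets.set_index("created_at").resample("1H")["sentiment"]
buckets = pd.DataFrame({
    "avg_sentiment": grouped.mean(),
    "pct_positive": grouped.apply(lambda s: (s > 0).mean() * 100),
    "tweet_count": grouped.size(),
})

# JSON-able records for the front end's charts
print(buckets.reset_index().to_dict(orient="records"))
```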
losing
Shashank Ojha, Sabrina Button, Abdellah Ghassel, Joshua Gonzales # ![](https://drive.google.com/uc?export=view&id=1admSn1s1K2eioqtCisLGZD4zXZjGWsy8) "Reduce Reuse Recoin" ## Theme Covered: The themes covered in this project include post-pandemic restoration for the environment, small businesses, and personal finance! The pitched app uses an extensively trained AI system to detect trash and sort it to the proper bin from your smartphone. While using the app, users are incentivized to help restore the environment through the opportunity to earn points, which will be redeemable in partnering stores. ## Problem Statement: As our actions continue to damage the environment, it is important that we invest in solutions that help restore our community through more sustainable practices. Moreover, an average person creates over 4 pounds of trash a day, and the EPA has found that over 75% of the waste we create is recyclable. Because garbage sorting rules vary so much from town to town, students have reportedly agreed that accurately sorting garbage is difficult, which leads to significant misplacement of waste. Our passion to make our community globally and locally more sustainable has fueled us to use artificial intelligence to develop an app that not only makes sorting garbage as easy as using Snapchat, but also rewards individuals for sorting their garbage properly. For this reason, we would like to introduce Recoin. This intuitive app allows a person to scan any product and easily find the bin the trash belongs in based on their location. Furthermore, if they attempt to sell their product or use our app, they will earn points which will be redeemable in partnering stores that advocate for the environment. The more the user uses the app, the more points they receive, resulting in better items to redeem in stores. With this app we will not only help restore the environment, but also increase sales in small businesses, which struggled to recover during the pandemic. ## About the App: ### Incentive Breakdown: ![](https://drive.google.com/uc?export=view&id=1CU2JkOJqplaTxNo7B8s_UXN_UaEydsd3) Please note that these expenses are estimated expectations for potential benefit packages and are not defined yet. We are proposing a $1 discount at participating small businesses when 100 coffee/drink cups are returned to participating restaurants. This will be easy for small companies to uphold financially, while providing a motivation for individuals to use our scanner. Amazon spends around $0.50 to $2 on packaging per order, so we propose that Amazon provide a $15 gift card per 100 packages returned to Amazon. As the packaging for 100 packages can cost from $50 to $200, this incentive would save Amazon roughly 3 to 13 times the value of the gift card, while providing positive public perception for reusing. As recycling plastic into 3D-printing filament is an up-and-coming technology that can revolutionize environmental sustainability, we would like to create a system where providing materials for such causes gives individuals benefits. Lastly, as metals become more valuable, we hope to provide recyclable metals to companies to reduce their expenses through our platform. The next step in this endeavor will be to offer incentives for returning batteries and electronics as well.
## User Interface: ![](https://drive.google.com/uc?export=view&id=1QR2fNvrkpB7q_PAI_5iZfL3M7i9nV6m5) ## Technological Specifics and Next Steps: ![](https://drive.google.com/uc?export=view&id=1vlfSjhtg-_JZZzVITaRMiOsXC-9TwyNS) ### Frontend ![](https://drive.google.com/uc?export=view&id=1TSAcKAPLFtZdJrn8OZVRfkBvS_29Y8Dk) We used React.js to develop components for the webcam footage and screenshot capture. It was also used to create the rest of the overall UI design. ### Backend #### Waste Detection AI: ![](https://drive.google.com/uc?export=view&id=1Fx8uAW3I_OntNn74aZXYQbThnfPkZs56) Using PyTorch, we utilized open-source trash detection software and data originally developed by IamAbhinav03 to train the trash detection system. The system is trained, tested, and validated on over 2,500 images. To improve the system, we increased the number of epochs to 8 rather than 5 (the number of passes over the training data). This increased accuracy by 4% over the original system. We also modified the train/validation/test split to 70%, 10%, and 20% respectively, as more prominent AI studies have found this distribution to give the best results. Currently, the system has a predicted accuracy of 94%, but in the future we plan on using reinforcement learning during beta testing to continuously improve our algorithm. Reinforcement learning lets the system learn from user corrections, making the predictions more accurate. This will allow the AI to become more precise as the app gains more popularity. A Flask server is used to make contact with the waste detection neural network; an image is sent from the front end as a POST request, the Flask server generates a tensor and runs it through the neural net, then sends the response from the algorithm back to the front end (a rough sketch of this endpoint appears after this project's references below). This response is the classification of the waste as either cardboard, glass, plastic, metal, paper or trash. #### Possible next steps: By using the Mapbox API and the Google Suite/APIs, we will be creating maps to find recycling locations and a thorough Recoin currency system that can easily be converted into real money for consumers and businesses (as shown in the user interface above). ## Stakeholders: After the completion of this project, we intend to continue to pursue the app to improve our communities' sustainability. After looking at the demographic of interest in our school itself, we know that students will be interested in this app, not only for its convenience but also for the reward system. Local cafes and Starbucks already have initiatives to improve public perception and support the environment (e.g., using paper straws and cups), so supporting this new endeavor will be in their interest. As branding is everything in a business, having a positive public image will increase sales. ![](https://drive.google.com/uc?export=view&id=189sA5C0KDT8VIaRdD6jQQtnN32qO87h8) ## Amazon: As Amazon continues to be the leading online marketplace, more packages will continue to be made, which can be detrimental to the world's limited resources. We will be training the AI to recognize Amazon packages. With such training, we would like to implement a system where the packaging can be sent back to Amazon to be reused for credit. This will allow Amazon to form a more environmentally friendly corporate image, while also saving on resources.
## Small Businesses: As the pandemic has caused a significant decline in small business revenue, we intend to mainly partner with small businesses in this project. The software will also help increase small business sales: by supporting the app, students will be more inclined to go to their stores due to the positive public image, and the added discounts will attract more customers. In the future, we wish to train the AI to also detect trash of value (e.g., broken smartphones, precious metals), so that consumers can sell it in a bundle to local companies that can benefit from the material (e.g., 3D-printing companies that convert used plastic to filament). ## Timeline: The following timeline will be used to ensure that our project will be on the market as soon as possible: ![](https://drive.google.com/uc?export=view&id=1jA6at4g31KpTOJdwna-10UwP_U-z4JUp) ## About the Team: We are first- and second-year students from Queen's University who are very passionate about sustainability and designing innovative solutions to modern-day problems. We all have the mindset to give any task our all and obtain the best results. We have a diverse skill set on the team and throughout the hackathon we utilized it to work efficiently. We are first-time hackathoners, so even though we each had experience in our own fields, this whole experience was very new and educationally rewarding for us. We would like to thank the organisers and mentors for all their support and for organizing the event. ## Code References • <https://medium.datadriveninvestor.com/deploy-your-pytorch-model-to-production-f69460192217> • <https://narainsreehith.medium.com/upload-image-video-to-flask-backend-from-react-native-app-expo-app-1aac5653d344> • <https://pytorch.org/tutorials/beginner/saving_loading_models.html> • <https://pytorch.org/tutorials/intermediate/flask_rest_api_tutorial.html> • <https://pytorch.org/get-started/locally/> • <https://www.kdnuggets.com/2019/03/deploy-pytorch-model-production.html> ## References for Information • <https://www.rubicon.com/blog/trash-reason-statistics-facts/> • <https://www.dosomething.org/us/facts/11-facts-about-recycling> • <https://www.forbes.com/sites/forbesagencycouncil/2016/10/31/why-brand-image-matters-more-than-you-think/?sh=6a4b462e10b8> • <https://www.channelreply.com/blog/view/ebay-amazon-packaging-costs>
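The Flask-to-PyTorch handoff described in Recoin's backend section looks roughly like the sketch below. The checkpoint filename, the 224×224 input size, and the normalization constants are assumptions, not the project's actual values.

```python
# Minimal sketch of the Flask endpoint described above: image in, bin label out.
# Assumes the whole trained model object was saved with torch.save(model, ...).
import io

import torch
from flask import Flask, jsonify, request
from PIL import Image
from torchvision import transforms

CLASSES = ["cardboard", "glass", "metal", "paper", "plastic", "trash"]

app = Flask(__name__)
model = torch.load("trash_classifier.pt", map_location="cpu")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])


@app.route("/classify", methods=["POST"])
def classify():
    # Front end sends the snapshot as multipart/form-data under the "image" key
    image = Image.open(io.BytesIO(request.files["image"].read())).convert("RGB")
    batch = preprocess(image).unsqueeze(0)           # shape: [1, 3, 224, 224]
    with torch.no_grad():
        probs = torch.softmax(model(batch), dim=1)[0]
    idx = int(probs.argmax())
    return jsonify({"bin": CLASSES[idx], "confidence": float(probs[idx])})


if __name__ == "__main__":
    app.run(port=5000)
```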
## Inspiration Every few days, a new video of a belligerent customer refusing to wear a mask goes viral across the internet. On neighborhood platforms such as NextDoor and local Facebook groups, neighbors often recount their sightings of the mask-less minority. When visiting stores today, we must always remain vigilant if we wish to avoid finding ourselves embroiled in a firsthand encounter. With the mask-less on the loose, it’s no wonder that the rest of us have chosen to minimize our time spent outside the sanctuary of our own homes. For anti-maskers, words on a sign are merely suggestions—for they are special and deserve special treatment. But what can’t even the most special of special folks blow past? Locks. Locks are cold and indiscriminate, providing access to only those who pass a test. Normally, this test is a password or a key, but what if instead we tested for respect for the rule of law and order? Maskif.ai does this by requiring masks as the token for entry. ## What it does Maskif.ai allows users to transform old phones into intelligent security cameras. Our app continuously monitors approaching patrons and uses computer vision to detect whether they are wearing masks. When a mask-less person approaches, our system automatically triggers a compatible smart lock. This system requires no human intervention to function, saving employees and business owners the tedious and at times hopeless task of arguing with an anti-masker. Maskif.ai provides reassurance to staff and customers alike with the promise that everyone let inside is willing to abide by safety rules. In doing so, we hope to rebuild community trust and encourage consumer activity among those respectful of the rules. ## How we built it We use Swift to write this iOS application, leveraging AVFoundation to provide recording functionality and Socket.io to deliver data to our backend. Our backend was built using Flask and leveraged Keras to train a mask classifier. ## What's next for Maskif.ai While members of the public are typically discouraged from calling the police about mask-wearing, businesses are typically able to take action against someone causing a disturbance. As an additional deterrent to these people, Maskif.ai can be improved by providing the ability for staff to call the police.
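Maskif.ai's backend isn't shown, so this is only a minimal sketch of the decision loop it describes: load the Keras mask classifier, score an incoming frame, and trigger the lock when a mask is detected. The model filename, the 128×128 input size, the output convention, and the `unlock_door()` stub are all assumptions.

```python
# Rough sketch of the server-side mask check described above.
import io

import numpy as np
from PIL import Image
from tensorflow import keras

model = keras.models.load_model("mask_classifier.h5")   # trained elsewhere (assumed file)
THRESHOLD = 0.5                                          # p(mask) needed to unlock


def unlock_door():
    """Placeholder for the smart-lock integration (vendor-specific in practice)."""
    print("unlocking door")


def frame_to_batch(jpeg_bytes: bytes) -> np.ndarray:
    img = Image.open(io.BytesIO(jpeg_bytes)).convert("RGB").resize((128, 128))
    return np.expand_dims(np.asarray(img, dtype="float32") / 255.0, axis=0)


def handle_frame(jpeg_bytes: bytes) -> bool:
    # Assumes the model outputs a single sigmoid value = probability of "mask on"
    p_mask = float(model.predict(frame_to_batch(jpeg_bytes))[0][0])
    if p_mask >= THRESHOLD:
        unlock_door()
        return True
    return False
```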
## Inspiration Meet Josh. It is his first hackathon. During his first hack, he notices that there is a lot of litter around his work area. From cans to water bottles to everything else in between, there was a lot of garbage. He realizes that hackathons are a great place to create something that can impact and change people, the world, or even themselves. So why not go to the source and help hackers **be** that change? ## What it does Our solution is to create a mobile application that leverages the benefits of AI and machine learning (using Google Vision AI from Google Cloud) to enable users to scan items and learn which bin (i.e., blue, green, or black) to sort each item into, as well as providing more information on the item. However, this is not merely a simple item scanner. We also gamified the design to create a more encouraging experience for hackers to be more environmentally friendly during a hackathon. We saw that by incentivizing hackers with rewards such as "Earthcoins" whenever they pick up and recycle their litter, we could let them redeem these "coins" (or credits) for things like limited-edition stickers, food (Red Bull anyone?) or more swag. Ultimately, our goal is to create a collaborative and clean space for hackers to more effectively interact with each other, creating relationships and helping each other out, while also maintaining a better environment. ## How we built it We first prototyped the solution in Adobe XD. Then, we attempted to implement the idea with Android Studio and Kotlin for the main app, as well as Google Vision AI for the machine learning models. ## Challenges we ran into We ran into a lot of challenges. We wanted to train our own AI by feeding it images of various things such as coffee cups, coffee lids, and things that are recyclable and compostable. However, we realized that training the AI would be a lengthy process. Another challenge was efficiently gathering images to feed into the AI; we used Python and Selenium to automate this, but it required a lot of coding. In order to increase the success rate of the AI identifying certain things, we would have to take our own pictures to train it. With this in mind, we quickly shifted to Google Cloud's Vision AI API; we got a rough version working but were not able to code everything in time to make a full prototype. One more challenge was the coding aspect of the project. Our coder had problems converting Java code into Kotlin, in addition to integrating the Google Cloud Vision API into the app via Android Studio. We had further challenges with the idea itself. How do we incentivize those who don't care much about the environment to use this app? What is the motivation to do so? We had to answer these questions and eventually used the idea of hackers wanting to create change. ## Accomplishments that we're proud of We're proud of each other for attempting to build such an idea from scratch, especially since this was the first hackathon for two of our team members. Trying to build an app using AI and training it ourselves is a big idea to tackle, considering our limited exposure to machine learning and unfamiliarity with new languages. We would say that creating an actual product, although incomplete, was a significant achievement within this hack. ## What we learned We gained a lot of first-hand insight into how machine learning is complex and takes a while to implement.
We learned that building an app with external APIs such as Google Vision AI can be difficult compared to simply creating a standalone app. We also learned how to automate web browser tasks with Python and Selenium so that we could be much more efficient with training our AI. The most important thing we learned came from our mentors and was about the "meta" of a hackathon. We learnt that we have to always seriously consider our audience, the scope of the problem, and the feasibility of the solution. Usability, motivation, and design are all major factors that we realized are game changers, as a single one can completely overturn an idea. We gathered a lot of insight from our mentors and their past experiences, and we are inspired to use what we learned through DeltaHacks 6 in other hackathons. ## What's next for BottleBot The aim for BottleBot's future is to fully integrate the Google Cloud Vision API into the app, as well as to finish and polish the app in Android Studio.
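BottleBot's own app is written in Kotlin, so the Python snippet below is only an illustration of the scan-to-bin idea it describes, using the Google Cloud Vision label-detection client. The keyword-to-bin table is a made-up example, not any municipality's real sorting rules.

```python
# Sketch of the scan -> bin flow: ask Vision AI for labels, map them to a bin.
# Requires GOOGLE_APPLICATION_CREDENTIALS to point at a service-account key.
from google.cloud import vision

BIN_KEYWORDS = {
    "blue": ["bottle", "can", "plastic", "paper", "cardboard"],   # recycling
    "green": ["food", "fruit", "coffee cup", "napkin"],           # compost
}


def suggest_bin(image_bytes: bytes) -> dict:
    client = vision.ImageAnnotatorClient()
    response = client.label_detection(image=vision.Image(content=image_bytes))
    labels = [l.description.lower() for l in response.label_annotations]

    for bin_colour, keywords in BIN_KEYWORDS.items():
        if any(k in label for label in labels for k in keywords):
            return {"bin": bin_colour, "labels": labels[:5]}
    return {"bin": "black", "labels": labels[:5]}   # default: landfill


if __name__ == "__main__":
    with open("item.jpg", "rb") as f:
        print(suggest_bin(f.read()))
```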
winning
## Inspiration From fashion shows to strutting down downtown Waterloo, you **are** the main character… Unless… your walk is less of a confident strut and more of a hobble. I’ve always wanted to know if my walking/running form is bad, and *Runway Ready* is here to tell me just that. ## What it does Your phone *is* the hardware. By leveraging the built-in accelerometer in our mobile devices, we’ve built an app that sends the accelerometer data in the x, y, and z directions collected from our phones to our server, which then analyzes the accelerometer patterns to deduce whether your walk is a ‘strut’ or a ‘limp’, with no additional hardware other than the phone in your pocket! ## How we built it We used Swift to build the simple mobile app that gathers and transmits the accelerometer data to our server. The server, powered by Node.js, collects, processes, and analyzes the data in real time using JavaScript. Finally, the website used to display the results is built on Express. ## Challenges we ran into Signal processing was a huge challenge. The seemingly easy task of differentiating between a good and a bad walk is not straightforward. After hours of graphing and signal analysis, we created an algorithm that can separate the two. Connecting the data sent from the mobile app to our backend signal processing also took way too much time. ## Accomplishments that we're proud of After hours of rigorous data collection and graphical and algebraic signal analysis, we *finally* came up with a simple algorithm that works, with basically zero previous signal processing knowledge! ## What we learned While we've used languages such as Swift and JavaScript separately, this project was a chance for us to combine all our knowledge and skills in these software tools to create something with potential to be expanded across various fields. ## What's next for *Runway Ready*? It’s a silly idea; we know. But it actually sparked from our hatred of the piercing pain of shin splints that occurs on long runs. Shin splints are partially caused by improper running form, which *Runway Ready* could be used to fix. *Runway Ready* has a diverse range of applications including sports performance, physiotherapy, injury/surgery recovery, and of course, making you an **everyday runway supermodel**. Given a bit more time, we could collect more data and employ ML to make a more robust algorithm that can detect even the most subtle issues in walking/running form.
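Runway Ready's classifier lives in its Node.js server and isn't shown here; the NumPy/SciPy sketch below illustrates one plausible approach of the same flavour, comparing alternating step peaks in the vertical acceleration signal. The sampling rate, peak thresholds, and the 20% asymmetry cut-off are assumptions, not the team's actual algorithm.

```python
# One plausible strut-vs-limp heuristic: find per-step peaks in vertical
# acceleration and compare the heights of alternating steps.
import numpy as np
from scipy.signal import find_peaks

SAMPLE_RATE_HZ = 50


def classify_walk(vertical_accel: np.ndarray) -> str:
    # Each heel strike shows up as a peak in vertical acceleration
    peaks, props = find_peaks(
        vertical_accel,
        height=np.mean(vertical_accel) + np.std(vertical_accel),
        distance=SAMPLE_RATE_HZ // 4,        # at most ~4 steps per second
    )
    if len(peaks) < 4:
        return "not enough steps"

    heights = props["peak_heights"]
    left, right = heights[0::2], heights[1::2]        # alternating feet, roughly
    n = min(len(left), len(right))
    asymmetry = abs(left[:n].mean() - right[:n].mean()) / heights.mean()
    return "limp" if asymmetry > 0.20 else "strut"


if __name__ == "__main__":
    t = np.arange(0, 10, 1 / SAMPLE_RATE_HZ)
    even_gait = np.abs(np.sin(2 * np.pi * 1.0 * t))   # synthetic symmetric steps
    print(classify_walk(even_gait))                   # -> "strut"
```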
# CreateFlow: AI-Driven Content Scheduling and Generation for Creators ## Inspiration **CreateFlow** was inspired by the pressure content creators face managing ideas, drafting, and posting across platforms, leading to burnout. I wanted to create a tool that helps creators focus on their craft while automating repetitive tasks using **AI-driven agents** and **LLMs** to streamline content creation and scheduling. ## What We Learned As a team, we explored **LLMs** for generating human-like text and **AI agents** for automating scheduling and trend analysis. We also learned to design a user-friendly dashboard for manual review due to challenges with social media API access. ## What It Does **CreateFlow** automates content generation based on user preferences (interest, type, keywords, frequency). It creates a content schedule, generates posts, and suggests future topics, all displayed on a dashboard for review and manual posting. ## How We Built It We used **Python** for the backend, **Next.js** for the dashboard, and the **Gemini API** to generate content. **Fetch AI agents** handle scheduling and topic suggestions, while the dashboard offers full control to users. ## Challenges We Ran Into API limitations for social media posting led us to build a manual dashboard. Ensuring AI-generated content was natural and engaging required fine-tuning language models. ## Accomplishments We successfully integrated AI to reduce mental load for creators, automating content generation while giving users control through the dashboard. ## What's Next We plan to integrate more platforms, enhance AI trend detection, and expand dashboard features, including analytics and automated posting as API access improves.
## Inspiration You see a **TON** of digital billboards at NYC's Times Square. The problem is that a lot of these ads are **irrelevant** to many people. Toyota ads here, Dunkin' Donuts ads there; **it doesn't really make sense**. ## What it does I built an interactive billboard that does more refined and targeted advertising and storytelling; it displays different ads **based on who you are** ~~(NSA 2.0?)~~ The billboard is equipped with a **camera**, which periodically samples the audience in front of it. Then, it passes the image to a series of **computer vision** algorithms (Thank you *Microsoft Cognitive Services*), which extract several characteristics of the viewer. In this prototype, the billboard analyzes the viewer's: * **Dominant emotion** (from facial expression) * **Age** * **Gender** * **Eye-sight (detects glasses)** * **Facial hair** (just so that it can remind you that you need a shave) * **Number of people** and considers all of these factors to present targeted ads. **As a bonus, the billboard saves energy by dimming the screen when there's nobody in front of the billboard! (go green!)** ## How I built it Here is what happens step-by-step: 1. Using **OpenCV**, the billboard takes an image of the viewer (**Python** program) 2. The billboard passes the image to two separate services (**Microsoft Face API & Microsoft Emotion API**) and gets the results 3. The billboard analyzes the results and decides which ads to serve (**Python** program) 4. Finalized ads are sent to the billboard front-end via **Websocket** 5. Front-end contents are served from a local web server (a **Node.js** server built with the **Express.js** framework and **Pug** as the front-end template engine) 6. Repeat ## Challenges I ran into * Time constraint (I actually had this huge project due Saturday at midnight - my fault - so I only **had about 9 hours to build** this. Also, I built this by myself without teammates) * Putting many pieces of technology together, and ensuring consistency and robustness. ## Accomplishments that I'm proud of * I didn't think I'd be able to finish! It was my first solo hackathon, and it was much harder to stay motivated without teammates. ## What's next for Interactive Time Square * This prototype was built with off-the-shelf computer vision services from Microsoft, which limits the number of features for me to track. Training a **custom convolutional neural network** would let me track other relevant visual features (dominant color, which could let me infer the viewers' race - then along with the location of the billboard and pre-knowledge of the demographics distribution, **maybe I can infer the language spoken by the audience, then automatically serve ads with translated content**) - ~~I know this sounds a bit controversial though. I hope this doesn't count as racial profiling...~~
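Steps 1–2 of the pipeline above might look roughly like the sketch below. The endpoint region and subscription key are placeholders, and newer Face API versions return emotion as a face attribute, so a single call stands in here for the separate Face and Emotion services the original prototype used.

```python
# Sketch: grab a frame with OpenCV and ask the (assumed) Face API endpoint for
# age / gender / glasses / facial hair / emotion attributes.
import cv2
import requests

FACE_ENDPOINT = "https://westus.api.cognitive.microsoft.com/face/v1.0/detect"  # placeholder region
FACE_KEY = "YOUR_SUBSCRIPTION_KEY"                                              # placeholder key


def sample_audience(camera_index: int = 0) -> bytes:
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("could not read from camera")
    return cv2.imencode(".jpg", frame)[1].tobytes()


def analyze(image_bytes: bytes) -> list:
    resp = requests.post(
        FACE_ENDPOINT,
        params={"returnFaceAttributes": "age,gender,glasses,facialHair,emotion"},
        headers={"Ocp-Apim-Subscription-Key": FACE_KEY,
                 "Content-Type": "application/octet-stream"},
        data=image_bytes,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()        # one entry per detected face


if __name__ == "__main__":
    faces = analyze(sample_audience())
    if not faces:
        print("nobody in front of the billboard -> dim the screen")
```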
losing
## Inspiration Other educational games like [OhMyGit](https://ohmygit.org/). Some gameplay aspects of a popular rhythm game: Friday Night Funkin' ## What it does Players will be able to select a topic to battle in and enter a pop-quiz-style battle where getting correct answers attacks your opponent and getting incorrect ones allows the opponent to attack you. Ideally, the game would have fun animations and visual cues to show whether the user was correct or not, mainly through the player attacking the enemy, or vice versa if the answer was incorrect. ## How we built it Using the Unity game engine. ## Challenges we ran into Team members were quite busy over the weekend, and we had a lack of knowledge in Unity. ## Accomplishments that we're proud of The main gameplay mechanic is mostly refined, and we were able to complete it in essentially a day's work with 2 people. ## What we learned Some unique Unity mechanics, and that planning is key to putting out a project this quickly. ## What's next for Saturday Evening Studyin' Adding the other question types specified in our design (listed in the README.md), adding animations to the actual battle, and adding character dialogue and depth. One of the original ideas that was too complex to implement in this short amount of time was for 2 human players to battle each other, where the battle is turn-based and players can select the difficulty of each question to determine the damage they can deal, with some questions having a margin of error for partial correctness to deal a fraction of the normal amount.
## Inspiration: Millions of active software developers use GitHub for managing their software development; however, our team believes the platform lacks an incentive and engagement factor. As users on GitHub, we are aware of this problem and want to apply a twist that can open doors to a new, innovative experience. In addition, we had thoughts on how we could make GitHub more accessible, such as addressing language barriers, which we ultimately decided wouldn't be that useful or creative. Our official final project was inspired by the idea of applying something to GitHub while also looking at youth going into CS. Combining these ideas together, our team ended up coming up with the idea of DevDuels. ## What It Does: Introducing DevDuels! A video game on a website whose goal is making GitHub more entertaining to users. One of our target audiences is the rising younger generation that may struggle with learning to code or entering the coding world due to complications such as lack of motivation or lack of resources. Keeping in mind how many young people play video games, we created a game that hopes to introduce users to the world of coding (how to improve, open source, troubleshooting) with a competitive aspect that leaves users wanting to use our website beyond the first time. We've applied this to our 3 major features: immediate AI feedback on code, Duo Battles, and a leaderboard. In more depth, our AI feedback looks at the given code and analyses it. Very shortly after, it provides a rating out of 10 and comments on what is good and bad about the code, such as syntax and conventions. ## How We Built It: The web app is built using Next.js as the framework. HTML, Tailwind CSS, and JS were the main languages used in the project’s production. We used MongoDB to store information such as user account info, scores, and commits, which were pulled from the GitHub API using Octokit. The LangChain API was utilised to help rate the commit code that users sent to the website while also providing its rationale for the ratings. ## Challenges We Ran Into The first roadblock our team experienced occurred during ideation. Though we generated multiple problem-solution ideas, our main issue was that the ideas either were too common in hackathons, had little ‘wow’ factor that could captivate judges, or were simply too difficult to implement given the time allotted and team member skillset. While working on the project specifically, we struggled a lot with getting MongoDB to work alongside the other technologies we wished to utilise (LangChain, GitHub API). The frequent problems with getting the backend to work quickly diminished team morale as well. Despite these shortcomings, we consistently worked on our project down to the last minute to produce this final result. ## Accomplishments that We're Proud of: Our proudest accomplishment is being able to produce a functional game following the ideas we’ve brainstormed. When we were building this project, the more we coded, the less optimistic we got about completing it. This was largely attributed to the sheer amount of error messages and lack of progression we were observing for an extended period of time. Our team was incredibly lucky to have members with such high perseverance, which allowed us to keep working, resolving issues and rewriting code until features worked as intended.
## What We Learned: DevDuels was the first step into the world of hackathons for many of our team members. As such, there was so much learning throughout this project’s production. During the design and ideation process, our members learned a lot about website UI design and Figma. Additionally, we learned HTML, CSS, and JS (Next.js) in building the web app itself. Some members learned APIs such as LangChain while others explored the GitHub API (Octokit). All the new hackers navigated the GitHub website and Git itself (push, pull, merge, branch, etc.). ## What's Next for DevDuels: DevDuels has a lot of potential to grow and become more engaging for users. This can be through additional features added to the game such as weekly/daily goals that users can complete, the ability to create accounts, and expanding Duo Battles beyond just a pair connecting with one another. Within these 36 hours, we worked hard on the main features we believed were the best. These features can be improved with more time and thought put into them. There can be small to large changes, ranging from configuring our backend to fixing our user interface.
## Inspiration it's really fucking cool that big LLMs (ChatGPT) are able to figure out on their own how to use various tools to accomplish tasks. for example, see Toolformer: Language Models Can Teach Themselves to Use Tools (<https://arxiv.org/abs/2302.04761>) this enables a new paradigm of self-assembling software: machines controlling machines. what if we could harness this to make our own lives better -- a lil LLM that works for you? ## What it does i made an AI assistant (SMS) using GPT-3 that's able to access various online services (calendar, email, google maps) to do things on your behalf. it's just like talking to your friend and asking them to help you out. ## How we built it a lot of prompt engineering + few-shot prompting. ## What's next for jarbls shopping, logistics, research, etc -- possibilities are endless * more integrations !!! the capabilities explode exponentially with the number of integrations added * long term memory come by and i can give you a demo
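jarbls' actual prompts aren't shown, so the snippet below is only a minimal sketch of the few-shot "route a text message to a tool call" pattern described above, written against today's OpenAI chat-completions client rather than the original GPT-3 completions endpoint. The tool names and example requests are made up.

```python
# Few-shot prompting sketch: map an incoming SMS to a (hypothetical) tool call.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FEW_SHOT = """You turn text messages into tool calls. Reply with one line.

Message: what's on my schedule tomorrow?
Tool: calendar.list_events(day="tomorrow")

Message: email sam that i'm running 10 min late
Tool: email.send(to="sam", body="running 10 min late")

Message: how long to get to the airport?
Tool: maps.directions(destination="airport")
"""


def route(message: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": FEW_SHOT},
            {"role": "user", "content": f"Message: {message}\nTool:"},
        ],
        max_tokens=60,
        temperature=0,
    )
    return resp.choices[0].message.content.strip()


print(route("remind me what's on my calendar friday afternoon"))
```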
partial
## Inspiration: As per the statistics provided by the Annual Disability Statistics Compendium, 19,344,883 civilian veterans ages 18 years and over lived in the community in 2013, of which 5,522,589 were individuals with disabilities. DAV (Disabled American Veterans) has spent about $61.8 million to buy and operate vehicles as a transit service for veterans, but the reach of this program is limited. Following these stats, we wanted to support veterans with something more feasible and efficient. ## What it does: It is a web application that will serve as a common platform between DAV and Uber. Instead of spending a huge amount on buying cars, DAV pays Uber, and Uber then provides free rides to veterans. Any veteran can register with their Veteran ID and SSN. During the application process, our portal matches the details with DAV to prevent non-veterans from using this service. After registration, veterans can request rides on our website, which uses the Uber API, and commute for free. ## How we built it: We used the following technologies: the Uber API; the Google Maps, Directions, and Geocoding APIs; WAMP as a local server; Bootstrap to create the website; and phpMyAdmin to maintain the SQL database. The webpages are designed using HTML, CSS, JavaScript, Python scripts, etc. ## Challenges we ran into: Using the Uber API effectively, by parsing through data and code to make JavaScript files that use the API endpoints. Also, the Uber API has problematic network/server permission issues. Another challenge was figuring out how to prevent misuse of this service by non-veterans. To solve that, we created a dummy database where each Veteran ID is associated with the corresponding 4 digits of the SSN. The pair is matched when a user registers for free Uber rides. For a real-world application, the same data can be provided by DAV and used to authenticate a veteran. ## Accomplishments that we're proud of: Finishing the project well in time, almost 4 hours early. Going from a team of strangers brainstorming ideas for hours to having a finished product in less than 24 hours. ## What we learned: We learnt to use third-party APIs and gained more experience in web development. ## What's next for VeTransit: We plan to launch a smartphone app for the same service. It will also include speech recognition. We will display location services for nearby hospitals and medical facilities based on veterans' needs. Using APIs of online job providers, veterans will receive data on jobs. To access the website, please register as a user first. During that process, it will ask for a Veteran ID and four digits of the SSN. The pair should match for successful registration. Please use one of the following key pairs from our dummy data to do that: VET00104 0659 VET00105 0705 VET00106 0931 VET00107 0978 VET00108 0307 VET00109 0674
## Inspiration The inspiration for this project came from the group's passion for building health-related apps. While blindness is not necessarily something we can heal, it is something that we can combat with technology. ## What it does This app gives blind individuals the ability to live life with the same ease as any other person. Using beacon software, we are able to provide users with navigational information in heavily populated areas such as subways or museums. The app uses a simple UI that relies on different swipe or tap sequences to launch certain features of the app. When the app opens, the interface is explained in its entirety in a verbal manner. One of the most useful portions of the app is a camera feature that allows users to snap a picture and instantly receive verbal cues describing what is in their environment. The navigation side of the app is what we primarily focused on, but as a fail-safe, the Lyft API was implemented so users can order a car ride out of a worst-case scenario. ## How we built it ## Challenges we ran into We ran into several challenges during development. One of our challenges was attempting to use the Alexa Voice Service API for Android. We wanted to create a skill to be used within the app; however, there was a lack of documentation at our disposal and minimal time to bring it to fruition. Rather than eliminating this feature altogether, we collaborated to develop a fully functional voice command system that lets users command the application to call a Lyft to their location through the phone rather than the Alexa. Another issue we encountered was in dealing with the beacons. In a large, realistic public space such as a subway station, the beacons would be placed far enough apart to be individually recognized. In our confined space, however, the beacon detection overlapped, causing the user to receive multiple different directions simultaneously. Rather than using physical beacons, we leveraged a second mobile application that allows us to create beacons around us with an Android device. ## Accomplishments that we're proud of As always, we are a team of students who strive to learn something new at every hackathon we attend. We chose to build an ambitious series of applications within a short and concentrated time frame, and the fact that we were successful in making our idea come to life is what we are the most proud of. Within our application, we worked around as many obstacles that came our way as possible. When we found out that Amazon Alexa wouldn't be compatible with Android, it served as a minor setback to our plan, but we quickly brainstormed a new idea. Additionally, we were able to develop a fully functional beacon navigation system with built-in voice prompts. We managed to develop a UI that is almost entirely nonvisual, using audio as our only interface. Given that our target user is blind, we had a lot of difficulty in developing this kind of UI because while we are adapted to visual cues and the luxury of knowing where to tap buttons on our phone screens, the visually impaired aren't. We had to keep this in mind throughout our entire development process, and so voice recognition and tap sequences became a primary focus. Reaching out of our own comfort zones to develop an app for a unique user was another challenge we successfully overcame.
## What's next for Lantern With a passion for improving health and creating easier accessibility for those with disabilities, we plan to continue working on this project and building off of it. The first thing we want to recognize is how easily adaptable the beacon system is. In this project we focused on the navigation of subway systems: knowing how many steps down to the platform, when the user has reached a safe distance from the train, and when the train is approaching. This idea could easily be brought to malls, museums, dorm rooms, etc. Anywhere that could pose a challenge for the blind could benefit from adapting our beacon system to its location. The second future project we plan to work on is a smart walking stick that uses sensors and visual recognition to detect and announce what elements are ahead, what could potentially be in the user's way, and what their surroundings look like, and to provide better feedback to the user to ensure they don't get misguided or lose their way.
## Inspiration Charles had a wonderfully exciting (lol) story about how he got to Hack the North, and we saw an opportunity for development ^-^! We want to capitalize on the idea of communal ridesharing, making it accessible for everyone. We also wanted to build something themed around data analytics and sustainability, and tracking carbon footprint via a rideshare app seemed like a great fit. ## What it does We want to provide a better matchmaking system for drivers/riders centered around sustainability and eco-friendliness. Our app will match drivers/riders based on categories such as vehicle type, distance, trip similarity, and location - but why not just use Uber, or other rideshare apps? Isn’t this the exact same thing? Yes, but here’s the catch - it’s FREE! Wow that’s so cool. ## How we built it Our two talented web UI/UX designers created multiple iterations of wireframes in Figma before translating them into a beautiful frontend web app. Our very pog backend developers built all the database APIs, the real-time chat system, and the search/filter functionality. ## Challenges we ran into The most time-consuming part was the idea generation. We spent a lot of time on our first night (we stayed up past 4 a.m.) and the entirety of Saturday morning until we landed on an idea that we thought was impactful and had room for us to put our own twist on it. We also faced our fair share of bugs and roadblocks during the creation process. Backend is hard, man, and integration even more so. Having to construct so many React components and ensure a functional backend in such a short timespan was quite the challenge. ## Accomplishments that we're proud of We were definitely most proud of our designs and wireframes made in Figma, which were then translated into a beautiful React frontend. Our team put a lot of emphasis into making a user-friendly design and translating it into an app that people would actually try out. Our team also put a lot of hard work into the backend messaging system, which required a lot of trial-and-error and debugging, and ended in a great success. ## What we learned In an era dominated by AI apps and projects, a pure software project is becoming increasingly rare. However, we learned that creating something that we are passionate about and would personally use means more than developing a product for the sole purpose of chasing industry hype. We learned a lot about matching algorithms for social platforms, technologies that enable real-time instant messaging, as well as the challenges of integration between frontend, backend, and databases. ## What's next for PlanetShare Building our web app into a mobile app for people to download and use on-the-fly. Also obtaining more metrics towards helping users improve their carbon footprint, such as integrating with public transit or other sustainable modes of transportation like cycling. Also, we can:
* Integrate rewards points by partnering with companies like RBC (Avion rewards) to incentivize customers to carpool more
* Extend our functionality beyond just carpooling - we can track other forms of transportation as well, and recommend alternative options when carpooling isn’t available
* Leverage machine learning and data analytics to graph trends and make predictions about carbon emission metrics, just like a personal finance tracker
* Introduce a healthy dose of competition - benchmark yourself against your friends!
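As a rough illustration of the kind of matchmaking PlanetShare describes, here is a minimal Python sketch (the weights, categories, and field names are our own assumptions, not the team's actual algorithm) that scores a driver/rider pairing on vehicle type, detour distance, and trip similarity:

```python
from math import exp

# Hypothetical eco-friendliness weights per vehicle type.
VEHICLE_ECO = {"ev": 1.0, "hybrid": 0.7, "gas": 0.3}

def driver_detour_km(driver, rider):
    # Placeholder: a real app would ask a routing API for the actual extra distance.
    return abs(driver["pickup_km_offset"]) + abs(rider["dropoff_km_offset"])

def match_score(driver, rider):
    """Higher is a better (and greener) match. Inputs are simple dicts."""
    eco = VEHICLE_ECO.get(driver["vehicle_type"], 0.3)
    # Penalize long detours: the term decays with extra kilometres driven.
    detour = exp(-driver_detour_km(driver, rider) / 5.0)
    # Trip similarity: fraction of the rider's route shared with the driver's (0..1).
    similarity = rider["route_overlap"]
    return 0.4 * eco + 0.3 * detour + 0.3 * similarity

if __name__ == "__main__":
    d = {"vehicle_type": "hybrid", "pickup_km_offset": 1.2}
    r = {"route_overlap": 0.8, "dropoff_km_offset": 0.5}
    print(round(match_score(d, r), 3))
```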
partial
## Inspiration Two of our team members are medical students and were on a medical mission trip to Honduras to perform cervical cancer screening. What we found, though, was that many patients were lost to follow-up due to the two-to-three week lag time between screening and diagnosis. We sought to eliminate this lag time and thus enable instant diagnosis of cervical cancer through Cerviscope. ## What it does Cerviscope is a product that enables point-of-care detection of cervical cancer based on hand-held light microscopy of pap smear slides and subsequent image analysis using convolutional neural networks. See our demo at [www.StudentsAtYourCervix.com](http://www.StudentsAtYourCervix.com)! (If URL forwarding doesn't work, <https://www.wolframcloud.com/objects/ecb2d3d2-c468-4c4b-80fd-3da5f2e2aa90>) ## How we built it Our solution is two-fold. The first part uses machine learning, specifically convolutional neural networks, to analyze images from a publicly available database of annotated pap smear cervical cells in order to predict cancerous cells. We couple our algorithm with the use of a handheld, mobile-phone-compatible light microscope called the Foldscope, developed by the Prakash lab at Stanford, to enable cheap and rapid point-of-care detection. These images were not normalized, so we applied image processing and transformation tools to standardize them as inputs to our neural network. ## Challenges we ran into 1) Time. We would have loved to fully automate the interface between image acquisition using the hand-held microscope and machine-learning-based image analysis. 2) Data. We would need a larger database to push our accuracy closer to human-level performance. ## Accomplishments that we're proud of We trained a neural network to detect cervical cancer cells with classification accuracies exceeding 80%. Coupled with the utilization of the Foldscope, this has the potential for huge impact in the point-of-care cervical cancer diagnosis and treatment field. ## What we learned How to develop, train, and deploy neural networks in just a few hours. Networking with students to learn about amazing work such as the Foldscope! ## What's next for CerviScope - Cervical Cancer Detection Sleep. Krispy Kreme. Then IPO.
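For readers curious what a classifier of this shape looks like, here is a minimal, hypothetical Keras sketch of a small binary CNN for cell-image patches (this is not Cerviscope's actual model or training setup; the image size, layer sizes, and data paths are placeholders):

```python
import tensorflow as tf

IMG_SIZE = (128, 128)  # assumed patch size after standardization

def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(*IMG_SIZE, 3)),
        tf.keras.layers.Rescaling(1.0 / 255),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # 1 = suspicious cell
    ])

if __name__ == "__main__":
    # Hypothetical folder layout: data/train/{normal,abnormal}/*.png
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "data/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
    model = build_model()
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, epochs=5)
```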
## Inspiration We wanted to build a technical app that is actually useful. Scott Forestall's talk at the opening ceremony really spoke to each of us, and we decided then to create something that would not only show off our technological skill but also actually be useful. Going to the doctor is inconvenient and not usually immediate, and a lot of times it ends up being a false alarm. We wanted to remove this inefficiency to make everyone's lives easier and make healthy living more convenient. We did a lot of research on health-related data sets and found a lot of data on different skin diseases. This made it very easy for us to choose to build a model using this data that would allow users to self-diagnose skin problems. ## What it does Our ML model has been trained on hundreds of samples of diseased skin to be able to distinguish among a wide variety of malignant and benign skin diseases. We have a mobile app that lets you take a picture of a patch of skin that concerns you, runs it through our model, and tells you what our model classified your picture as. Finally, the picture also gets sent to a doctor along with our model's results, and the doctor can override that decision. This new classification is then rerun through our model to reinforce correct outputs and penalize wrong outputs, i.e., adding a reinforcement learning component to our model as well. ## How we built it We built the ML model in IBM Watson from public skin disease data from ISIC (International Skin Imaging Collaboration). We have a platform-independent mobile app built in React Native using Expo that interacts with our ML model through IBM Watson's API. Additionally, we store all of our data in Google Firebase's cloud, where doctors have access to it to correct the model's output if needed. ## Challenges we ran into Watson had a lot of limitations in terms of data loading and training, so it had to be done in extremely small batches, and this prevented us from utilizing all the data we had available. Additionally, all of us were new to React Native, so there was a steep learning curve in implementing our mobile app. ## Accomplishments that we're proud of Each of us learned a new skill at this hackathon, which is the most important thing for us to take away from any event like this. Additionally, we came in wanting to implement an ML model, and we implemented one that is far more complex than we initially expected by using Watson. ## What we learned Web frameworks are extremely complex, with very similar frameworks being unable to talk to each other. Additionally, while REST APIs are extremely convenient and platform-independent, they can be much harder to use than platform-specific SDKs. ## What's next for AEye Our product is really a proof of concept right now. If possible, we would like to polish both the mobile and web interfaces and come up with a complete product for the general user. Additionally, as more users adopt our platform, our model will get more and more accurate through our reinforcement learning framework. See a follow-up interview about the project/hackathon here! <https://blog.codingitforward.com/aeye-an-ai-model-to-detect-skin-diseases-252747c09679>
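As a rough sketch of AEye's doctor-override loop (assuming a Firestore database; the collection name, fields, and service-account path are hypothetical, and the team's actual data model may differ), a backend could record each case and its correction like this:

```python
import firebase_admin
from firebase_admin import credentials, firestore

# Assumed service-account key; in practice this would come from the deployment environment.
cred = credentials.Certificate("serviceAccount.json")
firebase_admin.initialize_app(cred)
db = firestore.client()

def record_case(case_id: str, image_url: str, model_label: str, confidence: float):
    """Store the model's initial classification for a submitted skin photo."""
    db.collection("cases").document(case_id).set({
        "image_url": image_url,
        "model_label": model_label,
        "confidence": confidence,
        "doctor_label": None,          # filled in later if the doctor overrides
    })

def doctor_override(case_id: str, doctor_label: str):
    """Save the doctor's correction; corrected cases can later be fed back into training."""
    db.collection("cases").document(case_id).update({"doctor_label": doctor_label})
```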
## Inspiration The Canadian winter's erratic bouts of chilling cold have caused people who have to be outside for extended periods of time (like avid dog walkers) to suffer from frozen fingers. The current method of warming up your hands using hot pouches that don't last very long is inadequate in our opinion. Our goal was to make something that kept your hands warm and *also* let you vent your frustrations at the terrible weather. ## What it does **The Screamathon3300** heats up the user's hand based on the intensity of their **SCREAM**. It interfaces an *analog electret microphone*, *LCD screen*, and *thermoelectric plate* with an *Arduino*. The Arduino continuously monitors the microphone for changes in volume intensity. When an increase in volume occurs, it triggers a relay, which supplies 9 volts, at a relatively large amperage, to the thermoelectric plate embedded in the glove, thereby heating the glove. Simultaneously, the Arduino displays an encouraging prompt on the LCD screen based on the volume of the scream. ## How we built it The majority of the design process was centered around the use of the thermoelectric plate. Some research and quick experimentation helped us conclude that the thermoelectric plate's increase in heat was dependent on the amount of supplied current. This realization led us to use two separate power supplies -- a 5 volt supply from the Arduino for the LCD screen, electret microphone, and associated components, and a 9 volt supply solely for the thermoelectric plate. Both circuits were connected through the use of a relay (dependent on the Arduino output) which controlled the connection between the 9 volt supply and the thermoelectric load. This design decision provided electrical isolation between the two circuits, which is much safer than having common sources and ground when 9 volts and large currents are involved with an Arduino and its components. Safety features directed the rest of our design process, like the inclusion of a kill-switch which immediately stops power being supplied to the thermoelectric load, even if the user continues to scream. Furthermore, a potentiometer placed in parallel with the thermoelectric load gives control over how quickly the increase in heat occurs, as it limits the current flowing to the load. ## Challenges we ran into We tried to implement a feedback loop with ambient temperature sensors, but even when the plate's temperature changed significantly, the sensors registered only very small changes. Our goal of having an optional, non-scream-controlled mode ultimately failed because we lacked a working sensor feedback system. We also did not own components such as the microphone, relay, or battery pack, so we could not solder many connections and therefore could not make a permanent build. ## Accomplishments that we're proud of We're proud of using a unique transducer (thermoelectric plate) that uses an uncommon trigger (current instead of voltage level), which forced us to design with added safety considerations in mind. Our design was also constructed of entirely sustainable materials, other than the electronics. We also achieved a seamless integration of analog and digital signals in the circuit (baby mixed signal processing). ## What we learned We had very little prior experience interfacing thermoelectric plates with an Arduino. We learned to effectively leverage analog signal inputs to reliably trigger our desired system output, as well as manage physical device space restrictions (for it to be wearable).
## What's next for Screamathon 3300 We love the idea of people having to scream continuously to get a job done, so we will expand our line of *Scream* devices, such as the scream-controlled projectile launcher, scream-controlled coffee maker, scream-controlled alarm clock. Stay screamed-in!
partial
## Inspiration Kevin, one of our team members, is an enthusiastic basketball player and frequently went to physiotherapy for a knee injury. He realized that a large part of the physiotherapy actually happened away from the doctor's office - he needed to complete certain exercises with perfect form at home in order to consistently improve his strength and balance. Through his story, we realized that so many people across North America require physiotherapy for far more severe conditions, be it from sports injuries, spinal cord injuries, or recovery from surgeries. Likewise, they will need to do at-home exercises individually, without supervision. For these patients, any repeated error can actually cause a deterioration in health. Therefore, we decided to leverage computer vision technology to provide real-time feedback that helps patients improve their rehab exercise form. At the same time, reports are generated for the doctors, so that they may monitor the progress of patients and prioritize their urgency accordingly. We hope that phys.io will strengthen the feedback loop between patient and doctor, and accelerate the physical rehabilitation process for many North Americans. ## What it does Through a mobile app, patients can film and upload a video of themselves completing a certain rehab exercise. The video then gets analyzed using a machine vision neural network, such that the movements of each body segment are measured. This raw data is then further processed to yield measurements and benchmarks for the relative success of the movement. In the app, patients receive a general score for their physical health as measured against their individual milestones, tips to improve their form, and a timeline of progress over the past weeks. At the same time, the same video analysis is sent to the corresponding doctor's dashboard, where the doctor receives a more thorough medical analysis of how the patient's body is working together and a timeline of progress. The algorithm also provides suggestions for the doctor's treatment of the patient, such as prioritizing the next appointment or increasing the difficulty of the exercise. ## How we built it At the heart of the application is a Google Cloud Compute instance running together with a blobstore instance. The cloud compute cluster ingests raw video posted to blobstore and performs the machine vision analysis to yield the time-series body data. We used Google App Engine and Firebase to create the rest of the web application and APIs for the two types of clients we support: an iOS app and a doctor's dashboard site. This manages day-to-day operations such as data lookup and account management, but also provides the interface for the mobile application to send video data to the compute cluster. Furthermore, the App Engine syncs processed results and feedback from blobstore and populates them into Firebase, which is used as the database and data-sync layer. Finally, in order to generate reports for the doctors on the platform, we used stdlib's tasks and scalable one-off functions to process results from Firebase over time and aggregate the data into complete chunks, which are then posted back into Firebase. ## Challenges we ran into One of the major challenges we ran into was interfacing each technology with the others. Overall, the data pipeline involves many steps that, while each is critical in itself, also involve too many diverse platforms and technologies for the time we had to build it.
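To illustrate the kind of measurement such a pipeline produces, here is a minimal Python/NumPy sketch (our own simplification, not phys.io's production code) that turns three pose keypoints into a joint angle, e.g. the knee angle from hip, knee, and ankle positions:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by points a-b-c, e.g. hip-knee-ankle."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Hypothetical keypoints from one video frame (x, y in pixels).
hip, knee, ankle = (312, 240), (318, 360), (322, 470)
print(round(joint_angle(hip, knee, ankle), 1))  # close to 180 degrees: a nearly straight leg
```

Tracking this angle frame by frame is enough to score range of motion and flag form errors such as an incomplete bend.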
## What's next for phys.io <https://docs.google.com/presentation/d/1Aq5esOgTQTXBWUPiorwaZxqXRFCekPFsfWqFSQvO3_c/edit?fbclid=IwAR0vqVDMYcX-e0-2MhiFKF400YdL8yelyKrLznvsMJVq_8HoEgjc-ePy8Hs#slide=id.g4838b09a0c_0_0>
## Inspiration Waste Management: Despite having bins with specific labels, people often put waste into the wrong bins, which leads to unnecessary plastic/recyclables in landfills. ## What it does Uses a Raspberry Pi, the Google Vision API, and our custom classifier to categorize waste and automatically sort it into the right section (Garbage, Organic, Recycle). The data collected is stored in Firebase and shown with its respective category and item label (type of waste) on a web app/console. The web app is capable of providing advanced statistics such as % recycling/compost/garbage and your carbon emissions, as well as statistics on which specific items you throw out the most (water bottles, bags of chips, etc.). The classifier can be modified to suit the garbage laws of different places (e.g. separate recycling bins for paper and plastic). ## How We built it The Raspberry Pi is triggered by a distance sensor to take a photo of the inserted waste item, which is identified using the Google Vision API. Once the item is identified, our classifier determines whether the item belongs in the recycling, compost, or garbage section. The inbuilt hardware drops the waste item into the correct section. ## Challenges We ran into Combining IoT and AI was tough. We had never used Firebase before. Separation of concerns was a difficult task. Deciding the mechanics and design of the bin (we are not mechanical engineers :D). ## Accomplishments that We're proud of Combining the entire project. Staying up for 24+ hours. ## What We learned Different technologies: Firebase, IoT, Google Cloud Platform, Hardware design, Decision making, React, Prototyping ## What's next for smartBin Improving the efficiency. Building it out of better materials (3D printing, stronger servos). Improving mechanical movement. Adding touch screen support to modify various parameters of the device.
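A minimal sketch of smartBin's classification step, assuming the Google Cloud Vision Python client (the label-to-bin keyword lists below are our own illustrative guesses, not the team's trained classifier):

```python
from google.cloud import vision

# Very small illustrative keyword map; a real deployment would use a trained
# classifier and locale-specific rules.
BIN_KEYWORDS = {
    "Recycle": {"bottle", "plastic", "can", "paper", "cardboard", "glass"},
    "Organic": {"food", "fruit", "vegetable", "banana", "apple", "peel"},
}

def classify_waste(image_path: str) -> str:
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    labels = client.label_detection(image=image).label_annotations
    for label in labels:                      # labels come back sorted by confidence
        word = label.description.lower()
        for bin_name, keywords in BIN_KEYWORDS.items():
            if any(k in word for k in keywords):
                return bin_name
    return "Garbage"                          # default when nothing matches

if __name__ == "__main__":
    print(classify_waste("item.jpg"))
```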
**Come check out our fun Demo near the Google Cloud Booth in the West Atrium!! Could you use a physiotherapy exercise?** ## The problem A specific application of physiotherapy is that joint movement may get limited through muscle atrophy, surgery, accident, stroke or other causes. Reportedly, up to 70% of patients give up physiotherapy too early — often because they cannot see the progress. Automated tracking of ROM via a mobile app could help patients reach their physiotherapy goals. Insurance studies have shown that 70% of people quit physiotherapy sessions once the pain disappears and they regain their mobility. The reasons are multiple, and we can mention a few of them: cost of treatment, the feeling that they have recovered, no more time to dedicate to recovery, and loss of motivation. The worst part is that half of them see the injury reappear within 2-3 years. Current pose-tracking technology is NOT real-time and automatic, requiring physiotherapists on hand and **expensive** tracking devices. Although these work well, there is HUGE room for improvement to develop a cheap and scalable solution. Additionally, many seniors are unable to comprehend current solutions and are unable to adapt to current in-home technology, let alone the kinds of tech that require hours of professional setup and guidance, as well as expensive equipment. [![IMAGE ALT TEXT HERE](https://res.cloudinary.com/devpost/image/fetch/s--GBtdEkw5--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://img.youtube.com/vi/PrbmBMehYx0/0.jpg)](http://www.youtube.com/watch?feature=player_embedded&v=PrbmBMehYx0) ## Our Solution! * Our solution **only requires a device with a working internet connection!!** We aim to revolutionize the physiotherapy industry by allowing for extensive scaling and efficiency of physiotherapy clinics and businesses. We understand that in many areas, the therapist to patient ratio may be too high to be profitable, reducing quality and range of service for everyone, so an app to do this remotely is revolutionary. We collect real-time 3D position data of the patient's body while they do exercises, using a machine learning model implemented directly in the browser. The data is first analyzed within the app and then provided to the physiotherapist, who can adjust exercises and analyze it further. The app also asks the patient for subjective feedback on a pain scale. This makes physiotherapy exercise feedback accessible **WORLDWIDE** to individuals who are remote from their therapist. ## Inspiration * The growing need for accessible physiotherapy among seniors, stroke patients, and individuals in third-world countries without access to therapists but with a stable internet connection * The room for AI and ML innovation within the physiotherapy market for scaling and growth ## How I built it * Firebase hosting * Google cloud services * React front-end * Tensorflow PoseNet ML model for computer vision * Several algorithms to analyze 3D pose data.
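As a toy illustration of the kind of analysis applied to the pose stream (entirely our own sketch, independent of Physio-Space's algorithms), counting exercise repetitions from a time series of joint angles can be done with a simple threshold state machine:

```python
def count_reps(angles, down_deg=100.0, up_deg=160.0):
    """Count reps from a sequence of knee angles (degrees).

    A rep is one full cycle: the angle drops below `down_deg` (bent)
    and then rises back above `up_deg` (extended). Thresholds are
    illustrative and would be tuned per exercise.
    """
    reps, bent = 0, False
    for angle in angles:
        if not bent and angle < down_deg:
            bent = True                 # entered the bottom of the movement
        elif bent and angle > up_deg:
            bent = False                # completed the extension
            reps += 1
    return reps

# Two squat-like cycles sampled from a (hypothetical) pose stream:
stream = [170, 150, 120, 95, 90, 110, 150, 172, 168, 130, 98, 92, 125, 165, 171]
print(count_reps(stream))  # -> 2
```

The same angle series also gives the range of motion (max minus min) that the therapist sees in the report.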
## Challenges I ran into * Testing in React Native * Getting accurate angle data * Setting up an accurate timer * Setting up the ML model to work with the camera using React ## Accomplishments that I'm proud of * Getting real-time 3D position data * Supporting multiple exercises * Collection of objective quantitative as well as qualitative subjective data from the patient for the therapist * Increasing the usability for senior patients by moving data analysis onto the therapist's side * **finishing this within 48 hours!!!!** We did NOT think we could do it, but we came up with a working MVP!!! ## What I learned * How to implement Tensorflow models in React * Creating reusable components and styling in React * Creating algorithms to analyze 3D space ## What's next for Physio-Space * Implementing the sharing of the collected 3D position data with the therapist * Adding a dashboard onto the therapist's side
winning
## Inspiration One of the hardest parts of selling your stuff online – be it Kijiji, eBay, Facebook Marketplace, or otherwise – is figuring out what a reasonable price is. Too high and nobody's interested, too low and... well, nobody likes losing money! But what if we could solve this problem? ## What it does Our tool aggregates hundreds of eBay listings for any given query and gives you an estimate of what you should list your item for, based either on the title of your listing or an image of the object! ## How we built it For the image recognition component, we used [Google's excellent Vision AI API](https://cloud.google.com/vision), which classifies what is in the image you enter into the app. To get our statistics from eBay, we currently scrape eBay with a Python script, extracting the titles and prices for submitted search terms – we would have liked to use eBay's API, but access to it is delayed by one business day, which we obviously did not have! ## Challenges we ran into At first, we wanted to use a fine-tuned BERT (Bidirectional Encoder Representations from Transformers) model trained for regression. We saw some [excellent research](https://arxiv.org/abs/2107.09055) doing this for very similar use cases, and figured it would be a good path to take. Quickly downloading a [dataset of about 20k eBay listings](https://data.world/promptcloud/ebay-product-dataset) and their prices, we set up a Google Colab to train the model and see what we could do. Problems quickly arose, not only from a lack of familiarity with transformers but also due to the tight time constraints of Hack the North. It became clear that doing a proper round of training and getting the model where we wanted would take days, so we set about coming up with a different approach. As mentioned before, it turned out that the eBay API was a non-starter, but BeautifulSoup came to the rescue and enabled us to scrape eBay's search results – we figured that, for a small hackathon project, this would be okay. ## Accomplishments that we're proud of Picking up React Native in such a short amount of time and making something actually functional – and really rather good-looking – was awesome! This is the first time either of us had built a mobile app, and having that achievement under our belts feels great. Tackling the variety of roadblocks and problems we encountered along the way was as challenging as it was rewarding, and we're proud to have gotten our project to the point where it stands now. ## What we learned We learned *so much* over the course of just a few days. We only had passing knowledge of how to use transformer models, and didn't even know it was possible to use BERT for regression; figuring out how to get that working, despite it proving nonviable, was incredibly valuable as a learning experience. Learning how to build a React Native mobile app from scratch was also entirely novel, and gave us exposure to one of the world's most popular web development frameworks. Crawling around Google Cloud Platform and seeing what we could use to make our project as awesome as it could be taught us about all the nuances of setting up OAuth2, using Google Cloud Functions, and building an entire project using a platform neither of us had really looked at in-depth before. Overall, we did a ton of learning in just a few days, and it was all a blast!
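For a sense of how a price estimate like this could be computed, here is a hedged sketch with requests and BeautifulSoup (the URL format and CSS selectors are assumptions that would need to be checked against eBay's current markup, and respectful rate limits apply):

```python
import re
import statistics

import requests
from bs4 import BeautifulSoup

def estimate_price(query: str) -> float:
    """Return a median listed price for a search term. Selectors are illustrative."""
    url = "https://www.ebay.com/sch/i.html"
    resp = requests.get(url, params={"_nkw": query}, timeout=10,
                        headers={"User-Agent": "price-estimator-demo"})
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    prices = []
    for tag in soup.select("span.s-item__price"):   # assumed class name
        match = re.search(r"[\d,]+\.\d{2}", tag.get_text())
        if match:
            prices.append(float(match.group().replace(",", "")))
    if not prices:
        raise ValueError("No prices found - selectors likely need updating.")
    return statistics.median(prices)                # the median resists outlier listings

if __name__ == "__main__":
    print(estimate_price("mechanical keyboard"))
```

Using the median rather than the mean keeps one absurdly priced listing from skewing the suggestion.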
## What's next for We'd like to circle back to using BERT for appraisals when there is more time for training, fiddling, and testing – it seemed like a promising path, and would also help us make the project more self-reliant, reducing the need for external APIs. It would also be great to publish the app on the Apple App Store or Google Play Store sometime soon, though a lot more polish will be needed before it's ready for the prime-time.
## Inspiration We wanted to find ways to make e-commerce more convenient, as well as helping e-commerce merchants gain customers. After a bit of research, we discovered that one of the most important factors that consumers value is sustainability. According to FlowBox, 65% of consumers said that they would purchase products from companies who promote sustainability. In addition, the fastest growing e-commerce platforms endorse sustainability. Therefore, we wanted to create a method that allows consumers access to information regarding a company's sustainability policies. ## What it does Our project is a browser extension that allows users to browse e-commerce websites while being able to check the product manufacturers' sustainability via ratings out of 5 stars. ## How we built it We started building the HTML as the skeleton of the browser extension. We then proceeded with JavaScript to connect the extension with ChatGPT. Then, we asked ChatGPT a question regarding the general consensus on a company's sustainability. We run this response through sentiment analysis, which returns a ratio of positive and negative sentiment with relevance towards sustainability. This information is then converted into a value out of 5 stars, which is displayed on the extension homepage. We finalized the project with CSS, making the extension look cleaner and more user-friendly. ## Challenges we ran into We had issues with running servers, as we struggled with the input and output of information. We also ran into trouble with setting up the Natural Language Processing models from TensorFlow. There were multiple models trained using different datasets and methods; despite the fact that they all use TensorFlow, they were developed at different times, which means different versions of TensorFlow were used. This made the debugging process a lot more extensive and made the implementation take a lot more time. ## Accomplishments that we're proud of We are proud that we were able to create a browser extension that makes the lives of e-commerce developers and shoppers more convenient. We are also proud of making a visually appealing extension that is accessible to users. Furthermore, we are proud of implementing modern technology such as ChatGPT within our approach to solving the challenge. ## What we learned We learned how to create a browser extension from scratch and implement the OpenAI API to connect our requests to ChatGPT. We also learned how to use Natural Language Processing to detect how positive or negative the response we received from ChatGPT was. Finally, we learned how to convert the polarity we received into a rating that is easy to read and accessible to users. ## What's next for E-commerce Sustainability Calculator In the future, we would like to implement a feature that rates the reliability of our sustainability rating. Since there are many smaller and lesser-known companies on e-commerce websites, there would not be much information about their sustainability policies, so their sustainability rating would be a lot less accurate compared to a more prominent company. We would implement this by using the number of Google searches for a specific company as a metric of its relevance and then basing the reliability score on a scale derived from that number.
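The extension's polarity-to-stars conversion could look roughly like this (a minimal Python sketch under our own assumptions; the real extension does this in JavaScript and its sentiment model differs):

```python
def polarity_to_stars(positive: float, negative: float) -> float:
    """Map a positive/negative sentiment split to a 0-5 star rating.

    `positive` and `negative` are non-negative scores (e.g. sentence counts
    or model confidences) for sustainability-related sentiment.
    """
    total = positive + negative
    if total == 0:
        return 2.5                      # no signal: sit in the middle
    ratio = positive / total            # 0.0 (all negative) .. 1.0 (all positive)
    return round(ratio * 5, 1)

print(polarity_to_stars(7, 3))   # -> 3.5 stars
print(polarity_to_stars(1, 9))   # -> 0.5 stars
```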
## Inspiration Since this was the first hackathon for most of our group, we wanted to work on a project where we could learn something new while sticking to familiar territory. Thus we settled on programming a Discord bot, something all of us have extensive experience using, that works with UiPath, a tool equally as intriguing as it is foreign to us. We wanted to create an application that allows us to track the prices and other related information of tech products in order to streamline the buying process and enable the user to get the best deals. We decided to program a bot that utilizes user input, web automation, and web-scraping to generate information on various items, focusing on computer components. ## What it does Once online, our PriceTracker bot runs under two main commands: !add and !price. Using these two commands, a few external CSV files, and UiPath, the bot stores items input by the user and returns related information found via UiPath's web-scraping features (a minimal sketch of this command setup appears at the end of this write-up). A concise summary of the product’s price, stock, and sale discount is displayed to the user through the Discord bot. ## How we built it We programmed the Discord bot using the comprehensive discord.py API. Using its thorough documentation and a handful of tutorials online, we quickly learned how to initialize a bot using Discord's Developer Portal and create commands that would work with specified text channels. To scrape web pages, in our case the Canada Computers website, we used a UiPath sequence along with the aforementioned CSV file, which contained input retrieved from the bot's "!add" command. In the UiPath process, each product is searched on the Canada Computers website and then, through data scraping, the most relevant results from the search and all related information are processed into a CSV file. This CSV file is then parsed to create a concise description, which is returned in Discord whenever the bot's "!prices" command is called. ## Challenges we ran into The most challenging aspect of our project was figuring out how to use UiPath. Since Python was such a large part of programming the Discord bot, our experience with the language helped exponentially. The same could be said about working with text and CSV files. However, because automation was a topic none of us had much knowledge of, our first encounter with it was naturally rough. Another big problem with UiPath was learning how to use variables, as we wanted to generalize the process so that it would work for any product inputted. Eventually, with enough perseverance, we were able to incorporate UiPath into our project exactly the way we wanted to. ## Accomplishments that we're proud of Learning the ins and outs of automation alone was a strenuous task. Being able to incorporate it into a functional program is even more difficult, but incredibly satisfying as well. Albeit small in scale, this introduction to automation serves as a good stepping stone for further research on the topic of automation and its capabilities. ## What we learned Although we stuck close to our roots by relying on Python for programming the Discord bot, we learned a ton of new things about how these bots are initialized, the various attributes and roles they can have, and how we can use IDEs like PyCharm in combination with larger platforms like Discord. Additionally, we learned a great deal about automation and how it functions through UiPath, which absolutely fascinated us the first time we saw it in action.
As this was the first Hackathon for most of us, we also got a glimpse into what we have been missing out on and how beneficial these competitions can be. Getting the extra push to start working on side-projects and indulging in solo research was greatly appreciated. ## What's next for Tech4U We went into this project with a plethora of different ideas, and although we were not able to incorporate all of them, we did finish with something we were proud of. Some other ideas we wanted to integrate include: scraping multiple different websites, formatting output differently on Discord, automating the act of purchasing an item, taking input and giving output under the same command, and more.
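A minimal sketch of the two-command bot described above, using discord.py (the token handling, CSV filename, and the price-lookup stub are our own placeholders; the real project hands the lookup off to UiPath):

```python
import csv
import os

import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True          # needed to read command text
bot = commands.Bot(command_prefix="!", intents=intents)

WATCHLIST = "items.csv"                 # assumed file shared with the UiPath process

@bot.command(name="add")
async def add_item(ctx, *, item: str):
    """!add <product name> - append a product to the tracked list."""
    with open(WATCHLIST, "a", newline="") as f:
        csv.writer(f).writerow([item])
    await ctx.send(f"Added **{item}** to the watchlist.")

@bot.command(name="price")
async def price(ctx):
    """!price - report what the scraper found for each tracked item."""
    try:
        with open(WATCHLIST) as f:
            items = [row[0] for row in csv.reader(f) if row]
    except FileNotFoundError:
        items = []
    # Placeholder: the real project fills these details in via UiPath scraping.
    lines = [f"{item}: price/stock data pending scrape" for item in items]
    await ctx.send("\n".join(lines) or "Watchlist is empty.")

bot.run(os.environ["DISCORD_TOKEN"])    # assumed environment variable
```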
losing
## Inspiration Predicting the stock market is probably one of the hardest things to do in the financial world. There are so many factors that come into play when considering the rise and fall of positions, and many differing opinions on which of these factors matters the most, from technical indicators to balance sheets. It is hard to dispute, however, that one factor has gained heavy influence in the past few decades: media. From television to the internet, modern-day media has clearly had a large impact on financial instruments and the economy in general. Taking this as inspiration, we decided to create an app that predicts future stock prices using sentiment analysis of relevant news articles. ## What it does Mula is an app that provides an intuitive interface for users to manage their stock portfolio and predict changes that traditional methods may not catch in time. It provides a list of stocks that the user currently owns and a platform for trading them, as well as banking to store cash. Upon clicking on a specific stock, Mula provides a more detailed view of the stock’s historical prices, along with predictions of stock price changes from our algorithm coinciding with emotionally charged news. It also serves to provide a comparison between our predictions and actual price changes. Users can also put stocks on a watchlist, where they can get immediate notifications when a financial news article with significant sentiment regarding a specific company is published, as well as a recommended course of action. ## How we built it The stack consists of a Flutter front end, which allows the app to be multi-platform, and a Python Flask backend connected to a variety of APIs. The Capital One Hackathon API, aka Nessie, is used to manage a bank account in which users of the app store their currently non-invested money. The IEX Cloud API is used to retrieve historical and current pricing data on all individual publicly traded stocks, as well as news articles written about those stocks. The Amazon Comprehend API uses machine learning to analyze those news articles and determine the emotional sentiment behind them, i.e. positive or negative. We then used a weighted mathematical model to combine that sentiment data with Growth, Integrated, and Financial Returns scores from the Goldman Sachs Marquee API to create a prediction score for how the price of an individual stock will move in the coming days. To ensure the prediction is valid, we used the Marquee API and IEX Cloud API’s historical data and news to backtest this algorithm. We searched for news stories in the past with a significant measured sentiment, and computed a prediction of future price movement from that and the historical Growth, Integrated, and Financial Return scores. We can then look a few days ahead and see how effective this prediction was, and use that to adjust the weights of the prediction model. ## Challenges we ran into We initially wanted to build our app using React Native; however, we ran into problems trying to set up the design language we wanted to work with. We decided to switch to Flutter, which none of us had worked with before. It took some time before we got the hang of it, but ultimately we saw it as the better choice, as it gave us fewer problems. We were also working with multiple APIs, and making them compatible with each other in the backend proved to be quite a laborious process.
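A toy version of the weighted scoring described above (the weights and score ranges are invented for illustration; Mula's actual model and backtesting loop are more involved):

```python
# Illustrative weights for combining news sentiment with the factor scores.
WEIGHTS = {"sentiment": 0.4, "growth": 0.25, "integrated": 0.15, "financial_returns": 0.2}

def prediction_score(signals: dict) -> float:
    """Weighted sum of normalized signals, each expected in [-1, 1].

    A positive output suggests upward price pressure over the next few days,
    a negative one suggests downward pressure.
    """
    return sum(WEIGHTS[name] * signals[name] for name in WEIGHTS)

example = {
    "sentiment": 0.6,          # strongly positive news coverage
    "growth": 0.2,
    "integrated": -0.1,
    "financial_returns": 0.3,
}
print(round(prediction_score(example), 3))   # -> 0.335
```

Backtesting then amounts to computing this score on historical news dates and nudging the weights toward whatever combination best matched the actual moves a few days later.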
## Accomplishments that we're proud of We are very proud that we were able to successfully find a correlation between news sentiment and stock movements. This is a major finding and could set the foundation for a rise in novel trading algorithms. Additionally, we are happy that we were able to pick up Flutter relatively quickly. Now that we know how to set up apps with it, we may use it again in the future. ## What we learned This was our first time doing a financial project for a hackathon, so we learned a lot about bank account management, risk management, technical analysis, and fundamental analysis. We also learned how to combine multiple APIs together in a cohesive, valuable way. Additionally, none of us had really done front-end work before, so we were glad that we were able to learn how to use Flutter. ## What's next for Mula. We hope to further improve our prediction modeling algorithms and sentiment analysis with more complex neural networks/deep learning. We would also like to further iterate on the UI/UX of the app to streamline the stock analysis and prediction process.
![1](https://user-images.githubusercontent.com/50319868/107144838-d2f8a780-690b-11eb-9055-5ba51c2fa89f.png) ![32323](https://user-images.githubusercontent.com/50319868/107149268-ac476a80-6925-11eb-8d28-29f23a352e7f.png) ## Inspiration We wanted to truly create a well-rounded platform for learning investing, where transparency and collaboration are of utmost importance. With the growing influence of social media on the stock market, we wanted to create a tool that auto-generates a list of recommended stocks based on their popularity. This feature is called Stock-R (coz it 'Stalks' the social media....get it?) ## What it does This is an all-in-one platform where a user can find all the necessary stock-market-related resources (websites, videos, articles, podcasts, simulators, etc.) under a single roof. New investors can also learn from other, more experienced investors on the platform through the use of the chatrooms or public stories. The Stock-R feature uses Natural Language Processing and sentiment analysis to generate a list of the most popular and most talked-about stocks on Twitter and Reddit. ## How We built it The frontend is created using React. NodeJS and Express were used for the server, and the database was hosted in the cloud using MongoDB Atlas. We used various Google Cloud APIs such as Google Authentication, Cloud Natural Language for the sentiment analysis, and App Engine for deployment. For the stock sentiment analysis, we used the Reddit and Twitter APIs to parse their respective social media platforms for instances where a stock/company was mentioned; each instance was given a sentiment value via the IBM Watson Tone Analyzer. For Reddit, popular subreddits such as r/wallstreetbets and r/pennystocks were parsed for the top 100 submissions. Each submission's title was compared to a list of 3600 stock tickers for a mention, and if one was found, the submission's comment section was passed through the Tone Analyzer. Each comment was assigned a sentiment rating, the goal being to garner an average sentiment for the parent stock on a given day. ## Challenges We ran into In terms of the chat application interface, the integration between this application and the main dashboard hub was a major issue, as it was necessary to carry forward the user's credentials without having them re-login to their account. This issue was resolved by producing a new chat application which didn't require credentials, just a username for the chatroom. We deployed this chat application independently of the main platform with a microservices architecture. On the back-end sentiment analysis, we ran into the issue of efficiently storing the comments parsed for each stock as the program iterated over hundreds of posts, commonly collecting further data on an already parsed stock. This issue was resolved by locally generating an average sentiment for each post and assigning that to a dictionary key-value pair. If a sentiment score was generated for multiple posts, the averages were added to the existing value.
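The per-ticker aggregation described in Stockhub's challenges could be sketched like this (plain Python with a placeholder sentiment function; the subreddit/ticker idea comes from the description above, everything else is assumed):

```python
from collections import defaultdict

TICKERS = {"GME", "AMC", "TSLA"}          # stand-in for the full ~3600-ticker list

def comment_sentiment(text: str) -> float:
    """Placeholder for the Tone Analyzer call; returns a score in [-1, 1]."""
    return 0.0

def aggregate(submissions):
    """submissions: iterable of (title, [comment, ...]) pairs from a subreddit."""
    totals = defaultdict(lambda: [0.0, 0])        # ticker -> [sum of post averages, post count]
    for title, comments in submissions:
        mentioned = {word.strip("$?.,!:;").upper() for word in title.split()} & TICKERS
        if not mentioned or not comments:
            continue
        post_avg = sum(comment_sentiment(c) for c in comments) / len(comments)
        for ticker in mentioned:
            totals[ticker][0] += post_avg
            totals[ticker][1] += 1
    return {t: s / n for t, (s, n) in totals.items()}

print(aggregate([("GME to the moon", ["great", "love it"]),
                 ("Thoughts on $TSLA?", ["meh"])]))
```

Keeping only a running sum and count per ticker is what lets the parser handle hundreds of posts without storing every comment.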
## Accomplishments that we're proud of ## What we learned A few of the components that we were able to learn and touch base on were: * REST APIs * Reddit API * React * NodeJS * Google Cloud * IBM Watson Tone Analyzer * Web Sockets using Socket.io * Google App Engine ## What's next for Stockhub ## Registered Domains: * stockhub.online * stockitup.online * REST-api-inpeace.tech * letslearntogether.online ## Beginner Hackers This was the first Hackathon for 3/4 hackers on our team. ## Demo The app is fully functional and deployed using the custom domain. Please feel free to try it out and let us know if you have any questions. <http://www.stockhub.online/>
## Inspiration There has never been a more relevant time in political history for technology to shape our discourse. Clara AI can help you understand what you're reading, giving you political classification and sentiment analysis so you understand the bias in your news. ## What it does Clara searches for news on an inputted subject and classifies its political leaning and sentiment. She can accept voice commands through our web application, searching for political news on a given topic, and if further prompted, can give political and sentiment analysis. With 88% accuracy on our test set, Clara is highly reliable at predicting political leaning. She was trained using a random forest classifier and many hours of manually classified articles. Clara gives sentiment scores with the help of the IBM Watson and Google sentiment analysis APIs. ## How we built it We built a fundamental technology using a plethora of Google Cloud services on the backend, trained a classifier to identify political leanings, and then created multiple channels for users to interact with the insight generated by our algorithms. For our backend, we used Flask + Google Firebase. Within Flask, we used the Google Search Engine API, Google Web Search API, Google Vision API, and Sklearn to conduct analysis on the news source inputted by the user. For our web app we used React + the Google Cloud Speech Recognition API (the app responds to voice commands). We also deployed a Facebook Messenger bot, as many of our users find their news on Facebook. ## Challenges we ran into Lack of WiFi was the biggest challenge, along with putting together all of our APIs, training our ML algorithm, and deciding on a platform for interaction. ## Accomplishments that we're proud of We've created something really meaningful that can actually classify news. We're proud of the work we put in and our persistence through many caffeinated hours. We can't wait to show our project to others who are interested in learning more about their news! ## What we learned How to integrate Google APIs into our Flask backend, and how to work with speech capabilities. ## What's next for Clara AI We want to improve upon the application by properly distributing it to the right channels. One of our team members is part of a group of students at UC Berkeley that builds these types of apps for fun, including BotCheck.Me and Newsbot. We plan to continue this work with them.
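For intuition, a classifier of Clara's shape can be put together in a few lines of scikit-learn (a generic sketch with made-up training examples, not Clara's actual features or data):

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

# Tiny invented corpus; the real model was trained on many manually labelled articles.
articles = [
    "Government expands public healthcare and social spending",
    "Senator calls for lower taxes and deregulation of industry",
    "New climate policy mandates emission cuts for corporations",
    "Op-ed praises free markets and smaller federal government",
]
labels = ["left", "right", "left", "right"]

model = make_pipeline(TfidfVectorizer(stop_words="english"),
                      RandomForestClassifier(n_estimators=200, random_state=0))
model.fit(articles, labels)

print(model.predict(["Editorial urges universal childcare funding"])[0])
```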
partial
FoodFriend is here. Perfectly integrated with Messenger, Effy is your all-in-one recipe-searching AI assistant. Finding and cooking recipes has never been so seamless. ## Making Artificial Intelligence Practical FoodFriend uses wit.ai to power Effy's natural language processing and recognition. This means Effy is trained specifically to understand your culinary requests. Integrating with Facebook Messenger means that Effy is always accessible on every device, hassle free! Effy has no trouble understanding sentences like "Hey Eff, I am hoping to make some spicy Mexican but low fat and gluten free, any suggestions?". FoodFriend uses 9 independent parameters (such as diet, flavour, cuisine, ingredients) in its search, ensuring you can always find recipes suited to your needs. ## Discover Effy and FoodFriend Now Our app is available on Messenger now! Send a message to FoodFriend to get started!
## Inspiration Food can be tricky, especially for college students. There are so many options for your favorite foods, like restaurants, frozen foods, grocery stores, and even cooking yourself. We want to be able to get the foods we love, with the best experience. ## What it does Shows you and compares all of the options for a specific dish you want to eat. It takes data from restaurants, grocery stores, and more, and aggregates everything into an easy-to-understand list, sorted by a relevance algorithm to give you the best possible experience. ## How we built it The frontend is HTML, CSS, and JS. For the backend, we used Python, NodeJS, Beautiful Soup for web scraping, and several APIs for getting data. We also used Google Firebase to store the data in a database. ## Challenges we ran into It was difficult trying to decide the scope and what we could realistically make in such a short time frame. ## Accomplishments that we're proud of We're proud of the overall smooth visuals, and the functionality of showing you many different options for foods. ## What we learned We learned about web design, along with data collection with APIs and web scraping. ## What's next for Food Master We are hoping to implement the ChatGPT API to interpret restaurant menus and automate data collection.
# Healthy.ly An Android app that can show whether a food contains ingredients you are allergic to, just by snapping a picture of it. It can likewise show its health benefits, ingredients, and recipes. ## Inspiration We are a group of students from India. The food provided here is completely new to us, and we don't know the ingredients. One of our teammates is dangerously allergic to seafood, and he has to take extra precautions while eating at new places. So we wanted to make an app that can detect whether a given food would trigger his allergies, using computer vision. We also got inspiration from the HBO show **Silicon Valley**, where a guy tries to make a **Shazam for Food** app. Over time our idea grew bigger, and we added nutritional values and recipes to it. ## What it does This is an Android app that uses computer vision to identify the food item in a picture and shows you whether you are allergic to it by comparing its ingredients to the restrictions you provided earlier. It can also give the nutritional values and recipes for that food item. ## How we built it We developed a deep learning model using **Tensorflow** that can classify 101 different food items. We trained it using **Google Compute Engine** with 2 vCPUs, 7.5 GB RAM, and 2 Tesla K80 GPUs. This model can classify the 101 food items with over 70% accuracy. From the predicted food item, we were able to get its ingredients and recipes from an API from rapidAPI called "Recipe Puppy". We cross-reference the ingredients with the items that the user is allergic to and tell them whether it's safe to consume (a minimal sketch of this flow appears at the end of this write-up). We made a native **Android application** that lets you take an image and uploads it to **Google Storage**. The Python backend runs on **Google App Engine**. The web app takes the image from Google Storage and, using **TensorFlow Serving**, finds the class of the given image (the food name). It uses the name to get the food's ingredients, nutritional values, and recipes, and returns these values to the Android app via **Firebase**. The Android app then takes these values and displays them to the user. Since most of the heavy lifting happens in the cloud, our app is very light (7 MB) and **computationally efficient**. It does not need a lot of resources to run; it can even run on a cheap, underpowered Android phone without crashing. ## Challenges we ran into 1. We had trouble converting our TensorFlow model to tflite (tflite\_converter could not convert a multi\_gpu\_model to tflite), so we ended up hosting it in the cloud, which made the app lighter and computationally efficient. 2. We are all new to using Google Cloud, so it took us a long time to even figure out the basic stuff. Thanks to the GCP team, we were able to get our app up and running. 3. We couldn't get Google App Engine to support TensorFlow (we could not get it working), so we hosted our web app on Google Compute Engine instead. 4. We did not have a UI/UX designer or a frontend developer on our team, so we had to learn basic frontend skills and design the app ourselves. 5. We could only get around 70% validation accuracy due to high computation needs and limited time. 6. We were using an API from rapidAPI, but since yesterday they stopped support for that API and it wasn't working, so we had to make our own database to run our app. 7. We couldn't use AutoML for vision classification, because our dataset was too large to be uploaded. ## What we learned Before coming to this hack, we had no idea about using cloud infrastructure like Google Cloud Platform.
In this hack, we learned a lot about using Google Cloud Platform and understand its benefits. We are pretty comfortable using it now. Since we didn't have a frontend developer we had to learn that to make our app. Making this project gave us a lot of exposure to **Deep Learning**, **Computer Vision**, **Android App development** and **Google Cloud Platform**. ## What's next for Healthy.ly 1. We are planning to integrate **Google Fit** API with this so that we can get a comparison between the number of calories consumed and the number of calories burnt to give better insight to the user. We couldn't do it now due to time constraints. 2. We are planning to integrate **Augmented Reality** with this app to make it predict in real-time and look better. 3. We have to improve the **User Interface** and **User Experience** of the app. 4. Spend more time training the model and **increase the accuracy**. 5. Increase the **number of labels** of the food items.
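A hedged sketch of the backend flow Healthy.ly describes: send the image to a TensorFlow Serving REST endpoint, look up the ingredients, and intersect them with the user's restrictions. The host, model name, class list, and ingredient lookup are placeholders, not the team's actual deployment, and the request format assumes a model exported to accept base64-encoded images:

```python
import base64

import requests

TF_SERVING_URL = "http://localhost:8501/v1/models/food101:predict"   # assumed endpoint
CLASS_NAMES = ["pad_thai", "shrimp_scampi", "pizza"]                  # truncated example list

def classify(image_bytes: bytes) -> str:
    payload = {"instances": [{"b64": base64.b64encode(image_bytes).decode()}]}
    scores = requests.post(TF_SERVING_URL, json=payload, timeout=10).json()["predictions"][0]
    return CLASS_NAMES[max(range(len(scores)), key=scores.__getitem__)]

def allergy_check(food: str, user_allergens: set) -> set:
    # Placeholder lookup; the real app queried a recipe/ingredient API.
    ingredients = {"shrimp_scampi": {"shrimp", "butter", "garlic"},
                   "pad_thai": {"peanut", "egg", "rice noodle"}}.get(food, set())
    return ingredients & user_allergens

if __name__ == "__main__":
    with open("meal.jpg", "rb") as f:
        food = classify(f.read())
    hits = allergy_check(food, {"shrimp", "peanut"})
    print(f"{food}: {'NOT safe - contains ' + ', '.join(hits) if hits else 'no flagged allergens'}")
```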
losing
## Inspiration 🚗 CARZ
## Inspiration Camera tracking is an efficient way to monitor wildlife populations, and automating this process will decrease the amount of time and monetary investment required. Infrared cameras allow for continuous detection at nighttime without disrupting wildlife. ## What it does / How we built it * A Raspberry Pi captures and uploads infrared images to AWS S3 buckets * The Rekognition deep-learning image analysis service, driven by Python, examines the images for animal/human entities * Animal counts are uploaded to MongoDB * A business/user-friendly AngularJS/NodeJS website presents the data in a comprehensive way ## Challenges we ran into As AWS was new to us, it took a while to set it up and configure it for our project. ## Accomplishments that we're proud of Learning to work with a broad tech stack and cutting-edge technologies, and integrating all the pieces into one final product. ## What we learned We learned a lot about AWS, cloud computing, and teamwork. ## What's next for IR Ranger Scaling the databases and tech stack for future growth.
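A minimal sketch of IR Ranger's capture-to-detection step with boto3 (the bucket name, key, and animal label set are assumptions; the Pi-side camera code and the MongoDB write are omitted):

```python
import boto3

BUCKET = "ir-ranger-captures"          # assumed bucket name
ANIMAL_LABELS = {"Deer", "Bear", "Coyote", "Bird", "Mammal"}

def upload_and_count(local_path: str, key: str) -> int:
    """Upload one infrared frame to S3 and count animal detections via Rekognition."""
    boto3.client("s3").upload_file(local_path, BUCKET, key)
    response = boto3.client("rekognition").detect_labels(
        Image={"S3Object": {"Bucket": BUCKET, "Name": key}},
        MaxLabels=20,
        MinConfidence=70,
    )
    hits = [label["Name"] for label in response["Labels"] if label["Name"] in ANIMAL_LABELS]
    # In the real pipeline this count would then be written to MongoDB.
    return len(hits)

if __name__ == "__main__":
    print(upload_and_count("frame_0142.jpg", "2024/06/frame_0142.jpg"))
```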
## Inspiration 🎞️ imy draws inspiration from the universal human desire to revisit and share cherished memories. We were particularly inspired by the simplicity and addictive nature of the iconic Flappy Bird game, as well as the nostalgic appeal of an 8-bit design style. Our goal was to blend the engaging mechanics of a beloved game with the warmth of reminiscing on the good old days. ## What it does 📷 imy is a gamified social media application that revitalizes the charm and nostalgia of the past. Users not only relive some of their favourite memories, but are also able to reconnect with old friends, reminded of the countless hours of fun that they've spent together. Upload photos in response to daily memory prompts like "A time you learned a new skill," or "Your first photo with a best friend," as well as prompts from different times/periods in your life. Posts can be liked by friends, with each like increasing the user's score. The app features a Flappy Bird-themed leaderboard, displaying leaders and encouraging friendly competition. Additionally, users can customize their profiles with really cute avatars! ## How we built it 💻 We built imy as a native mobile application with React Native and Expo Go. For the backend, we used Node.js, Express, and MongoDB. We chose this stack because of our team's relative familiarity with JavaScript and frontend development, and our team's interest in learning some backend and mobile development. This stack allowed us to learn both skills without having to learn new languages, and also deploy a cool app that we could run natively on our phones! ## Challenges we ran into ⏳ One of the biggest challenges we ran into was learning the intricacies of React Native and backend development; we had originally thought that React Native wouldn't be too difficult to pick up, because of our previous experience with React. Although the languages are similar, we ran into some bugs that took us quite some time to resolve! Bugs, bugs, bugs were the pests of the weekend, as we spent hours trying to figure out why images weren't uploading to the backend, or why profile icons kept returning NaN. It didn't help that Railway (where we hosted our backend) had a site-wide crash during that debugging session :( Additionally, we wanted to use Auth0 for our authentication, but we found out after much too long that it did not work with Expo Go. Stepping out of our comfort zones forced us to think on our feet and be constantly learning over the last 36 hours, but it was definitely worth it in the end! ## Accomplishments that we're proud of 📽️ As a team comprised of all beginner hackers, we're super proud that we were able to come up with a cool idea and push ourselves to complete it within 36 hours! UofTHacks was two of our team members' first hackathon, and our most experienced member has only been to 2 hackathons herself. In terms of the final product, we're really happy with the retro-style design and our creative, nostalgia-themed idea. We worked really hard this weekend, and we learned so much! ## What we learned 💿 We learned how to deploy a backend, make a native app, and merge conflicts (there were a *lot* of them, and not just in git!). We learned what we're capable of doing in just 36 hours, and had lots of fun in the process. ## What's next for imy 📱 We love the idea behind imy and it's definitely a project that we'll continue working on post UoftHacks! 
There's lots of code refactoring and little features that we'd love to add, and we think the cute frontend design has a lot of promise. We're extremely excited for the future of imy, and we hope to make it even better soon.
losing
## Inspiration Having previously volunteered and worked with children with cerebral palsy, we were struck by the monotony and inaccessibility of traditional physiotherapy. We came up with a cheaper, more portable, and more engaging way to deliver treatment by creating virtual reality games geared towards 12-15 year olds. We targeted this age group because puberty is a crucial period for retention of plasticity in a child's limbs. We implemented interactive games in VR using the Oculus Rift and Leap Motion controllers. ## What it does We designed games that target specific hand/elbow/shoulder gestures and used a Leap Motion controller to track the gestures. Our system improves the motor skills, cognitive abilities, emotional growth, and social skills of children affected by cerebral palsy. ## How we built it Our games make use of Leap Motion's hand-tracking technology and the Oculus' immersive system to deliver engaging, exciting physiotherapy sessions that patients will look forward to playing. These games were created using Unity and C#, and can be played using an Oculus Rift with a Leap Motion controller mounted on top. We also used an Alienware computer with a dedicated graphics card to run the Oculus. ## Challenges we ran into The biggest challenge we ran into was getting the Oculus running. None of our computers had the ports and the capabilities needed to run the Oculus because it needed so much power. Thankfully we were able to acquire an appropriate laptop through MLH, but the Alienware computer we got was locked out of Windows. We then spent the first 6 hours re-installing Windows and repairing the laptop, which was a challenge. We also faced difficulties programming the interactions between the hands and the objects in the games because it was our first time creating a VR game using Unity, Leap Motion controls, and the Oculus Rift. ## Accomplishments that we're proud of We were proud of our end result because it was our first time creating a VR game with an Oculus Rift, and we were amazed by the user experience we were able to provide. Our games were really fun to play! It was intensely gratifying to see our games working, and to know that they would be able to help others! ## What we learned This project gave us the opportunity to educate ourselves on the realities of not being able-bodied. We developed an appreciation for the struggles people living with cerebral palsy face, and also learned a lot of Unity. ## What's next for Alternative Physical Treatment We will develop more advanced games involving a greater combination of hand and elbow gestures, and hopefully get them tested in local rehabilitation hospitals. We also hope to integrate data recording and playback functions for treatment analysis. ## Business Model Canvas <https://mcgill-my.sharepoint.com/:b:/g/personal/ion_banaru_mail_mcgill_ca/EYvNcH-mRI1Eo9bQFMoVu5sB7iIn1o7RXM_SoTUFdsPEdw?e=SWf6PO>
# Inspiration When we ride a cycle we steer towards the direction we fall, and that's how we balance. The amazing part is that we (or our brain) compute a complex set of calculations to balance ourselves, and we do it naturally; with some practice it gets better. That ongoing calculation in our minds highly inspired us. At first we thought of creating the cycle, but later, after watching "Handle" from Boston Dynamics, we decided to create the self-balancing robot "Istable". One more inspiration was mathematically modelling systems in our control systems labs, along with learning how to tune a PID-controlled system. # What it does Istable is a self-balancing robot running on two wheels; it balances itself and tries to stay vertical all the time. Istable can move in all directions, rotate, and be a good companion for you. It gives you a really good idea of robot dynamics and kinematics and of how to mathematically model a system. It can also be used to teach college students how to tune the PIDs of a closed-loop system; PID is a widely used and very robust control scheme. Methods from Ziegler-Nichols and Cohen-Coon to Kappa-Tau can also be implemented to teach how to obtain the coefficients in real time, going from theoretical modelling to real-time implementation. We can also use it in hospitals, where medicines are kept at very low temperatures and absolute sterilization is needed to enter; these robots can carry out those operations there, and in the coming days we can see our future hospitals in a new form. # How we built it The mechanical body was built using scrap wood lying around. We first made a plan for how the body would look and gave it the shape of an "I", which is where it gets its name. We made the frame, placed the batteries (2x 3.7 V li-ion), the heaviest component, and attached two micro metal motors rated 600 RPM at 12 V. We cut two hollow circles from an old sandal to make tires (rubber gives a good grip, and trust me, it is really necessary), used a boost converter to stabilize and step up the voltage from 7.4 V to 12 V, a motor driver to drive the motors, an MPU6050 to measure the inclination angle, and a Bluetooth module to read the PID parameters from a mobile app to fine-tune the PID loop. The brain is a microcontroller (LGT8F328P). Next we made a free-body diagram, located the center of gravity, which needs to sit above the wheel axis, and adjusted the weight distribution accordingly. Then we made a simple mathematical model to represent the robot; it is used to find the transfer function that represents the system. Later we used that to calculate the impulse and step response of the robot, which is crucial for tuning the PID parameters if you take a mathematical approach, and we did that here: no hit and trial, only application of engineering. The microcontroller runs a discrete (z-domain) PID controller to balance Istable. # Challenges we ran into We had trouble balancing Istable at first (which is obvious ;) ), and we realized that was due to the placement of the gyro: we placed it at the top at first, then corrected that by placing it at the bottom, which naturally eliminated the tangential component of the angle and greatly improved stabilization. Next came fine-tuning the PID loop: we did get the initial values of the PID coefficients mathematically, but the fine-tuning took a huge amount of effort and was really challenging.
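The firmware itself isn't included here, but a rough sketch of the discrete PID loop described above, written in Python rather than the microcontroller's C and with made-up gains, might look like this:

```python
class DiscretePID:
    """Minimal discrete PID controller: u[k] = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""
    def __init__(self, kp, ki, kd, dt, out_limit):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_limit = out_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Clamp to the motor driver's range and apply simple anti-windup.
        if u > self.out_limit:
            u, self.integral = self.out_limit, self.integral - error * self.dt
        elif u < -self.out_limit:
            u, self.integral = -self.out_limit, self.integral - error * self.dt
        return u

# Hypothetical usage: hold the tilt angle at 0 degrees with a 100 Hz loop.
pid = DiscretePID(kp=28.0, ki=180.0, kd=1.2, dt=0.01, out_limit=255.0)
tilt_deg = 3.5  # would come from the MPU6050 in the real firmware
motor_command = pid.update(setpoint=0.0, measurement=tilt_deg)
print(motor_command)
```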
# Accomplishments we are proud of Firstly, just a simple change in the position of the gyro improved the stabilization, and that gave us high hopes; we were losing confidence before. The mathematical model, or transfer function, we found agreed with the real-time outputs, and we were happy about that. Last but not least, the yay moment was when we tuned the robot correctly and it balanced for more than 30 seconds. # What we learned We learned a lot, a lot. From kinematics, mathematical modelling, and control algorithms beyond PID - we learned about everything from adaptive control to numerical integration. We learned how to tune PID coefficients properly and mathematically, with no trial-and-error method. We learned about digital filtering to filter out noisy data, and about complementary filters to fuse the data from the accelerometer and gyroscope to find accurate angles. # What's next for Istable We will try to add an onboard display to change the coefficients directly on the robot, upgrade the algorithm so that it can auto-tune the coefficients itself, and add odometry for localized navigation. We also want to use it as an IoT device to serve real-time operation with real-time updates.
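As a rough illustration of the sensor-fusion step mentioned above, here is a minimal complementary-filter sketch in Python; the blend factor, loop rate, and the read_imu() helper are assumptions standing in for the MPU6050 driver:

```python
import math

ALPHA = 0.98  # assumed blend factor: trust the gyro short-term, the accelerometer long-term
DT = 0.01     # assumed 100 Hz update rate

def read_imu():
    """Hypothetical stand-in for the MPU6050 driver: returns (ax, ay, az) in g and the gyro rate in deg/s."""
    return 0.02, 0.01, 0.99, 1.5

angle = 0.0  # estimated tilt angle in degrees
for _ in range(1000):
    ax, ay, az, gyro_rate = read_imu()
    # The accelerometer gives an absolute but noisy tilt estimate.
    accel_angle = math.degrees(math.atan2(ay, az))
    # The gyro gives a smooth but drifting estimate when integrated; blend the two.
    angle = ALPHA * (angle + gyro_rate * DT) + (1 - ALPHA) * accel_angle

print(round(angle, 2))
```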
## Inspiration Stroke costs the United States an estimated $33 billion each year, reducing mobility in more than half of stroke survivors age 65 and over, or approximately 225,000 people a year. Stroke motor rehabilitation is a challenging process, both for patients and for their care providers. On the patient side, occupational and physical therapy often involves hours of training patients to perform functional movements. This process can be mundane and tedious for patients. On the physician’s side, medical care providers need quantitative data and metrics on joint motion while stroke patients perform rehabilitative tasks. To learn more about the stroke motor skill rehabilitation process and pertinent needs in the area, our team interviewed Dr. Kara Flavin, a physician and clinical assistant professor of Orthopaedic Surgery and Neurology & Neurological Sciences at the Stanford University School of Medicine, who helps stroke patients with recovery. We were inspired by her thoughts on the role of technology in the stroke recovery process to learn more about this area, and ultimately design our own technology to meet this need. ## What it does Our product, Kinesis, consists of an interconnected suite of three core technologies: an Arduino-based wearable device that measures the range of motion exhibited by the joints in users’ hands and transmits the data via Bluetooth; a corresponding virtual reality experience in which patients engage with a virtual environment with their hands, during which the wearable transmits range of motion data; and a data collection and visualization pipeline, running from Bluetooth to an Android application to MongoDB, that stores this information and provides it to health care professionals. ## How we built it Our glove uses variable-resistance flex sensors to measure joint flexion of the fingers. We built circuits powered by the Arduino microcontroller to generate range-of-motion data from the sensor output, and transmit the information via a Bluetooth module to an external data collector (our Android application). We sought a simple yet elegant design when mounting our hardware onto our glove. Next, we built an Android application to collect the data transmitted over Bluetooth by our wearable. The data is collected and sent to a remote server for storage using MongoDB and Node.js. Finally, the data is saved in .csv files, which are convenient for processing and would allow for accessible and descriptive visuals for medical professionals. Our virtual-reality experience was built in the Unity engine for Google Cardboard, deployable to any smartphone that supports Google Cardboard. ## Challenges Our project proved to be an exciting yet difficult journey. We quickly found that the various aspects of our project - the Arduino-based hardware, Android application, and VR with Unity/Google Cardboard - were a challenging application of the internet of things (IoT) and hard to integrate. Sending data via Bluetooth from the hardware (Arduino) to the Android app and parsing that data into useful graphs was an example of how we had to combine different technologies. Another major challenge was that none of our team members had prior Unity/VR-development/Google Cardboard experience, so we had to learn these frameworks from scratch at the hackathon. ## Accomplishments that we’re proud of We hope that our product will make motor rehabilitation a more engaging and immersive process for stroke patients while also providing insightful data and analytics for physicians.
We’re proud of learning new technologies to put our hack together, building a cost-effective end-to-end suite of technologies, and blending together software as well as hardware to make a product with incredible social impact. ## What we learned We had a very well-balanced team and were able to effectively utilize our diverse skill sets to create Kinesis. Through helping each other with coding and debugging, we familiarized ourselves with new ways of thinking. We also learned new technologies (VR/ Unity/Google Cardboard, MongoDB, Node.js), and at the same time learnt something new about the ones that we are familiar with (Android, hardware/Arduino). We learnt that the design process is crucial in bringing the right tools together to address social causes in an innovative way. ## What's next for Kinesis We would like to have patients use Kinesis and productize our work. As more patients use Kinesis, we hope to add more interactive virtual reality games and to use machine learning to derive better analytics from the increasing amount of data on motor rehabilitation patterns. We would also like to extend the applications of this integrated model to other body parts and health tracking issues.
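The glove's actual range-of-motion code runs on the Arduino, but as a rough illustration of the flex-sensor math, a voltage-divider reading might be mapped to a joint angle like this (the resistor values, calibration points, and ADC range below are made-up placeholders, not the glove's real calibration):

```python
# Assumed setup: flex sensor on the high side of a voltage divider, read by a 10-bit ADC (0-1023),
# with a two-point calibration taken at a flat hand and a fully bent finger.
V_SUPPLY = 5.0
R_FIXED = 47_000.0                       # placeholder fixed resistor value (ohms)
R_FLAT, R_BENT = 25_000.0, 100_000.0     # placeholder sensor resistances at 0 and 90 degrees

def adc_to_resistance(adc_value: int) -> float:
    """Convert a raw ADC reading into the flex sensor's resistance."""
    v_out = max(adc_value, 1) / 1023.0 * V_SUPPLY
    return R_FIXED * (V_SUPPLY - v_out) / v_out

def resistance_to_angle(r: float) -> float:
    """Linearly interpolate between the two calibration points to estimate joint flexion in degrees."""
    t = (r - R_FLAT) / (R_BENT - R_FLAT)
    return max(0.0, min(90.0, t * 90.0))

print(resistance_to_angle(adc_to_resistance(512)))
```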
winning
## Inspiration **Machine learning** is a powerful tool for automating tasks that are not scalable at the human level. However, when deciding on things that can critically affect people's lives, it is important that our models do not learn biases. [Check out this article about Amazon's automated recruiting tool which learned bias against women.](https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G?fbclid=IwAR2OXqoIGr4chOrU-P33z1uwdhAY2kBYUEyaiLPNQhDBVfE7O-GEE5FFnJM) However, to completely reject the usefulness of machine learning algorithms to help us automate tasks is extreme. **Fairness** is becoming one of the most popular research topics in machine learning in recent years, and we decided to apply these recent results to build an automated recruiting tool which enforces fairness. ## Problem Suppose we want to learn a machine learning algorithm that automatically determines whether job candidates should advance to the interview stage using factors such as GPA, school, and work experience, and that we have data from which past candidates received interviews. However, what if in the past, women were less likely to receive an interview than men, all other factors being equal, and certain predictors are correlated with the candidate's gender? Despite having biased data, we do not want our machine learning algorithm to learn these biases. This is where the concept of **fairness** comes in. Promoting fairness has been studied in other contexts such as predicting which individuals get credit loans, crime recidivism, and healthcare management. Here, we focus on gender diversity in recruiting. ## What is fairness? There are numerous possible metrics for fairness in the machine learning literature. In this setting, we consider fairness to be measured by the average difference in false positive rate and true positive rate (**average odds difference**) for unprivileged and privileged groups (in this case, women and men, respectively). High values for this metric indicates that the model is statistically more likely to wrongly reject promising candidates from the underprivileged group. ## What our app does **jobFAIR** is a web application that helps human resources personnel keep track of and visualize job candidate information and provide interview recommendations by training a machine learning algorithm on past interview data. There is a side-by-side comparison between training the model before and after applying a *reweighing algorithm* as a preprocessing step to enforce fairness. ### Reweighing Algorithm If the data is unbiased, we would think that the probability of being accepted and the probability of being a woman would be independent (so the product of the two probabilities). By carefully choosing weights for each example, we can de-bias the data without having to change any of the labels. We determine the actual probability of being a woman and being accepted, then set the weight (for the woman + accepted category) as expected/actual probability. In other words, if the actual data has a much smaller probability than expected, examples from this category are given a higher weight (>1). Otherwise, they are given a lower weight. This formula is applied for the other 3 out of 4 combinations of gender x acceptance. Then the reweighed sample is used for training. ## How we built it We trained two classifiers on the same bank of resumes, one with fairness constraints and the other without. 
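As a minimal sketch of the reweighing step described above (the columns and toy data here are invented for illustration; the real project used a full resume dataset):

```python
import pandas as pd

# Hypothetical past-interview data: 1 = advanced to interview, 0 = rejected.
df = pd.DataFrame({
    "is_woman": [1, 1, 1, 0, 0, 0, 0, 1],
    "accepted": [0, 0, 1, 1, 1, 0, 1, 0],
})

def reweigh(df, group_col, label_col):
    """Weight each (group, label) cell by expected/observed probability (Kamiran & Calders 2012)."""
    n = len(df)
    weights = pd.Series(1.0, index=df.index)
    for g in df[group_col].unique():
        for y in df[label_col].unique():
            mask = (df[group_col] == g) & (df[label_col] == y)
            p_observed = mask.sum() / n
            p_expected = (df[group_col] == g).mean() * (df[label_col] == y).mean()
            if p_observed > 0:
                weights[mask] = p_expected / p_observed
    return weights

df["weight"] = reweigh(df, "is_woman", "accepted")
print(df)
# These weights can then be passed to the classifier, e.g. LogisticRegression().fit(X, y, sample_weight=df["weight"]).
```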
We used IBM's [AIF360](https://github.com/IBM/AIF360) library to train the fair classifier. Both classifiers use the **sklearn** Python library for machine learning models. We run a Python **Django** server on an AWS EC2 instance. The machine learning model is loaded into the server from the filesystem on prediction time, classified, and then the results are sent via a callback to the frontend, which displays the metrics for an unfair and a fair classifier. ## Challenges we ran into Training and choosing models with appropriate fairness constraints. After reading relevant literature and experimenting, we chose the reweighing algorithm ([Kamiran and Calders 2012](https://core.ac.uk/download/pdf/81728147.pdf?fbclid=IwAR3P1SFgtml7w0VNQWRf_MK3BVk8WyjOqiZBdgmScO8FjXkRkP9w1RFArfw)) for fairness, logistic regression for the classifier, and average odds difference for the fairness metric. ## Accomplishments that we're proud of We are proud that we saw tangible differences in the fairness metrics of the unmodified classifier and the fair one, while retaining the same level of prediction accuracy. We also found a specific example of when the unmodified classifier would reject a highly qualified female candidate, whereas the fair classifier accepts her. ## What we learned Machine learning can be made socially aware; applying fairness constraints helps mitigate discrimination and promote diversity in important contexts. ## What's next for jobFAIR Hopefully we can make the machine learning more transparent to those without a technical background, such as showing which features are the most important for prediction. There is also room to incorporate more fairness algorithms and metrics.
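For reference, the average odds difference metric we report can be sketched directly from predictions with scikit-learn; the arrays below are invented toy values, not our actual results:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def rates(y_true, y_pred):
    """Return (false positive rate, true positive rate) for binary labels."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return fp / (fp + tn), tp / (tp + fn)

def average_odds_difference(y_true, y_pred, is_unprivileged):
    """Average of the FPR gap and TPR gap between unprivileged and privileged groups."""
    fpr_u, tpr_u = rates(y_true[is_unprivileged], y_pred[is_unprivileged])
    fpr_p, tpr_p = rates(y_true[~is_unprivileged], y_pred[~is_unprivileged])
    return 0.5 * ((fpr_u - fpr_p) + (tpr_u - tpr_p))

# Tiny invented example: the boolean flag marks the unprivileged group.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group = np.array([True, True, True, True, False, False, False, False])
print(average_odds_difference(y_true, y_pred, group))
```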
## Inspiration We were heavily focused on the machine learning aspect and realized that we lacked any datasets which could be used to train a model. So we tried to figure out what kind of activity might impact insurance rates while also being something we could collect data on right from the equipment we had. ## What it does Insurity takes a video feed from a person driving and evaluates it for risky behavior. ## How we built it We used Node.js, Express, and Amazon's Rekognition API to evaluate facial expressions and personal behaviors. ## Challenges we ran into This was our third idea. We had to abandon two other major ideas because the data did not seem to exist for the purposes of machine learning.
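The write-up above doesn't show the Rekognition call itself; a minimal boto3 sketch of analyzing a single driver-facing frame might look like this (the frame path, region, and the "risky" heuristic are assumptions for illustration):

```python
import boto3

# Placeholder region and file path; a real pipeline would pull frames from the video feed.
rekognition = boto3.client("rekognition", region_name="us-east-1")

with open("driver_frame.jpg", "rb") as f:
    response = rekognition.detect_faces(Image={"Bytes": f.read()}, Attributes=["ALL"])

for face in response["FaceDetails"]:
    eyes_open = face["EyesOpen"]["Value"]
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])["Type"]
    # A crude heuristic for a "risky" frame: eyes closed or an agitated expression.
    risky = (not eyes_open) or top_emotion in {"ANGRY", "CONFUSED"}
    print(eyes_open, top_emotion, risky)
```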
## Inspiration Why are there so few women in STEM? How can we promote female empowerment within STEM? As two of us are women, we have personally faced misogyny within the tech community--from being told women aren't fit for certain responsibilities, to our accomplishments being undermined, we wanted to diminish gender inequality in STEM. We chose to focus on one of the roots of this problem: misogynistic or sexist comments in both professional and social settings. ## What it does Thus, we built Miso, an AI that detects sexism and toxicity in text messages and warns the sender of their behaviour. It is then up to them whether they still want to send it or not, but mind that messages which are filtered as at least 60% sexist or 80% toxic will be sent up to admin within the communication channel. It can be integrated into Discord as a bot or used directly on our website. In a professional workspace, which Discord is increasingly being used as–especially for new startups–the consequences of toxic behaviour are even more serious. Every employee in the company will have a profile which tracks every time a message is flagged as sexist or toxic. This tool helps HR determine inappropriate behaviour, measure the severity of it, and keep records to help them mediate workplace conflicts. We hope to reduce misogyny within tech communities so that women feel more empowered and encouraged to join STEM! ## How we built it We made our own custom machine learning model on Cohere. We made a classify model which categorises text inputs into the labels True (misogynistic) and False (non-misogynistic). To train it, we combined various databases which took comments from social media sites including Reddit and Twitter, and organised them into 2 columns: message text and their associated label. In addition to our custom model, we also implemented the Cohere Toxicity Detection API into our program and checked messages against both models. Next, we developed a Discord Bot so that users can integrate this AI into their Discord Servers. We built it using Python, Discord API, and JSON. If someone sends a message which is determined to be 60% likely to be sexist or 80% likely to be toxic, then the bot would delete the message and send a warning text into the channel. To log the message history of the server, we used Estuary. Whenever a message would be sent in the discord, a new text file would be created and uploaded to Estuary to be backed up. This text file contains the message id, content and author. The file is uploaded by calling the Estuary API and making a request to upload. Once the file uploads, the content ID is saved and the file on our end is deleted. Estuary allows us to save message logs that are problematic without the burden of storage on our end. Our next problem was connecting our machine learning models to our front end website & Discord. The Discord was easy enough as both the machine learning model and the discord bot was coded in Python. We imported the model into our Discord bot and ran every message into both machine learning models to determine whether they were problematic or not. The website was more difficult because it was made in React.JS so we needed an external app to connect the front end display and back end models. We created an API endpoint in our model page using FastApi and created a server using Uvicorn. This allowed us to use the Fetch API to fetch the model’s prediction of the inputted sample message. 
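To make the screening flow concrete, here is a rough discord.py sketch of the 60%/80% threshold logic described above; `sexism_score` and `toxicity_score` are hypothetical stand-ins for the custom Cohere classify model and the Cohere toxicity check, not real API calls:

```python
import discord

SEXISM_THRESHOLD, TOXICITY_THRESHOLD = 0.60, 0.80

def sexism_score(text: str) -> float:
    """Hypothetical stand-in for the custom Cohere classify model."""
    return 0.0

def toxicity_score(text: str) -> float:
    """Hypothetical stand-in for the Cohere toxicity detection call."""
    return 0.0

intents = discord.Intents.default()
intents.message_content = True
client = discord.Client(intents=intents)

@client.event
async def on_message(message):
    if message.author.bot:
        return
    if sexism_score(message.content) >= SEXISM_THRESHOLD or \
       toxicity_score(message.content) >= TOXICITY_THRESHOLD:
        await message.delete()
        await message.channel.send(
            f"{message.author.mention}, that message was flagged as sexist or toxic and removed."
        )

client.run("DISCORD_BOT_TOKEN")  # placeholder token
```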
As for our frontend, we developed a website to display Miso using React.JS, Javascript, HTML/CSS, and Figma. ## Challenges we ran into Some challenges we ran into were figuring out how to connect our frontend to our backend, incorporate estuary, train a custom machine learning model in Cohere, and handle certain libraries, such as JSON, for the first time. ## Accomplishments that we're proud of We’re proud of creating our own custom machine learning model! ## What we learned We learned a lot about machine learning and what Estuary is and how to use it! ## What's next for Miso We hope to be able to incorporate it into even more communication platforms such as Slack or Microsoft Teams.
winning
## Inspiration A week or so ago, Nyle DiMarco, the model/actor/deaf activist, visited my school and enlightened our students about how his experience as a deaf person was in shows like Dancing with the Stars (where he danced without hearing the music) and America's Next Top Model. Many people on those sets and in the cast could not use American Sign Language (ASL) to communicate with him, and so he had no idea what was going on at times. ## What it does SpeakAR listens to speech around the user and translates it into a language the user can understand. Especially for deaf users, the visual ASL translations are helpful because that is often their native language. ![Image of ASL](https://res.cloudinary.com/devpost/image/fetch/s--wWJOXt4_--/c_limit,f_auto,fl_lossy,q_auto:eco,w_900/http://az616578.vo.msecnd.net/files/2016/04/17/6359646757437353841666149658_asl.png) ## How we built it We utilized Unity's built-in dictation recognizer API to convert spoken word into text. Then, using a combination of gifs and 3D-modeled hands, we translated that text into ASL. The scripts were written in C#. ## Challenges we ran into The Microsoft HoloLens requires very specific development tools, and because we had never used the laptops we brought to develop for it before, we had to start setting everything up from scratch. One of our laptops could not "Build" properly at all, so all four of us were stuck using only one laptop. Our original idea of implementing facial recognition could not work due to technical challenges, so we actually had to start over with a **completely new** idea at 5:30PM on Saturday night. The time constraint as well as the technical restrictions on Unity3D reduced the number of features/quality of features we could include in the app. ## Accomplishments that we're proud of This was our first time developing with the HoloLens at a hackathon. We were completely new to using GIFs in Unity and displaying it in the Hololens. We are proud of overcoming our technical challenges and adapting the idea to suit the technology. ## What we learned Make sure you have a laptop (with the proper software installed) that's compatible with the device you want to build for. ## What's next for SpeakAR In the future, we hope to implement reverse communication to enable those fluent in ASL to sign in front of the HoloLens and automatically translate their movements into speech. We also want to include foreign language translation so it provides more value to **deaf and hearing** users. The overall quality and speed of translations can also be improved greatly.
## Inspiration My friend Pablo used to throw me the ball when playing beer pong; he moved away, so I replaced him with a much better robot. ## What it does It tosses you a ping pong ball right when you need it - you just have to show it your hand. ## How we built it With love, sweat, tears, and lots of energy drinks. ## Challenges we ran into Getting OpenCV and Arduino to communicate. ## Accomplishments that we're proud of Getting the Arduino to communicate with Python. ## What we learned OpenCV. ## What's next for P.A.B.L.O (pong assistant beats losers only) Use hand tracking to track the cups and actually play and win the game.
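The bridge code isn't included above, but a minimal sketch of the OpenCV-to-Arduino link might look like this (the serial port, baud rate, skin-color range, and pixel threshold are all assumptions, not the project's actual values):

```python
import cv2
import serial  # pyserial

# Placeholder port and baud rate; the Arduino sketch would listen for a '1' byte and fire the launcher.
arduino = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Very naive hand detection: count skin-colored pixels in HSV space.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))
    if cv2.countNonZero(mask) > 15000:  # made-up threshold for "a hand is raised"
        arduino.write(b"1")             # tell the Arduino to toss a ball
    cv2.imshow("hand mask", mask)
    if cv2.waitKey(30) & 0xFF == ord("q"):
        break

cap.release()
arduino.close()
```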
## Inspiration Our love for cards and equity. ## What it does A custom camera rig captures the cards in play; our OpenCV-powered program looks for cards and sends them to our self-trained machine learning model, which then displays relevant information and suggests the optimal move for you to win! ## How we built it * Python and Jupyter for the code * OpenCV for image detection * TensorFlow and Matplotlib for the machine learning model ## Challenges we ran into * We had never used any of these technologies (aside from Python) before ## Accomplishments that we're proud of * We can't believe it kind of works, it's amazing, we are so proud. ## What we learned * Re: How we built it ## What's next for Machine Jack * Improving the ML model, and maybe supporting other games. The foundation is there and extensible!
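A rough sketch of the card-finding step described above, thresholding a frame and keeping card-sized contours to crop for the classifier (the threshold and area bounds are made-up placeholders):

```python
import cv2

frame = cv2.imread("table.jpg")  # placeholder image of the playing surface
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
# Cards are bright against a darker table, so a simple threshold separates them.
_, thresh = cv2.threshold(blur, 160, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
card_crops = []
for c in contours:
    area = cv2.contourArea(c)
    if 20_000 < area < 120_000:          # made-up bounds for a card at our camera height
        x, y, w, h = cv2.boundingRect(c)
        card_crops.append(frame[y:y + h, x:x + w])  # crop to feed the TensorFlow classifier

print(f"found {len(card_crops)} candidate cards")
```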
winning
Housing is a major issue for college students, whether you move from a foreign country or just from Kansas to Cal. Even after getting accepted to Cal, housing proves to be a big problem, and as an international student, paying such high rent is a real burden. We designed around modern user needs, like short attention spans, to minimize the user's pain points: the app helps you filter houses by your needs, and rather than going through whole apartment listings, you just swipe right or left to connect with the landlord of a house you love. We used HTML, CSS, and JavaScript. We learned technologies on the go and kept building the project even after feeling demotivated, which in turn boosted our confidence. ## What we learned We learnt React and how to work with Firebase, but since there were a lot of bugs we didn't end up using them. Next steps are doing user research to determine how many students love this and what areas we need to improve; with the final prototype done, we will bootstrap the whole project and build an awesome housing platform for modern-gen college students.
## Inspiration Have you ever been working on a project and had your desktop tabs *juuust* right? Have you ever been playing a video game and had the perfect wiki tabs open? Have you ever then had to shut down your computer for the night and lose all those sweet, sweet windows? Tabinator is the program for you! ## What it does Tabinator allows you to save and load your desktop "layouts"! Just open our desktop app, save your current programs as a layout, and voila! Your programs - including size, position, and program information like Chrome tabs - are saved to the cloud. Now, when you're ready to dive back into your project, simply apply your saved config, and everything will fit back into its right place. ## How we built it Tabinator consists of a few primary components: * **Powershell and Swift Scripts**: In order to record the locations and states of all the windows on your desktop, we need to interface with the operating system. By using Powershell and Swift, we're able to convert your programs and their information into standardized, comprehensible data for Tabinator. This allows us to load your layouts, regardless of operating system. * **React + Rust Desktop App**: Our desktop app is built with React, and is powered by Tauri - written in Rust - under the hood. This allows us the customization of React with the power of Rust. * **React Front-end + Convex Backend**: Our website - responsible for our Authentication - is written in React, and interfaces with our Convex backend. Our flow is as follows: A user visits our website, signs up with Github, and is prompted to download Tabinator for their OS. Upon downloading, our Desktop app grabs your JWT token from our web-app in order to authenticate with the API. Now, when the user goes to back up their layout, they trigger the relevant script for their operating system. That script grabs the window information, converts it into standardized JSON, and passes it back to our React desktop app, which sends it to our Convex database. ## Challenges we ran into Not only was this our first time writing Powershell, Swift, and Rust, but it was also our first time gluing all these pieces together! Although this was an unfamiliar tech stack for us all, the experience of diving head-first into it made for an incredible weekend. ## Accomplishments that we're proud of Being able to see our application work - really work - was an incredible feeling. Unlike other projects we have all worked on previously, this project has such a tactile aspect, and is so cool to watch work. Apart from the technical accomplishments, this is all of our first times working with total strangers at a hackathon! Although it was a tad scary walking into an event with close to no idea who we were working with, we're all pretty happy to say we're leaving this weekend as friends. ## What's next for Tabinator Tabinator currently only supports a few programs, and is only functional on Linux distributions running X11. Our next few steps would be to diversify the apps you're able to add to Tabinator, flesh out our Linux support, and introduce a pricing plan in order to begin making a profit.
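Our real capture scripts are written in Powershell and Swift, but as a rough illustration of the "standardized JSON" idea, here is a Python sketch using the third-party pygetwindow library (an assumption, not part of Tabinator's actual stack) to serialize open windows:

```python
import json
import pygetwindow as gw  # assumption: not part of Tabinator's actual PowerShell/Swift scripts

def capture_layout(name: str) -> str:
    """Serialize every titled window's geometry into a layout JSON blob."""
    windows = []
    for w in gw.getAllWindows():
        if not w.title:
            continue  # skip unnamed/background windows
        windows.append({
            "title": w.title,
            "x": w.left,
            "y": w.top,
            "width": w.width,
            "height": w.height,
        })
    return json.dumps({"layout": name, "windows": windows}, indent=2)

print(capture_layout("work-setup"))  # a blob like this is what the desktop app would push to the cloud
```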
## Inspiration We wanted to have some fun and find out what we could get out of Microsoft's Computer Vision API. ## What it does This is a web application that lets the user upload an image, generates an intelligent poem from it, and reads the poem out loud with different chosen voices. ## How we built it We used the Python interface of Microsoft's Cognitive Services API and built a web application with Django. We used a public open-source tone generator to play different tones reading the poem to the users. ## Challenges we ran into We learned Django from scratch. It's not very easy to use, but we eventually made all the components connect together using Python. ## Accomplishments that we're proud of It’s fun! ## What we learned It's difficult to combine different components together. ## What's next for PIIC - Poetic and Intelligent Image Caption We plan to make an independent project with different technology than Cognitive Services and publish it to the world.
losing
## Inspiration As university students, we often find that we have groceries in the fridge but we end up eating out and the groceries end up going bad. ## What It Does After you buy groceries from supermarkets, you can use our app to take a picture of your receipt. Our app will parse through the items in the receipts and add the items into the database representing your fridge. Using the items you have in your fridge, our app will be able to recommend recipes for dishes for you to make. ## How We Built It On the back-end, we have a Flask server that receives the image from the front-end through ngrok and then sends the image of the receipt to Google Cloud Vision to get the text extracted. We then post-process the data we receive to filter out any unwanted noise in the data. On the front-end, our app is built using react-native, using axios to query from the recipe API, and then stores data into Firebase. ## Challenges We Ran Into Some of the challenges we ran into included deploying our Flask to Google App Engine, and styling in react. We found that it was not possible to write into Google App Engine storage, instead we had to write into Firestore and have that interact with Google App Engine. On the frontend, we had trouble designing the UI to be responsive across platforms, especially since we were relatively inexperienced with React Native development. We also had trouble finding a recipe API that suited our needs and had sufficient documentation.
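The receipt-parsing step described above might look roughly like this with the Google Cloud Vision client library; the file name and the crude line filter are placeholders for the app's actual upload handling and post-processing:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()  # assumes GOOGLE_APPLICATION_CREDENTIALS is set

with open("receipt.jpg", "rb") as f:       # placeholder path; the Flask route receives this upload
    image = vision.Image(content=f.read())

response = client.text_detection(image=image)
full_text = response.text_annotations[0].description if response.text_annotations else ""

# Crude post-processing stand-in: keep lines that look like "ITEM NAME   3.99".
items = []
for line in full_text.splitlines():
    parts = line.rsplit(" ", 1)
    if len(parts) == 2 and parts[1].replace(".", "", 1).isdigit():
        items.append({"name": parts[0].strip(), "price": float(parts[1])})

print(items)  # entries like these get written to the Firebase "fridge" collection
```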
## Inspiration The idea arose from the current political climate. At a time when there is so much information floating around, and it is hard for people to even define what a fact is, it seemed integral to provide context to users during speeches. ## What it does The program first translates speech in audio into text. It then analyzes the text for relevant topics for listeners, and cross-references that with a database of related facts. In the end, it will, in real time, show viewers/listeners a stream of relevant facts related to what is said in the program. ## How we built it We built a natural language processing pipeline that begins with a speech-to-text translation of a YouTube video through Rev APIs. We then utilize custom unsupervised learning networks and a graph search algorithm for NLP inspired by PageRank to parse the context and categories discussed in different portions of a video. These categories are then used to query a variety of different endpoints and APIs, including custom data extraction APIs we built with Mathematica's cloud platform, to collect data relevant to the speech's context. This information is processed on a Flask server that serves as a REST API for an Angular frontend. The frontend takes in YouTube URLs, creates custom annotations, and generates relevant data to augment the viewing experience of a video. ## Challenges we ran into None of the team members were very familiar with Mathematica or advanced language processing. Thus, time was spent learning the language and how to accurately parse data, given the huge amount of unfiltered information out there. ## Accomplishments that we're proud of We are proud that we made a product that can help people become more informed in their everyday life, and hopefully give deeper insight into their opinions. The general NLP pipeline and the technologies we have built can be scaled to work with other data sources, allowing for better and broader annotation of video and audio sources. ## What we learned We learned from our challenges. We learned how to work around the constraints of lacking a dataset we could use for supervised learning and text categorization by developing a nice model for unsupervised text categorization. We also explored Mathematica's cloud frameworks for building custom APIs. ## What's next for Nemo The two big things necessary to expand on Nemo are larger database references and better determination of topics mentioned and "facts." Ideally this could then be expanded for a person to use on any audio they want context for, whether it be a presentation, a debate, or just a conversation.
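Our categorization actually runs in Mathematica, but the PageRank-inspired idea can be sketched in a few lines of Python with networkx (the toy transcript and window size are made up):

```python
import networkx as nx

transcript = ("the senate passed the budget bill today the budget increases defense "
              "spending while the senate debated healthcare funding").split()

# Build a co-occurrence graph: words appearing within a 2-word window share an edge.
graph = nx.Graph()
for i, word in enumerate(transcript):
    for other in transcript[i + 1:i + 3]:
        if word != other:
            graph.add_edge(word, other)

# PageRank over the co-occurrence graph surfaces the central topics of the passage.
scores = nx.pagerank(graph)
top_topics = sorted(scores, key=scores.get, reverse=True)[:5]
print(top_topics)  # topics like these would seed the queries against the fact databases
```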
## Inspiration Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order. ## What it does You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision. ## How we built it The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors. ## Challenges we ran into One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle. ## Accomplishments that we're proud of We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Trying to work with Python data type was difficult to manage, and we were proud to navigate around that. We are also extremely proud to meet a bunch of new people and tackle new challenges that we were not previously comfortable with. ## What we learned We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment. ## What's next for Harvard Burger Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
partial
# Grocify: Your Personal Grocery Assistant ## Inspiration The idea for Grocify came from a small but frustrating moment in our busy college lives. We bought a perfect avocado, full of possibilities. But in the loop of classes and assignments, it sat there, forgotten. By the time we remembered, it was too late—a sad, mushy reminder of how hard it can be to keep track of fresh groceries and how much food we waste. That moment made us wonder—how many others face the same problem? Wasting food isn’t just annoying; it’s wasteful and bad for the environment. We wanted a way to avoid these moments, save money, and make our busy lives a bit more organized. That’s how Grocify was born: a simple, intuitive tool to help manage groceries, track expiry dates, and reduce food waste, all while making it easier to use what we buy. ## What It Does Grocify is like having a personal grocery assistant in your pocket, making food management easy. From keeping track of what’s in your pantry to ensuring nothing goes to waste, Grocify helps you stay on top of your groceries without the hassle. Using barcode and image scanning, adding items is quick and painless—just scan the barcode or click the photo. Once items are in the inventory, Grocify takes care of tracking expiry dates and helping you avoid the mess. Grocify can suggest recipes based on what you have, prioritizing ingredients nearing expiration. It even considers your dietary preferences, so the recipes are not just convenient but fit your needs. And if you get stuck while cooking, our helpful assistant is just a tap away to guide you through the steps. ## How We Built It Building Grocify was an iterative process that began with understanding how to simplify grocery management for users. We started by prototyping a user journey that made adding items effortless—leading us to integrate barcode scanning using an API and advanced image recognition powered by Llama. Once core features were established, we added an inventory page to track product statuses in real time. Next, we introduced a Recipes page that curates personalized recipes tailored to the ingredients users already have, utilizing the mixtral and gemma models through Groq to deliver optimal suggestions. To enhance the experience, we integrated a chatbot that offers step-by-step cooking guidance, understanding that many college students may need extra support in the kitchen. We also included options for specifying dietary preferences and special requests, allowing the app to generate customized recipes that fit users' needs. Overall, Grocify aims to take the hassle out of grocery management and make cooking approachable and enjoyable. ## Challenges We Ran Into As with any ambitious project, building Grocify came with its fair share of challenges—some of which were quite unexpected! One of the first major hurdles was getting the barcode scanner to correctly identify individual products, especially when they were originally part of a larger package. We had a particularly amusing incident where we scanned a single granola bar, and it read as a 24-pack—likely because it was originally part of a larger box. Another challenge was integrating the expiry date feature. We thought it would be simple enough to estimate how long items would stay fresh—until we realized our initial model displayed expiry dates for ripe bananas as being three weeks later. After plenty of trial and error, we fine-tuned the system to provide realistic expiry estimates, even for tricky produce. 
The chatbot also posed an interesting problem. Initially, it answered simple cooking questions, but we quickly found users asking wild inquiries—like substituting pancake mix for flour in every recipe. The chatbot ended up giving overly confident (and wrong) advice, so we had to add extra checks to prevent it from turning into a rogue chef. Through all these challenges, we learned to stay adaptable. Every hurdle helped us refine Grocify, ensuring users would have an intuitive and reliable experience. ## Accomplishments That We're Proud Of We are especially proud of how Grocify makes sustainable living effortless. It’s not just about managing groceries—it’s about reducing food waste, saving money, and making it all incredibly easy. The convenience is unmatched: you can check what’s in your pantry on your way home, plan a recipe, and know exactly what to cook before stepping through the door. One of our biggest wins is making sustainability accessible. As a vegetarian, I love how easy it is to see if a product fits my dietary restrictions just by scanning it. Grocify takes the guesswork out of managing groceries, ensuring everyone can make choices that are good for them and for the planet. Although working with personalized dietary restrictions added complexity, there were moments where we were surprised by how aware the system was about common allergens in products. For example, we learned through the app's warning that Cheetos contain milk and aren't vegan—a detail we would never have guessed. Seeing the system catch these details made us incredibly proud. Another highlight was realizing our image recognition feature could distinguish even the most visually similar produce items. During testing, we saw the app correctly identify subtle differences between items like mandarins and oranges, which felt like a major leap forward. We're also proud of how Grocify estimates expiry dates. Though we initially had some hiccups, refining that system into something accurate and reliable took a lot of work, and the payoff was worth it. ## What We Learned Building Grocify taught us the importance of empathy and thoughtful design. Small features like barcode scanning and personalized recipes can make managing groceries easier and reduce waste. The vision of integrating with stores for automatic syncing showed us how to reach a broader audience, making things effortless for users. Sustainability doesn't have to be complicated with the right tools, and these insights will guide us as we continue improving Grocify. We also learned how to effectively leverage generative AI and LLM (Large Language Model) APIs. Integrating these technologies allowed us to build features far more sophisticated than we imagined. The chatbot, powered by an LLM, evolved into an intelligent assistant capable of answering complex cooking questions, generating custom recipes, and offering helpful substitutions. Working with generative AI taught us about prompt engineering—how to structure inputs to get useful outputs, and how to refine responses for user-friendliness. ## What's Next for Grocify Moving forward, we want Grocify to empower users even more when it comes to choosing what they bring into their homes. By expanding our barcode scanning feature, we aim to give users the ability to quickly evaluate products based on their ingredient lists, helping them avoid items with excessive chemicals or preservatives and make healthier choices. 
We also plan to enhance our recipe generator by incorporating even more personalization, so users can discover meals that align perfectly with their health goals and preferences. Our mission is to make grocery management smarter and healthier for everyone—while keeping it simple, intuitive, and focused on sustainability. Additionally, we want to integrate Grocify with popular stores like Trader Joe's, allowing users to sync purchases directly with the app. This means items bought in-store can be automatically added to their inventory, making the process seamless. By leveraging our barcode scanning feature, users will also be able to access detailed nutritional facts, enabling more informed dietary decisions.
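As a rough sketch of the recipe-suggestion call described in "How We Built It", assuming the Groq Python SDK's OpenAI-style chat interface (the model name, prompt, and inventory below are placeholders):

```python
import os
from groq import Groq  # assumes the Groq Python SDK is installed and GROQ_API_KEY is set

client = Groq(api_key=os.environ["GROQ_API_KEY"])

inventory = ["spinach (expires in 2 days)", "paneer", "tomatoes", "rice"]
preferences = "vegetarian, no peanuts"

completion = client.chat.completions.create(
    model="mixtral-8x7b-32768",  # placeholder model name from the Groq catalog at the time
    messages=[
        {"role": "system", "content": "You suggest recipes that use up soon-to-expire items first."},
        {"role": "user", "content": f"Ingredients: {', '.join(inventory)}. Preferences: {preferences}. "
                                    "Suggest one recipe with brief steps."},
    ],
    temperature=0.7,
)
print(completion.choices[0].message.content)
```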
## Overview Shelf turns your home pantry into a "smart" pantry--helping you keep track of food items, warning you of items to buy, and making shopping trips simple. ## Motivation Home automation provides many avenues for improving people's daily lives by performing many small acts that humans often are not well equipped to handle. However, these improvements necessarily come at a cost; in many cases requiring completely replacing old fixtures with newer — smarter ones. Yet in many cases, a large amount of functionality can be included by retrofitting the old devices while simultaneously integrating with new ones. The goal of **Shelf** was to design a smart pantry system that could improve upon people's own capabilities — such as keeping track of all the food in one's house — while only requiring a slight modification to existing fixtures. ## Design **Shelf** is designed to be dead simple to use. By affixing a rail to the back of the pantry door, a small device can periodically scan the pantry to determine what food is there, how much remains, and how long until it is likely to spoil. From there it publishes information both on a screen within the pantry, and via a companion app for convenient lookup. Furthermore, all of this should require only slight modification in mounting of these rails. ## Functionality The primary monitoring device of **Shelf** is the Android Things mounted on the back of the pantry door. After the pantry's doors are opened and closed, this device uses the two servos--one attached to a pulley and one attached to the camera--to move and rotate a camera that takes pictures of the pantry interior and sends them to the server. Running with Google Cloud Platform, the server processes these images and then runs them through a couple of Google's Vision API annotators to extract product information through logos, labels, and similarity to web entities. Our service then takes these annotations and again processes them to determine what products exist in the pantry and how the pantry has changed since the last scan--returning alerts to the Android Things device if some product is missing. Finally, the consumer also has access to a mobile React native application that allows he or she to access their "shopping list" or list of items to refill their pantry with at any time. ## Challenges Initially, we attempted to build the product recognition system from scratch using Google's ML Engine and scraping our own image data. However, running the pipeline of creating the classifier would have taken too long even when we reduced the dataset size and employed transfer learning. There were also difficulties in using the Android Things dev kits, as the cameras in particular had a tendency to be flaky and unreliable. ## Future A smart pantry system holds much potential for future development. Some areas in which we would like to further add include better performance in product recognition (with the additional time to train our own specific image classifier), enabling users to order products directly from the pantry device, and more insightful data about the pantry contents.
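The Vision annotators mentioned above might be wired together roughly like this; the image path and the simple merging of results are placeholders for the real server logic:

```python
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("pantry_shelf.jpg", "rb") as f:   # placeholder: in practice the Android Things camera uploads this
    image = vision.Image(content=f.read())

labels = client.label_detection(image=image).label_annotations
logos = client.logo_detection(image=image).logo_annotations
web = client.web_detection(image=image).web_detection

candidates = (
    [l.description for l in labels if l.score > 0.7] +
    [l.description for l in logos] +
    [e.description for e in web.web_entities if e.description]
)
print(sorted(set(candidates)))  # merged guesses for what products are on the shelf
```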
## Inspiration It’s pretty common that you will come back from a grocery trip, put away all the food you bought in your fridge and pantry, and forget about it. Even if you read the expiration date while buying a carton of milk, chances are that a decent portion of your food will expire. After that you’ll throw away food that used to be perfectly good. But that’s only how much food you and I are wasting. What about everything that Walmart or Costco trashes on a day to day basis? Each year, 119 billion pounds of food is wasted in the United States alone. That equates to 130 billion meals and more than $408 billion in food thrown away each year. About 30 percent of food in American grocery stores is thrown away. US retail stores generate about 16 billion pounds of food waste every year. But if there was a solution that could ensure that no food would be needlessly wasted, it would change the world. ## What it does PantryPuzzle scans images of food items, extracts their expiration dates, and adds them to an inventory of items that users can manage. When food nears expiration, it notifies users to incentivize action to be taken. The app suggests actions to take with any particular food item, like recipes that use the items in a user’s pantry according to their preferences. Additionally, users can choose to donate food items, after which they can share their location with food pantries and delivery drivers. ## How we built it We built it with a React frontend and a Python Flask backend. We stored food entries in a database using Firebase. For the food image recognition and expiration date extraction, we used a tuned version of Google Vision API’s object detection and optical character recognition (OCR), respectively. For the recipe recommendation feature, we used OpenAI’s GPT-3 DaVinci large language model. For tracking user location for the donation feature, we used Nominatim from OpenStreetMap. ## Challenges we ran into * Getting React to display things properly * Storing multiple values into the database at once (food item, expiration date) * Displaying all Firebase elements (we did a proof of concept with console.log) * Donated food being displayed before the button was even clicked (fixed by using a function for the onclick) * Getting the user's location to be accessed and stored, not just longitude/latitude * Needing to log the day a food item was bought * Deleting an item when it expired, and syncing my stash with donations (we don't want an item listed if the user no longer wants to donate it) * Deleting food from Firebase (tricky because of the odd document IDs) * Predicting when non-labeled foods expire (using OpenAI) ## Accomplishments that we're proud of * We were able to get a good computer vision algorithm that is able to detect the type of food and a very accurate expiry date. * Integrating the API that helps us figure out our location from the latitudes and longitudes. * Used a scalable database like Firebase, and completed all features that we originally wanted to achieve regarding generative AI, computer vision and efficient CRUD operations. ## What we learned We learnt how big of a problem food waste disposal is, and were surprised to know that so much food was being thrown away. ## What's next for PantryPuzzle We want to add user authentication, so every user in every home and grocery has access to their personal pantry, and also maintains their access to the global donations list to search for food items others don't want.
We want to integrate this app with the Internet of Things (IoT) so refrigerators can come with this product built in to detect food items and their expiry dates. We also want to add a feature where, if the expiry date is not visible, the app can predict the likely expiration date using computer vision (texture and color of the food) and generative AI.
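A rough sketch of the expiry-prediction idea for unlabeled foods, assuming the legacy (pre-1.0) openai Python SDK that exposed GPT-3 DaVinci through the Completion endpoint; the prompt and parsing are illustrative only:

```python
import openai  # assumes the legacy pre-1.0 SDK, which exposed GPT-3 DaVinci via Completion

openai.api_key = "OPENAI_API_KEY"  # placeholder

def estimate_shelf_life_days(food_name: str, appearance: str) -> str:
    """Ask the model for a rough shelf-life estimate based on the item's appearance."""
    prompt = (
        f"A {food_name} that looks {appearance} was just bought. "
        "Estimate how many days until it is likely to expire. Answer with a single number."
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # DaVinci-family model available at the time
        prompt=prompt,
        max_tokens=5,
        temperature=0,
    )
    return response.choices[0].text.strip()

print(estimate_shelf_life_days("banana", "yellow with small brown spots"))
```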
partial
## Inspiration In India, there are a lot of cases of corruption in the medicine industry. Medicines are generally expensive and not accessible to people from financially backward households. As a result, there are many unnecessary deaths caused by lack of proper treatment. People who buy the medicines at high costs are generally caught in debt traps and face a lifetime of discomfort. Every year, the government spends 95 million CAD on medicines from private medical stores to provide them to ration shops in rural areas that sell them at cheap prices to poor people. However, private medical store owners bribe government hospital doctors to prescribe medicines which can only be found in private stores, thus causing a loss for the government and creating debt traps for the people. ## What it does Every medicine has an alternative to it, determined by the salts present in it. Our app provides a list of alternative medicines for every medicine prescribed by doctors, giving patients a variety to choose from so they can go to ration stores that might have the alternative at cheaper prices. This information is retrieved from the data sets of medicines published by the Medical Authority of India. ## How we built it We built the front end using Swift, Figma and Illustrator. For the backend, we used Google Firebase and SQL for collecting information about medicines from official sources. We also used the Google Maps API and the Google Translate API to track the location of the patient and provide information in their own language/dialect. ## Challenges we ran into ## Accomplishments that we're proud of We are proud of coming up with an idea that solves problems for both the government and the patients by tackling corruption. We are also proud of our ability to think of and create such an elaborate product in a restricted time frame. ## What's next for Value-Med If we had more time, we would have integrated Voiceflow as well for people who cannot read, so that they could use voice commands to navigate the application.
## Inspiration 34.2 million people are unpaid and unsupported caregivers for seniors, and 48% of them are of working age. These people could be working throughout the day and find it difficult to manage their care receiver's medications. Care receivers can be reliant on their caregiver, as they often forget to take their medication. This is where our application comes in to fill the gap and make the lives of caregivers and their network less stressful by providing more information about the status of the care receiver's prescriptions. ## What it does Our solution is an at-home prescription management service for caregivers of seniors that provides real-time updates to a family network across platforms. It uses Sonr's peer-to-peer interaction and real-time network to provide quick updates to all members of the family while still providing a smooth and beginner-friendly user experience. Hosting our information on the blockchain ensures the security and safety of our patients' data. ## How we built it We hosted our app in Sonr's Flutter SDK in order to connect our device to the Sonr Network and allow for a decentralized network. We built it using the iOS simulator, based on the advice of the Sonr team, for a better development experience. We also used Sonr's login protocol to make account login and creation seamless, as users just need to sign in using their Sonr credentials. We implemented Sonr's Schema and Document functionality in order to save important prescription data on the Sonr decentralized network. ## Challenges we ran into We had dependencies on Sonr's API, which was unable to support the Android emulator at this point in time. We also discovered limitations in computing resources, which caused us to take a long time to connect. ## Accomplishments that we're proud of Managing to achieve our goal of creating a working prototype that integrates with Sonr's API is one of our proudest accomplishments. We also managed to identify an addressable problem and ideate a unique solution to it, which we were able to build in 24 hours. We were not familiar with Sonr before this; however, with the help of the mentors, we learnt a lot and were able to apply Sonr's technology to our problem. ## What we learned We found Sonr's technology very intriguing and exciting, especially as we are moving towards Web3 today. We were very keen on exploring how we might make full use of the technology in our mission to improve the lives of caregivers today. After discussing with the Sonr mentors, we were enlightened on how it works and were able to use it in our application. ## What's next for CalHacks - Simpillz For the future use case, we hope to develop more support for an expanded network of users, which could include medical professionals and nurses, to help keep track of the patient's prescription meds as well. We are also looking into expanding into IoT devices to track prescription intake by the seniors and automate the tracking process, so that there will be fewer dependencies on the caregiver.
## Inspiration Our team identified two intertwined health problems in developing countries: 1) Lack of easy-to-obtain medical advice due to economic, social and geographic problems, and 2) Difficulty of public health data collection in rural communities. This weekend, we built SMS Doc, a single platform to help solve both of these problems at the same time. SMS Doc is an SMS-based healthcare information service for underserved populations around the globe. Why text messages? Well, cell phones are extremely prevalent worldwide [1], but connection to the internet is not [2]. So, in many ways, SMS is the *perfect* platform for reaching our audience in the developing world: no data plan or smartphone necessary. ## What it does Our product: 1) Democratizes healthcare information for people without Internet access by providing a guided diagnosis of symptoms the user is experiencing, and 2) Has a web application component for charitable NGOs and health orgs, populated with symptom data combined with time and location data. That 2nd point in particular is what takes SMS Doc's impact from personal to global: by allowing people in developing countries access to medical diagnoses, we gain self-reported information on their condition. This information is then directly accessible by national health organizations and NGOs to help distribute aid appropriately, and importantly allows for epidemiological study. **The big picture:** we'll have the data and the foresight to stop big epidemics much earlier on, so we'll be less likely to repeat crises like 2014's Ebola outbreak. ## Under the hood * *Nexmo (Vonage) API* allowed us to keep our diagnosis platform exclusively on SMS, simplifying communication with the client on the frontend so we could worry more about data processing on the backend. **Sometimes the best UX comes with no UI** * Some in-house natural language processing for making sense of user's replies * *MongoDB* allowed us to easily store and access data about symptoms, conditions, and patient metadata * *Infermedica API* for the symptoms and diagnosis pipeline: this API helps us figure out the right follow-up questions to ask the user, as well as the probability that the user has a certain condition. * *Google Maps API* for locating nearby hospitals and clinics for the user to consider visiting. All of this hosted on a Digital Ocean cloud droplet. The results are hooked-through to a node.js webapp which can be searched for relevant keywords, symptoms and conditions and then displays heatmaps over the relevant world locations. ## What's next for SMS Doc? * Medical reports as output: we can tell the clinic that, for example, a 30-year old male exhibiting certain symptoms was recently diagnosed with a given illness and referred to them. This can allow them to prepare treatment, understand the local health needs, etc. * Epidemiology data can be handed to national health boards as triggers for travel warnings. * Allow medical professionals to communicate with patients through our SMS platform. The diagnosis system can be continually improved in sensitivity and breadth. * More local language support [1] <http://www.statista.com/statistics/274774/forecast-of-mobile-phone-users-worldwide/> [2] <http://www.internetlivestats.com/internet-users/>
partial
## What it does SPOT would be equipped with various sensors, cameras, and LIDAR technology to perform inspections in hazardous environments. A network of SPOT bots would be deployed within a 2.5–3-mile radius surrounding a particular piece of infrastructure for security and surveillance tasks, patrolling areas and providing real-time video feeds to human operators. It can be used to monitor large facilities, industrial sites, or public events, enhancing security efforts. This network of SPOT robots would inspect and collect data/images for analysis, track suspects, and gather crucial intelligence in high-risk environments, maintaining situational awareness without putting officers in harm's way. If a SPOT detects any malicious activity, it acts as the first responder and deploys non-lethal measures, sending a distress signal to the closest law enforcement officer or authority, who would be able to mitigate the situation effectively. Consequently, the other SPOT bots in the network would also be alerted. The ability to provide real-time situational awareness without putting officers at risk is a significant advantage. ## How we built it * Together.ai: used Llama to enable conversations and consensus among agents * MindsDB: the database is stored in Postgres (Render) and imported into MindsDB; a sentiment classifier trained on demo data lets us understand the mental state of every bot from the sentiments retrieved from each agent * Reflex: UI for visualizing statistical measures of the bots * Intel: to train MobileNet for classifying threats * InterSystems: to carry out battery-life forecasting so the agents can make efficient decisions
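The threat-classification step lends itself to a short illustration. The sketch below runs a stock MobileNetV2 over a single camera frame; it is not the project's fine-tuned model, and a real deployment would map the outputs onto SPOT-specific threat classes rather than ImageNet labels.

```python
# Illustrative frame classification with a stock MobileNetV2 (stand-in for the
# fine-tuned threat classifier described above).
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

def classify_frame(frame_rgb: np.ndarray):
    """Classify a single RGB frame (H x W x 3, uint8) from a SPOT camera feed."""
    img = tf.image.resize(frame_rgb, (224, 224))
    img = tf.keras.applications.mobilenet_v2.preprocess_input(img[tf.newaxis, ...])
    preds = model.predict(img, verbose=0)
    # Top-3 (label, description, probability) tuples; a real deployment would
    # translate these into project-specific threat categories and alert logic.
    return tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]
```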
## Inspiration As first-year students, we have experienced the difficulties of navigating our way around our new home. We wanted to facilitate the transition to university by helping students learn more about their university campus. ## What it does A social media app for students to share daily images of their campus and earn points by accurately guessing the locations of their friends' images. After guessing, students can explore the location in full with detailed maps, including within university buildings. ## How we built it The Mapped-in SDK was used to display user locations in relation to surrounding buildings and to help identify different campus areas. React was used to build a mobile website, as the SDK was unavailable for mobile. Express and Node power the backend, and MongoDB Atlas serves as the database, giving us flexible data types. ## Challenges we ran into * Developing in an unfamiliar framework (React) after learning that Android Studio was not feasible * Bypassing CORS restrictions when accessing the user's camera ## Accomplishments that we're proud of * Using a new SDK purposefully to address an issue that was relevant to our team * Going through the development process and gaining a range of experiences over a short period of time ## What we learned * Planning time effectively and redirecting our goals accordingly * How to learn by collaborating with everyone from team members to SDK experts, as well as by reading documentation * Our tech stack ## What's next for LooGuessr * creating more social elements, such as a global leaderboard/tournaments, to increase engagement beyond first years * considering freemium components, such as extra guesses, 360-view, and interpersonal wagers * showcasing a 360-picture view by stitching together a video recording from the user * addressing privacy concerns with face blurring and an option to delay showing the image
## Inspiration My inspiration for creating CityBlitz was getting lost in Ottawa TWO SEPARATE TIMES on Friday. Since it was my first time in the city, I honestly didn't know how to use the O-Train or even whether Ottawa had buses in operation or not. I realized that if there existed an engaging game that could map hotspots in Ottawa and ways to get to them, I probably wouldn't have had such a hard time navigating on Friday. Plus, I wanted to actively contribute to sustainability, hence the in-game trophies tied to a pledge to climate charities. ## What it does CityBlitz is a top-down pixelated role-playing game that leads players on a journey through Ottawa, Canada. It encourages players to use critical thinking skills to solve problems and to familiarize themselves with navigation in a big city, all while using in-game rewards to make a positive difference in sustainability. ## How I built it * Entirely coded in Java using Swing (javax.swing) * All 250+ graphics assets are hand-drawn using Adobe Photoshop * All original artwork * In-game map layouts copy real-life street layouts * Buildings like Parliament and the O-Train station are modeled on their real-life counterparts * Elements like taxis and street signs also mimic those of Ottawa ## Challenges I ran into Finding the right balance between a puzzle RPG being too difficult/unintuitive for players vs. spoonfeeding the players every solution was the hardest part of this project. This was overcome through trial and error as well as peer testing and feedback. ## Accomplishments that we're proud of Over 250 original graphics, a fully functioning RPG, a sustainability feature, and the overall gameplay. ## What I learned I learned how to implement real-world elements like street layouts and transit systems into a game so users can familiarize themselves with the city in question. I also learned how to use GitHub and Devpost, how to create a repository, update git files, create a demo video, participate in a hackathon challenge, submit a hackathon project, and pitch a hackathon project. ## What's next for CityBlitz Though Ottawa was the original map for CityBlitz, the game aims to create versions/maps centered on other major metropolitan areas like Toronto, New York City, Barcelona, Shanghai, and Mexico City. In the future, CityBlitz aims to partner with these municipal governments to be publicly implemented in schools for kids to engage with, around the city for users to discover, and to be displayed on tourism platforms to attract people to the city in question.
partial
## What is Nearest Health? 🏥 Nearest Health is an app that eases the stress of wait times by directing patients to open-capacity hospitals and walk-in clinics through a quicker system. ## Current Problems with the Health Industry 🚨 * Providing information on wait times can cause patients to get upset and feel misled * Wait times cannot be guaranteed because every patient is there for a different health concern * It is difficult to get a hold of hospital staff since they are usually very busy ## What makes Nearest Health Different? 💭 Based on real-time data, we guide patients in need of medical attention to the nearest hospitals and walk-in clinics that have vacancies, with different statuses of availability. --- ## Inspiration Our experiences waiting in the Emergency Room at hospitals and hearing stories about it from friends. ## What it does The Nearest Health app serves as another system to help increase efficiency, aiding people in finding the nearest available health care close to them. ## How we built it We built this using HTML, CSS, SCSS, JS, Bootstrap, and Figma. ## Challenges we ran into Being a team of designers with similar backgrounds, we challenged ourselves to push our designs into code. This came with a lot of high expectations and ambitious visions, as we really wanted our solution to be something we enjoyed as well. Right off the bat, we were thrown into adopting React and struggled to understand its structure. We narrowed our scope down to a manageable project by switching to JavaScript and SASS instead (which was great). ## Accomplishments that we're proud of Going into something completely out of our field of expertise and gaining new forms of insight. This was also the first in-person hackathon for all of us, so it was cool to see what we achieved. ## What we learned That the health industry still has a ways to go, and it's interesting to see how future technologies might solve our biggest problems.
## Inspiration Everywhere we hear, "(data) is the new currency." And yet our understanding of it is limited. As developers, we have decided to make it more accessible to the people we think need it most: people who have an illness. Often, medical information is not accessible to the general public, and we cannot grasp its meaning. That's why we are on a mission to vulgarize it. ## What it does We take data from medical research websites and make it comprehensible to patients, leveraging AI to parse through it and surface the best-fitting study for the patient's needs. ## How we built it Using HTML- and CSS-heavy code, we built the frontend mostly with Svelte, a powerful and easy alternative to React.js. The backend is best thought of as a pipeline: we take the user input, format it through Python, send it to ChatGPT, which has loaded the data file, and return the result to the frontend for the client. ## Challenges we ran into The first challenge we ran into was data access. The first API we used returned unformatted data, due to poorly regulated data entry. If the data is unformatted, it is difficult to standardize, especially with the limited time we had left. The second challenge was the choice of frameworks. The number of tools available to developers is very large, and it is very easy to get lost in framework changes if you aren't completely sure how your project is going to turn out. Since we had trouble with the first API, the second issue did not help. ## Accomplishments that we're proud of We found a way around both challenges. As a solution to the search, we decided to pivot towards multiple databases instead of the one we originally intended, to have a bit more latitude regarding data mining. This gave us a bit of hope, which in turn solved the second problem, and we had a better idea of how to tackle the project. We are proud of our resilience in the face of a failing idea, and of seeing it through no matter the obstacles. ## What we learned That HTML cannot be linked directly to Python without a framework. And if it can be, why is it so difficult? We also learned that the medical field is a data-rich field, but in order to make a very interesting and useful product, you need a doctor to vulgarize it for you. Just "formatting data" is pretty much useless if you don't understand the data in the first place. ## What's next for MediQuest AI This project feels more like an open-source project, constantly improved with new implementations and completely different ways of thinking. This aligns well with our philosophy: accessible medical information for people with no medical background. GitHub: <https://github.com/Jhul4724/mchacks_mediquest.git>
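The format-and-ask step of the pipeline can be sketched in a few lines. The snippet below is illustrative only: the model name, prompt wording, and `simplify_study` helper are assumptions rather than the project's actual code, and it presumes an OpenAI API key is available in the environment.

```python
# Hedged sketch of the "format input, send to the model, return plain language" step.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def simplify_study(study_text: str, patient_question: str) -> str:
    prompt = (
        "Explain the following medical study in plain language for a patient "
        f"asking: {patient_question}\n\nStudy:\n{study_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```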
## Inspiration This project was inspired by the problems faced by users with medical constraints. Some users have mobility limitations that make it difficult for them to leave their homes. PrescriptionCare allows them to order prescriptions online and have them delivered to their homes on a monthly or weekly basis. ## What it does PrescriptionCare is a web app that allows users to order medical prescriptions online. Users can fill out a form and upload an image of their prescription in order to have their medication delivered to their homes. The app has a monthly subscription feature, since most medications are renewed after 30 days, but it also allows users to sign up for weekly subscriptions. ## How we built it We designed PrescriptionCare using Figma, and built it using Wix. ## Challenges we ran into We ran into several challenges, mainly due to our inexperience with hackathons and with the programs and languages we used along the way. Initially, we wanted to create the website using HTML, CSS, and JavaScript; however, we didn't end up going down that path, as it proved a bit too complicated because we were all beginners. We ended up choosing Wix due to its ease of use and excellent template selection, which gave us a solid base to build PrescriptionCare on. We also ran into issues with an iOS app we tried to develop to complement the website, mainly due to learning Swift and SwiftUI, which are not very beginner-friendly. ## Accomplishments that we're proud of Managing to create a website in just a few hours and being able to work with a great team. Some of the members of this team also had to learn new software in just a few hours, which was a challenge, but this was a good experience and we'll be much more prepared for our next hackathon. We are proud to have experimented with two new tools thanks to this application: we were able to draft a website through Wix and create an app through Xcode and SwiftUI. Another accomplishment is that our team consists of first-time hackers, so we are proud to have started the journey of hacking and cannot wait to see what is waiting for us in the future. ## What we learned We learned how to use the Wix website builder for the first time and also how to collaborate as a team. We didn't really know each other and happened to meet at the competition, and we will probably work together at another hackathon in the future. We also learned that a positive mindset is another important asset to bring to a hackathon. At first, we felt intimidated by hackathons, but we are thankful to have learned that hackathons can be fun and a priceless learning experience. ## What's next for PrescriptionCare It would be nice to create a mobile app so that users can get updates and notifications when their medication arrives. We could create a tracking system that keeps track of the medication you take and estimates when the user will finish their medication. PrescriptionCare will continue to expand and develop its services to reach a wider audience. We hope to offer more medications and subscription plans for post-secondary students who live away from home, at-home caretakers, and more, and we aim to bring access to medicine to everyone. Our next goal is to continue developing our website and mobile app (both Android and iOS), as well as to collect data on pharmaceutical drugs and their usage. We hope to make our app more diverse and inclusive, with a wide variety of medications and delivery methods.
losing
## Inspiration We created Stage Hand with the hopes of equipping people with the necessary skills to become poised and confident speakers. Upon realizing how big of a fear public speaking is for most people, we wanted to help people cope with that fear while gaining the skillsets necessary to become better presenters. Stage Hand was our solution to this as it provides users with a real time feedback tool that helps them rapidly improve their skills while overcoming their fears. ## What it does Stage Hand is a web application that offers users real time feedback on their speaking pace, expression, and coherence by having them record videos of themselves delivering speeches. The goal in doing this is to help users pinpoint and improve upon the weak aspects of their public speaking ability. As you practice with Stage Hand, it essentially coaches you on how to polish your speeches by keeping track of and displaying vital statistics like your average speaking speed, the main emotion that your facial expression conveys, and how many filler words you have used. We hope that making this information available in real time will allow for users to learn how to adapt while speaking in order to create the optimal speech and become a holistically better public speaker. ## How we built it The Stage Hand interface was built using the React library to compose the user interface and the MediaDevices object to access our user’s video camera. The data we collect from each recording is sent to the suite of Microsoft’s Cognitive Services, utilizing the Bing Speech API’s speech-to-text function and developing our own custom language and acoustic models through the Custom Speech Service. Furthermore, various points throughout the video recording were passed into the Microsoft Emotion API, which we used to analyze the speaker’s emotional expression at any given point. Throughout the entire recording, live feedback is being given to the user so they can continue to adapt and learn the skills necessary to become excellent public speakers. ## Challenges we ran into Our biggest challenge that we faced in our final rounds of debugging was that we kept getting a 403 Error when calling the Bing Speech API that was necessary for the inner speech-to-text aspect of our program. We addressed this issue with a Microsoft mentor and were able to determine that the root cause of this was on Microsoft’s end. We ended up pinging a Microsoft employee in India who had a similar issue recently, and we are awaiting a response so that we can optimize our application. In the meantime, we found a temporary workaround for this issue: we were suggested to call REST API instead of WebSocket API even though REST only works for fifteen seconds of recording. This has allowed us to create a working product that we can demo (although it is less efficient). We hope to fix this in the future upon getting a response from the employee in India. Another chief issue that we faced is that we were having a hard time recognizing filler words in the pattern of normal speech because most speech-to-text APIs automatically filter out filler words. This made it nearly impossible for us to actually be able to detect them. We were able to overcome this issue by developing custom language and acoustic models on Microsoft’s Custom Speech Service. These models allowed us to detect filler words so that we would be able to count them in our program and give feedback to the user on how to minimize the use of fillers in their speech. 
## Accomplishments that we're proud of We are especially proud of our team’s ability to adapt to the variety of issues we had come across in the process of building the application. Every time we made a breakthrough, it seemed like we ran into a new issue. For example, when it seemed like we were almost done creating the backbone of our application, we faced issues when calling the Bing Speech API. Regardless of these challenges that were thrown at us, we were always able to overcome them in one way or another. In this sense, we are most proud of our team’s resilience. ## What we learned From this experience, we learned how to utilize and incorporate many technologies that none of us have previously used. For example, we were able to learn how to use Microsoft’s Cognitive Services in order to perform tasks like detecting emotion and converting speech to text for the internal processing of our application. Additionally, we learned how to develop custom language and acoustic models using Microsoft’s Custom Speech Service. Overall, we learned a lot from this experience when incorporating technologies into our application that were new to all of us. ## What's next for Stage Hand Since we ran into difficulty using the Web Socket API, we had trouble opening up a tunnel from our client to the server. This bottlenecked our ability to create a live stream that directly funnels our user’s video data to the Microsoft Cognitive Services. In the future, we hope to overcome these issues to create an app that truly allows for live interactive feedback from our devices.
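Once the custom speech models surface filler words in the transcript, tallying them on the client is simple. The sketch below is a rough, standalone illustration of that counting step only; the filler list and function name are placeholders, not Stage Hand's actual code.

```python
# Count occurrences of known filler words in a transcript string.
import re
from collections import Counter

FILLERS = {"um", "uh", "like", "you know", "basically", "actually"}

def count_fillers(transcript: str) -> Counter:
    text = transcript.lower()
    counts = Counter()
    for filler in FILLERS:
        # Word boundaries so "like" doesn't match "unlike", etc.
        counts[filler] = len(re.findall(rf"\b{re.escape(filler)}\b", text))
    return counts

print(count_fillers("So, um, basically we, like, you know, built it overnight"))
```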
## Inspiration Only a small percentage of Americans use ASL as their main form of daily communication. Hence, no one notices when ASL-first speakers are left out of using FaceTime, Zoom, or even iMessage voice memos. This is a terrible inconvenience for ASL-first speakers attempting to communicate with their loved ones, colleagues, and friends. There is a clear barrier to communication between those who are deaf or hard of hearing and those who are fully-abled. We created Hello as a solution to this problem for those experiencing similar situations and to lay the ground work for future seamless communication. On a personal level, Brandon's grandma is hard of hearing, which makes it very difficult to communicate. In the future this tool may be their only chance at clear communication. ## What it does Expectedly, there are two sides to the video call: a fully-abled person and a deaf or hard of hearing person. For the fully-abled person: * Their speech gets automatically transcribed in real-time and displayed to the end user * Their facial expressions and speech get analyzed for sentiment detection For the deaf/hard of hearing person: * Their hand signs are detected and translated into English in real-time * The translations are then cleaned up by an LLM and displayed to the end user in text and audio * Their facial expressions are analyzed for emotion detection ## How we built it Our frontend is a simple React and Vite project. On the backend, websockets are used for real-time inferencing. For the fully-abled person, their speech is first transcribed via Deepgram, then their emotion is detected using HumeAI. For the deaf/hard of hearing person, their hand signs are first translated using a custom ML model powered via Hyperbolic, then these translations are cleaned using both Google Gemini and Hyperbolic. Hume AI is used similarly on this end as well. Additionally, the translations are communicated back via text-to-speech using Cartesia/Deepgram. ## Challenges we ran into * Custom ML models are very hard to deploy (Credits to <https://github.com/hoyso48/Google---American-Sign-Language-Fingerspelling-Recognition-2nd-place-solution>) * Websockets are easier said than done * Spotty wifi ## Accomplishments that we're proud of * Learned websockets from scratch * Implemented custom ML model inferencing and workflows * More experience in systems design ## What's next for Hello Faster, more accurate ASL model. More scalability and maintainability for the codebase.
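The real-time relay pattern described above can be sketched with the `websockets` package. This is a minimal illustration rather than the project's backend: the sign/speech inference call is stubbed out, and every connected peer simply receives the translated message.

```python
# Minimal WebSocket relay sketch: each incoming message is (notionally) run
# through inference, then fanned out to the other connected clients.
import asyncio
import websockets

connected = set()

async def handler(ws):
    connected.add(ws)
    try:
        async for message in ws:
            # In the real app this is where sign/speech inference would run;
            # here the "translation" is just the message itself.
            translation = message  # placeholder for model output
            await asyncio.gather(
                *(peer.send(translation) for peer in connected if peer is not ws)
            )
    finally:
        connected.discard(ws)

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())
```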
## Inspiration Knowtworthy is a startup that all three of us founded together, with the mission to make meetings awesome. We have spent this past summer at the University of Toronto’s Entrepreneurship Hatchery’s incubator executing on our vision. We’ve built a sweet platform that solves many of the issues surrounding meetings, but we wanted a glimpse of the future: entirely automated meetings. So we decided to challenge ourselves and create something that the world has never seen before: sentiment analysis for meetings while transcribing and attributing all speech. ## What it does While we focused on meetings specifically, as we built the software we realized that the applications for real-time sentiment analysis are far more varied than initially anticipated. Voice transcription and diarisation are very powerful for keeping track of what happened during a meeting, but sentiment can be used anywhere from the boardroom to the classroom to a psychologist’s office. ## How I built it We felt a web app was best suited for software like this so that it can be accessible to anyone at any time. We built the frontend on React, leveraging Material UI, React-Motion, Socket.IO and ChartJS. The backend was built on Node (with Express) as well as Python for some computational tasks. We used gRPC, Docker and Kubernetes to launch the software, making it scalable right out of the box. For all relevant processing, we used Google Speech-to-Text, Google diarization, Stanford Empath, scikit-learn and GloVe (for word-to-vec). ## Challenges I ran into Integrating so many moving parts into one cohesive platform was a challenge to keep organized, but we used Trello to stay on track throughout the 36 hours. Audio encoding was also quite challenging, as we ran up against some limitations of JavaScript while trying to stream audio in the correct and acceptable format. Apart from that, we didn’t encounter any major roadblocks, but we were each working for almost the entire 36-hour stretch as there were a lot of features to implement. ## Accomplishments that I'm proud of We are super proud of the fact that we were able to pull it off, as we knew this was a challenging task from the start and we ran into some unexpected roadblocks. There is nothing else like this software currently on the market, so being first is always awesome. ## What I learned We learned a whole lot about integration, both on the frontend and the backend. We prototyped before coding, introduced animations to improve user experience, learned far too much about how computers store numbers (:p), and did a whole lot of stuff in real time. ## What's next for Knowtworthy Sentiment Knowtworthy Sentiment aligns well with our startup’s vision for the future of meetings, so we will continue to develop it and make it more robust before integrating it directly into our existing software. If you want to check out our stuff you can do so here: <https://knowtworthy.com/>
partial
## Why MusicShift? When you listen to music, it belongs to you & your friends. We want to make sure you feel that way about every song. Switching aux cords, settling for lackluster playlists, or attempting to plan a playlist in advance doesn't let that happen. Through MusicShift, we make sure that the best playlist is also the most spontaneous. ## What is it? MusicShift is a plug-and-play, ever-evolving collaborative playlist in a box. Just plug in an aux cord, share a QR code with your friends, and let the best music start playing. MusicShift lets you collaborate on your playlists. You can add songs to your playlist, and even upvote songs that others have added so the more popular songs are played sooner. There is no limit to the songs you can search, and no limit to the number of people who can collaborate on a single playlist through real time multi-user sync. Playlists can have different purposes too. MusicShift is fun enough to be the music player during a carpool, and sophisticated enough to supply the music in public parks and restaurants. There's no need to worry about how your party's playlist fares when everyone is working together to pick the music. ## How it works MusicShift is made up of three parts: a hardware device, a progressive web app, and a database backend. The hardware device is a Raspberry Pi 2 which polls the backend (MongoDB database of tracks & votes used to generate rankings / play order) for the next Spotify song to play. Using Spotify’s Python bindings & taking advantage of its predictable caching locations, we intercept the downloaded streams and live route them to the aux output. Meanwhile, our progressive web app built using Polymer offers a live view into the playlist - what’s playing, what’s next, the ability to upvote/downvote songs to have them play sooner or later, and of course skip functionality (optional, configurable by the playlist creator). It loads instantly on users’ devices and presents itself as a like-native app (addable to the user lockscreen). ## What's next? Here's a look at the future of MusicShift: * User authentication, so you have complete control over your playlists * Playlist uploads through Spotify integration * Establish private and public streams for different settings and venues * NFC or Bluetooth beacon with MusicShift for easier connection
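The Pi-side loop in MusicShift reduces to polling the backend for the highest-voted track and handing it to the player. The sketch below is a rough stand-in: the endpoint URL, response shape, and `play` function are placeholders for the actual ranking API and Spotify stream routing.

```python
# Illustrative Raspberry Pi polling loop: ask the backend for the next track,
# then hand it to a playback function (stubbed here).
import time
import requests

BACKEND = "https://example.com/api"  # placeholder URL

def play(track_id: str) -> None:
    print(f"now playing {track_id}")  # the real device routes audio to the aux output

while True:
    try:
        resp = requests.get(f"{BACKEND}/next-track", timeout=5)
        resp.raise_for_status()
        track = resp.json()  # e.g. {"id": "...", "title": "..."} once votes are tallied
        play(track["id"])
    except requests.RequestException:
        pass  # keep polling through transient network errors
    time.sleep(2)
```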
## Inspiration This is a project that was given to me by an organization, and my colleagues inspired me to do it. ## What it does It reminds you of what you have to do in the future and also lets you set the time when each task is to be done. ## How we built it I built it as a command-line utility in Python. ## Challenges we ran into There were many challenges, such as storing data in a file, and many bugs came up in the middle of the program. ## Accomplishments that we're proud of I am proud that I made this real-time project, which reminds a person to do their tasks. ## What we learned I learned more about building command-line utilities in Python. ## What's next for Todo list Next, I am planning various projects such as a virtual assistant and game development.
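A command-line reminder of this kind can be sketched in a few lines of Python. The snippet below is an illustrative reimplementation, not the original code: tasks live in a JSON file alongside a due time, and the `due` command prints anything that is past due.

```python
# Tiny to-do reminder sketch: `add` stores a task with a due time, `due` lists overdue tasks.
import json, sys
from datetime import datetime
from pathlib import Path

DB = Path("todo.json")

def load():
    return json.loads(DB.read_text()) if DB.exists() else []

def save(tasks):
    DB.write_text(json.dumps(tasks, indent=2))

if __name__ == "__main__":
    cmd = sys.argv[1] if len(sys.argv) > 1 else "due"
    if cmd == "add":                      # python todo.py add "buy milk" "2024-06-01 18:00"
        tasks = load()
        tasks.append({"task": sys.argv[2], "due": sys.argv[3]})
        save(tasks)
    else:                                 # python todo.py due
        now = datetime.now()
        for t in load():
            if datetime.strptime(t["due"], "%Y-%m-%d %H:%M") <= now:
                print("DUE:", t["task"])
```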
## Inspiration We both love karaoke, but there are lots of obstacles: * going to a physical karaoke bar is expensive and inconvenient * YouTube karaoke videos don't always match your vocal (key) range, and there is also no playback * existing karaoke apps have limited songs and aren't flexible based on your music taste ## What it does Vioke is a karaoke web app that supports pitch changing, on/off vocal switching, and real-time playback, simulating the real karaoke experience. Unlike traditional karaoke machines, Vioke is accessible anytime, anywhere, from your own devices. ## How we built it **Frontend** The frontend is built with React, and it handles settings including on/off playback, on/off vocals, and pitch changing. **Backend** The backend is built in Python. It leverages a source-separation ML library to extract instrumental tracks. It also uses a pitch-shifting library to adjust the key of a song. ## Challenges we ran into * Playback latency * Backend library compatibility conflicts * Integration between frontend and backend * Lack of GPU / computational power for audio processing ## Accomplishments that we're proud of * We were able to learn and implement audio processing, an area we did not have experience with before. * We built a product that can be used in the future. * Scrolling lyrics is epic * It works!! ## What's next for Vioke * Caching processed audio to eventually create a data source that we can leverage to reduce processing time. * Training models for source separation in other languages (we found that the pre-built library mostly supports English vocals). * If time and resources allow, scaling it into a platform where people can share their karaoke playlists and post their covers.
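The pitch-shifting step maps naturally onto librosa. The sketch below is a minimal illustration under the assumption that the source-separation stage has already produced an instrumental WAV; the function and file names are placeholders rather than Vioke's actual code.

```python
# Shift the key of an already-separated instrumental track by N semitones.
import librosa
import soundfile as sf

def shift_key(in_path: str, out_path: str, semitones: int) -> None:
    y, sr = librosa.load(in_path, sr=None)          # keep the original sample rate
    shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=semitones)
    sf.write(out_path, shifted, sr)

# e.g. lower the instrumental by two semitones for a lower vocal range
shift_key("instrumental.wav", "instrumental_down2.wav", -2)
```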
partial
## Inspiration We were inspired to create Hackgenic after we were having trouble deciding on what project we should make for this hackathon. With multiple members with such different experiences, skills, and interests, it's sometimes hard to find commonalities and decide on a project that everyone agrees on. It can be very difficult to choose a project, and when faced with a completely blank sheet of paper, many people find that they need a few preliminary ideas to bounce off of. ## What it does That's why we built Hackgenic. This is the perfect app to help you choose a project to create, as it asks for input on your interests, any preferred languages or frameworks, and some demographic information to provide you with multiple project ideas sure to fit your needs and interests. ## How we built it We built Hackgenic using only HTML, CSS, and JavaScript. ## Challenges we ran into Some challenges we ran into included issues with styling, keeping webpages and code files organized, and combining all of our skills and abilities to create the final product. ## Accomplishments that we're proud of That being said, we are proud of how much we've learned about both the concepts we used throughout the project, as well as the project-making process in general, and all the skills that we have gained throughout the hackathon. ## What we learned We learned more about using web-development to create quizzes, take user input, and create an algorithm to output potential project ideas. ## What's next for Hackgenic Although we were confined by tight time constraints during the hackathon, we had great plans for Hackgenic. In the future, we would like to integrate web scraping to provide users with a neverending supply of potential ideas and to provide a greater range of possibilities to our users.
## Inspiration University can be very stressful at times, and sometimes the resources you need are not readily available to you. This web application can help students relax by completing positive "mindfulness" tasks to beat their monsters. We decided to build a web application specifically for mobile, since we thought the app would be most effective if it were easily accessible to anyone. We also implemented a social aspect by allowing users to make friends, making the experience more enjoyable. Everything in the design was carefully considered, and we always had mindfulness in mind, having done some research into how colours and shapes can affect your mood. ## What it does Mood prompts you to choose your current mood on your first login of every day, which then allows us to play music that matches your mood as well as create monsters that you can defeat by completing tasks meant to ease your mind. You can also add friends and check on their progress, and in theory interact with each other through the app by working together to defeat the monsters; however, we haven't been able to implement this functionality yet. ## How we built it The project uses Ruby on Rails to implement the backend and JavaScript and React Bootstrap on the front end. We also used GitHub for source control management. ## Challenges we ran into Starting the project, a lot of us had issues downloading the relevant software, as we started from a boilerplate, but through collaboration we were all able to figure it out and get started on the project. Most of us were still beginning to learn about web development and thus had limited experience programming in JavaScript and CSS, but again, through teamwork and the help of some very insightful mentors, we were able to pick up these skills very quickly, with each of us able to contribute a fair portion to the project. ## Accomplishments that we are proud of Every single member of this group has learned a lot from this experience, no matter their previous experience going into the hackathon. The more experienced members did an amazing job helping those who had less experience, and those with less experience were able to quickly learn the elements of creating a website. The thing we are most proud of is that we were able to accomplish almost every one of the goals we had set prior to the hackathon, building a fully functioning website that we hope will be able to help others. Every one of us is proud to have been a part of this amazing project, and we look forward to doing more like it. ## What's next for Mood Our goal is to give help to those who need it through this accessible app, and to make it something users would want to use. Thus, to improve this app we would like to: add more mindfulness tasks, implement a 'friend collaboration' aspect, and potentially make a mobile application (iOS and Android) so that it is even more accessible to people.
## Inspiration Internet addiction, while not yet codified within a psychological framework, is growing in prevalence as a potentially problematic condition with many parallels to existing recognized disorders. How much time do you spend on your phone a day? On your laptop? Using the internet? What fraction of that time is used doing things that are actually productive? Over the past years, a stronger and stronger link has been seen between an increasing online presence and deteriorating mental health. We spend more time online than we do taking care of ourselves and our mental/emotional wellbeing. However, people are becoming more aware of their own mental health and more open to sharing their struggles and dealing with them. Yet the distractions of social media, games, and scrolling are constantly undermining our efforts. Even with an understanding of the harmful nature of these technologies, we still find it so difficult to pull our attention away from them. Much of the media we consume online is built to be addicting, to hook us in. How can we pull ourselves away from these distractions in a way that doesn’t feel punishing? ## What it does Presents an audiovisual experience that allows the user to become more aware of their “mind space”: lo-fi audio and slow-moving figures; a timer that resets every time the keyboard or mouse is moved; and a new plant awarded to the player each time the timer comes to an end. ## How we built it UI/UX design using Figma; frontend using HTML, JS, and CSS; AWS Amplify for deploying our web app; GitHub for version control. ## Challenges we ran into We initially wanted to use React.js and develop the server backend ourselves, but since we are inexperienced, we had to scale back our goals. Our team consisted mostly of people with backend experience, so it was difficult to switch to front-end work. Furthermore, most of our members were participating in their first hackathon. ## Accomplishments that we're proud of We're very proud of learning how to work together as a team and managing a project in GitHub for the first time. We're proud of having an end product, even though it didn't fully meet our expectations. We're happy to have had this experience, and can't wait to participate in more hackathons in the future. ## What we learned We developed a lot of web development skills, specifically with JavaScript, as most of our members had never used it in the past. We also learned a lot about AWS. We're all very excited about how we can leverage AWS to develop more serverless web applications in the future. ## What's next for Mind-Space We want to develop Mind-Space to play more like an idle game, where the user can choose their choice of relaxing music or guided meditation. As the player spends more time in their space, different plants will grow, some animals will be introduced, and eventually what started as just one sprout will become a whole living ecosystem. We want to add social features, where players can add friends and visit each other's Mind-Space, leveraging AWS Lambda and MongoDB to achieve this.
losing
## Inspiration We wanted to find a way to use music to help users improve their mental health. Music therapy is used to improve cognitive brain functions like memory, regulate mood, improve the quality of sleep, manage stress, and it can even help with depression. What's more, music therapy has been proven to help with physical wellbeing as well when paired with usual treatment in clinical settings. Music is also a great way to improve productivity, such as listening to pink noise or white noise to aid focus or sleep. We wanted to see if it was possible to tailor this music to a user's music taste when given a music therapy goal, enhancing the experience of music therapy for the individual. This can be even more applicable in challenging times such as COVID-19, due to a decline in mental health for the general population. This is where Music Remedy comes in: understanding the importance of music therapy and all its applications, can we *personalize this experience even further*? ## What it does Users can log in to the app using their Spotify credentials, at which point the app has access to the user's Spotify data. As users navigate through the app, they are asked to select their goal for music therapy. When the music therapy goal is chosen, the app creates a playlist for the user according to their Spotify listening history and pre-existing playlists to help them with emotional and physical health and healing. By processing the user's taste in music, the app can create a playlist that favors the user's likes and needs while still accomplishing the goals of music therapy. This allows the user to experience music therapy in a more enhanced and personalized way, ensuring that the music aids their therapy while also coming from songs and artists that they enjoy. ## How we built it We used the Spotify API to handle authentication and access the users' music taste and history. This gave details on which artists and genres the user typically listened to and enjoyed. We built the web app representing the user interface using NodeJS paired with ExpressJS on the backend, and JavaScript, HTML, and CSS on the front end through the use of EJS templating, Bootstrap, and jQuery. A prototype for the mobile app representing the event participant interface was built using Figma. ## Challenges we ran into There are definitely many areas of the Musical Remedy app that could be improved upon if we were given more time. Deciding on the idea and figuring out how much was feasible within the time given was a bit of a challenge. We tried to find the simplest way to effectively get our point across. Additionally, this hackathon was our first introduction to the Spotify API and to using a user's data to generate new data for other uses. ## Accomplishments that we're proud of We are proud of how much we learned about the Spotify API and how to use it over the course of the weekend. Additionally, becoming more efficient in HTML, CSS and JavaScript using EJS templating is something we'd definitely use in the future with other projects. ## What we learned We've learned how to effectively manage our time during the hackathon. We tried to do most of our work during the peak hours of our productivity and took breaks whenever we got too overwhelmed or frustrated. Additionally, we learned more about the Spotify API and the use of APIs in general.
## What's next for Musical Remedy Musical Remedy has the potential to expand its base of listeners beyond just Spotify users. So far, we are imagining hooking up a database to store users' music preferences, which would be recorded using a survey of their favourite genres, artists, etc. Additionally, we'd like to increase the number and specificity of parameters when generating recommended playlists, to tailor them to specific categories (pain relief, etc.). ## Domain.Com Entry Our domain is registered as `whatsanothernameforapplejuice-iphonechargers.tech`
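A hedged sketch of the playlist-generation idea using the spotipy client: seed recommendations from the user's top artists and bias Spotify's tunable audio features toward a therapy goal. The goal-to-feature mapping, scopes, and function name shown here are illustrative assumptions, not the app's actual values.

```python
# Illustrative goal-aware recommendation sketch with spotipy.
import spotipy
from spotipy.oauth2 import SpotifyOAuth

# Hypothetical mapping from therapy goal to Spotify's tunable audio features.
GOAL_FEATURES = {
    "sleep":  {"target_energy": 0.2, "target_valence": 0.4},
    "focus":  {"target_energy": 0.4, "target_instrumentalness": 0.8},
    "uplift": {"target_energy": 0.7, "target_valence": 0.9},
}

sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
    scope="user-top-read playlist-modify-private"))

def therapy_tracks(goal: str):
    top = sp.current_user_top_artists(limit=5)["items"]
    recs = sp.recommendations(
        seed_artists=[a["id"] for a in top],
        limit=20,
        **GOAL_FEATURES[goal],
    )
    return [t["uri"] for t in recs["tracks"]]
```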
## Inspiration There are three other things that slow down Metaverse adoption today: * Not really immersive without a powerful VR * Bad user experience * Low usage ## What it does We are Mimetic and we want to make the Metaverse more fun, immersive, and accessible through augmented puppetry that translates real movement to the Metaverse with a single camera. ## How we built it We used Unity and a deep learning powered motion capture algorithm to build the base of our interactive Metaverse experience. ## Challenges we ran into We ran into challenges compiling our solution and mastering the submission process. ## Accomplishments that we're proud of We are proud of the highly expansive and immersive environments we created to explore the power of Mimetic and motion capture technology. ## What we learned We learned C# and designing for Unity from each other and the importance of going without sleep in these competitions. ## What's next for Mimetic Our path forward is to continue harnessing movement and the magical ambiance from the real-world with unique virtual experiences that exceed what's possible, transcending limits of time and space.
## Inspiration Homes are becoming more and more intelligent with smart home products such as the Amazon Echo or Google Home. However, users have limited information about the infrastructure's status. ## What it does Our smart chat bot helps users monitor their house's state from anywhere using low-cost sensors. Our product is easy to install, user-friendly and fully expandable. **Easy to install** By using compact sensors, HomeScan is able to monitor information from your house. Afraid of gas leaks or leaving the heating on? HomeScan has you covered. Our product requires minimal setup and is energy efficient. In addition, since we use a small cellular IoT board to gather the data, HomeScan sensors are Wi-Fi-independent. This way, HomeScan can be placed anywhere in the house. **User friendly** HomeScan uses Cisco Spark bots to communicate data to users. Run diagnostics, ask for specific sensor data, our bots can do it all. Best of all, there is no need to learn command lines, as our smart bots use text analysis technologies to find the perfect answer to your question. Since we are using Cisco Spark, the bots can be accessed on the go on both the Spark mobile app and our website. Therefore, you'll have no problem accessing your data while away from home. **Fully expandable** HomeScan was built with the future in mind. Our product will fully benefit from future technological advancements. For instance, 5G will enable HomeScan to expand and reach places that currently have a poor cellular signal. In addition, the anticipated release of Cisco Spark's "guestID" will grant an even wider audience access to our smart bots. Newer bot customization tools will also allow us to implement additional functionality. Lastly, HomeScan can be expanded into an infrastructure ranking system. This could have a tremendous impact on the real-estate industry, as houses could be rated based on their infrastructure performance. This way, data could be used by services such as AirBnB, insurance companies and even home-owners. We are confident that HomeScan is the solution for monitoring a healthy house and improving your real-estate decisions. ## How I built it The infrastructure's information is gathered through a Particle Electron board running off the cellular network. The data is then sent to an Amazon Web Services server. Finally, a Cisco Spark chat bot retrieves the data and outputs relevant answers according to the user's inputs. The intelligent bot is also capable of warning the user in case of an emergency. ## Challenges I ran into Early on, we ran into numerous hardware issues with the Particle Electron board. After consulting with industry professionals and hours of debugging, we managed to get the board working the way we wanted. Additionally, with no experience in back-end programming, we struggled a lot to understand the tools and the interactions between platforms, but we ended up with successful results. ## Accomplishments that we are proud of We are proud to showcase a full-stack solution built with various tools despite having very little to no prior experience with them. ## What we learned With perseverance and mutual moral support, anything is possible. And never be shy to ask for help.
partial
## Inspiration Our team was inspired by the beloved movie Ratatouille, a film that shaped our childhoods and sparked our love for cooking. As university students, we’ve faced the challenges of balancing cooking with busy academic schedules and knew there had to be a way to make it easier and more fun. That’s why we decided to bring Remy, the genius chef rat, from the screen into reality! With our project, anyone can cook—no matter how busy or inexperienced you are in the kitchen. Remy will be your guide, helping you whip up delicious dishes with ease. ## What it does Remy is your personal kitchen guide, sitting right on your head—just like in Ratatouille. Simply say, “What’s up, Remy?” to initiate a conversation. Not sure what to cook? Using his "eyes" and "ears," Remy will fetch a recipe based on what you say, and guide you through every step. He’ll even give playful tugs on your hair to steer you toward the right ingredients and tools in your kitchen—just like in the movie! Feeling lost in the sauce? Ask Remy anything, and he’ll answer instantly. No more scrolling through recipes on your phone with greasy hands—Remy’s got your back, helping you chef it up with ease and fun. ## How we built it We built Remy using a multithreaded system on a Flask backend, running a total of five threads to manage real-time tasks. One thread constantly listens for audio input, which is captured using a mobile phone acting as both the microphone and camera due to hardware limitations with the Raspberry Pi. The audio is transcribed into text using speech-to-text APIs, and another thread handles text input, processing it through a fine-tuned OpenAI model optimized for cooking instructions. A separate thread replays responses using text-to-speech, while the fifth thread captures footage from the phone’s camera for analysis and uploading. For the physical interaction, we used a Raspberry Pi to control servo motors that power Remy’s animatronic movements. Although the original plan included complex servo movements, we simplified them due to hardware failures. Remy can still perform basic motions, guiding users in the kitchen. Additionally, we developed a dynamically updating frontend using React and Socket.io, allowing users to track the images captured by Remy, view the questions they’ve asked, and see the answers Remy provides in real time. We also leveraged TensorFlow machine learning models to help Remy identify ingredients for recipes, such as distinguishing between different fruits. ## Challenges we ran into Our project faced several challenges, particularly with hardware and integration. Initially, we planned to use a microphone and camera directly connected to the Raspberry Pi, but we couldn’t procure a working setup in time. As a result, we shifted to using a mobile phone for audio and visual input, which required adapting our system to handle external hardware. Additionally, we encountered issues with the servo motors used to animate Remy. During testing, multiple servo motors failed, and with limited hardware resources available, we had to simplify Remy's movements to basic gestures. We also had to learn sewing techniques to embed the servo motors into Remy’s skeleton, dealing with snapping strings and other complications in the process. On the software side, we had to manage the complexity of multithreading across five separate tasks on a Flask backend, ensuring that the real-time audio processing, transcription, and camera input didn’t cause delays or crashes. 
Balancing these threads while keeping performance smooth was a challenge, especially when processing multiple inputs simultaneously. We also had to fine-tune machine learning models using OpenAI and TensorFlow to help Remy correctly identify ingredients in real time. This required a lot of iteration, as the models needed to accurately distinguish between similar-looking ingredients, such as different fruits, which took considerable time and effort. Lastly, integrating all of these components into a coherent, user-friendly experience was a significant challenge. Developing a dynamically updating frontend using React and Socket.io, while syncing it with real-time data from the backend, took careful coordination. Despite these obstacles, we were able to successfully deliver a functional, interactive kitchen assistant. ## Accomplishments that we're proud of We are incredibly proud to have presented a coherent, functional solution made out of multiple machine learning models, showcasing our abilities of problem solving and working under pressure. When faced with complications, such as failed hardware and problems with our trained models, we were able to pivot efficiently and find solutions on the fly. Another achievement was successfully integrating all the hardware components into a cohesive system, overcoming the complexities of putting everything together. ## What we learned Throughout this project, we gained valuable insights into integrating diverse technologies and handling real-world constraints. We learned the intricacies of multithreading on a Flask backend, managing multiple simultaneous tasks while maintaining real-time performance. Working with machine learning models in TensorFlow taught us how to fine-tune models for specific applications, such as ingredient recognition. We also discovered the importance of adapting to hardware limitations and learned practical skills like sewing to integrate servo motors into animatronics. This project reinforced our problem-solving abilities and highlighted the significance of flexibility and innovation when facing unexpected challenges. ## What's next for Remy the Rat Looking ahead, we plan to enhance Remy's capabilities with additional features and improvements. We aim to refine the animatronics by integrating more advanced servo motor movements and exploring alternative materials to improve durability and functionality. Expanding Remy’s knowledge base with more comprehensive recipe databases and multilingual support will make him even more versatile in the kitchen. We also envision incorporating more sophisticated computer vision algorithms to better identify and interact with ingredients. Additionally, we want to explore potential integrations with smart kitchen devices for a more seamless and efficient cooking experience. Our goal is to continue evolving Remy into an even more helpful and interactive kitchen assistant.
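A simplified sketch of the threaded layout described above: worker threads pass messages over queues while Flask exposes the latest question and answer to the React frontend. The listener, model, and text-to-speech internals are stubbed out, and the thread count, endpoint, and data shapes are illustrative assumptions rather than Remy's actual code.

```python
# Sketch of a multithreaded Flask backend with queue-based handoff between stages.
import queue
import threading
import time

from flask import Flask, jsonify

audio_q: "queue.Queue[str]" = queue.Queue()
reply_q: "queue.Queue[str]" = queue.Queue()
latest = {"question": "", "answer": ""}

def listen_loop():           # thread 1: capture + transcribe audio (stubbed)
    while True:
        audio_q.put("what's the next step?")
        time.sleep(5)

def answer_loop():           # thread 2: run the cooking-tuned model (stubbed)
    while True:
        question = audio_q.get()
        answer = f"Remy says: keep stirring ({question})"
        latest.update(question=question, answer=answer)
        reply_q.put(answer)

def speak_loop():            # thread 3: text-to-speech playback (stubbed)
    while True:
        print("TTS:", reply_q.get())

app = Flask(__name__)

@app.get("/status")
def status():                # polled (or pushed via Socket.IO) by the frontend
    return jsonify(latest)

if __name__ == "__main__":
    for fn in (listen_loop, answer_loop, speak_loop):
        threading.Thread(target=fn, daemon=True).start()
    app.run(port=5000)
```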
# Inspiration We wanted to improve our cooking skills and reduce food waste by using all available ingredients effectively. As students who love cooking and savoring good food but don't have much time to spend in the kitchen every day, we needed a way to whip up delightful meals efficiently without compromising on taste and satisfaction. # What it does We developed RecipeMakerAI using React for the frontend and Node.js for the backend, and integrated various APIs to enhance functionality. The Edamam API powers our ingredient-to-recipe search, while TensorFlow's pre-trained models enable image recognition for ingredient detection. We also used React Router for navigation and Axios for API requests. # Challenges we ran into We encountered several challenges, particularly with implementing the Edamam API. Initially, we attempted to use Node.js, but the API responses were only displayed on the console. We switched to pure JavaScript for better integration. Additionally, we struggled with finding an optimal image classification model and ultimately chose TensorFlow's pre-built model after extensive research. # Accomplishments that we're proud of We are proud to have successfully integrated image detection and API implementation. Despite the hurdles, we managed to get the image detection working and effectively utilized the APIs to enhance the app's functionality. # What we learned Throughout this project, we learned a great deal about API implementation and identifying suitable databases for our needs. We also explored using OpenAI to display recipe preparation times. Furthermore, we deepened our understanding of TensorFlow's pre-built models and their capabilities. Our skills in React were strengthened as we built multiple components, used React Router DOM, and integrated various APIs to create a comprehensive app with both frontend and backend features. # What's next for RecipeMaker Looking ahead, we plan to implement more features to assist users with disabilities and dietary restrictions. We aim to improve our image classification system by using a larger database for more accurate food identification. Additionally, we want to expand our model to support real-time object detection via video and provide more options to filter recipes based on cuisine, meal type, allergy, or dietary restrictions. We also envision enabling users to shop for missing ingredients and find the best deals nearby. Future updates will include a trending recipes section and user reviews for generated recipes to better cater to our users' needs. Next steps for RecipeMaker involve refining accuracy in ingredient detection. This will be achieved through fine-tuning and training our own dataset to ensure more precise and reliable results.
## Inspiration The idea for Refrigerator Ramsay came from a common problem we all face: food waste. So many times, we forget about ingredients in the fridge, or we don't know how to use them, and they go bad. We wanted to create something that could help us not only use those ingredients but also inspire us to try new recipes and reduce food waste in the process. ## What it does Refrigerator Ramsay is an AI-powered kitchen assistant that helps users make the most of their groceries. You can take a picture of your fridge or pantry, and the AI will identify the ingredients. Then, it suggests recipes based on what you have. It also has an interactive voice feature, so you can ask for cooking tips or get help with recipes in real-time. ## How we built it We used **Google Gemini AI** for image recognition to identify ingredients from photos, and then to come up with creative meal ideas. We also added **Hume EVI**, an emotionally intelligent voice AI, to make the experience more interactive. This lets users chat with the AI to get tips, learn new cooking techniques, or personalize recipes. ## Challenges we ran into One challenge we faced was working with **Hume EVI** for the first time. It was really fun to explore, but there was definitely a learning curve. We had to figure out how to make the voice assistant not just smart but also engaging, making sure it could deliver cooking tips naturally and understand what users were asking. It took some trial and error to get the right balance between information and interaction. Another challenge was training the **Google Gemini AI** to be as accurate as possible when recognizing ingredients. It wasn't easy to ensure the AI could reliably detect different foods, especially when items were stacked or grouped together in photos. We had to spend extra time fine-tuning the model so that it could handle a variety of lighting conditions and product packaging. ## Accomplishments that we're proud of We're proud of creating a solution that not only helps reduce food waste but also makes cooking more fun. The AI's ability to suggest creative recipes from random ingredients is really exciting. We're also proud of how user-friendly (and funny) the voice assistant turned out to be, making it feel like you're cooking with a friend. ## What we learned We learned a lot about how AI can be used to solve everyday problems, like reducing food waste. Working with **Hume EVI** taught us about building conversational AI, which was new for us. It was really fun, but there was definitely a challenge in figuring out how to make the voice interaction feel natural and helpful at the same time. We also learned about the importance of **training AI models**, especially with the **Gemini AI**. Getting the image recognition to accurately identify different ingredients required us to experiment a lot with the data and train the model to work in a variety of environments. This taught us that accuracy is key when it comes to creating a seamless user experience. ## What's next for Refrigerator Ramsay Next, we want to improve the AI’s ability to recognize even more ingredients and maybe even offer nutritional advice. We’re also thinking about adding features where the AI can help you plan meals for the whole week based on what you have. Eventually, we’d love to partner with grocery stores to suggest recipes based on store deals, helping users save money too!
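The fridge-photo step can be sketched with the google-generativeai client. The model id, prompt wording, and helper name below are assumptions rather than the project's exact code; it presumes an API key is available in the environment.

```python
# Hedged sketch: ask a Gemini model to list ingredients visible in a fridge photo.
import os

import google.generativeai as genai
from PIL import Image

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder model id

def list_ingredients(photo_path: str) -> str:
    photo = Image.open(photo_path)
    prompt = ("List every food ingredient visible in this fridge photo as a "
              "comma-separated list, nothing else.")
    return model.generate_content([prompt, photo]).text

print(list_ingredients("fridge.jpg"))
```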
losing
## Inspiration Inspired by a team member's desire to study through his courses by listening to his textbook readings recited by his favorite anime characters, functionality that does not exist on any app on the market, we realized that there was an opportunity to build a similar app that would bring about even deeper social impact. Dyslexics, the visually impaired, and those who simply enjoy learning by having their favorite characters read to them (e.g. children, fans of TV series, etc.) would benefit from a highly personalized app. ## What it does Our web app, EduVoicer, allows a user to upload a segment of their favorite template voice audio (only needs to be a few seconds long) and a PDF of a textbook and uses existing Deepfake technology to synthesize the dictation from the textbook using the users' favorite voice. The Deepfake tech relies on a multi-network model trained using transfer learning on hours of voice data. The encoder first generates a fixed embedding of a given voice sample of only a few seconds, which characterizes the unique features of the voice. Then, this embedding is used in conjunction with a seq2seq synthesis network that generates a mel spectrogram based on the text (obtained via optical character recognition from the PDF). Finally, this mel spectrogram is converted into the time-domain via the Wave-RNN vocoder (see [this](https://arxiv.org/pdf/1806.04558.pdf) paper for more technical details). Then, the user automatically downloads the .WAV file of his/her favorite voice reading the PDF contents! ## How we built it We combined a number of different APIs and technologies to build this app. For leveraging scalable machine learning and intelligence compute, we heavily relied on the Google Cloud APIs -- including the Google Cloud PDF-to-text API, Google Cloud Compute Engine VMs, and Google Cloud Storage; for the deep learning techniques, we mainly relied on existing Deepfake code written for Python and Tensorflow (see Github repo [here](https://github.com/rodrigo-castellon/Real-Time-Voice-Cloning), which is a fork). For web server functionality, we relied on Python's Flask module, the Python standard library, HTML, and CSS. In the end, we pieced together the web server with Google Cloud Platform (GCP) via the GCP API, utilizing Google Cloud Storage buckets to store and manage the data the app would be manipulating. ## Challenges we ran into Some of the greatest difficulties were encountered in the superficially simplest implementations. For example, the front-end initially seemed trivial (what's more to it than a page with two upload buttons?), but many of the intricacies associated with communicating with Google Cloud meant that we had to spend multiple hours creating even a landing page with just drag-and-drop and upload functionality. On the backend, 10 excruciating hours were spent attempting (successfully) to integrate existing Deepfake/Voice-cloning code with the Google Cloud Platform. Many mistakes were made, and in the process, there was much learning. ## Accomplishments that we're proud of We're immensely proud of piecing all of these disparate components together quickly and managing to arrive at a functioning build. What started out as merely an idea manifested itself into usable app within hours. ## What we learned We learned today that sometimes the seemingly simplest things (dealing with python/CUDA versions for hours) can be the greatest barriers to building something that could be socially impactful. 
We also realized the value of well-developed, well-documented APIs (e.g. Google Cloud Platform) for programmers who want to create great products. ## What's next for EduVoicer EduVoicer still has a long way to go before it can gain users. Our first next step is to implement functionality, possibly with some image segmentation techniques, to decide what parts of the PDF should be scanned; this way, tables and charts could be intelligently discarded (or, even better, referenced throughout the audio dictation). The app is also not robust enough to handle large multi-page PDF files; the preliminary app was designed as a minimum viable product, only including enough to process a single-page PDF. Thus, we plan to both increase efficiency (time-wise) and scale the app by splitting up PDFs into fragments, processing them in parallel, and returning the output to the user after collating the individual text-to-speech outputs. In the same vein, the voice cloning algorithm was restricted by the length of the input text, so this is an area we seek to scale and parallelize in the future. Finally, we are thinking of using some caching mechanisms server-side to reduce waiting time for the output audio file.
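The parallel-fragment plan sketched above could look roughly like the following. This is a hedged illustration: pypdf stands in for the Google Cloud PDF-to-text step, and `synthesize_speech` is a placeholder for the encoder/synthesizer/vocoder pipeline described earlier, not a function from the project's repository.

```python
# Rough sketch of the planned scale-up: split a PDF into page fragments,
# synthesize each fragment in parallel, then collate the audio in page order.
# pypdf stands in for the Google Cloud PDF-to-text step, and synthesize_speech
# is a placeholder for the voice-cloning pipeline, not the project's own code.
from concurrent.futures import ProcessPoolExecutor
from pypdf import PdfReader

def synthesize_speech(text: str) -> bytes:
    raise NotImplementedError("placeholder for the encoder/synthesizer/vocoder pipeline")

def pdf_to_audio_fragments(pdf_path: str, workers: int = 4) -> list[bytes]:
    pages = [page.extract_text() or "" for page in PdfReader(pdf_path).pages]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # map() preserves page order, so the collated audio matches the document
        return list(pool.map(synthesize_speech, pages))
```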
## Inspiration Have you ever tried learning the piano but never had the time to do so? Do you want to play your favorite songs right away without any prior experience? This is the app for you. We wanted something like this even for our own personal use: it helps immensely in building muscle memory for the piano, for any song you can find on YouTube. ## What it does Use AR to project your favorite digitally-recorded piano video song from YouTube over your piano to learn by following along. ## Category Best Practicable/Scalable, Music Tech ## How we built it Using Unity, C# and the EasyAR SDK for Augmented Reality ## Challenges we ran into * Some YouTube URLs had cipher signatures and could not be loaded asynchronously directly * The app crashed right before submission; we fixed it barely on time ## Accomplishments that we're proud of We built a fully functioning and user-friendly application that perfectly maps the AR video onto your piano, with perfect calibration. It turned out to be a lot better than we expected. We can use it for our own personal use to learn and master the piano. ## What we learned * Creating an AR app from scratch in Unity with no prior experience * Handling asynchronous loading of YouTube videos and using YoutubeExplode * Different material properties, video cipher signatures and various new components! * Developing a touch-gesture-based control system for calibration ## What's next for PianoTunesAR * A Tunes List to save the songs you have played before by name into a playlist so that you do not need to copy URLs every time * Machine-learning-based AR projection onto the piano with automatic calibration and marker-less spatial/world mapping * Support for all YouTube URLs as well as an upload-your-own-video feature * AR Cam tutorial UI and icons/UI improvements * iOS version ## APK Releases <https://github.com/hamzafarooq009/PianoTunesAR-Wallifornia-Hackathon/releases/tag/APK>
## **Education Track** ## Inspiration We were brainstorming ideas of how to best fit this hackathon into our schedule and still be prepared for our exams the following Monday. We knew that we wanted something where we could learn new skills but also something useful in our daily lives. Specifically, we wanted to create a tool that could help transcribe our lectures and combine them into easy-to-access formats. ## What it does This tool allows users to upload video files or links and get a PDF transcript of them. This creates an easy way for users who prefer to get the big ideas to save time and skip the lectures, and it also acts as a resource for open-note exams. ## How we built it The website is a React.js application, while the backend is made up of Firebase for the database and an Express API that accesses the Google Speech-to-Text API to help with our transcribing. ## Challenges we ran into Both of us being novices at using APIs such as Google's, we spent quite a bit of time troubleshooting and figuring out how they worked. This left us with less time than desired, but we were able to complete a web app that captures the big picture of our idea. ## Accomplishments that we're proud of We are both proud that we were able to learn so many new skills in so little time and are excited to continue this project and create new ones as well. And most importantly, we had fun! ## What we learned We learned quite a lot this weekend. * Google Cloud APIs * Firebase * WikiRaces (How to get from Tigers to Microphones in record time) * Having Fun! ## What's next for EZNotes We hope to finish out the app's other features as well as create a summarizing tool using AI so that the notes are even more useful.
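For reference, the transcription call that the Express API wraps looks roughly like this in Python (the project called it from Node). It assumes the google-cloud-speech client library and a lecture audio file already extracted and uploaded to a Cloud Storage bucket; the bucket URI is a placeholder.

```python
# Sketch of the Google Speech-to-Text step behind EZNotes' transcripts.
# Assumes audio has already been extracted from the lecture video and uploaded
# to Cloud Storage; the project itself made this call from an Express API.
from google.cloud import speech

def transcribe_lecture(gcs_uri: str) -> str:
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        language_code="en-US",
        enable_automatic_punctuation=True,
    )
    audio = speech.RecognitionAudio(uri=gcs_uri)
    # long_running_recognize handles lecture-length recordings asynchronously
    operation = client.long_running_recognize(config=config, audio=audio)
    response = operation.result(timeout=600)
    return " ".join(r.alternatives[0].transcript for r in response.results)

# transcript = transcribe_lecture("gs://example-bucket/lecture-01.wav")  # placeholder URI
```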
winning
## Inspiration NFT metadata is usually public. This would be bad if NFTs were being used as event tickets or private pieces of art. We need private NFT metadata. ## What it does Allows users to encrypt NFT metadata based on specific conditions. The default decryption condition is holding more than 0.1 AVAX. Other conditions to decrypt can be holding other tokens, decrypting during a specific time interval, or being the owner of a specific wallet. ## How we built it Smart contracts were built with Solidity and deployed on Avalanche. The frontend was built with NextJS and Wagmi React Hooks. Encryption was handled with the Lit Protocol. ## Challenges we ran into Figuring out how to use the Lit Protocol and deploy onto Avalanche. I really wanted to verify my contract programmatically, but I couldn't figure out how with the Avalanche Fuji block explorer. ## Accomplishments that we're proud of Finishing a hackathon project and deploying. ## What we learned How to encrypt NFT metadata. ## What's next for Private Encrypted NFTs Apply this to more impactful use cases and make it more user-friendly. I think the idea of encrypted metadata is really powerful and can be applied to many places. It's all about execution, user interface, and community.
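Purely as an illustration of the default condition ("hold more than 0.1 AVAX"), here is a hedged Python sketch of balance-gated decryption. In the project the gating is enforced by the Lit Protocol rather than application code; the RPC endpoint, key handling, and threshold below are assumptions for the sketch.

```python
# Illustration only: the project delegates this check to the Lit Protocol.
# This sketch expresses the default condition (balance > 0.1 AVAX) in plain
# Python, using web3.py for the balance query and Fernet for the symmetric key.
from cryptography.fernet import Fernet
from web3 import Web3

FUJI_RPC = "https://api.avax-test.network/ext/bc/C/rpc"  # assumed public Fuji C-Chain endpoint
w3 = Web3(Web3.HTTPProvider(FUJI_RPC))

def decrypt_metadata(ciphertext: bytes, key: bytes, holder_address: str) -> bytes:
    balance_avax = w3.from_wei(w3.eth.get_balance(holder_address), "ether")
    if balance_avax <= 0.1:
        raise PermissionError("decryption condition not met: need more than 0.1 AVAX")
    return Fernet(key).decrypt(ciphertext)
```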
## Inspiration 💡 Buying, selling, and trading physical collectibles can be a rather tedious task, and this has become even more apparent with the recent surge of NFTs (Non-Fungible Tokens). The global market for physical collectibles was estimated to be worth $372 billion in 2020. People have an innate inclination to collect, driving the acquisition of items such as art, games, sports memorabilia, toys, and more. However, considering the world's rapid shift towards the digital realm, there arises a question about the sustainability of this market in its current form. At its current pace, it seems inevitable that people may lose interest in physical collectibles, gravitating towards digital alternatives due to the speed and convenience of digital transactions. Nevertheless, we are here with a mission to rekindle the passion for physical collectibles. ## What it does 🤖 Our platform empowers users to transform their physical collectibles into digital assets. This not only preserves the value of their physical items but also facilitates easy buying, selling, and trading. We have the capability to digitize various collectibles with verifiable authenticity, including graded sports/trading cards, sneakers, and more. ## How we built it 👷🏻‍♂️ To construct our platform, we utilized [NEXT.js](https://nextjs.org/) for both frontend and backend development. Additionally, we harnessed the power of the [thirdweb](https://thirdweb.com/) SDK for deploying, minting, and trading NFTs. Our NFTs are deployed on the Ethereum L2 [Mumbai](https://mumbai.polygonscan.com/) testnet. `MUMBAI_DIGITIZE_ETH_ADDRESS = 0x6A80AD071932ba92fe43968DD3CaCBa989C3253f MUMBAI_MARKETPLACE_ADDRESS = [0xedd39cAD84b3Be541f630CD1F5595d67bC243E78](https://thirdweb.com/mumbai/0xedd39cAD84b3Be541f630CD1F5595d67bC243E78)` Furthermore, we incorporated the Ethereum Attestation Service to verify asset ownership and perform KYC (Know Your Customer) checks on users. `SEPOLIA_KYC_SCHEMA = 0x95f11b78d560f88d50fcc41090791bb7a7505b6b12bbecf419bfa549b0934f6d SEPOLIA_KYC_TX_ID = 0x18d53b53e90d7cb9b37b2f8ae0d757d1b298baae3b5767008e2985a5894d6d2c SEPOLIA_MINT_NFT_SCHEMA = 0x480a518609c381a44ca0c616157464a7d066fed748e1b9f55d54b6d51bcb53d2 SEPOLIA_MINT_NFT_TX_ID = 0x0358a9a9cae12ffe10513e8d06c174b1d43c5e10c3270035476d10afd9738334` We also made use of CockroachDB and Prisma to manage our database. Finally, to view all NFTs in 3D 😎, we built a separate platform that's soon-to-be integrated into our app. We scrape the internet to generate all card details and metadata and render it as a `.glb` file that can be seen in 3D! ## Challenges we ran into 🏃🏻‍♂️🏃🏻‍♂️💨💨 Our journey in the blockchain space was met with several challenges, as we were relatively new to this domain. Integrating various SDKs proved to be a formidable task. Initially, we deployed our NFTs on Sepolia, but encountered difficulties in fetching data. We suspect that thirdweb does not fully support Sepolia. Ultimately, we made a successful transition to the Mumbai network. We also faced issues with the PSA card website, as it went offline temporarily, preventing us from scraping data to populate our applications. ## Accomplishments that we're proud of 🌄 As a team consisting of individuals new to blockchain technology, and even first-time deployers of smart contracts and NFT minting, we take pride in successfully integrating web3 SDKs into our application. 
Moreover, users can view their prized possessions in **3-D!** Overall, we're proud that we managed to deliver a functional minimum viable product within a short time frame. 🎇🎇 ## What we learned 👨🏻‍🎓 Through this experience, we learned the value of teamwork and the importance of addressing challenges head-on. In moments of uncertainty, we found effective solutions through open discussions. Overall, we have gained confidence in our ability to deliver exceptional products as a team. Lastly, we learned to have fun and build things that matter to us. ## What's next for digitize.eth 👀👀👀 Our future plans include further enhancements such as: * Populating our platform with a range of supported NFTs for physical assets. * Taking a leap of faith and deploying on Mainnet. * Deploying our NFTs on other chains, e.g. Solana. Live demo: <https://bl0ckify.tech> GitHub: <https://github.com/idrak888/digitize-eth/tree/main> + <https://github.com/zakariya23/hackathon>
# Notic3 – A Decentralized Solution for Content Creators ## Overview Notic3 is a decentralized platform designed to empower content creators, offering a blockchain-based alternative to platforms like Patreon (and that one unholy platform). Our mission is to eliminate intermediaries, giving creators direct access to their supporters while ensuring secure, transparent, and tamper-proof data. Built fully on-chain, Notic3 exemplifies the spirit of decentralization by providing trustless interactions and immutable records. While many blockchain-based projects use off-chain solutions to sidestep technical challenges, our team committed to going all-in on decentralization. This approach presented unique obstacles, but it also set us apart, reinforcing our belief in the transformative potential of blockchain for creative industries. ## Inspiration Blockchain technology offers more than just financial innovation—it provides a new way to manage ownership, access, and rewards. Inspired by these capabilities, we wanted to build something that could give content creators control over their work and revenue streams without relying on centralized platforms that charge high fees or control the distribution of content. While the Web3 space has already seen some early experiments in creator tools, many platforms are still hybrids—leveraging blockchain partially but retaining centralized components. We wanted to explore what would happen if we stayed true to the core ethos of decentralization. Notic3 was born from that idea: a fully on-chain platform that doesn't compromise on its principles. ## Our Approach We chose to build Notic3 entirely on-chain to guarantee transparency, immutability, and censorship resistance. This meant every interaction—from subscription payments to content access—would be recorded directly on the blockchain (available to the general public). This decision came with trade-offs: * **Secure data storage:** Managing private files like videos or audio on-chain is insecure if unencrypted, so we had to creatively manage metadata to ensure users are truly accessing content they own. * **Novelty of Move and Sui:** The Move programming language, native to blockchains like Aptos and Sui, was completely new to our team. Learning it on the fly was one of the biggest challenges we faced, in addition to learning about the Sui SDK. * **Smart Contract Design:** Writing complex smart contracts to handle subscription models, creator payouts, and access permissions directly on-chain was difficult. ## Key Features * **Subscription-based Payments:** Creators can set up recurring subscriptions that allow supporters to access premium content. Payments are processed seamlessly on-chain, ensuring transparency and immediate distribution of funds. * **Web3 Storage:** Everything about our app is in the chain, including the file storage. We utilized Sui's newest Data Storage solution, Walrus, to store encrypted files on the chain using blobs allowing creators to leverage the chain for distribution of their content. ## Conclusion Working on Notic3 has been an incredible experience. We took on ambitious challenges, and despite the difficulties, we believe we succeeded in delivering a powerful product. Along the way, we learned new technologies, embraced decentralized principles, and grew both as developers and as a team. Notic3 is proof of what can be achieved when you stay true to your mission, even when easier paths are available. 
We are excited to continue developing the platform beyond the hackathon and see how it can empower creators around the world. Thank you for the opportunity to present Notic3, and we look forward to feedback and collaboration as we continue this journey!
partial
## Inspiration Building domain-specific automated systems in the real world is painstaking, requiring massive codebases for exception handling and robust testing of behavior for all kinds of contingencies — automated packaging, drone delivery, home surveillance, and search and rescue are all enormously complex and result in highly specialized industries and products that take thousands of engineering hours to prototype. But it doesn’t have to be this way! Large language models have made groundbreaking strides towards helping out with the similarly tedious task of writing, giving novelists, marketing agents, and researchers alike a tool to iterate quickly and produce high-quality writing exhibiting both semantic precision and masterful high-level planning. Let’s bring this into the real world. What if asking “find the child in the blue shirt and lead them to the dinner table” was all it took to create that domain-specific application? Taking the first steps towards generally intelligent embodied AI, DroneFormer turns high-level natural language commands into long scripts of low-level drone control code leveraging advances in language and visual modeling. The interface is the simplest imaginable, yet the applications and end result can adapt to the most complex real-world tasks. ## What it does DroneFormer offers a no-code way to program a drone via generative AI. You can easily control your drone with simple written high-level instructions. Simply type up the command you want and the drone will execute it — flying in spirals, exploring caves to locate lost people with depth-first search, or even capturing stunning aerial footage to map out terrain. The drone receives a natural language instruction from the user (e.g. "find my keys") and explores the room until it finds the object. ## How we built it Our prototype compiles natural language instructions down into atomic actions for DJI Tello via in-context learning using the OpenAI GPT-3 API. These actions include primitive actions from the DJI SDK (e.g. forward, back, clockwise turn) as well as custom object detection and visual language model query actions we built leveraging zero-shot image and multimodels models such as YOLOv5 and image processing frameworks such as OpenCV. We include a demo for searching for and locating objects using the onboard Tello camera and object detection. ## Challenges we ran into One significant challenge was deciding on a ML model that best fit our needs of performant real-time object detection. We experimented with state-of-the-art models such as BLIP and GLIP which either were too slow at inference time, or were not performing as expected in terms of accuracy. Ultimately, we settled on YOLOv5 as having a good balance between latency and ability to collect knowledge about an image. We were also limited by the lack of powerful onboard compute, which meant the drone needs to connect to an external laptop (which needed to serve both the drone and internet networks, which we resolved using Ethernet and wireless at the same time) which in turn connects to the internet for OpenAI API inference. ## Accomplishments that we're proud of We were able to create an MVP! DroneFormer successfully generates complex 20+ line instructions to detect and navigate to arbitrary objects given a simple natural language instruction to do so (e.g. “explore, find the bottle, and land next to it”). ## What we learned Hardware is a game changer! 
Embodied ML is a completely different beast than even a simulated reinforcement learning environment, and working with noisy control systems adds many sources of error on top of long-term language planning. To deal with this, we iterated much more frequently and added functionality to deal with new corner cases and ambiguity as necessary over the course of the project, rewriting as necessary. Additionally, connectivity issues arose often due to the three-tiered nature of the system between the drone, laptop, and cloud backends. ## What's next for DroneFormer We were constrained by the physical confines of the TreeHacks drone room and obstacles available in the vicinity, as well as the short battery life of the Tello drone. Expanding to larger and more complex hardware, environments, and tasks, we expect the DroneFormer framework to handily adapt, given a bit of prompt engineering, to emergent sophisticated behaviors such as: * Watching over a child wandering around the house and reporting any unexpected behavior according to a fine-tuned classifier * Finding that red jacket that you could swear was on the hanger but which has suddenly disappeared * Checking in “person” if the small coffee shop down the street is still open despite the out-of-date Google Maps schedule * Sending you a picture of the grocery list you forgot at home DroneFormer will be a new type of personal assistant — one that always has your back and can bring the magic of complex language model planning to the embodied real world. We’re excited! <https://medium.com/@sidhantbendre22/hacking-the-moonshot-stanford-treehacks-2023-9166865d4899>
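A minimal sketch of the compile-and-execute loop described above: natural language in, whitelisted Tello primitives out. It assumes the djitellopy package for the Tello SDK primitives and the current OpenAI Python SDK (the project used the GPT-3 API of the time); the model name, prompt, and command whitelist are illustrative, not DroneFormer's own.

```python
# Sketch of the natural-language -> primitive-action loop described above.
# Assumes the djitellopy package and the current OpenAI SDK; the model name,
# prompt wording, and whitelist are illustrative, not DroneFormer's own code.
from djitellopy import Tello
from openai import OpenAI

ALLOWED = {"takeoff", "land", "move_forward", "move_back", "rotate_clockwise"}
client = OpenAI()  # reads OPENAI_API_KEY from the environment

def compile_to_actions(instruction: str) -> list[str]:
    system = ("Translate the instruction into one drone command per line, using only: "
              + ", ".join(sorted(ALLOWED)) + ". Arguments are integers, e.g. move_forward 50.")
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; the project used GPT-3 in-context learning
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": instruction}],
    )
    return resp.choices[0].message.content.strip().splitlines()

def run(instruction: str) -> None:
    drone = Tello()
    drone.connect()
    for line in compile_to_actions(instruction):
        name, *args = line.split()
        if name in ALLOWED:                       # refuse anything off the whitelist
            getattr(drone, name)(*map(int, args))
    drone.land()
```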
## Inspiration After a bit of research, we found that immediately following a natural disaster, the lack of aid is the cause of most of the casualties. In many third world countries, it takes as long as \_ a few weeks \_ for first responders to rescue many survivors. Many people in need of food, water, medicine, and other aid supplies can unfortunately not survive for longer than 48 hours. For this, we have created a POC drone that can work with a thousand others to search, identify and deliver aid to survivors. ## What it does The drone is fully autonomous, given a set of GPS coordinates, it will be able to deliver a package without human intervention. We took this concept a step further: our drone will search a predefined area, immediately following a natural disaster. While it is doing so, it is looking for survivors using some type of machine vision implementation. For the purpose of this project, we are using a color sensor camera and a color tag as our delivery point. Once the target has been found, a small medicinal package is released by means of an electromagnet. The drone then sends over the cellular network, the coordinates of the drop, notifying ground crews of the presence of a survivor. Furthermore, this also prevents multiple drones from delivering a package to the same location. The server coordinates the delivery on the scale of hundreds or thousands of drones, simultaneously searching the area. ## How I built it The flight control is provided by the Ardupilot module. The collision avoidance and color detection is handled by an Arduino. Internet access and pipelining is provided by a Raspberry Pi, and the server is an online database running Python on Google App Engine. The ground station control is using the Mission Planner, and provides realtime updates about the location of the drone, cargo condition, temperature, etc. Everything is built on a 250mm racer quad frame. Essentially, the flight computer handles all of the low-level flight controls, as well as the GPS navigation. The main workhorse of this drone is the Arduino, which integrates information from sonar sensors, an object sensing camera, as well as a GPS to guide the flight computer around obstacles, and to locate the color tag, which is representative of a victim of a natural disaster. The Arduino also takes in information from the accelerometer to be able to compute its position with the help of a downward facing camera in GPS-compromised areas. ## Challenges I ran into **weight** and **power management**. Considering the fact that we had to use a small racer quad that only had a 2 pound maximum payload, we had to use as few and as light components as possible. Additionally, the power regulators on the motor controllers were only capable of powering a few small electronics. We had to combine multiple regulators to be able to power all of the equipment. ## Accomplishments that I'm proud of Perhaps coming up with an application of drones that could have a far-reaching impact; the ability to save hundred or thousands of lives, and possibly assist in economic development of third-world countries. ## What I learned Something as small as a drone can really make a difference in the world. Even saving a single life is substantial for a pseudo-self-aware machine, and is truly a step forward for technology. ## What's next for Second-Order Autonomy Robustness testing and implementation of the machine vision system. Of course scaling up the platform so that it may be a viable solution in third-world countries. 
Considering a clear need for the technology, this project could be taken further for practical use.
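The colour-tag detection that stands in for "identify a survivor" can be sketched with standard OpenCV thresholding. The project ran this on an Arduino with a colour-sensing camera, so the snippet below is only an equivalent Python illustration; the HSV range is a rough guess for a red tag and would need tuning.

```python
# Illustrative sketch of the colour-tag detection used as the delivery marker.
# The project did this on an Arduino colour camera; this is the equivalent idea
# in OpenCV. The HSV range is a rough guess for a red tag and needs tuning.
import cv2
import numpy as np

def find_tag(frame_bgr: np.ndarray, min_area: int = 500):
    """Return the (x, y) centre of the largest red blob in the frame, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255]))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    biggest = max(contours, key=cv2.contourArea)
    if cv2.contourArea(biggest) < min_area:
        return None
    m = cv2.moments(biggest)
    return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
```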
## Inspiration Social distancing and quarantine measures that are currently in place due to the COVID-19 pandemic have resulted in widespread social isolation and pose a significant detriment to people's mental health. During this time, many individuals without a social media account don't have the means to make new friends! Keeping those individuals in mind, we wanted to create a site that allows people to meet and talk to new people over video call on Zoom. We hope to improve connectivity through the internet with this website! ## What it does CovPals is a web application that allows users to meet a new friend online by allowing them to schedule meetings during the current week at a certain time slot. If 2 individuals book a call during the same time slot, they will be matched and notified via email and receive a zoom link 1 hour before the booked time. ## How we built it To develop the frontend, we used React to render our views and handle inputting our form data. On the backend, Express and Nodejs were used for the web-server and a MongoDB Atlas Database is being used to store user information needed to find matches and schedule meetings. We utilized the Mailgun API to give our server the ability to automatically send emails to participants, and notify them of their matches. Additionally, we leveraged the Zoom API to generate weekly meeting links for each matched pair of users. ## Challenges we ran into We ran into some issues while developing the backend as most of us had no experience with backend frameworks, so a lot of time was spent figuring them out. Additionally, we experienced some unexpected behaviour involving javascript promises and the asynchronous paradigm in javascript as a result of us needing to make API calls. This was also the first time we worked with the libraries and APIs used in this project, so getting accustomed to them in a timely manner was challenging. ## Accomplishments that we're proud of We're proud to have learned so much in 2 days and create a user-friendly web application that is easily accessible to the public. ## What we learned This was the first time most of our group members worked with Express and MongoDB as well as the APIs offered by Mailgun and Zoom. Most of us have also never worked with databases so learning how to make queries and mutating data was also a new learning experience. On the frontend, this was the first time any of us had used Material-UI. ## What's next for CovPals We want to create more input forms asking the user some personal questions and use that information to match them with others within the same age group with similar interests. This is so that they start the conversation easily by talking about what they have in common! We also want to add an interactive chatbot that will prompt the user with questions rather than just filling out a form. This is so that the user feels more welcomed. Lastly, we wanted to add some uplifting quotes and happy GIFs to the site to help make people happier!
partial
Wanted to try something low-level! MenuMate is a project aimed at enhancing dining experiences by ensuring that customers receive quality, safe, and delicious food. It evaluates restaurants using health inspection records and food-site reviews, initially focusing on Ottawa with plans for expansion. Built on React, the tool faced integration challenges with frameworks and databases, yet achieved a seamless front and backend connection. The current focus includes dataset expansion and technical infrastructure enhancement. The tool scrapes data from websites and reads JSON files for front-end display, primarily using technologies like BeautifulSoup, React, HTML, CSS, and JavaScript. The team encountered challenges, as it was their first experience with web scraping and faced difficulties in displaying data.
With an increase in the number of fast food and fine dining restaurants in recent years, the number of restaurants that have begun to underperform has risen at an equivalent rate. Due to either a lack of dedication to their business or an issue involving company finances, certain fast food restaurants and fine dining companies may not be at their operational prime. This decrease in dining experience results in you, their valued guests, being offered subpar service and potentially hazardous, even life-threatening food in return. Our project, MenuMate, helps to avert this issue as much as possible, so that any customer who pays for their food is offered a quality product that is safe and delicious. Through one simple search, users will be able to look for their favourite restaurants and determine whether their favourite restaurants should still be their favourite! By analyzing a variety of indicators, such as health inspection records, food-site reviews, and a variety of other officially recognized indicators, MenuMate is able to determine whether or not your restaurant is properly producing quality food in a safe and ethical manner. Though MenuMate is only available in the city of Ottawa, we hope to branch out our project to various other cities to make sure that everyone is able to receive the great food that they deserve! We built this using a React frontend and a Flask backend. One challenge we ran into was linking all the frameworks together to ensure our product was functioning as it should. We are also all relatively new to hackathons, so we lacked experience with a lot of the technologies. Finally, we tried incorporating a database to store the data, but it became too tall a task, so we stuck to storing the data in a .csv file. We're proud that we were able to successfully link the frontend and backend through API calls and our use of Flask. Also, the conversion between our Figma design and our React frontend was pretty impressive considering it was our first time using both. We learned a lot more about backend development and, in general, how to develop a full-stack application and its requirements. We're hoping to expand the dataset so that it includes more than the first 50 restaurants taken from the database. We also hope to successfully incorporate MongoDB to store this future data.
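Given the stack described above (a BeautifulSoup scrape whose results end up in a .csv, served to a React front end by Flask), the serving side might look roughly like this. The file name and column names are assumptions for illustration, not the project's actual schema.

```python
# Minimal sketch of the Flask side described above: serve scraped inspection
# results from the .csv to the React front end. The file name and column
# names are assumptions, not the project's actual schema.
import csv
from flask import Flask, jsonify, request

app = Flask(__name__)

def load_restaurants(path: str = "restaurants.csv") -> list[dict]:
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

@app.route("/api/search")
def search():
    query = request.args.get("q", "").lower()
    matches = [row for row in load_restaurants() if query in row.get("name", "").lower()]
    return jsonify(matches)

if __name__ == "__main__":
    app.run(debug=True)
```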
## Inspiration Food is capable of uniting all of us, no matter which demographic we belong to or which cultures we identify with. Our team recognized that there was a problem with how challenging it can be for groups to choose a restaurant that accommodates everyone's preferences. Furthermore, food apps like Yelp and Zomato can often cause 'analysis paralysis' as there are too many options to choose from. Because of this, we wanted to build a platform to facilitate the process of coming together for food, and make the process as simple and convenient as possible. ## What it does Bonfire is an intelligent food app that takes into account the food preferences of multiple users and provides a fast, reliable, and convenient recommendation based on the aggregate inputs of the group. To remove any friction from decision-making, Bonfire is even able to make a reservation on behalf of the group using Google's Dialogflow. ## How we built it We used Android Studio to build the mobile application and connected it to a Python back-end. We used Zomato's API for locating restaurants and data collection, and the Google Sheets API and Google Apps Script to decide the optimal restaurant recommendation given the user's preferences. We then used Adobe XD to create detailed wireframes to visualize the app's UI/UX. ## Challenges we ran into We found that integrating all the APIs into our app was quite challenging, as some required Partner access privileges and restricted the amount of information we could request. In addition, choosing a framework to connect the back-end was difficult. ## Accomplishments that we're proud of As our team is composed of students studying bioinformatics, statistics, and kinesiology, we are extremely proud to have been able to bring an idea to fruition, and we are excited to continue working on this project as we think it has promising applications. ## What we learned We learned that trying to build a full-stack application in 24 hours is no easy task. We managed to build a functional prototype and a wireframe to visualize what the UI/UX experience should be like. ## What's next for Bonfire: the Intelligent Food App For the future of Bonfire, we are aiming to include options for dietary restrictions and to incorporate Google Duplex into our app for a more natural-sounding linguistic profile. Furthermore, we want to further polish the UI to enhance the user experience. To improve the quality of the recommendations, we plan to implement machine learning for the decision-making process, which will also take into account the user's past food preferences and restaurant reviews.
losing
Ensemble learning is the practice of taking multiple weak learners and combining them to create one strong learner. Ensemble techniques are known to dominate the world of data science in accuracy, with techniques such as Random Forests taking top spots in challenges, and most winning solutions involving ensembling a couple of strong learners. However, current ensembles must be designed by hand, and machine learning libraries do not support building complex heterogeneous ensembles. To fix this, we built ensemble\_factory, which allows programmers to create complex heterogeneous graphs of models combined with ensembling methods, and, given a dataset, can perform AutoML to autonomously search for an optimal ensemble topology. We support common ML methods such as decision trees, regressions, SVMs, etc., and a wide variety of powerful ensembling methods for boosting, stacking, and bagging. Our library also has powerful tooling for inspecting ensemble graphs, allowing data scientists to make precise bias-variance tradeoffs using the ensembling toolkit. Astonishingly, our AutoML is able to find ensemble topologies for big datasets like MNIST that consistently outperform ensemble benchmarks such as Random Forests on held-out test data. Attached is a visualization of one such topology produced by our tool. In the future, we'd like to perform more experiments on more datasets, write a paper, and get ensemble\_factory into the hands of real data scientists.
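For readers unfamiliar with the building blocks, the kind of heterogeneous ensemble the library assembles can be approximated in plain scikit-learn with a stacking classifier over several weak learners. This is a baseline sketch of the idea, not ensemble_factory's own API; the dataset and hyperparameters are arbitrary.

```python
# Baseline sketch of a heterogeneous ensemble in plain scikit-learn -- the kind
# of topology ensemble_factory searches over automatically. This is not the
# library's own API, just the standard building blocks it generalizes.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = StackingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=5)),
        ("svm", SVC(probability=True)),
        ("forest", RandomForestClassifier(n_estimators=100)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # meta-learner stacked on top
)
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```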
## Inspiration When thinking about how we could make a difference within local communities impacted by Covid-19, what came to mind were our frontline workers: our doctors, nurses, grocery store workers, and Covid-19 testing volunteers, who have tirelessly been putting themselves and their families on the line. They are the backbone and heartbeat of our society during these past 10 months and counting. We want them to feel the appreciation and gratitude they deserve. With our app, we hope to bring moments of positivity and joy to our frontline workers in those difficult and trying moments. Thank you! ## What it does Love 4 Heroes is a web app to support our frontline workers by expressing our gratitude for them. We want to let them know they are loved, cared for, and appreciated. In the app, a user can make a thank-you card, save it, and share it with a frontline worker. A user's card is also posted to the "Warm Messages" board, a community space where you can see all the other thank-you cards. ## How we built it Our backend is built with Firebase. The front-end is built with Next.js, and our design framework is Tailwind CSS. ## Challenges we ran into * Working with different time zones [12-hour time difference]. * We ran into trickiness figuring out how to save our thank-you cards to a user's phone or laptop. * Persisting likes with Firebase and Local Storage ## Accomplishments that we're proud of * Our first Hackathon + We're not in the same state, but came together to be here! + Some of us used new technologies like Next.js, Tailwind.css, and Firebase for the first time! + We're happy with how the app turned out from a user's experience + We liked that we were able to create our own custom card designs and logos, utilizing custom-made design textiles ## What we learned * New technologies: Next.js, Firebase * Managing time-zone differences * How to convert a DOM element into a .jpeg file. * How to make a responsive web app * Coding endurance and mental focus * Good Git workflow ## What's next for love4heroes More cards, more love! Hopefully, we can share this with a wide community of frontline workers.
## Inspiration Nowadays, unless you've been living under a rock, you've definitely heard of the fancy computer science buzzwords, including "machine learning", or "neural networks". Unless you have incredible resources or motivation, these words will continue to mystify you until you pour hours into understanding the concepts and mechanics behind the principle of machine learning (Trust us, we've been there and done that). Our goal is to come up with an application that gives users the ability to change a few parameters here and there, giving them a high-level overview of machine learning that is nevertheless quite informative. ## What it does Our application allows you to build your own neural network at a high level, giving users the ability to control the type and number of layers in the network in each run of the program. The application allows you to submit your own data to train and test on, then uses TensorFlow to build several informative graphs on the neural network as it progresses through the stages of forward and back-propagation. ## How we built it We built the framework of the application using ReactJS and the main page of the application using Canvas (vanilla Javascript). The machine learning is done with TensorFlow JS. ## Challenges we ran into We ran into many problems with integration, especially between the vanilla JS with React, and different TensorFlow JS functions. ## Accomplishments that we're proud of Designing the scratch for ML! We needed to integrate many different ideas to get this project to where it is, but we learned a lot of different tools along the way. The drag and drop mechanism is simple to use, and the graphs provide an idea of how ML algorithms train without the burden of having to code them. ## What we learned Our team members came together with different levels of technical experience, but we all came out with a much better grasp and experience with React, TensorFlow, and building full-stack applications. ## What's next for ML Outreach We plan to build more useful and cool visualizations as well as more options in machine learning, such as linear or logistic classifiers, or even unsupervised learning algorithms.
partial
## Inspiration There currently is no way for a normal person to invest in bitcoin or cryptocurrencies without taking on considerable risk. Even the simplest solution, Coinbase, requires basic knowledge of the crypto market and what to buy. If you're interested in bitcoin and cryptocurrencies, care about your assets, yet don't have the time to learn about crypto investing, it can be very challenging to invest in cryptocurrencies. There's no reason why those investments shouldn't be accessible. Why not leave the buying and selling up to professionals who will manage your money for you? All you have to do is select a weekly deposit amount and a risk portfolio. ## What it does We make investing in crypto as easy as Venmo. Just one click. Slide the weekly investment amount to view your predicted earnings with a certain risk portfolio over any period from one month to ten years. Find an investment amount that you feel comfortable with. And just click "Get Started." We spent considerable time designing a very simple UI. Our onboarding process makes the app easy to understand and introduces users to the product well. ## What we (didn't) use We decided to do all of our designing from scratch in order to fully optimize the design for our specific product. All of our designs were done on our own with NO libraries. We created a graph class to custom-build our graph, and we coded everything solo. The only libraries we used were for the backend. ## Most Challenging Our app is linked with Stripe payments and has a full Firebase backend. Setting up the Firebase backend to efficiently communicate with our front end was quite difficult. We also spent A LOT of time creating a great UI, and more importantly, a great UX for the user. We made the app incredibly simple to use. ## What I learned We thought submissions were at 10am, not 9am. We should be more proactive about that.
## Inspiration With the cost of living increasing yearly and inflation at an all-time high, people need financial control more than ever. The problem is that the investment field is not beginner-friendly, especially with its confusing vocabulary and an abundance of concepts creating an environment detrimental to learning. We felt the need to make a clear, well-explained environment for learning about investing and money management, and thus we created StockPile. ## What it does StockPile provides a simulation environment of the stock market to allow users to create virtual portfolios in real time. With relevant information and explanations built into the UX, the complex world of investments is explained in simple words, one step at a time. Users can set up multiple portfolios to try different strategies, learn vocabulary by seeing exactly where the terms apply, and access articles tailored to their actions from the simulator using AI-based recommendation engines. ## How we built it Before starting any code, we planned and prototyped the application using Figma and also fully planned a backend architecture. We started our project using React Native for a mobile app, but due to connection and network issues while collaborating, we moved to a web app that runs on the phone using React. ## Challenges we ran into Some challenges we faced were creating a minimalist interface without the loss of necessary information, and incorporating both learning and interaction simultaneously. We also realized that we would not be able to finish much of our project in time, so we had to single out what to focus on to make our idea presentable. ## Accomplishments that we're proud of We are proud of our interface, the depth to which we fleshed out our starter concept, and the ease of access of our program. ## What we learned We learned about * Refining complex ideas into presentable products * Creating simple and intuitive UI/UX * How to use React Native * Finding stock data from APIs * Planning backend architecture for an application ## What's next for StockPile Next up for StockPile would be to actually finish coding the app, preferably in a mobile version over a web version. We would also like to add the more complicated views, such as explanations for candle charts, market volume charts, etc., in our app. ## How StockPile approaches its challenges: #### Best Education Hack Our entire project is based around encouraging, simplifying and personalizing the learning process. We believe that everyone should have access to a learning resource that adapts to them while providing them with a gentle yet complete introduction to investing. #### MLH Best Use of Google Cloud Our project uses some Google services at its core. - GCP App Engine - We can use App Engine to host our React frontend and some of our backend. - GCP Cloud Functions - We can use Cloud Functions to quickly create microservices for different services, such as a backend for fetching stock chart data from FinnHub. - GCP Compute Engine - To host a CMS for the learn page content, and to host an instance of CockroachDB - GCP Firebase Authentication to authenticate users securely. - GCP Recommendations AI - Used with other statistical operations to analyze a user's portfolio and present them with articles/tutorials best suited for them in the learn section. #### MLH Best Use of CockroachDB CockroachDB is a distributed SQL database - one that can scale.
We understand that buying/selling stocks is transactional in nature, and there is no better solution than using a SQL database. Additionally, we can use CockroachDB as a time-series database - this allows us to effectively cache stock price data so we can optimize the cost of new requests to our stock quote API.
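A sketch of that quote cache might look like the following. CockroachDB speaks the PostgreSQL wire protocol, so the same shape of schema and query applies there (with its native UPSERT); sqlite3 stands in here so the snippet runs locally, and fetch_quote is a placeholder for the FinnHub call rather than a real client.

```python
# Sketch of the time-series quote cache described above. sqlite3 stands in so
# the snippet runs locally; CockroachDB would use the same schema with its
# native UPSERT. fetch_quote() is a placeholder for the stock quote API call.
import sqlite3
import time

conn = sqlite3.connect("quotes.db")
conn.execute("""CREATE TABLE IF NOT EXISTS quotes (
    symbol TEXT, ts INTEGER, price REAL, PRIMARY KEY (symbol, ts))""")

def fetch_quote(symbol: str) -> float:
    raise NotImplementedError("placeholder for the stock quote API call")

def cached_quote(symbol: str, max_age_s: int = 60) -> float:
    row = conn.execute(
        "SELECT ts, price FROM quotes WHERE symbol = ? ORDER BY ts DESC LIMIT 1",
        (symbol,)).fetchone()
    if row and time.time() - row[0] < max_age_s:
        return row[1]                              # fresh enough: serve from cache
    price = fetch_quote(symbol)
    conn.execute("INSERT OR REPLACE INTO quotes VALUES (?, ?, ?)",
                 (symbol, int(time.time()), price))
    conn.commit()
    return price
```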
## Inspiration We have two bitcoin and cryptocurrency enthusiasts on our team, and only one of us made money during its peak earlier this month. Cryptocurrencies are just too volatile, and their value depends too much on how the public feels about them. How people think and talk about a cryptocurrency affects its price to a large extent, unlike stocks, which also have the support of the market, shareholders and the company itself. ## What it does Our website scrapes thousands of social media posts and news articles to get information about the required cryptocurrency. We then analyse it using NLP and ML and determine whether the price is likely to go up or down in the very near future. We also display the current price graphs, social media and news trends (whether they are positive, neutral or negative) and the popularity ranking of the selected currency on social platforms. ## How I built it The website is mostly built using Node.js and Bootstrap. We use Chart.js for a lot of our web illustrations, as well as Python for web scraping, sentiment analysis and text processing. NLTK and the Google Cloud Natural Language API were especially useful for this. We also stored our database on Firebase. **Google Cloud**: We used Firebase to efficiently store and manage our database, and the Google Cloud Natural Language API to perform sentiment analysis on hundreds of social media posts efficiently. ## Challenges I ran into It was especially hard to create, store and process the large datasets we made consisting of social media posts and news articles. Even though we only needed data from the past few weeks, it was a lot, since so many people post online. Getting relevant data, free of spam and repeated posts, and actually getting useful information out of it was hard. ## Accomplishments that I'm proud of We are really proud that we were able to connect multiple streams of data, analyse them and display all the relevant information. It was amazing to see when our results matched the past peaks and crashes in bitcoin's price. ## What I learned We learned how to scrape relevant data from the web, clean it and perform sentiment analysis on it to make predictions about future prices. Most of this was new to our team members and we definitely learned a lot. ## What's next We hope to further increase the functionality of our website. We want users to have the option to give the website permission to automatically buy and sell cryptocurrencies when it determines it is the best time to do so. ## Domain name We bought the domain name get-crypto-insights.online for the best domain name challenge since it is relevant to our project. If I found a website of this name on the internet, I would definitely visit it to improve my cryptocurrency trading experience. ## About Us We are Discord team #1, with @uditk, @soulkks, @kilobigeye and @rakshaa
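The sentiment-scoring step (classifying scraped posts as positive, neutral or negative and aggregating them into the displayed trend) could be sketched as below. VADER is one plausible NLTK choice; the thresholds are arbitrary, and the project also routed posts through the Google Cloud Natural Language API.

```python
# Minimal sketch of scoring scraped posts with NLTK's VADER analyzer and
# bucketing the average into the positive/neutral/negative trend shown on the
# site. VADER and the 0.05 thresholds are illustrative choices.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

def trend(posts: list[str]) -> str:
    """Average compound sentiment over recent posts, bucketed into a label."""
    if not posts:
        return "neutral"
    avg = sum(sia.polarity_scores(p)["compound"] for p in posts) / len(posts)
    return "positive" if avg > 0.05 else "negative" if avg < -0.05 else "neutral"

print(trend(["BTC is mooning, best buy of the year",
             "selling everything, this crash is brutal"]))
```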
partial
## Inspiration We love to travel and have found that we typically have to use multiple sources in the process of creating an itinerary. With Path Planner, the user can get their whole trip planned for them in one place. ## What it does Builds you an itinerary for any destination you desire. ## How we built it Using React and Next.js we developed the webpage and used the ChatGPT API to pull an itinerary with our given prompts. ## Challenges we ran into Displaying a map to help choose a destination. ## Accomplishments that we're proud of Engineering an efficient prompt that allows us to get detailed itinerary information and display it in a user-friendly fashion. ## What we learned How to use Leaflet with React and Next.js, utilizing Leaflet to embed an interactive map that can be visualized on our web pages. How to engineer precise prompts using OpenAI's playground. ## What's next for Path Planner Integrating prices for hotels, cars, and flights, as well as adding a login page so that you can store your different itineraries and preferences. This would require creating a backend as well.
## Inspiration The internet is filled with user-generated content, and it has become increasingly difficult to manage and moderate all of the text that people are producing on a platform. Large companies like Facebook, Instagram, and Reddit leverage their massive scale and abundance of resources to aid in their moderation efforts. Unfortunately for small to medium-sized businesses, it is difficult to monitor all the user-generated content being posted on their websites. Every company wants engagement from their customers or audience, but they do not want bad or offensive content to ruin their image or the experience for other visitors. However, hiring someone to moderate or build an in-house program is too difficult to manage for these smaller businesses. Content moderation is a heavily nuanced and complex problem. It’s unreasonable for every company to implement its own solution. A robust plug-and-play solution is necessary that adapts to the needs of each specific application. ## What it does That is where Quarantine comes in. Quarantine acts as an intermediary between an app’s client and server, scanning the bodies of incoming requests and “quarantining” those that are flagged. Flagging is performed automatically, using both pretrained content moderation models (from Azure and Moderation API) as well as an in house machine learning model that adapts to specifically meet the needs of the application’s particular content. Once a piece of content is flagged, it appears in a web dashboard, where a moderator can either allow or block it. The moderator’s labels are continuously used to fine tune the in-house model. Together with this in house model and pre-trained models a robust meta model is formed. ## How we built it Initially, we built an aggregate program that takes in a string and runs it through the Azure moderation and Moderation API programs. After combining the results, we compare it with our machine learning model to make sure no other potentially harmful posts make it through our identification process. Then, that data is stored in our database. We built a clean, easy-to-use dashboard for the grader using react and Material UI. It pulls the flagged items from the database and then displays them on the dashboard. Once a decision is made by the person, that is sent back to the database and the case is resolved. We incorporated this entire pipeline into a REST API where our customers can pass their input through our programs and then access the flagged ones on our website. Users of our service don’t have to change their code, simply they append our url to their own API endpoints. Requests that aren’t flagged are simply instantly forwarded along. ## Challenges we ran into Developing the in house machine learning model and getting it to run on the cloud proved to be a challenge since the parameters and size of the in house model is in constant flux. ## Accomplishments that we're proud of We were able to make a super easy to use service. A company can add Quarantine with less than one line of code. We're also proud of adaptive content model that constantly updates based on the latest content blocked by moderators. ## What we learned We learned how to successfully integrate an API with a machine learning model, database, and front-end. We had learned each of these skills individually before, but we has to figure out how to accumulate them all. ## What's next for Quarantine We have plans to take Quarantine even further by adding customization to how items are flagged and taken care of. 
It is proven that there are certain locations that spam is commonly routed through so we could do some analysis on the regions harmful user-generated content is coming from. We are also keen on monitoring the stream of activity of individual users as well as track requests in relation to each other (detect mass spamming). Furthermore, we are curious about adding the surrounding context of the content since it may be helpful in the grader’s decisions. We're also hoping to leverage the data we accumulate from content moderators to help monitor content across apps using shared labeled data behind the scenes. This would make Quarantine more valuable to companies as it monitors more content.
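The intermediary pattern described above (append Quarantine's URL to an endpoint, quarantine flagged bodies, forward clean ones) can be sketched as a small Flask proxy. The three scorers and the meta-model thresholds below are placeholders standing in for the Azure, Moderation API and in-house model calls, not real client code.

```python
# Sketch of the intermediary pattern described above: score the request body,
# quarantine it if the meta-model flags it, otherwise forward it unchanged.
# The three scorers and thresholds are placeholders, not real client calls.
import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
FLAGGED = []  # stand-in for the moderation dashboard's database

def azure_score(text: str) -> float: return 0.0            # placeholder
def moderation_api_score(text: str) -> float: return 0.0   # placeholder
def in_house_score(text: str) -> float: return 0.0         # placeholder

def meta_flag(text: str) -> bool:
    # simple meta-model: flag if any scorer is confident or the average is high
    scores = [azure_score(text), moderation_api_score(text), in_house_score(text)]
    return max(scores) > 0.9 or sum(scores) / len(scores) > 0.6

@app.route("/proxy/<path:upstream>", methods=["POST"])
def proxy(upstream: str):
    body = request.get_json(force=True)
    if meta_flag(str(body)):
        FLAGGED.append(body)               # held for a human moderator's decision
        return jsonify({"status": "quarantined"}), 202
    resp = requests.post(f"https://{upstream}", json=body, timeout=10)
    return resp.content, resp.status_code
```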
## Inspiration As college students, we both agreed that fun trips definitely make college memorable. However, we are sometimes hesitant about initiating plans because of the low probability of plans making it out of the group chat. Our biggest problem was being able to find unique but local activities to do with a big group, and that inspired the creation of PawPals. ## What it does Based on a list of criteria, we prompt OpenAI's API to generate the perfect itinerary for your trip, breaking it down into an hour-by-hour schedule of suggestions for an amazing trip. This relieves some of the underlying stresses of planning trips, such as research and decision-making. ## How we built it On the front-end, we utilized Figma to develop a UI design that we then translated into React.js to develop a web app. We connected a Python back-end through FastAPI to integrate SQLite and the OpenAI API into our web application. ## Challenges we ran into One challenge we ran into was our lack of experience in full-stack development. We did not have the strongest technical skill sets needed; thus, we took this time to explore and learn new technologies in order to accomplish the project. Furthermore, we recognized the long time it takes for the OpenAI API to generate text, leading us to plan, in the future, to output the generated text as a stream using Vercel's SDK. ## Accomplishments that we're proud of As a team of two, we were very proud of our final product and took pride in our work. We came up with a plan and vision that we truly dedicated ourselves to in order to accomplish it. Even though we have not had much experience in creating a full-stack application or developing a Figma design, we decided to take the leap in exploring these domains, going out of our comfort zone, which allowed us to truly learn a lot from this experience. ## What we learned We learned the value of coming up with a plan and solving problems that arise creatively. There is always a solution to a goal, and we found that we simply need to find it. Furthermore, we both were able to develop ourselves technically as we explored new domains and technologies, pushing our knowledge to greater breadth and depth. We were also able to meet many people along the way who provided advice and guidance on paths and technologies to look into. These people accelerated the learning process and truly added a lot to the hackathon experience. Overall, this hackathon was an amazing learning experience for us! ## What's next for PawPals We hope to deploy the site on Vercel and develop a mobile version of the app. We believe that with streaming we can create faster response times. Furthermore, through some more scaling in interconnectivity with friends, budgeting, and schedule adjusting, we will be able to create an application that will definitely shift the travel planning industry!
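A minimal version of the FastAPI-to-OpenAI bridge described above might look like this. The route, request fields, prompt wording and model name are assumptions for illustration; the project's own prompt and schema are richer.

```python
# Minimal sketch of the FastAPI -> OpenAI bridge described above. The route,
# request fields, prompt wording and model name are assumptions; the project's
# own prompt and schema are more detailed.
from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

class TripRequest(BaseModel):
    city: str
    days: int
    interests: list[str]

@app.post("/itinerary")
def make_itinerary(req: TripRequest) -> dict:
    prompt = (f"Plan an hour-by-hour itinerary for {req.days} day(s) in {req.city} "
              f"for a group interested in {', '.join(req.interests)}.")
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any chat-completion model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return {"itinerary": resp.choices[0].message.content}
```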
winning
This project was developed for the RBC challenge of building the Help Desk of the future. ## What inspired us We were inspired by our motivation to improve the world of work. ## Background If we want technical support, we usually contact companies by phone, which is slow and painful for both users and technical support agents, especially when questions are obvious. Our solution is an online chat that responds to people immediately using our own bank of answers. This is a portable and scalable solution. ## Try it! <http://www.rbcH.tech> ## What we learned Using NLP, dealing with devilish CORS, implementing Docker successfully, and struggling with Kubernetes. ## How we built it * Node.js for our servers (one server for our webapp, one for BotFront) * React for our front-end * Rasa-based Botfront, which is the REST API we call for each user interaction * We wrote our own Botfront database during the last day and night ## Our philosophy for delivery Hack, hack, hack until it works. Léonard was our webapp and backend expert, François handled the DevOps side of things, Han solidified the front end, and Antoine wrote and tested the data for NLP. ## Challenges we faced Learning brand new technologies is sometimes difficult! Kubernetes and CORS brought us some pain... and new skills and confidence. ## Our code <https://github.com/ntnco/mchacks/> ## Our training data <https://github.com/lool01/mchack-training-data>
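Since Botfront is Rasa-based, each user interaction comes down to a POST against the REST webhook. The Python snippet below illustrates that call (the team's web app makes it from Node.js); the host, port, and sender id are assumptions.

```python
# Illustrative call to a Rasa/Botfront REST webhook for one user message.
import requests

BOTFRONT_URL = "http://localhost:5005/webhooks/rest/webhook"  # assumed address

def ask_bot(sender_id: str, text: str) -> list:
    payload = {"sender": sender_id, "message": text}
    resp = requests.post(BOTFRONT_URL, json=payload, timeout=10)
    resp.raise_for_status()
    # Rasa returns a list of messages, each typically carrying a "text" field.
    return [m.get("text", "") for m in resp.json()]

if __name__ == "__main__":
    for reply in ask_bot("user-42", "My online banking is locked, what do I do?"):
        print("bot:", reply)
```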
## Inspiration Partially inspired by the Smart Cities track, we wanted our app to have the direct utility of ordering food, while still being fun to interact with. We aimed to combine convenience with entertainment, making the experience more enjoyable than your typical drive-through order. ## What it does You interact using only your voice. The app automatically detects when you start and stop talking, uses AI to transcribe what you say, figures out the food items (with modifications) you want to order, and adds them to your current order. It even handles details like size and flavor preferences. The AI then generates text-to-speech audio, which is played back to confirm your order in a humorous, engaging way. There is absolutely zero set-up or management necessary, as the program will completely ignore all background noises and conversation. Even then, it will still take your order with staggering precision. ## How we built it The frontend of the app is built with React and TypeScript, while the backend uses Flask and Python. We containerized the app using Docker and deployed it using Defang. The design of the menu is also done in Canva with a dash of Harvard colors. ## Challenges we ran into One major challenge was getting the different parts of the app—frontend, backend, and AI—to communicate effectively. From media file conversions to AI prompt engineering, we worked through each of the problems together. We struggled particularly with maintaining smooth communication once the app was deployed. Additionally, fine-tuning the AI to accurately extract order information from voice inputs while keeping the interaction natural was a big hurdle. ## Accomplishments that we're proud of We're proud of building a fully functioning product that successfully integrates all the features we envisioned. We also managed to deploy the app, which was a huge achievement given the complexity of the project. Completing our initial feature set within the hackathon timeframe was a key success for us. Trying to work with Python data type was difficult to manage, and we were proud to navigate around that. We are also extremely proud to meet a bunch of new people and tackle new challenges that we were not previously comfortable with. ## What we learned We honed our skills in React, TypeScript, Flask, and Python, especially in how to make these technologies work together. We also learned how to containerize and deploy applications using Docker and Docker Compose, as well as how to use Defang for cloud deployment. ## What's next for Harvard Burger Moving forward, we want to add a business-facing interface, where restaurant staff would be able to view and fulfill customer orders. There will also be individual kiosk devices to handle order inputs. These features would allow *Harvard Burger* to move from a demo to a fully functional app that restaurants could actually use. Lastly, we can sell the product by designing marketing strategies for fast food chains.
## Inspiration At the heart of our creation lies a profound belief in the power of genuine connections. Our project was born from the desire to create a safe space for authentic conversations, fostering empathy, understanding, and support. In a fast-paced, rapidly changing world, it is important for individuals to identify their mental health needs and receive the proper support. ## What it does Our project is an innovative chatbot leveraging facial recognition technology to detect and respond to users' moods, with a primary focus on enhancing mental health. By providing a platform for open expression, it aims to foster emotional well-being and create a supportive space for users to freely articulate their feelings. ## How we built it We use OpenCV (cv2) to analyze video in real time, using its built-in Haar cascade face detection to draw a box around individuals' faces once identified. Then, using the deepface library with a pre-trained model for identifying emotions, we assign each detected face an emotion in real time. Using this information, we call Cohere's large language model to generate a message corresponding to the emotion the individual is feeling. The LLM is also used to chat back and forth between the user and the bot. All of this information is then displayed on the website, which uses Flask for the backend and HTML, CSS, React, and Node.js for the frontend. ## Challenges we ran into The largest hurdle we encountered was the front-end portion. Our group lacked experience with developing websites and found many issues connecting the back-end to the front-end. ## Accomplishments that we're proud of Creating the cascades around the faces, incorporating a machine learning model into the image processing, and attempting front-end work for the first time with no experience. ## What we learned We learned how to create a project with a team, dividing up tasks and combining them to create one cohesive project. We also learned new technical skills (e.g., how to use APIs, machine learning, and front-end development). ## What's next for My Therapy Bot Design a more visually appealing and interactive webpage to display our chat-bot. Ideally it would include a live video feed of the user, with cascades and emotions shown while chatting. It would also be nice to train our own machine learning model to determine expressions.
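A rough sketch of the detection loop described above: OpenCV supplies webcam frames and deepface labels the dominant emotion for each detected face, which would then seed the chatbot prompt. Using deepface's bundled detector for the face region and the simple window loop below are illustrative choices, not the project's actual code.

```python
# Webcam loop: deepface labels the dominant emotion per detected face.
import cv2
from deepface import DeepFace

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # enforce_detection=False keeps the loop alive when no face is visible.
    result = DeepFace.analyze(frame, actions=["emotion"], enforce_detection=False)
    faces = result if isinstance(result, list) else [result]  # version differences
    for face in faces:
        emotion = face["dominant_emotion"]
        x, y, w, h = (face["region"][k] for k in ("x", "y", "w", "h"))
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, emotion, (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("mood", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```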
winning
## Inspiration According to the National Sleep Foundation, about half of U.S. adult drivers admit to consistently getting behind the wheel while feeling drowsy. About 20% admit to falling asleep behind the wheel at some point in the past year – with more than 40% admitting this has happened at least once in their driving careers. And a study by the AAA Foundation for Traffic Safety estimated that 328,000 drowsy driving crashes occur annually. Being drowsy behind the wheel makes a person 3x more likely to be involved in a motor vehicle accident, and driving after going more than 20 hours without sleep is the equivalent of driving with a blood-alcohol concentration of 0.08% - the legal limit. Seeing all of this, we wanted to create a solution that would save lives, and make roads safer. ## What it does SleepStop is a system that alerts drivers when they are falling asleep, and contacts emergency services in case of accidents. The first system we developed is an artificial intelligence procedure that detects when a driver is falling asleep. When it detects that the driver has closed their eyes for a prolonged period, it plays a sound that prompts the driver to wake up, and pull over. The warning sounds are inspired by airplane warning sounds, which are specially designed and chosen for emergency situations. We believe the sudden loud sounds are enough to wake a driver up for long enough to allow them to pull over safely. The second system we developed is a crash detection system. When an accident is detected, authorities are immediately contacted with the location of the crash. ## How we built it For the sleep detection system, we used opencv to create an artificial intelligence that looks at the driver’s eyes, and times how long the driver’s eyes have been closed for. When the eyes have been closed for long enough to trigger our system, a sound is played to wake the driver up. For the crash system, by recognizing when airbags are opened, and/or when the driver has closed their eyes for an elongated period of time, we can determine when a crash has occurred. When a crash is detected, emergency services are contacted via Twilio. ## Challenges we ran into One of the major problems with this project was that the code ran on a raspberry pi, but almost none of our development environments had Linux installed. As a result, we had to be very careful when testing to make sure everything was cross compatible. We ran into a challenge when creating an executable. We didn’t make it through. We tried interacting with smart watches in order to send a vibration to the driver on top of the loud sound. Unfortunately, we had to scrap this idea as making custom interactions with a FitBit proved far too challenging. ## Accomplishments that we're proud of We are proud that our prototype has such advanced functionality, and that we were able to produce it in a small time-frame. We are also proud to have a working eye detection system, car crash system, sms alerting, and alarm sounding. The thing we are most proud of is the potential of this hack to save lives. ## What we learned While working on Sleep Stop, we learned a lot about working as a team. We had to communicate well to make sure that we weren’t writing conflicting code. We also learned about how to structure large Python projects, using some of Python’s more advanced features. Another thing that we had to learn to make this project was cross-platform compatibility in Python. Initially, our project would work on Linux but break on Windows. 
We learned how to reliably detect facial features such as closed eyes in Python using OpenCV. ## What's next for Sleep Stop Right now we have a prototype but in the future it would be beneficial to create a highly-integrated product. We can also envision working with car manufacturers to make sleep-detection a built-in safety feature. Finally, we believe stimuli other than loud sounds, such as vibrations, are desirable to ensure the driver wakes up.
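SleepStop's two building blocks, timing eye closure with OpenCV Haar cascades and contacting help via Twilio, can be sketched roughly as below. The thresholds, phone numbers, and credentials are placeholders; the real system also watches for airbag deployment and plays an airplane-style warning tone rather than a console beep.

```python
# Sketch: treat "no eyes detected" as closed eyes, warn after a short delay,
# and send an SMS via Twilio if the closure lasts long enough to suggest a crash.
import time
import cv2
from twilio.rest import Client

face_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
twilio = Client("ACCOUNT_SID", "AUTH_TOKEN")          # placeholder credentials

DROWSY_SECS, CRASH_SECS = 1.5, 10.0                   # assumed thresholds
closed_since = None
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = []
    for (x, y, w, h) in face_cc.detectMultiScale(gray, 1.3, 5):
        eyes = eye_cc.detectMultiScale(gray[y:y + h, x:x + w])
    if len(eyes) == 0:                                 # eyes not visible -> treat as closed
        closed_since = closed_since or time.time()
        elapsed = time.time() - closed_since
        if elapsed > CRASH_SECS:
            twilio.messages.create(to="+15550000000", from_="+15551111111",
                                   body="Possible crash detected - driver unresponsive.")
            break
        if elapsed > DROWSY_SECS:
            print("\a WAKE UP AND PULL OVER!")         # stand-in for the warning sound
    else:
        closed_since = None
cap.release()
```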
## Inspiration As lane-keep assist and adaptive cruise control features are becoming more available in commercial vehicles, we wanted to explore the potential of a dedicated collision avoidance system ## What it does We've created an adaptive, small-scale collision avoidance system that leverages Apple's AR technology to detect an oncoming vehicle in the system's field of view and respond appropriately, by braking, slowing down, and/or turning ## How we built it Using Swift and ARKit, we built an image-detecting app which was uploaded to an iOS device. The app was used to recognize a principal other vehicle (POV), get its position and velocity, and send data (corresponding to a certain driving mode) to an HTTP endpoint on Autocode. This data was then parsed and sent to an Arduino control board for actuating the motors of the automated vehicle ## Challenges we ran into One of the main challenges was transferring data from an iOS app/device to Arduino. We were able to solve this by hosting a web server on Autocode and transferring data via HTTP requests. Although this allowed us to fetch the data and transmit it via Bluetooth to the Arduino, latency was still an issue and led us to adjust the danger zones in the automated vehicle's field of view accordingly ## Accomplishments that we're proud of Our team was all-around unfamiliar with Swift and iOS development. Learning the Swift syntax and how to use ARKit's image detection feature in a day was definitely a proud moment. We used a variety of technologies in the project and finding a way to interface with all of them and have real-time data transfer between the mobile app and the car was another highlight! ## What we learned We learned about Swift and more generally about what goes into developing an iOS app. Working with ARKit has inspired us to build more AR apps in the future ## What's next for Anti-Bumper Car - A Collision Avoidance System Specifically for this project, solving an issue related to file IO and reducing latency would be the next step in providing a more reliable collision avoiding system. Hopefully one day this project can be expanded to a real-life system and help drivers stay safe on the road
## Inspiration According to CDC.gov, about 6,000 fatal crashes each year may be caused by drowsy drivers, and Global News reported in 2018 that distracted driving is the number one cause of deaths on Ontario roads. For some reason, people seem to multitask when they drive, making them more prone to accidents. We got the idea to investigate how artificial intelligence could automatically detect a driver's distracted activity and, based on the type of distraction, alert them as a safety precaution, in a way that could be implemented on embedded architecture for production purposes. ## What it does AntiDistracto, as the name suggests, tries its best to keep you from being drowsy or distracted. It is a user-friendly and scalable platform that analyzes faces for drowsiness and distraction and plays an alert sound to wake the driver up. ## How we built it Back end: Google Cloud Platform and Python. We got data from an open-source dataset (<https://www.kaggle.com/c/state-farm-distracted-driver-detection/data>) and used it to train an AutoML model to classify input images as distracted or not. We trained on 10 categories and deployed the model on Google Cloud. The data had to be prepared for AutoML to train, and we trained for 5 hours. For drowsiness detection, we used OpenCV and Haar cascades and created a framework around that. Front end: AntiDistracto uses Flask to send a POST request with the input image to the ML model to detect whether the person in the image is distracted. It then populates the webpage hosted on Django with the response received from the ML model. Using Django also included mapping the functionality that triggers OpenCV and the request logic. In the end, there were some UI changes implemented using HTML and CSS. ## Challenges I ran into Cleaning up the data to be able to train was a bit hard, and we had trouble interacting with the AutoML Cloud Vision API, possibly because it is still a beta feature. Doing ML on GCP was also new for us. ## Accomplishments that I'm proud of Making things work after having tons of trouble interfacing the backend and front end; successfully implementing the frontend features/components/routing with Django; exploring the amazing uses of OpenCV; and working with GCP's machine learning stack. ## What I learned Doing machine learning on the cloud is really cool and convenient, and computer vision is really powerful too! ## What's next for AntiDistracto * Seamlessly fitting inside the vehicle utilizing embedded architecture * Subtle alerts based on announcements and beeps that are customizable by the driver and don't interfere with the driving experience * Features that incorporate the movement of the vehicle to avoid false alerts
winning
## Inspiration The world is constantly chasing after smartphones with bigger screens and smaller bezels. But why wait for costly display technology, and why get rid of old phones that work just fine? We wanted to build an app to create the effect of the big screen using the power of multiple small screens. ## What it does InfiniScreen quickly and seamlessly links multiple smartphones to play videos across all of their screens. Breathe life into old phones by turning them into a portable TV. Make an eye-popping art piece. Display a digital sign in a way that is impossible to ignore. Or gather some friends and strangers and laugh at memes together. Creative possibilities abound. ## How we built it Forget Bluetooth, InfiniScreen seamlessly pairs nearby phones using ultrasonic communication! Once paired, devices communicate with a Heroku-powered server written in node.js, express.js, and socket.io for control and synchronization. After the device arrangement is specified and a YouTube video is chosen on the hosting phone, the server assigns each device a region of the video to play. Left/right sound channels are mapped based on each phone's location to provide true stereo sound support. Socket-emitted messages keep the devices in sync and provide play/pause functionality. ## Challenges we ran into We spent a lot of time trying to implement all functionality using the Bluetooth-based Nearby Connections API for Android, but ended up finding that pairing was slow and unreliable. The ultrasonic+socket.io based architecture we ended up using created a much more seamless experience but required a large rewrite. We also encountered many implementation challenges while creating the custom grid arrangement feature, and trying to figure out certain nuances of Android (file permissions, UI threads) cost us precious hours of sleep. ## Accomplishments that we're proud of It works! It felt great to take on a rather ambitious project and complete it without sacrificing any major functionality. The effect is pretty cool, too—we originally thought the phones might fall out of sync too easily, but this didn't turn out to be the case. The larger combined screen area also emphasizes our stereo sound feature, creating a surprisingly captivating experience. ## What we learned Bluetooth is a traitor. Mad respect for UI designers. ## What's next for InfiniScreen Support for different device orientations, and improved support for unusual aspect ratios. Larger selection of video sources (Dailymotion, Vimeo, random MP4 urls, etc.). Seeking/skip controls instead of just play/pause.
## Inspiration We created this app to address a problem that our creators were facing: waking up in the morning. As students, the stakes of oversleeping can be very high. Missing a lecture or an exam can set you back days or greatly detriment your grade. It's too easy to sleep past your alarm. Even if you set multiple, we can simply turn all those off knowing that there is no human intention behind each alarm. It's almost as if we've forgotten that we're supposed to get up after our alarm goes off! In our experience, what really jars you awake in the morning is another person telling you to get up. Now, suddenly there is consequence and direct intention behind each call to wake up. Wake simulates this in an interactive alarm experience. ## What it does Users sync their alarm up with their trusted peers to form a pact each morning to make sure that each member of the group wakes up at their designated time. One user sets an alarm code with a common wakeup time associated with this alarm code. The user's peers can use this alarm code to join their alarm group. Everybody in the alarm group will experience the same alarm in the morning. After each user hits the button when they wake up, they are sent to a soundboard interface, where they can hit buttons to send try to wake those that are still sleeping with real time sound effects. Each time one user in the server hits a sound effect button, that sound registers on every device, including their own device to provide auditory feedback that they have indeed successfully sent a sound effect. Ultimately, users exit the soundboard to leave the live alarm server and go back to the home screen of the app. They can finally start their day! ## How we built it We built this app using React Native as a frontend, Node.js as the server, and Supabase as the database. We created files for the different screens that users will interact with in the front end, namely the home screen, goodnight screen, wakeup screen, and the soundboard. The home screen is where they set an alarm code or join using someone else's alarm code. The "goodnight screen" is what screen the app will be on while the user sleeps. When the app is on, it displays the current time, when the alarm is set to go off, who else is in the alarm server, and a warm message, "Goodnight, sleep tight!". Each one of these screens went through its own UX design process. We also used Socket.io to establish connections between those in the same alarm group. When a user sends a sound effect, it would go to the server which would be sent to all the users in the group. As for the backend, we used Supabase as a database to store the users, alarm codes, current time, and the wake up times. We connected the front and back end and the app came together. All of this was tested on our own phones using Expo. ## Challenges we ran into We ran into many difficult challenges during the development process. It was all of our first times using React Native, so there was a little bit of a learning curve in the beginning. Furthermore, incorporating Sockets with the project proved to be very difficult because it required a lot of planning and experimenting with the server/client relationships. The alarm ringing also proved to be surprisingly difficult to implement. If the alarm was left to ring, the "goodnight screen" would continue ringing and would not terminate. Many of React Native's tools like setInterval didn't seem to solve the problem. This was a problematic and reoccurring issue. 
Secondly, the database in Supabase was also quite difficult and time-consuming to connect, but in the end, once we set it up, using it simply entailed brief SQL queries. Thirdly, setting up the front end proved quite confusing and problematic, especially when it came to adding alarm codes to the database. ## Accomplishments that we're proud of We are super proud of the work that we’ve done developing this mobile application. The interface is minimalist yet attention-grabbing when it needs to be, namely when the alarm goes off. The hours of debugging, although frustrating, were very satisfying once we finally got the app running. Additionally, we greatly improved our understanding of mobile app development. Finally, the app is also just amusing and fun to use! It’s a cool concept! ## What we learned As mentioned before, we greatly improved our understanding of React Native, as for most of our group, this was the first time using it for a major project. We learned how to use Supabase and Socket.io. Additionally, we improved our general JavaScript and user experience design skills as well. ## What's next for Wakey We would like to put this app on the iOS App Store and the Android Play Store, which would take more extensive and detailed testing, especially around how the app will run in the background. Additionally, we would like to add some other features, like a leaderboard for who gets up most immediately after their alarm goes off, who sends the most sound effects, and perhaps other ways to rank the members of each alarm server. We would also like to add customizable sound effects, where users can record themselves or upload recordings that they can add to their soundboards.
## Inspiration The moment we formed our team, we all knew we wanted to create a hack for social good which could even aid in a natural calamity. According to the reports of World Health Organization, there are 285 million people worldwide who are either blind or partially blind. Due to the relevance of this massive issue in our world today, we chose to target our technology towards people with visual impairments. After discussing several ideas, we settled on an application to ease communication and navigation during natural calamities for people with visual impairment. ## What it does It is an Android application developed to help the visually impaired navigate through natural calamities using peer to peer audio and video streaming by creating a mesh network that does not rely on wireless or cellular connectivity in the event of a failure. In layman terms, it allows people to talk with and aid each other during a disastrous event in which the internet connection is down. ## How we built it We decided to build an application on the Android operating system. Therefore, we used the official integrated development environment: Android Studio. By integrating Google's nearby API into our java files and using NFC and bluetooth resources, we were able to establish a peer to peer communication network between devices requiring no internet or cellular connectivity. ## Challenges we ran into Our entire concept of communication without an internet or cellular network was a tremendous challenge. Specifically, the two greatest challenges were to ensure the seamless integration of audio and video between devices. Establishing a smooth connection for audio was difficult as it was being streamed live in real time and had to be transferred without the use of a network. Furthermore, transmitting a video posed to be even a greater challenge. Since we couldn’t transmit data of the video over the network either, we had to convert the video to bytes and then transmit to assure no transmission loss. However, we persevered through these challenges and, as a result, were able to create a peer to peer network solution. ## Accomplishments that we're proud of We are all proud of having created a fully implemented application that works across the android platform. But we are even more proud of the various skills we gained in order to code a successful peer to peer mesh network through which we could transfer audio and video. ## What we learned We learned how to utilize Android Studio to create peer to peer mesh networks between different physical devices. More importantly, we learned how to live stream real time audio and transmit video with no internet connectivity. ## What's next for Navig8. We are very proud of all that we have accomplished so far. However, this is just the beginning. Next, we would like to implement location-based mapping. In order to accomplish this, we would create a floor plan of a certain building and use cameras such as Intel’s RealSense to guide visually impaired people to the nearest exit point in a given building.
winning
## Inspiration A patient cannot move their legs and arms? No worries! We have face recognition software for you! As long as the patient can rotate their head a bit, the cursor can be moved. ## What it does It controls an on-screen circle (cursor) that moves where the patient wants it to, reducing the work of caregivers. ## How I built it Using PyCharm and Python, I imported OpenCV and worked with a pre-trained machine learning face recognition algorithm. ## Challenges I ran into The face recognition algorithms are not very stable and have roughly a 10% failure rate in their output. ## Accomplishments that I'm proud of I learned a great deal about OpenCV and combined knowledge of machine learning, Python, and OpenCV to form a project that can actually help patients who are not able to move their bodies. ## What I learned Knowledge of OpenCV, TensorFlow, and Google Cloud, as well as the current gaps in health care around patient recovery. ## What's next for hotel dieu shaver face reader Use a different AI face-training algorithm.
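A minimal sketch of the idea described above, assuming OpenCV's stock Haar cascade for face detection: the face centre is mapped onto a canvas and a smoothed circle ("cursor") follows it as the head rotates. The smoothing factor and canvas size are arbitrary choices.

```python
# Detect the largest face per frame and move an on-screen circle toward it.
import cv2
import numpy as np

face_cc = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)
canvas_size = 480
cursor = np.array([canvas_size / 2, canvas_size / 2], dtype=float)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cc.detectMultiScale(gray, 1.3, 5)
    if len(faces):
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])   # largest face
        cx = (x + w / 2) / frame.shape[1]                     # normalized 0..1
        cy = (y + h / 2) / frame.shape[0]
        target = np.array([cx * canvas_size, cy * canvas_size])
        cursor += 0.2 * (target - cursor)                     # smooth the motion
    canvas = np.zeros((canvas_size, canvas_size, 3), np.uint8)
    cv2.circle(canvas, (int(cursor[0]), int(cursor[1])), 12, (255, 255, 255), -1)
    cv2.imshow("cursor", canvas)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```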
## Inspiration Post-operation recovery is a vital aspect of healthcare. Keeping careful track of rehabilitation can ensure that patients get better as quickly as possible. We were inspired to create this project when thinking of elderly and single people who are in need of assistance when recovering from surgery. We imagined Dr. Debbie as a continuation of the initial care that patients received, which could carry on into their homes. ## What it does The program currently provides services like physical therapy routines aided by computer vision, and a medicine routine which the AI model reinforces. Along the way, patients can interact with a live AI assistant which speaks to them. This makes the experience feel authentic while remaining professional and informational. ## How we built it Our application consisted of two primary machine learning models interacting with our Bootstrap frontend and Flask backend. * We choose to use Bootstrap in the frontend to seamlessly interact with the Rive animation avatar of Dr. Debbie and mockup a cohesive theme for our web application. * Utilizing Flask, we were able to incorporate our model logic for the person pose detection model via Roboflow directly into Python. * In order to support real-time physical therapy correction, we use a Deep Neural Network trained to perform keypoint recognition on your local live camera feed. It makes keypoint detections, calculating the angles of figures in the frame, allowing us to determine how correctly the patient is performing a certain physical therapy exercise. * Additionally, we utilized Cerebus Llama 3.1-8b to interact directly with the application user, serving as the direct AI companion for the user to communicate with either through text or a text-to-speech option! ## Challenges we ran into We ran into challenges with the computer vision models. Tracking the patient's body was a balancing act between accuracy and speed. Some models were tailored towards different applications, which meant experimenting with different training sets and parameters. Integrating the live avatar of Dr. Debbie was a challenge because the animation service, Rive, was new to the team. Interfacing the animation in a way that felt interactive and realistic was very challenging. Because of the time constraints of the hackathon, we were forced to prioritize certain aspects of the project, while scrapping others. For example, we found some features like medicine scanning and predictive modeling were feasible, but not worth the effort considering our resources. ## What's next for Dr. Debbie Our next two initiatives for Dr. Debbie include converting the application to a mobile app or a Windows desktop application. To reach more users on a more diverse variety of technologies, being able to load Dr. Debbie on your phone or your home computer can be much more convenient for our users. In addition to this, we hope to expand the statistical analysis of Dr. Debbie. To improve the connection between healthcare professionals and patients when they cannot be together in person, Dr. Debbie can provide data to professionals asynchronously helping them stay current with patient status.
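The real-time physical-therapy correction described above reduces to computing joint angles from detected keypoints and comparing them to a target range. Below is a minimal sketch of that calculation; the example keypoints, exercise, and tolerance band are assumptions, not values from Dr. Debbie's model.

```python
# Joint-angle check: angle at the elbow formed by shoulder-elbow-wrist keypoints.
import numpy as np

def joint_angle(a, b, c) -> float:
    """Angle at point b (degrees) formed by segments b->a and b->c."""
    a, b, c = np.asarray(a, float), np.asarray(b, float), np.asarray(c, float)
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

def check_rep(shoulder, elbow, wrist, target=(70, 110)) -> str:
    ang = joint_angle(shoulder, elbow, wrist)
    lo, hi = target
    if ang < lo:
        return f"{ang:.0f} deg: bend less"
    if ang > hi:
        return f"{ang:.0f} deg: bend more"
    return f"{ang:.0f} deg: good form"

# Example with hypothetical (x, y) detections from the keypoint model:
print(check_rep(shoulder=(310, 180), elbow=(300, 260), wrist=(250, 300)))
```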
## 💡 Inspiration You have another 3-hour online lecture, but you’re feeling sick and your teacher doesn’t post any notes. You don’t have any friends who can help you, and when class ends, you leave the meet with a blank document. The thought lingers in your mind: “Will I ever pass this course?” If you experienced a similar situation in the past year, you are not alone. Since COVID-19, there have been many struggles for students. We created AcadeME to help students who struggle to pay attention in class, miss class, have a rough home environment, or just want to get ahead in their studies. We decided to build a project that we would personally use in our daily lives, and the problem AcadeME tackled was the perfect fit. ## 🔍 What it does First, our AI-powered summarization engine creates a set of live notes based on the current lecture. Next, there are toggle features for simplification, definitions, and synonyms which help you gain a better understanding of the topic at hand. You can even select text over videos! Finally, our intuitive web app allows you to easily view and edit previously generated notes so you are never behind. ## ⭐ Feature List * Dashboard with all your notes * Summarizes your lectures automatically * Select/Highlight text from your online lectures * Organize your notes with intuitive UI * Utilizing Google Firestore, you can go through your notes anywhere in the world, anytime * Text simplification, definitions, and synonyms anywhere on the web * DCP, or Distributed Computing, was a key aspect of our project, allowing us to speed up our computation, especially for the deep learning model (BART), which ran 5 to 10 times faster through parallel and distributed computation. ## ⚙️ Our Tech Stack * Chrome Extension: Chakra UI + React.js, Vanilla JS, Chrome API * Web Application: Chakra UI + React.js, Next.js, Vercel * Backend: AssemblyAI STT, DCP API, Google Cloud Vision API, DictionariAPI, NLP Cloud, and Node.js * Infrastructure: Firebase/Firestore ## 🚧 Challenges we ran into * Completing our project within the time constraint * There were many APIs to integrate, which meant we spent a lot of time debugging * Working with the Chrome Extension API, which we had never worked with before. ## ✔️ Accomplishments that we're proud of * Learning how to work with Google Chrome Extensions, which was an entirely new concept for us. * Leveraging Distributed Computation, a very handy and intuitive API, to make our application significantly faster and better to use. ## 📚 What we learned * The Chrome Extension API is incredibly difficult; budget 2x as much time for figuring it out! * Working on a project you can relate to helps a lot with motivation * Chakra UI is legendary and a lifesaver * The Chrome Extension API is very difficult; did we mention that already? ## 🔭 What's next for AcadeME? * Implementing a language translation toggle to help international students * Note Encryption * Note Sharing Links * A Distributive Quiz mode, for online users!
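The summarization step described above (running BART over lecture transcripts) can be illustrated with a local Hugging Face pipeline. The team ran BART through DCP and NLP Cloud, so the checkpoint name, chunk size, and length limits below are illustrative stand-ins rather than their production setup.

```python
# Chunk a transcript and summarize each chunk with a public BART checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def summarize_transcript(text: str, chunk_chars: int = 3000) -> str:
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    notes = []
    for chunk in chunks:
        out = summarizer(chunk, max_length=120, min_length=30, do_sample=False)
        notes.append(out[0]["summary_text"])
    return "\n".join("- " + n for n in notes)   # bullet-style live notes

lecture = "In today's lecture we cover gradient descent. ..."  # transcript from STT
print(summarize_transcript(lecture))
```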
losing
## Inspiration While we were coming up with ideas on what to make, we looked around at each other while sitting in the room and realized that our postures weren't that great. We knew that it was pretty unhealthy for us to be seated like this for prolonged periods. This inspired us to create a program that could help remind us when our posture is terrible and needs to be adjusted. ## What it does Our program uses computer vision to analyze our position in front of the camera. Sit Up! takes your position at a specific frame and measures different distances and angles between critical points such as your shoulders, nose, and ears. From there, the program throws all these measurements together into mathematical equations. The program compares the results to a database of thousands of positions to see if yours is good. ## How we built it We built it using Flask, JavaScript, TensorFlow, and scikit-learn. ## Challenges we ran into The biggest challenge we faced was how inefficient and slow our first approach was. Initially our plan was to use Django for an API that returns the necessary information, but it was slower than anything we had seen before; that is when we came up with client-side rendering. Doing everything in Flask made this project 10x faster and much more efficient. ## Accomplishments that we're proud of Implementing client-side rendering for an ML model; getting out of our comfort zone by using Flask; having nearly perfect accuracy with our model; and being able to pivot our tech stack and be so versatile. ## What we learned We learned a lot about Flask, the basics of ANNs, and how to implement computer vision for a real use case. ## What's next for Sit Up! Implement a phone app; calculate the accuracy of our model; enlarge our data set; and support higher frame rates.
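One concrete example of the kind of measurement described above: angles relating the ears and shoulders that grow as you slouch. The keypoint coordinates and thresholds below are made up for illustration; the real project feeds many such measurements into a trained model rather than fixed cutoffs.

```python
# Two simple posture features: forward-head angle and shoulder tilt.
import math

def neck_angle(ear, shoulder) -> float:
    dx, dy = ear[0] - shoulder[0], shoulder[1] - ear[1]   # image y grows downward
    return abs(math.degrees(math.atan2(dx, dy)))          # 0 = ear directly above shoulder

def shoulder_tilt(l_shoulder, r_shoulder) -> float:
    dx = r_shoulder[0] - l_shoulder[0]
    dy = r_shoulder[1] - l_shoulder[1]
    return abs(math.degrees(math.atan2(dy, dx)))          # 0 = level shoulders

def posture_ok(ear, l_sh, r_sh, max_neck=25.0, max_tilt=10.0) -> bool:
    mid = ((l_sh[0] + r_sh[0]) / 2, (l_sh[1] + r_sh[1]) / 2)
    return neck_angle(ear, mid) < max_neck and shoulder_tilt(l_sh, r_sh) < max_tilt

# Hypothetical keypoints in pixel coordinates:
print(posture_ok(ear=(330, 140), l_sh=(280, 240), r_sh=(390, 250)))
```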
## Inspiration Being sport and fitness buffs, we understand the importance of proper form. Incidentally, suffering from a wrist injury himself, Mayank thought of this idea while in a gym where he could see almost everyone using wrong form for a wide variety of exercises. He knew it couldn't be impossible to make something easily accessible, accurate at recognizing wrong exercise form, and, most of all, free. He was sick of watching YouTube videos and just trying to emulate the guys in them with no real guidance. That's when the idea for Fi(t)nesse was born, and luckily, he met an equally health-passionate group of people at PennApps, which led to this hack: an entirely functional prototype that provides real-time feedback on pushup form. It also lays down an API that allows expansion to a whole array of exercises or even sports movements. ## What it does A user is recorded doing the push-up twice, from two different angles. Any phone with a camera can fulfill this task. The data is then analyzed, and within a minute the user has access to detailed feedback pertaining to the 4 most common push-up mistakes. The application uses custom algorithms to detect these mistakes and their extent, and uses this information to provide a custom numerical score to the user for each category. ## How we built it Human pose detection with a simple camera was achieved with OpenCV and deep neural nets (a rough sketch of this step follows this writeup). We tried using both the COCO and the MPI datasets for training data and ultimately went with COCO. We then set up an Apache server running Flask on Google Compute Engine to serve as an endpoint for the input videos. Due to lack of access to GPUs, a 24-core machine on the Google Cloud Platform was used to run the neural nets and generate pose estimations. The Fi(t)nesse website was coded in HTML+CSS while all the backend was written in Python. ## Challenges we ran into Getting the pose detection right and consistent was a huge challenge. After a lot of tries, we ended up with a model that works surprisingly accurately. Combating the computing power requirements of a large neural network was also a big challenge. We were initially planning to do the entire project on our local machines, but when they kept slowing down to a crawl, we decided to shift everything to a VM. The algorithms to detect form mistakes and generate scores for them were also a challenge, since we could find no mathematical information about the right form for push-ups, or any of the other popular exercises for that matter. We had to come up with the algorithms and tweak them ourselves, which meant we had to do a LOT of pushups. But to our pleasant surprise, the application worked better than we expected. Getting a reliable data pipeline set up was also a challenge since everyone on our team was new to deployed systems. A lot of hours and countless tutorials later, even though we couldn't reach exactly the level of integration we were hoping for, we were able to create something fairly streamlined. Every hour of the struggle taught us new things, though, so it was all worth it. ## Accomplishments that we're proud of -- Achieving accurate single-body human pose detection, with support for multiple bodies as well, from a simple camera feed. -- Detecting the right frames to analyze from the video, since running every frame through our processing pipeline was too resource intensive -- Developing algorithms that can detect the most common push-up mistakes.
-- Deploying a functioning app ## What we learned Almost every part of this project involved a massive amount of learning for all of us. Right from deep neural networks to using huge datasets like COCO and MPI, to learning how deployed app systems work and learning the ins and outs of the Google Cloud Service. ## What's next for Fi(t)nesse There is an immense amount of expandability to this project. Adding more exercises/movements is definitely an obvious next step. Also interesting to consider is the 'gameability' of an app like this. By giving you a score and sassy feedback on your exercises, it has the potential to turn exercise into a fun activity where people want to exercise not just with higher weights but also with just as good form. We also see this as being able to be turned into a full-fledged phone app with the right optimizations done to the neural nets.
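As referenced in the writeup above, keypoints can be pulled from a frame with OpenCV's DNN module and the publicly available OpenPose COCO model. The file names below are the standard public ones and the confidence threshold is a guess; this illustrates the approach rather than reproducing Fi(t)nesse's pipeline.

```python
# Extract COCO keypoints from one frame via OpenCV DNN heatmaps.
import cv2

COCO_PARTS = 18                                           # keypoints in the COCO model
net = cv2.dnn.readNetFromCaffe("pose_deploy_linevec.prototxt",
                               "pose_iter_440000.caffemodel")

def keypoints(frame, conf_thresh=0.1):
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1.0 / 255, (368, 368),
                                 (0, 0, 0), swapRB=False, crop=False)
    net.setInput(blob)
    heatmaps = net.forward()                              # shape (1, 57, H', W')
    points = []
    for i in range(COCO_PARTS):
        hm = heatmaps[0, i, :, :]
        _, conf, _, (x, y) = cv2.minMaxLoc(hm)
        if conf > conf_thresh:                            # rescale to frame coordinates
            points.append((int(x * w / hm.shape[1]), int(y * h / hm.shape[0])))
        else:
            points.append(None)
    return points                                          # feed these into the form checks

frame = cv2.imread("pushup_frame.jpg")                     # hypothetical input frame
print(keypoints(frame))
```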
## Inspiration A love of music production, the obscene cost of synthesizers, and the Drive soundtrack ## What it does In its simplest form, it is a synthesizer. It creates a basic wave using wave functions and runs it through a series of custom filters to produce a wide range of sounds. Finally, the sounds are bound to a physical "keyboard" made using an Arduino. ## How we built it The input driver and function generator are written in Python, using the numpy and pyaudio libraries to calculate wave functions and send the result to the audio output. ## Challenges we ran into -pyaudio doesn't play nice with multiprocessing -multithreading wasn't as good an option because it doesn't properly parallelize due to Python's GIL -parsing serial input from a constant stream led to a few issues ## What we learned We learned a lot about realtime signal processing, the performance limitations of Python, and the ins and outs of creating a controller device from the hardware level to driver software. ## What's next for Patch Cable -We'd like to rewrite the signal processing in a faster language. Python couldn't keep up with realtime transformation as well as we would have liked. -We'd like to add a command-line and visual interface for building the function chains, to make it easier to make sounds as you go.
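A minimal sketch of the generate-filter-play chain with numpy and pyaudio. The one-pole low-pass filter stands in for the project's custom filter chain, and the note frequencies, amplitude, and sample rate are arbitrary choices.

```python
# Generate a square wave, soften it with a one-pole low-pass, play via pyaudio.
import numpy as np
import pyaudio

RATE = 44100

def square_wave(freq, secs):
    t = np.arange(int(RATE * secs)) / RATE
    return np.sign(np.sin(2 * np.pi * freq * t)).astype(np.float32)

def low_pass(x, alpha=0.05):
    y = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):            # y[i] = y[i-1] + alpha * (x[i] - y[i-1])
        acc += alpha * (s - acc)
        y[i] = acc
    return y

pa = pyaudio.PyAudio()
stream = pa.open(format=pyaudio.paFloat32, channels=1, rate=RATE, output=True)
for note in (220.0, 277.2, 329.6):       # A3, C#4, E4: a little arpeggio
    samples = (0.3 * low_pass(square_wave(note, 0.4))).astype(np.float32)
    stream.write(samples.tobytes())
stream.stop_stream()
stream.close()
pa.terminate()
```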
winning
## What it does Danstrument lets you video call your friends and create music together using only your actions. You can start a call which generates a code that your friend can use to join. ## How we built it We used Node.js to create our web app which employs WebRTC to allow video calling between devices. Movements are tracked with pose estimation from tensorflow and then vector calculations are done to trigger audio files. ## Challenges we ran into Connecting different devices with WebRTC over an unsecured site proved to be very difficult. We also wanted to have continuous sound but found that libraries that could accomplish this caused too many problems so we chose to work with discrete sound bites instead. ## What's next for Danstrument Annoying everyone around us.
## Inspiration Imagine: A major earthquake hits. Thousands call 911 simultaneously. In the call center, a handful of operators face an impossible task. Every line is ringing. Every second counts. There aren't enough people to answer every call. This isn't just hypothetical. It's a real risk in today's emergency services. A startling **82% of emergency call centers are understaffed**, pushed to their limits by non-stop demands. During crises, when seconds mean lives, staffing shortages threaten our ability to mitigate emergencies. ## What it does DispatchAI reimagines emergency response with an empathetic AI-powered system. It leverages advanced technologies to enhance the 911 call experience, providing intelligent, emotion-aware assistance to both callers and dispatchers. Emergency calls are aggregated onto a single platform and filtered by severity. Critical details such as location, time of emergency, and the caller's emotions are collected from the live call. These details are leveraged to recommend actions, such as dispatching an ambulance to a scene. Our **human-in-the-loop system** ensures that control by human operators is always at the forefront. Dispatchers have the final say on all recommended actions, ensuring that no AI system acts alone. ## How we built it We developed a comprehensive systems architecture design to visualize the communication flow across the different pieces of software. ![Architecture](https://i.imgur.com/FnXl7c2.png) We developed DispatchAI using a comprehensive tech stack: ### Frontend: * Next.js with React for a responsive and dynamic user interface * TailwindCSS and Shadcn for efficient, customizable styling * Framer Motion for smooth animations * Leaflet for interactive maps ### Backend: * Python for server-side logic * Twilio for handling calls * Hume and Hume's EVI for emotion detection and understanding * Retell for implementing a voice agent * Google Maps geocoding API and Street View for location services * Custom-finetuned Mistral model using our proprietary 911 call dataset * Intel Dev Cloud for model fine-tuning and improved inference ## Challenges we ran into * Curating a diverse 911 call dataset * Integrating multiple APIs and services seamlessly * Fine-tuning the Mistral model to understand and respond appropriately to emergency situations * Balancing empathy and efficiency in AI responses ## Accomplishments that we're proud of * Successfully fine-tuned a Mistral model for emergency response scenarios * Developed a custom 911 call dataset for training * Integrated emotion detection to provide more empathetic responses ## Intel Dev Cloud Hackathon Submission ### Use of Intel Hardware We fully utilized the Intel Tiber Developer Cloud for our project development and demonstration: * Leveraged IDC Jupyter Notebooks throughout the development process * Conducted a live demonstration to the judges directly on the Intel Developer Cloud platform ### Intel AI Tools/Libraries We extensively integrated Intel's AI tools, particularly IPEX, to optimize our project: * Utilized Intel® Extension for PyTorch (IPEX) for model optimization (a minimal sketch of this step follows this writeup) * Achieved a remarkable reduction in inference time from 2 minutes 53 seconds to less than 10 seconds * This represents a decrease of more than 90% in processing time, showcasing the power of Intel's AI tools ### Innovation Our project breaks new ground in emergency response technology: * Developed the first empathetic, AI-powered dispatcher agent * Designed to support first responders during resource-constrained situations * Introduces a novel
approach to handling emergency calls with AI assistance ### Technical Complexity * Implemented a fine-tuned Mistral LLM for specialized emergency response with Intel Dev Cloud * Created a complex backend system integrating Twilio, Hume, Retell, and OpenAI * Developed real-time call processing capabilities * Built an interactive operator dashboard for data summarization and oversight ### Design and User Experience Our design focuses on operational efficiency and user-friendliness: * Crafted a clean, intuitive UI tailored for experienced operators * Prioritized comprehensive data visibility for quick decision-making * Enabled immediate response capabilities for critical situations * Interactive Operator Map ### Impact DispatchAI addresses a critical need in emergency services: * Targets the 82% of understaffed call centers * Aims to reduce wait times in critical situations (e.g., Oakland's 1+ minute 911 wait times) * Potential to save lives by ensuring every emergency call is answered promptly ### Bonus Points * Open-sourced our fine-tuned LLM on HuggingFace with a complete model card (<https://huggingface.co/spikecodes/ai-911-operator>) + And published the training dataset: <https://huggingface.co/datasets/spikecodes/911-call-transcripts> * Submitted to the Powered By Intel LLM leaderboard (<https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard>) * Promoted the project on Twitter (X) using #HackwithIntel (<https://x.com/spikecodes/status/1804826856354725941>) ## What we learned * How to integrate multiple technologies to create a cohesive, functional system * The potential of AI to augment and improve critical public services ## What's next for Dispatch AI * Expand the training dataset with more diverse emergency scenarios * Collaborate with local emergency services for real-world testing and feedback * Explore future integration
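The Intel-specific speedup described above boils down to wrapping the fine-tuned model with `ipex.optimize` before generation. A rough sketch is below; the Hugging Face model id is the one the team open-sourced, but loading it as a causal LM with these dtype and generation settings is an assumption rather than something taken from the repository.

```python
# Load the fine-tuned model and apply IPEX optimizations for CPU inference.
import torch
import intel_extension_for_pytorch as ipex
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "spikecodes/ai-911-operator"          # from the writeup's HuggingFace link
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()
model = ipex.optimize(model, dtype=torch.bfloat16)   # IPEX graph/kernel optimizations

prompt = "Caller: There's smoke coming from my neighbor's garage.\nDispatcher:"
inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=80)
print(tok.decode(out[0], skip_special_tokens=True))
```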
## Inspiration We were inspired by all the people who go along their days thinking that no one can actually relate to what they are experiencing. The Covid-19 pandemic has taken a mental toll on many of us and has kept us feeling isolated. We wanted to make an easy to use web-app which keeps people connected and allows users to share their experiences with other users that can relate to them. ## What it does Alone Together connects two matching people based on mental health issues they have in common. When you create an account you are prompted with a list of the general mental health categories that most fall under. Once your account is created you are sent to the home screen and entered into a pool of individuals looking for someone to talk to. When Alone Together has found someone with matching mental health issues you are connected to that person and forwarded to a chat room. In this chat room there is video-chat and text-chat. There is also an icebreaker question box that you can shuffle through to find a question to ask the person you are talking to. ## How we built it Alone Together is built with React as frontend, a backend in Golang (using Gorilla for websockets), WebRTC for video and text chat, and Google Firebase for authentication and database. The video chat is built from scratch using WebRTC and signaling with the Golang backend. ## Challenges we ran into This is our first remote Hackathon and it is also the first ever Hackathon for one of our teammates (Alex Stathis)! Working as a team virtually was definitely a challenge that we were ready to face. We had to communicate a lot more than we normally would to make sure that we stayed consistent with our work and that there was no overlap. As for the technical challenges, we decided to use WebRTC for our video chat feature. The documentation for WebRTC was not the easiest to understand, since it is still relatively new and obscure. This also means that it is very hard to find resources on it. Despite all this, we were able to implement the video chat feature! It works, we just ran out of time to host it on a cloud server with SSL, meaning the video is not sent on localhost (no encryption). Google App Engine also doesn't allow websockets in standard mode, and also doesn't allow `go.mod` on `flex` mode, which was inconvenient and we didn't have time to rewrite parts of our webapp. ## Accomplishments that we're proud of We are very proud for bringing our idea to life and working as a team to make this happen! WebRTC was not easy to implement, but hard work pays off. ## What we learned We learned that whether we work virtually together or physically together we can create anything we want as long as we stay curious and collaborative! ## What's next for Alone Together In the future, we would like to allow our users to add other users as friends. This would mean in addition of meeting new people with the same mental health issues as them, they could build stronger connections with people that they have already talked to. We would also allow users to have the option to add moderation with AI. This would offer a more "supervised" experience to the user, meaning that if our AI detects any dangerous change of behavior we would provide the user with tools to help them or (with the authorization of the user) we would give the user's phone number to appropriate authorities to contact them.
winning
## Inspiration Before the hackathon started, we noticed that the oranges were really tasty so we desired to make a project about oranges, but we also wanted to make a game so we put the ideas together. ## What it does Squishy Orange is a multiplayer online game where each user controls an orange. There are two game modes: "Catch the Squish" and "Squish Zombies". In "Catch the Squish", one user is the squishy orange and every other user is trying to tag the squish. In "Squish Zombies", one user is the squishy orange and tries to make other oranges squishy by touching them. The winner is the last surviving orange. ## How we built it We used phaser.io for the game engine and socket.io for real-time communication. Lots of javascript everywhere! ## Challenges we ran into * getting movements to stay in sync across various scenarios (multiple players in general, new players, inactive tabs, etc.) * getting different backgrounds * animated backgrounds * having different game rooms * different colored oranges ## Accomplishments that we're proud of * getting the oranges in sync for movements! * animated backgrounds ## What we learned * spritesheets * syncing game movement ## What's next for Squishy Orange * items in game * more game modes
## Inspiration We planned weeks ahead to do a hardware hack, but on the day of the event we couldn't secure the hardware within a reasonable starting time. As such, we were contemplating our future in the lineup for the funnel cakes when suddenly we were blessed with a vision from the future: coding through interpretive dance. Jokes aside, we noticed that a lot of programmers spend a lot of time sitting at their desks. This is atrocious for your physical and mental well-being: insufficient physical activity is the 4th leading risk factor for death and is responsible for 3.2 million deaths per year worldwide ([citation](https://www.who.int/data/gho/indicator-metadata-registry/imr-details/3416)). Shifty Tech enables programmers to be productive and get their morning yoga done at the same time. ## What it does Unlike treadmills under tables or other office products, Shifty Tech fuses physical exercise and programming so that movement is the source of productivity. Upon running our program, you will see a camera appear that tracks your movement. Every few seconds, a frame is captured and translated into Python code. We don't aim for efficiency. We aim to improve the mental and physical health of fellow programmers by recovering the creativity and joy they experience when they complete a program that was difficult to write, through dance-like exercise. Our project helps programmers **truly enjoy** every component that goes into making their programs run. ## How we built it The backend machinery for recognizing user poses was written in Python, with the TensorFlow MoveNet Thunder pose estimation model. The output of our MoveNet Thunder model is a multi-dimensional tensor that we flat-map and normalize into a vector representation, which is stored in Milvus through the Google Cloud Platform (a rough sketch of this matching step follows this writeup). Our website and code editing are done through Replit, Next, and TypeScript. Our domain name is from .Tech, and the site is deployed on Vercel. ## Challenges we ran into There were a number of challenges that we ran into. First, we went through a variety of pose estimation models, such as TensorFlow MoveNet Lightning and OpenPose, before settling on MoveNet Thunder. Each had its own benefits and drawbacks; for example, while the Lightning model was very fast, it compromised on accuracy. Finding the right model for our live recognition took some time. We also tried a few different ways of storing the vector representations of our data. As our product interprets the poses live, the speed at which we interpret images matters. We tried Google Cloud for a virtual machine instance and MongoDB for its vector database, but settled on Milvus to optimize running kNN across all the pose categories. ## Accomplishments that we're proud of For some of us, this was our first hackathon, and we were really proud of how we were able to create a full-fledged programming language using the full human body, which is something we've never seen before. Not only were we able to make that happen, we were also able to divide our time well and work diligently, so that we were able to deploy a full website dedicated to our new language as well. ## What we learned That cold showers are absolute doodoo. As well, we learned the hard way that when different local machines have different permissions and abilities, it can result in many, many hours of debugging to change one line of code.
## What's next for Shifty Tech Shifty Tech has a lot more in store and there are many functionalities that could be added to the existing MVP. For example, you could add increased functionality in types of poses to increase editing and flexibility with your code. You could record a short video and upload it to Shifty Tech and have it return the result of your video to you, as opposed to solely coding live. Since Replit also allows for collaboration, we could extend the Shifty Tech functionality to include collaboration on the same code as well. We're excited for all the ways Shifty Tech can continue to evolve!
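As mentioned in the writeup above, each MoveNet detection is flattened and normalized into a vector and matched against stored pose vectors (Milvus in the real system). The sketch below does the same matching with plain numpy and cosine similarity; the pose labels and their mapping to code tokens are hypothetical.

```python
# Flatten MoveNet's 17 (y, x, score) keypoints into a normalized vector and
# find the closest stored pose by cosine similarity.
import numpy as np

def pose_to_vector(keypoints):                 # keypoints: array of shape (17, 3)
    kp = np.asarray(keypoints, dtype=np.float32)[:, :2].reshape(-1)   # drop scores
    kp -= kp.mean()                            # rough translation invariance
    norm = np.linalg.norm(kp)
    return kp / norm if norm else kp

def nearest_pose(query_vec, stored):           # stored: {label: normalized vector}
    return max(stored.items(), key=lambda kv: float(np.dot(query_vec, kv[1])))[0]

# Hypothetical library of labelled poses, each mapped to a code token:
library = {"arms_up:def": pose_to_vector(np.random.rand(17, 3)),
           "t_pose:print": pose_to_vector(np.random.rand(17, 3))}
print(nearest_pose(pose_to_vector(np.random.rand(17, 3)), library))
```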
## Inspiration Over this past semester, Alp and I were in the same data science class together, and we were really interested in how data can be applied through various statistical methods. Wanting to utilize this knowledge in a real-world application, we decided to create a prediction model using machine learning. This would allow us to apply the concepts that we learned in class, as well as to learn more about various algorithms and methods that are used to create better and more accurate predictions. ## What it does This project consists of taking a dataset containing over 280,000 real-life credit card transactions made by European cardholders over a two-day period in September 2013, with a variable determining whether the transaction was fraudulent, also known as the ground truth. After conducting exploratory data analysis, we separated the dataset into training and testing data, before training the classification algorithms on the training data. After that, we observed how accurately each algorithm performed on the testing data to determine the best-performing algorithm. ## How we built it We built it in Python using Jupyter notebooks, where we imported all our necessary libraries for plotting, visualizing and modeling the dataset. From there, we began to do some explanatory data analysis to figure out the imbalances of the data and the different variables. However, we discovered that there were several variables that were unknown to us due to customer confidentiality. From there, we first applied principal component analysis (PCA) to reduce the dimensionality of the dataset by removing the unknown variables and analyzing the data using the only two variables that were known to us, the amount and time of each transaction. Thereafter, we had to balance the dataset using the SMOTE technique in order to balance the dataset outcomes, as the majority of the data was determined to be not fraudulent. However, in order to detect fraud, we had to ensure that the training had an equal proportion of data values that were both fraudulent and not fraudulent in order to return accurate predictions. After that, we applied 6 different classification algorithms to the training data to train it to predict the respective outcomes, such as Naive Bayes, Decision Tree, Random Forest, K-Nearest Neighbor, Logistic Regression and XGBoost. After training the data, we then applied these algorithms to the testing data and observed how accurately does each algorithm predict fraudulent transactions. We then cross-validated each algorithm by applying it to every subset of the dataset in order to reduce overfitting. Finally, we used various evaluation metrics such as accuracy, precision, recall and F-1 scores to compare which algorithm performed the best in accurately predicting fraudulent transactions. ## Challenges we ran into The biggest challenge was the sheer amount of research and trial and error required to build this model. As this was our first time building a prediction model, we had to do a lot of reading to understand the various steps and concepts needed to clean and explore the dataset, as well as the theory and mathematical concepts behind the classification algorithms in order to model the data and check for accuracy. ## Accomplishments that we're proud of We are very proud that we are able to create a working model that is able to predict fraudulent transactions with very high accuracy, especially since this was our first major ML model that we have made. 
## What we learned We learned a lot about the process of building a machine learning application, such as cleaning data, conducting exploratory data analysis, creating a balanced sample, and modeling the dataset using various classification strategies to find the model with the highest accuracy. ## What's next for Credit Card Fraud Detection We want to do more research into the theory and concepts behind the modeling process, especially the classification strategies, as we work towards fine-tuning this model and building more machine learning projects in the future.
partial
## 💫 Inspiration Inspired by our grandparents, who may not always be able to accomplish certain tasks, we wanted to create a platform that would allow them to find help locally. We also recognize that many younger members of the community might be more knowledgeable or capable of helping out. These younger members may be looking to make some extra money, or just want to help out their fellow neighbours. We present to you.... **Locall!** ## 🏘 What it does Locall helps members of a neighbourhood get in contact and share any tasks that they may need help with. Users can browse through these tasks, and offer to help their neighbours. Those who post the tasks can also choose to offer payment for these services. It's hard to trust just anyone to help you out with daily tasks, but you can always count on your neighbours! For example, let's say an elderly woman can't shovel her driveway today. Instead of calling a big snow plowing company, she can post a service request on Locall, and someone in her local community can reach out and help out! By using Locall, she's saving money on fees that the big companies charge, while also helping someone else in the community make a bit of extra money. Plenty of teenagers are looking to make some money whenever they can, and we provide a platform for them to get in touch with their neighbours. ## 🛠 How we built it We first prototyped our app design using Figma, and then moved on to using Flutter for actual implementation. Learning Flutter from scratch was a challenge, as we had to read through lots of documentation. We also stored and retrieved data from Firebase. ## 🦒 What we learned Learning a new language can be very tiring, but also very rewarding! This weekend, we learned how to use Flutter to build an iOS app. We're proud that we managed to implement some special features into our app! ## 📱 What's next for Locall * We would want to train a TensorFlow model to better recommend services to users, as well as improve the user experience * Implementing chat and payment directly in the app would be helpful to improve requests and offers of services
**In times of disaster, there is an outpouring of desire to help from the public. We built a platform which connects people who want to help with people in need.** ## Inspiration Natural disasters are an increasingly pertinent global issue which our team is quite concerned with. So when we encountered the IBM challenge relating to this topic, we took interest and further contemplated how we could create a phone application that would directly help with disaster relief. ## What it does **Stronger Together** connects people in need of disaster relief with local community members willing to volunteer their time and/or resources. Such resources include but are not limited to shelter, water, medicine, clothing, and hygiene products. People in need may input their information and what they need, and volunteers may then use the app to find people in need of what they can provide. For example, someone whose home is affected by flooding due to Hurricane Florence in North Carolina can input their name, email, and phone number in a request to find shelter so that this need is discoverable by any volunteers able to offer shelter. Such a volunteer may then contact the person in need through call, text, or email to work out the logistics of getting to the volunteer's home to receive shelter. ## How we built it We used Android Studio to build the Android app. We deployed an Azure server to handle our backend (Python). We used the Google Maps API in our app. We are currently working on using Twilio for communication and the IBM Watson API to prioritize help requests in a community. ## Challenges we ran into Integrating the Google Maps API into our app proved to be a great challenge for us. We also realized that our original idea of including blood donation as one of the resources would require some correspondence with an organization such as the Red Cross in order to ensure the donation would be legal. Thus, we decided to add blood donation to our future aspirations for this project due to the time constraint of the hackathon. ## Accomplishments that we're proud of We are happy with our design and with the simplicity of our app. We learned a great deal about writing the server side of an app and designing an Android app using Java (and the Google Maps API) during the past 24 hours. We had huge aspirations and eventually we created an app that can potentially save people's lives. ## What we learned We learned how to integrate the Google Maps API into our app. We learned how to deploy a server with Microsoft Azure. We also learned how to use Figma to prototype designs. ## What's next for Stronger Together We have high hopes for the future of this app. The goal is to add an AI-based notification system which alerts people who live in a predicted disaster area. We aim to decrease the impact of the disaster by alerting volunteers and locals in advance. We also may include some more resources such as blood donations.
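To make the matching idea concrete, here is a small illustrative sketch (not the team's actual Azure backend): filter open requests by the resource a volunteer can provide and sort them by great-circle distance. The request fields are assumptions made for the example.

```python
# Illustrative sketch of the matching described above: a volunteer offering a
# resource sees nearby open requests for that resource, closest first.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def matching_requests(requests, resource, volunteer_lat, volunteer_lon):
    """Return open requests for `resource`, sorted by distance from the volunteer."""
    open_needs = [r for r in requests if r["need"] == resource and not r["fulfilled"]]
    return sorted(
        open_needs,
        key=lambda r: haversine_km(volunteer_lat, volunteer_lon, r["lat"], r["lon"]),
    )

requests = [
    {"name": "A. Smith", "need": "shelter", "lat": 35.22, "lon": -80.84, "fulfilled": False},
    {"name": "B. Jones", "need": "water", "lat": 35.05, "lon": -78.88, "fulfilled": False},
]
print(matching_requests(requests, "shelter", 35.0, -80.0)[0]["name"])
```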
## Inspiration Life’s swift pace often causes us to overlook meaningful experiences, leading to nostalgia and a sense of lost connection with friends. Social media exacerbates this by showcasing others' highlight reels, which we envy, prompting a cycle of negative comparison and distraction through platforms like TikTok. This digital interaction cultivates social isolation and self-doubt. We propose using personal data positively, steering attention from addictive social feeds to appreciating our life’s journey. ## What it does Recall is a wellness app that helps people remember more of their life by letting people retain the smallest details of their life, see a big picture of it, and have a conversation with their past. ## How we built it We kicked off by defining Recall’s key features and objectives, followed by UI design in Figma. Our plan for implementation was to use Flutter/Dart for app building and host the data locally. Google Cloud Platform and Google Maps SDK would have been used for mapping, Python for photo metadata, OpenAI API for data embedding and clustering, and Milvus for the Recall Bot chatbot function. ## Challenges we ran into It was the first hackathon for 3/4 of us. There were roadblocks at every step of the way, but we worked hard as a team to adapt quickly and continue pushing through. Flutter/Dart was new territory for most of us, leading to a one-day project delay due to setup issues and unreliable internet. Figuring out the tech stack was difficult as our skill sets were essentially incompatible. Additionally, we had miscommunication at many steps in the implementation process. Lastly, we tackled our problems by dividing and conquering, but that was a misguided approach as we had difficulties integrating everything together since there was no centralized codebase. ## Accomplishments that we're proud of We were able to make a few screens of the app on Flutter, having very little experience with it, so we are so freaking proud. We are proud of the team spirit, motivation, and positivity we all contributed while tackling so many roadblocks along our way. We had a great idea and an awesome UI! ## What we learned Our entire project was a big learning experience and taught us invaluable skills. Key takeaways across the entire team: 1. Matching the idea to the skillset of the team is vital to success 2. The importance of a solid implementation plan from the start 3. Working on different features in isolation = bound for failure 4. Maintain composure during tough times to adapt quickly and find alternative solutions 5. Beware of activities that can distract the team along the way (like talking during quiet hack time) 6. Take advantage of the mentorship and resources ## What's next for Recall We’ll take the skills and insights we’ve gained from the Hackathon to move the app forward. We believe there is a demand for an app like this, so the next step is for us to validate the market and gather the resources and skills to build out an MVP that we can test with users. :)))))) If you'd like to be an early adopter and feedback provider please reach out to Casey ([[email protected]](mailto:[email protected])).
winning
## Inspiration I've been really interested in Neural Networks and I wanted to step up my game by learning deep learning (no pun intended). I built a Recurrent Neural Network that learns and predicts words based on its training. I chose Trump because I expected to get a funny outcome, but it seems that my AI is obsessed with "controlling this country" (seriously)... so it turned out to be a little more political than expected. I wanted a challenge, to make something fun. I used 2 programming languages, did the app UI, implemented speech-to-text, NLP, did the backend, trained 2 models of ~10h each, learned a lot and had a lot of fun. ## What it does Using the Nuance API, I am able to get user input as voice, and in real time the iOS app gives feedback to the user. The user then submits their text (which they can edit in case Nuance didn't recognize everything properly) and I send it to my server (which is physically located in my home, at this very moment :D). On my server, I built a RESTful API in Python using Flask. There, I get the request sent from iOS, take the whole sentences given by Nuance, then I use the Lexalytics API (Semantria) to get the sense of the sentence, to finally feed my Trump AI model which is going to generate a sentence automatically. ## How I built it First I built the RNN which works with characters instead of words. Instead of finding probabilities of words, I do it with characters. I used the Keras library. After, I trained two models with two different parameters and I chose the best one (unfortunately, even the best one has a loss of 0.97; remember, the closer to 0, the better). The RNN didn't have enough time and resources to learn punctuation properly and grammar sometimes is... Trumpish. Later I built the API in Python using Flask, where I implemented Semantria. I exposed an endpoint to receive a string so magic could happen. With Semantria I get the sense of the string (which is submitted by the user as VOICE), and I Trumpify it. The iOS app is built on Swift and I use two pods: Nuance SpeechKit and Alamofire (to make my requests cleaner). The UI is beautiful (trust me, the best) and I interact with the user with text, sound and change of colours. ## Challenges I ran into Building the RNN was really difficult. I didn't know where to start but fortunately it went well. I am happy with the results, and I am definitely going to spend more time improving it. Also, the Semantria API does not have great documentation. I had to figure out how everything worked by tracking sample code, and THAT made it a challenge. I can't recall anything in the sample code that was still valid according to the current documentation. I still thank all the Lexalytics people for their assistance over Slack... you guys are great. ## Accomplishments that I'm proud of Pretty much everything. Learning deep learning and building a Recurrent Neural Network is... something to be proud of. I learned Python over the past week, and today I built two major things with it (an API and an RNN). I am proud of my Swift skills but I consider myself an iOS developer so it is nothing new. Mmmm... I'm also proud of the way I implemented all those services, which are complex (RNN, NLP, voice-to-speech, the API, etc), alone. I solo'd this hackathon and I am also proud of that. I think that today I achieved a new level of knowledge and it feels great. My main goal was to step up my game, and it indeed happened. ## What I learned Flask, Python, the Nuance API, the Semantria API...
I even learned some Photoshop while working on the UI. I learned how to build an API (I had no idea before)... I learned about RNNs... Wow. A lot. ## What's next for Trumpify Since I already have a server at home, I can host the Trumpify backend without much cost. I'd say that the next step for Trumpify would be improving the Recurrent Neural Network so Trumpify gets smarter and says less nonsense. I'll also publish this app in the App Store, so people can interact with Trump, but in a funny way. If someone is interested I could also feature this app somewhere... who knows. I may even use this technology to create things with a more serious approach, instead of Trump.
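For readers curious what the character-level setup roughly looks like, here is a compressed Keras sketch in the spirit of the model described above; the corpus file, window size, and layer sizes are placeholders, not the values used for the real ~10-hour training runs.

```python
# Compressed sketch of a character-level RNN: predict the next character from a
# fixed-length window of previous characters, one-hot encoded.
import numpy as np
from keras.layers import LSTM, Dense
from keras.models import Sequential

text = open("trump_corpus.txt", encoding="utf-8").read().lower()   # placeholder corpus
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}

seq_len, step = 40, 3
sentences, next_chars = [], []
for i in range(0, len(text) - seq_len, step):
    sentences.append(text[i:i + seq_len])
    next_chars.append(text[i + seq_len])

X = np.zeros((len(sentences), seq_len, len(chars)), dtype=bool)
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
    for t, ch in enumerate(sentence):
        X[i, t, char_to_idx[ch]] = True
    y[i, char_to_idx[next_chars[i]]] = True

model = Sequential([
    LSTM(128, input_shape=(seq_len, len(chars))),
    Dense(len(chars), activation="softmax"),
])
model.compile(loss="categorical_crossentropy", optimizer="adam")
model.fit(X, y, batch_size=128, epochs=20)
```

Generation then samples one character at a time from the softmax output and slides the window forward.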
## Inspiration When reading news articles, we're aware that the writer has bias that affects the way they build their narrative. Throughout the article, we're constantly left wondering—"What did that article not tell me? What convenient facts were left out?" ## What it does *News Report* collects news articles by topic from over 70 news sources and uses natural language processing (NLP) to determine the common truth among them. The user is first presented with an AI-generated summary of approximately 15 articles on the same event or subject. The references and original articles are at your fingertips as well! ## How we built it First, we find the top 10 trending topics in the news. Then our spider crawls over 70 news sites to get their reporting on each topic specifically. Once we have our articles collected, our AI algorithms compare what is said in each article using a KL-Sum algorithm, aggregating what is reported from all outlets to form a summary of these resources. The summary is about 5 sentences long—digested by the user with ease, with quick access to the resources that were used to create it! ## Challenges we ran into We were really nervous about taking on an NLP problem and the complexity of creating an app that made complex articles simple to understand. We had to work with technologies that we hadn't worked with before, and ran into some challenges with technologies we were already familiar with. Trying to define what makes a perspective "reasonable" versus "biased" versus "false/fake news" proved to be an extremely difficult task. We also had to learn to better adapt our mobile interface for an application whose content varied so drastically in size and availability. ## Accomplishments that we're proud of We're so proud we were able to stretch ourselves by building a fully functional MVP with both a backend and an iOS mobile client. On top of that we were able to submit our app to the App Store, get several well-deserved hours of sleep, and ultimately build a project with a large impact. ## What we learned We learned a lot! On the backend, one of us got to look into NLP for the first time and learned about several summarization algorithms. While building the front end, we focused on iteration and got to learn more about how UIScrollViews work and interact with other UI components. We also got to work with several new libraries and APIs that we hadn't even heard of before. It was definitely an amazing learning experience! ## What's next for News Report We'd love to start working on sentiment analysis on the headlines of articles to predict how distributed the perspectives are. After that we also want to be able to analyze and remove fake news sources from our spider's crawl.
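For the curious, here is a from-scratch sketch of the KL-Sum idea: greedily add the sentence that keeps the summary's word distribution closest (in KL divergence) to the distribution of the pooled articles. This is a simplified stand-in for the production pipeline, with deliberately naive tokenization.

```python
# Greedy KL-Sum: pick sentences whose combined unigram distribution stays closest
# to the distribution of the full pool of articles on a topic.
import math
import re
from collections import Counter

def word_dist(words):
    counts = Counter(words)
    total = sum(counts.values()) or 1
    return {w: c / total for w, c in counts.items()}

def kl_divergence(p, q, eps=1e-9):
    # Smooth q so words missing from the summary don't blow up the divergence.
    return sum(pw * math.log(pw / q.get(w, eps)) for w, pw in p.items())

def kl_sum(text, n_sentences=5):
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    doc_dist = word_dist(re.findall(r"[a-z']+", text.lower()))
    summary, summary_words = [], []
    while sentences and len(summary) < n_sentences:
        def score(s):
            words = summary_words + re.findall(r"[a-z']+", s.lower())
            return kl_divergence(doc_dist, word_dist(words))
        best = min(sentences, key=score)
        summary.append(best)
        summary_words += re.findall(r"[a-z']+", best.lower())
        sentences.remove(best)
    return " ".join(summary)

print(kl_sum(open("articles_on_topic.txt").read()))   # placeholder input file
```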
## Inspiration <https://www.youtube.com/watch?v=lxuOxQzDN3Y> Robbie's story stuck out to me at the endless limitations of technology. He was diagnosed with muscular dystrophy which prevented him from having full control of his arms and legs. He was gifted with a Google Home that turned his home into a voice-controlled machine. We wanted to take this a step further and make computers more accessible for people such as Robbie. ## What it does We use a Google Cloud based API that helps us detect words and phrases captured from the microphone input. We then convert those phrases into commands for the computer to execute. Since the Python script is run in the terminal, it can be used across the computer and all its applications. ## How I built it The first (and hardest) step was figuring out how to leverage Google's API to our advantage. We knew it was able to detect words from an audio file but there was more to this project than that. We started piecing libraries together to get access to the microphone, file system, keyboard and mouse events, cursor x,y coordinates, and so much more. We built a large library of ~30 functions that could be used to control almost anything on the computer. ## Challenges I ran into Configuring the many libraries took a lot of time, especially with compatibility issues between macOS and Windows, Python 2 and 3, etc. Many of our challenges were solved by either thinking of a better solution or asking people on forums like Stack Overflow. For example, we wanted to change the volume of the computer using the fn+arrow key shortcut, but Python is not allowed to access that key. ## Accomplishments that I'm proud of We are proud of the fact that we built an alpha version of an application we intend to keep developing, because we believe in its real-world applications. From a technical perspective, I was also proud of the fact that we were able to successfully use a Google Cloud API. ## What I learned We learned a lot about how the machine interacts with different events in the computer and the time dependencies involved. We also learned about the ease of use of a Google API, which encourages us to use them more, and to encourage others to do so, too. We also learned about the different nuances of speech detection, like how to tell the API to pick the word "one" over "won" in certain contexts, or how to change a "one" to a "1", or how to reduce ambient noise. ## What's next for Speech Computer Control At the moment we are manually running this script through the command line but ideally we would want a more user-friendly experience (GUI). Additionally, we had developed a Chrome extension that numbers off each link on a page after a Google or YouTube search query, so that we would be able to say something like "jump to link 4". We were unable to get the web-to-Python code just right, but we plan on implementing it in the near future.
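A hedged sketch of the command loop described above: transcribe a short recording with Google Cloud Speech-to-Text, then map recognized phrases onto keyboard and mouse actions. The phrase-to-action table here is a small illustration, not the team's ~30-function library.

```python
# Transcribe an audio clip with Google Cloud Speech-to-Text and dispatch the
# recognized phrase to a desktop action via pyautogui.
import pyautogui
from google.cloud import speech

def transcribe(wav_path):
    client = speech.SpeechClient()
    with open(wav_path, "rb") as f:
        audio = speech.RecognitionAudio(content=f.read())
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    response = client.recognize(config=config, audio=audio)
    return " ".join(r.alternatives[0].transcript for r in response.results).lower()

COMMANDS = {
    "scroll down": lambda: pyautogui.scroll(-300),
    "scroll up": lambda: pyautogui.scroll(300),
    "click": pyautogui.click,
    "press enter": lambda: pyautogui.press("enter"),
}

def run_command(phrase):
    for trigger, action in COMMANDS.items():
        if trigger in phrase:
            action()
            return True
    return False

run_command(transcribe("command.wav"))   # "command.wav" is a placeholder recording
```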
partial
## Inspiration Databases are wonderfully engineered for specific tasks. Every time someone wants to add a different type of data or use their data with a different access pattern, they need to either use a sub-optimal choice of database (one that they already support), or support a totally new database. The former damages performance, while the latter is extremely costly in both complexity and engineering effort. For example, Druid on 100GB of time series data is about 100x faster than MySQL, but it's slower on other types of data. ## What it does We set up a simple database auto-selector that makes the decision of whether to use Druid or MySQL. We set up a metaschema for data -- thus we can accept queries and then direct them to the database containing the relevant data. Our core technical contributions are a tool that assigns data to the appropriate database based on the input data and a high-level schema for incoming data. We demonstrated our approach by building a web app, StockSolver, that shows these trade-offs and the advantages of using 1DB for database selection. It has both time-series data and text data. Using our metaschema, 1DB can easily mix and match data between Druid and MongoDB. 1DB finds that the time-series data should be stored on Druid, while MongoDB should store the text data. We show the results of making these decisions in our demo! ## How we built it We created a web app for NASDAQ financial data. We used React and Node.js to build our website. We set up MongoDB on Microsoft's Cosmos DB and Druid on the Google Cloud Platform. ## Challenges we ran into It was challenging just to set up each of these databases and load large amounts of data onto them. It was even more challenging to try to load data and build queries that the database was not necessarily made for in order to make clear comparisons between the performance of the databases in different use cases. Building the queries to back the metaschema was also quite challenging. ## Accomplishments that we're proud of Building an end-to-end system from databases to 1DB to our data visualizations. ## What we learned We collectively had relatively little database experience and thus we learned how to better work with different databases. ## What's next for 1DB: One Database to rule them all We would like to support more databases and to experiment with using more complex heuristics to select among databases. An extension that follows naturally from our work is to have 1DB track query usage statistics and, over time, make the decision to select among supported databases. The extra level of indirection makes these switches natural and can be potentially automated.
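A toy version of the auto-selector's decision step might look like the following; the heuristic and field names are assumptions made for illustration, not the exact rules used by 1DB.

```python
# Route time-series-shaped data to Druid and everything else to MongoDB,
# based on a simple inspection of the incoming dataset's metaschema.
from dataclasses import dataclass

@dataclass
class DatasetSchema:
    name: str
    fields: dict        # field name -> type, e.g. {"timestamp": "datetime", "price": "float"}
    append_only: bool   # time-series workloads are typically append-only

def pick_backend(schema: DatasetSchema) -> str:
    has_time_column = any(t == "datetime" for t in schema.fields.values())
    mostly_numeric = sum(t in ("int", "float") for t in schema.fields.values()) >= len(schema.fields) / 2
    if has_time_column and mostly_numeric and schema.append_only:
        return "druid"      # fast rollups/aggregations over time
    return "mongodb"        # flexible documents for text and irregular records

stocks = DatasetSchema("nasdaq_prices", {"timestamp": "datetime", "price": "float", "volume": "int"}, True)
filings = DatasetSchema("press_releases", {"ticker": "str", "body": "str"}, False)
print(pick_backend(stocks), pick_backend(filings))   # druid mongodb
```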
## Inspiration Data analytics can be **extremely** time-consuming. We strove to create a tool utilizing modern AI technology to generate analyses such as trend recognition on user-uploaded datasets. The inspiration behind our product stemmed from the growing complexity and volume of data in today's digital age. As businesses and organizations grapple with increasingly massive datasets, the need for efficient, accurate, and rapid data analysis became evident. We even saw this in the work of one of our sponsors, CapitalOne, which has volumes of financial transaction data that is very difficult to parse manually, or even programmatically. We recognized the frustration many professionals faced when dealing with cumbersome manual data analysis processes. By combining **advanced machine learning algorithms** with **user-friendly design**, we aimed to empower users from various domains to effortlessly extract valuable insights from their data. ## What it does On our website, a user can upload their data, generally in the form of a .csv file, which will then be sent to our backend processes. These backend processes utilize Docker and MLBot to train a LLM which performs the proper data analyses. ## How we built it The front-end was very simple. We created the platform using Next.js and React.js and hosted it on Vercel. The back-end was created using Python, where we employed technologies such as Docker and MLBot to perform data analyses and return charts, which were then rendered on the front-end using ApexCharts.js. ## Challenges we ran into * It was one of our first times working in real time with multiple people on the same project. This advanced our understanding of how Git's features work. * There was difficulty getting the Docker server to be publicly available to our front-end, since we had our server locally hosted on the back-end. * Even once it was publicly available, it was difficult to figure out how to actually connect it to the front-end. ## Accomplishments that we're proud of * We were able to create a full-fledged, functional product within the allotted time we were given. * We utilized our knowledge of how APIs work to incorporate multiple of them into our project. * We worked positively as a team even though we had not met each other before. ## What we learned * Learning how to incorporate multiple APIs into one product with Next. * Learned a new tech stack * Learned how to work simultaneously on the same product with multiple people. ## What's next for DataDaddy ### Short Term * Add broader applicability to different types of datasets and statistical analyses. * Add more compatibility with SQL/NoSQL commands from Natural Language. * Attend more hackathons :) ### Long Term * Minimize the amount of work workers need to do for their data analyses, almost creating a pipeline from data to results. * Have the product be able to interpret what type of data it has (e.g. financial, physical, etc.) to perform the most appropriate analyses.
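A rough sketch of the kind of backend step described above: accept an uploaded CSV, compute summary and per-column series with pandas, and return JSON that a charting library (ApexCharts on our front-end) can plot. The endpoint shape is an assumption, not the project's exact API.

```python
# Minimal analysis endpoint: upload a CSV, get back summary statistics and
# chart-ready series for every numeric column.
import pandas as pd
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/analyze", methods=["POST"])
def analyze():
    df = pd.read_csv(request.files["dataset"])
    numeric = df.select_dtypes("number")
    return jsonify({
        "columns": list(df.columns),
        "row_count": len(df),
        "summary": numeric.describe().round(3).to_dict(),
        # One series per numeric column, ready to hand to a line chart.
        "series": [{"name": col, "data": numeric[col].tolist()} for col in numeric.columns],
    })

if __name__ == "__main__":
    app.run(port=5000)
```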
## Inspiration The inspiration for Nova came from the overwhelming volume of emails and tasks that professionals face daily. We aimed to create a solution that simplifies task management and reduces cognitive load, allowing users to focus on what truly matters. ## What it does Nova is an automated email assistant that intelligently processes incoming emails, identifies actionable items, and seamlessly adds them to your calendar. It also sends timely text reminders, ensuring you stay organized and on top of your commitments without the hassle of manual tracking. ## How we built it We built Nova using natural language processing algorithms to analyze email content and extract relevant tasks. By integrating with calendar APIs and SMS services, we created a smooth workflow that automates task management and communication, making it easy for users to manage their schedules. ## Challenges we ran into One of the main challenges was accurately interpreting the context of emails to distinguish between urgent tasks and general information. Additionally, ensuring seamless integration with various calendar platforms and messaging services required extensive testing and refinement. ## Accomplishments that we're proud of We are proud of developing a fully functional prototype of Nova that effectively reduces users' daily load by automating task management. Initial user feedback has been overwhelmingly positive, highlighting the assistant's ability to streamline workflows and enhance productivity. ## What we learned Throughout the development process, we learned the importance of user feedback in refining our algorithms and improving the overall user experience. We also gained insights into the complexities of integrating multiple services to create a cohesive solution. ## What's next for Nova Moving forward, we plan to enhance Nova's capabilities by incorporating machine learning to improve task recognition and prioritization. Our goal is to expand its features and ultimately launch it as a comprehensive productivity tool that transforms how users manage their daily tasks.
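A simplified sketch of the extraction step described above: scan an email for actionable lines, pull a due date with dateutil, and build a calendar-event dict plus a reminder string. The keyword list and event format are assumptions made for illustration.

```python
# Pick out actionable lines from an email body and attach a parsed due date.
from dateutil import parser as dateparser

ACTION_WORDS = ("please", "due", "deadline", "submit", "schedule", "review", "send")

def extract_tasks(email_body):
    tasks = []
    for line in email_body.splitlines():
        lowered = line.lower()
        if any(w in lowered for w in ACTION_WORDS):
            try:
                due = dateparser.parse(line, fuzzy=True)
            except (ValueError, OverflowError):
                due = None
            tasks.append({"summary": line.strip(), "due": due})
    return tasks

email = """Hi,
Please submit the quarterly report by October 14, 2024.
Lunch menu attached for fun."""

for task in extract_tasks(email):
    event = {"summary": task["summary"], "start": task["due"].isoformat() if task["due"] else None}
    reminder = f"Reminder: {task['summary']}"
    print(event, reminder, sep="\n")
```

The real assistant replaces this keyword heuristic with NLP-based classification, but the downstream event and reminder shapes stay the same.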
partial
## Inspiration The transition from high school to university is indeed a challenging phase wherein a student not only navigates through the curriculum but at the same time he/she/they are figuring out who they are as individuals, their passion etc. For students coming from culturally diverse backgrounds, things become even more challenging. There are many school factors that affect the success of culturally diverse students the school's atmosphere and overall attitudes toward diversity, involvement of the community, and culturally responsive curriculum, to name a few. To help those coming from culturally diverse backgrounds and help the transition process be a smooth and easy one, we bring to you Multicultural Matrix, a platform celebrating cultural diversity and providing a safe space with tools and equipment to help students sail through the college smoothly making the max use of opportunities out there. ## What it does The platform provides tools and resources required for students to navigate through the university. The platform focuses on three main aspects- 1)Accommodation Many students look for nearby affordable accommodations due to personal and financial constraints wherein the accomodations welcome diversity and different communities. To help with that the accommodation section plans to help students find accomodations that matches their requirements. 2) Legal Documentations Handling paperwork can be difficult for international students coming from across the globe and the whole documentation process gets 10x more complicated. The website also caters to handling paperwork by providing all the resources and information on one platform so that the student doesn't have to hop here and there looking for help and guidance. 3) Food and Culture Adjusting to a new University and a new place can be very challenging as you're thrown completely outside of your comfort zone and it's often that you crave the familiarity and comfort of your community, culture and food. To help students find "Home Away From Home" we have also added a food section that shows the cheapest and best-rated restaurants nearby your location based on the cuisine and food you are craving. ## How we built it The front end was built using HTML, CSS and JavaScript We also used Twilio, BotDoc, Auth0 and Coil. ## Challenges we ran into The CSS adjustment and integration of the tools and tech with the website was a bit challenging but we were able to find our way through the challenges as a team ## Accomplishments that we're proud of Working together as a team and coming up with a solution that solves real-world problems is something we're proud of at the same time celebrating diversity. ## What we learned We learned how to use parallax scrolling on the website and learned how to work with Twilio, BotDoc etc. ## What's next for Multicultural Matrix We do plan on making the different aspects of the website functional and using the MERN stack to implement the backend. We also plan on providing resources such as internship opportunities and scholarships for students coming from culturally diverse backgrounds. ## Domain: Domain.com: multicultural-matrix.tech
## My Samy helps: **Young marginalized students** * Anonymous process: Ability to ask any questions anonymously without feeling judged * Get relevant resources: More efficient process for them to ask for help and receive immediately relevant information and resources * Great design and user interface: Easy to use platform with kid friendly interface * Tailored experience: Computer model is trained to understand their vocabulary * Accessible anytime: Replaces the need to schedule an appointment and meet someone in person which can be intimidating. App is readily available at any time, any place. * Free to use platform **Schools** * Allows them to support every student simultaneously * Provides a convenient process as the recommendation system is automatized * Allows them to receive a general report that highlights the most common issues students experience **Local businesses** * Gives them an opportunity to support their community in impactful ways * Allows them to advertise their services Business Plan: <https://drive.google.com/file/d/1JII4UGR2qWOKVjF3txIEqfLUVgaWAY_h/view?usp=sharing>
## Inspiration We were inspired by the fact that **diversity in disability is often overlooked** - individuals who are hard-of-hearing or deaf and use **American Sign Language** do not have many tools that support them in learning their language. Because of the visual nature of ASL, it's difficult to translate between it and written languages, so many forms of language software, whether it is for education or translation, do not support ASL. We wanted to provide a way for ASL-speakers to be supported in learning and speaking their language. Additionally, we were inspired by recent news stories about fake ASL interpreters - individuals who defrauded companies and even government agencies to be hired as ASL interpreters, only to be later revealed as frauds. Rather than accurately translate spoken English, they 'signed' random symbols that prevented the hard-of-hearing community from being able to access crucial information. We realized that it was too easy for individuals to claim their competence in ASL without actually being verified. All of this inspired the idea of EasyASL - a web app that helps you learn ASL vocabulary, translate between spoken English and ASL, and get certified in ASL. ## What it does EasyASL provides three key functionalities: learning, certifying, and translating. **Learning:** We created an ASL library - individuals who are learning ASL can type in the vocabulary word they want to learn to see a series of images or a GIF demonstrating the motions required to sign the word. Current ASL dictionaries lack this dynamic ability, so our platform lowers the barriers in learning ASL, allowing more members from both the hard-of-hearing community and the general population to improve their skills. **Certifying:** Individuals can get their mastery of ASL certified by taking a test on EasyASL. Once they start the test, a random word will appear on the screen and the individual must sign the word in ASL within 5 seconds. Their movements are captured by their webcam, and these images are run through OpenAI's API to check what they signed. If the user is able to sign a majority of the words correctly, they will be issued a unique certificate ID that can certify their mastery of ASL. This certificate can be verified by prospective employers, helping them choose trustworthy candidates. **Translating:** EasyASL supports three forms of translation: translating from spoken English to text, translating from ASL to spoken English, and translating in both directions. EasyASL aims to make conversations between ASL-speakers and English-speakers more fluid and natural. ## How we built it EasyASL was built primarily with **TypeScript and Next.js**. We captured images using the user's webcam, then processed the images to reduce the file size while maintaining quality. Then, we ran the images through **Picsart's API** to filter background clutter for easier image recognition and to host the images in temporary storage. These were formatted to be accessible to **OpenAI's API**, which was trained to recognize the ASL signs and identify the word being signed. This was used in both our certification stream, where the user's ASL sign was compared against the prompt they were given, and in the translation stream, where ASL phrases were written as a transcript then read aloud in real time. We also used **Google's Web Speech API** in the translation stream, which converted English to written text. Finally, the education stream's dictionary was built using TypeScript and a directory of open-source web images.
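A hedged Python sketch (the project itself used TypeScript) of the recognition call: hand the hosted frame URLs to a vision-capable OpenAI model, ask which ASL word is being signed, and compare against the prompt. The model name and prompt wording are assumptions, not the project's exact configuration.

```python
# Send webcam frame URLs to an OpenAI vision model and check the guess against
# the prompted word, as in the certification stream described above.
from openai import OpenAI

client = OpenAI()

def recognize_sign(frame_urls, prompt_word):
    content = [{
        "type": "text",
        "text": "These frames show one American Sign Language sign. "
                "Reply with the single English word being signed.",
    }]
    content += [{"type": "image_url", "image_url": {"url": url}} for url in frame_urls]
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # assumed model choice
        messages=[{"role": "user", "content": content}],
    )
    guess = response.choices[0].message.content.strip().lower()
    return guess, guess == prompt_word.lower()

guess, correct = recognize_sign(
    ["https://example.com/frame1.jpg", "https://example.com/frame2.jpg"],  # placeholder URLs
    "banana",
)
print(guess, correct)
```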
## Challenges we ran into We faced many challenges while working on EasyASL, but we were able to persist through them to come to our finished product. One of our biggest challenges was working with OpenAI's API: we only had a set number of tokens, which were used each time we ran the program, meaning we couldn't test the program too many times. Also, many of our team members were using TypeScript and Next.js for the first time - though there was a bit of a learning curve, we found that its similarities with JavaScript helped us adapt to the new language. Finally, we were originally converting our images to a UTF-8 string, but got strings that were over 500,000 characters long, making them difficult to store. We were able to find a workaround by keeping the images as URLs and passing these URLs directly into our functions instead. ## Accomplishments that we're proud of We were very proud to be able to integrate APIs into our project. We learned how to use them in different languages, including TypeScript. By integrating various APIs, we were able to streamline processes, improve functionality, and deliver a more dynamic user experience. Additionally, we were able to see how tools like AI and text-to-speech could have real-world applications. ## What we learned We learned a lot about using Git to work collaboratively and resolve conflicts like separate branches or merge conflicts. We also learned to use Next.js to expand what we could do beyond JavaScript and HTML/CSS. Finally, we learned to use APIs like the OpenAI API and the Google Web Speech API. ## What's next for EasyASL We'd like to continue developing EasyASL and potentially replace the OpenAI framework with a neural network model that we would train ourselves. Currently, processing inputs via the API hits token limits quickly due to the character count of the Base64-converted images. This results in a noticeable delay between image capture and model output. By implementing our own model, we hope to speed this process up and recreate natural language flow more readily. We'd also like to continue to improve the UI/UX experience by updating our web app interface.
winning
# FarSight Safety features: over-extension indicator, struggle detector, person detector, fall detector, email notification, and GPS location. Made with an Arduino as a sensor hub. The Arduino sends alerts as it detects them over serial communication to the Raspberry Pi. The Pi acts as our main microcomputer and interprets the information. The Pi also takes images and uses OpenCV to locate the person using the walker. When a potential issue occurs, all family members on the email list receive an email notifying them of the issue, the walker's current GPS location, and the last image taken.
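A hedged sketch of the Raspberry Pi side: read alert lines from the Arduino over serial and email the family list with the last image and GPS fix. The port name, addresses, SMTP server, and the "TYPE,lat,lon" alert format are placeholders for illustration, not the values used on the real walker.

```python
# Read Arduino alerts over serial and notify the family email list.
import smtplib
from email.message import EmailMessage

import serial  # pyserial

FAMILY = ["family.member@example.com"]          # placeholder notification list

def send_alert(alert_type, gps, image_path):
    msg = EmailMessage()
    msg["Subject"] = f"FarSight alert: {alert_type}"
    msg["From"] = "farsight.walker@example.com"  # placeholder sender
    msg["To"] = ", ".join(FAMILY)
    msg.set_content(f"Potential issue detected: {alert_type}\nLast GPS fix: {gps}")
    with open(image_path, "rb") as f:
        msg.add_attachment(f.read(), maintype="image", subtype="jpeg", filename="last_frame.jpg")
    with smtplib.SMTP("smtp.example.com", 587) as smtp:   # placeholder SMTP server
        smtp.starttls()
        smtp.send_message(msg)

arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)  # placeholder port
while True:
    line = arduino.readline().decode(errors="ignore").strip()   # e.g. "FALL,43.26,-79.92"
    if line:
        alert_type, lat, lon = line.split(",")
        send_alert(alert_type, f"{lat}, {lon}", "last_frame.jpg")
```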
## Inspiration A twist on the classic game '**WikiRacer**', but better! Players play Wikiracer by navigating from one Wikipedia page to another with the fewest clicks. WikiChess adds layers of turn-based strategy, a dual-win condition, and semantic similarity guesswork that introduces varying playstyles/adaptability in a way that is similar to chess. It introduces **strategic** elements where players can choose to play **defensively or offensively**—misleading opponents or outsmarting them. **Victory can be achieved through both extensive general knowledge and understanding your opponent's tactics.** ## How to Play **Setup**: Click your chess piece to reveal your WikiWord—a secret word only you should know. Remember it well! **Game Play**: Each turn, you can choose to either **PLAY** or **GUESS**. ### **PLAY Mode**: * You will start on a randomly selected Wikipedia page. * Your goal is to navigate to your WikiWord's page by clicking hyperlinks. + For example, if your WikiWord is "BANANA," your goal is to reach the "Banana" Wikipedia article. * You can click up to three hyperlinks per turn. * After each click, you'll see a semantic similarity score indicating how close the current page's title is to your WikiWord. * You can view the last ten articles you clicked, along with their semantic scores, by holding the **TAB** key. * Be quick—if you run out of time, your turn is skipped! ### **GUESS Mode**: * Attempt to guess your opponent’s WikiWord. You have three guesses per turn. * Each guess provides a semantic similarity score to guide your future guesses. * Use the article history and semantic scores shown when holding **TAB** to deduce your opponent's target word based on their navigation path. **Example**: If your opponent’s target is "BANANA," they might navigate through articles like "Central America" > "Plantains" > "Tropical Fruit." Pay attention to their clicks and semantic scores to infer their WikiWord. ## Let's talk strategy! **Navigate Wisely in PLAY Mode!** * Your navigation path's semantic similarity indicates how closely related each page's title is to your WikiWord. Use this to your advantage by advancing towards your target without being too predictable. Balance your moves between progress and deception to keep your opponent guessing. **Leverage the Tug-of-War Dynamic**: * Since both players share the same Wikipedia path, the article you end on affects your opponent's starting point in their next PLAY turn. Choose your final article wisely—landing on a less useful page can disrupt your opponent's strategy and force them to consider guessing instead. + However, if you choose a dead end, your opponent may choose to GUESS and skip their PLAY turn—you’ll be forced to keep playing the article you tried to give them! **Semantic Similarity**: **May Not Be the Edge You Think It Is** * Semantic similarity measures how closely related the page's title is to your target WikiWord, not how straightforward it is to navigate to; use this to make strategic moves that might seem less direct semantically, but can be advantageous to navigate through. **To Advance or To Mislead?** * It's tempting to sprint towards your WikiWord, but consider taking detours that maintain a high semantic score but obscure your ultimate destination. This can mislead your opponent and buy you time to plan your next moves. **Adapt to Your Opponent**: * Pay close attention to your opponent's navigation path and semantic scores. 
This information can offer valuable clues about their WikiWord and inform your GUESS strategy. Be ready to shift your tactics if their path becomes more apparent. **Use GUESS Mode Strategically**: * If you're stuck or suspect you know your opponent’s WikiWord, use GUESS mode to gain an advantage. Your guesses provide semantic feedback, helping refine your strategy and closing in on their target. + Choosing GUESS also automatically skips your PLAY turn and forces your opponent to click more links. You can get even more semantic feedback from this, however it may also be risky—the more PLAY moves you give them, the more likely they are to eventually navigate to their own WikiWord. ## How we built it Several technologies and strategies were used to develop WikiChess. First, we used **web scraping** to fetch and clean Wikipedia content while bypassing iframe issues, allowing players to navigate and interact with real-time data from the site. To manage the game's state and progression, we updated game status based on each hyperlink click and used **Flask** for our framework. We incorporated **semantic analysis** using spaCy to calculate **NLP** similarity scores between articles to display to players. The game setup is coded with **Python**, featuring five categories—animals, sports, foods, professions, and sciences—generating two words from the same category to provide a cohesive and engaging experience. Players start from a page outside the common category to add an extra challenge. For the front end, we prioritized a user-friendly and interactive design, focusing on a minimalist aesthetic with **dynamic animations** and many smooth transitions. The front-end tech stack was made up of **HTML/CSS, JS, image generation tools, and Figma.** ## Challenges we ran into One of our biggest challenges was dealing with **iframe access controls**. We faced persistent issues with blocked access, which prevented us from executing any logic beyond simply displaying the Wikipedia content. Despite trying various methods to bypass this limitation, including using proxy servers, the frequent need to check for user victories made it clear that iframes were not a viable solution. This challenge forced us to pivot from our initial plan of handling much of the game logic on the client side using JavaScript. Instead, we had to **rely heavily on backend solutions**, particularly web scraping, to manage and update game state. ## Accomplishments that we're proud of Despite the unexpected shift in our solution approach, which significantly increased the complexity of our backend and required **major design adjustments**, we managed to overcome several challenges. Integrating web scraping with real-time game updates and **ensuring a smooth user experience** were particularly demanding. We tackled these issues by strengthening our backend logic and refining the frontend to enhance user engagement. Despite the text-heavy nature of the Wikipedia content, we aimed to make the interface visually appealing and fun, ensuring a seamless and enjoyable user experience. ## What we learned As beginners in hacking, we are incredibly proud of our perseverance through these challenges. The experience was a great learning opportunity, and we successfully delivered a product that we find both enjoyable and educational.
**Everyone was able to contribute in their own way!** ## What's next for WikiChess Our first priority would be to implement an **online lobby feature** that would allow users to play virtually with their friends rather than only locally in person. We would also like to introduce more word categories, and to have a **more customized metric than semantics for the similarity score**. Ideally, the similarity score would account for the structure of Wikipedia and the strategies that WikiRacers use to reach their target article, rather than just closeness in word meaning. We would also like to introduce **timer-based gameplay** where players would be limited by time instead of turns, to encourage a faster-paced game mode.
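As a small illustration of the semantic scoring mentioned in "How we built it", spaCy's vector similarity between a page title and the secret WikiWord can be computed like this (it requires a model with word vectors, e.g. `python -m spacy download en_core_web_md`):

```python
# Cosine similarity between an article title and the player's secret WikiWord.
import spacy

nlp = spacy.load("en_core_web_md")

def similarity_score(article_title, wiki_word):
    """Score shown to the player after each click."""
    return round(nlp(article_title).similarity(nlp(wiki_word)), 3)

for title in ["Central America", "Plantain", "Tropical fruit", "Banana"]:
    print(title, similarity_score(title, "banana"))
```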
## Inspiration We realized how visually-impaired people find it difficult to perceive the objects coming near to them, whether they are out on the road or inside a building. They encounter potholes and stairs, and things get really hard for them. We decided to tackle the issue of accessibility to support the Government of Canada's initiative to make work and public places completely accessible! ## What it does This is an IoT device designed to be wearable or attachable to any visual aid being used. It uses depth perception to perform obstacle detection, and integrates Google Assistant for outdoor navigation and all the other "smart activities" that the assistant can do. The assistant provides voice directions (which can easily be geared towards using Bluetooth devices) and the sensors help in avoiding obstacles, which increases self-awareness. Another beta feature was to identify moving obstacles and play sounds so the person can recognize those moving objects (e.g., barking sounds for a dog). ## How we built it It's a Raspberry Pi-based device, and we integrated the Google Cloud SDK to be able to use the Vision API and the Assistant and all the other features offered by GCP. We have sensors for depth perception and buzzers to play alert sounds, as well as a camera and microphone. ## Challenges we ran into It was hard for us to set up the Raspberry Pi, having had no background with it. We had to learn how to integrate cloud platforms with embedded systems and understand how microcontrollers work, especially since we are not from engineering backgrounds and two members are high school students. Also, multi-threading was a challenge for us in the embedded architecture. ## Accomplishments that we're proud of After hours of grinding, we were able to get the Raspberry Pi working, as well as implementing depth perception and location tracking using Google Assistant, as well as object recognition. ## What we learned Working with hardware is tough; even though you can see what is happening, it is hard to interface software with hardware. ## What's next for i4Noi We want to explore more ways where i4Noi can help make things more accessible for blind people. Since we already have Google Cloud integration, we could add another feature that plays sounds for living obstacles so special care can be taken, for example producing barking sounds when a dog comes in front, to alert the person. We would also like to implement multi-threading for our two processes and make this device as wearable as possible, so it can make a difference to the lives of the people.
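A hedged sketch of the obstacle-alert loop: measure distance with an HC-SR04-style ultrasonic sensor on the Pi and sound the buzzer when something is close. The pin numbers and the 50 cm threshold are assumptions, not the values used on the actual device.

```python
# Buzz when an obstacle comes within 50 cm, measured by an ultrasonic sensor.
import time
import RPi.GPIO as GPIO

TRIG, ECHO, BUZZER = 23, 24, 18        # BCM pin numbers (placeholders)

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIG, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)
GPIO.setup(BUZZER, GPIO.OUT)

def distance_cm():
    # 10 microsecond trigger pulse, then time the echo.
    GPIO.output(TRIG, True)
    time.sleep(0.00001)
    GPIO.output(TRIG, False)
    start = stop = time.time()
    while GPIO.input(ECHO) == 0:
        start = time.time()
    while GPIO.input(ECHO) == 1:
        stop = time.time()
    return (stop - start) * 34300 / 2   # speed of sound in cm/s, halved for the round trip

try:
    while True:
        GPIO.output(BUZZER, distance_cm() < 50)   # buzz while an obstacle is close
        time.sleep(0.2)
finally:
    GPIO.cleanup()
```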
partial
## Inspiration Star Wars inspired us. ## What it does The BB-8 droid navigates its environment based on its user's gaze. The user simply has to change their gaze and the BB-8 will follow their eye gaze. ## How we built it We built it with an RC car kit placed into a styrofoam sphere. The RC car was created with an Arduino and Raspberry Pi. The Arduino controlled the motor controllers, while the Raspberry Pi acted as a Bluetooth module and sent commands to the Arduino. A separate laptop was used with eye-tracking glasses from AdHawk to send data to the Raspberry Pi. ## Challenges we ran into We ran into issues with the magnets not being strong enough to keep BB-8's head on. We also found that BB-8 was too top-heavy, and the RC car on the inside would sometimes roll over, causing the motors to spin out and stop moving the droid. ## Accomplishments that we're proud of We are proud of being able to properly develop the communication stack with Bluetooth. It was complex connecting the Arduino -> Raspberry Pi -> Computer -> Eye-tracking glasses. ## What we learned We learned about AdHawk's eye tracking systems and developed a mechanical apparatus for BB-8. ## What's next for BB8 Droid We would like to find stronger magnets to attach the head to BB-8.
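A hedged sketch of the gaze-to-drive mapping on the laptop side: turn a horizontal gaze offset into differential wheel speeds and send a one-line command toward the Pi (shown over a plain TCP socket for simplicity; the real project used Bluetooth). `read_gaze_x()` stands in for the AdHawk SDK call, and the "L<left> R<right>" protocol is an assumption made for illustration.

```python
# Map a horizontal gaze offset onto left/right motor speeds and stream commands.
import random
import socket

def read_gaze_x():
    """Placeholder for the eye-tracker reading: -1.0 (far left) .. 1.0 (far right)."""
    return random.uniform(-1.0, 1.0)

def drive_command(gaze_x, base_speed=150):
    """Differential drive: looking right slows the right wheel so BB-8 turns right."""
    left = int(base_speed * (1 + gaze_x))
    right = int(base_speed * (1 - gaze_x))
    return f"L{max(0, min(255, left))} R{max(0, min(255, right))}\n"

pi = socket.create_connection(("raspberrypi.local", 9000))   # placeholder host/port
for _ in range(100):
    pi.sendall(drive_command(read_gaze_x()).encode())
```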
## What it does XEN SPACE is an interactive web-based game that incorporates emotion recognition technology and the Leap Motion controller to create an immersive emotional experience that will pave the way for the future of the gaming industry. ## How we built it We built it using three.js, the Leap Motion Controller for controls, and the Indico Facial Emotion API. We also used Blender, Cinema4D, Adobe Photoshop, and Sketch for all graphical assets.
## Inspiration💡 Syllabi are the most important documents of any course, yet they are skimmed over and forgotten by most students. When we took a look at some of the common challenges with learning, such as finding the right study techniques and supplementary materials, the syllabus provides the details necessary to find these answers. Our goal is to create a smarter learning experience for students by translating their syllabus into actionable insights and automatically preparing their calendars for the new semester. ## Introducing SyllaHomie SyllaHomie is a web application that takes an uploaded syllabus and parses through it to extract dates, evaluations, class times, and learning objectives. Users will first login with their Google account. Afterwards, they are able to manually edit the information in case something needs to be changed. The deadlines are turned into calendar events and will automatically be added to the user's GCal. They will also receive advice on how to best study for the course material and which resources would supplement their learning. ## How We Built It The front-end of the web application uses HTML, CSS, and Javascript. The backend was developed using node.js and calls the Google Cloud API to extract the information. We also used Open AI API to serve as a search for resources/relevant study tips. ## Challenges We Ran Into Figuring out how to parse different syllabi proved to be a challenge because of the different formatting. We needed to relate words with each other to provide full context in order to create events e.g. associating a date with a midterm. We also had to switch from our original plan of using Python to using Node.js to support extracting the information. ## What We Learned This was our first time using all the technologies included in this project, including working with Javascript, APIs, node.js, Google Cloud, and Figma. We had to do a lot of researching and workarounds for the issues we encountered along the way. However, we are satisfied with the result and we can't wait to continue our journey in coding!
winning
## Inspiration There are so many apps out there that track your meals, but they often don't work for people who struggle with access to healthy and affordable food. FoodPrint is creating a meal-tracking app that is designed with people who struggle with food security in mind. By using the app, anonymous data can be collected to help researchers learn about community-specific gaps in food security and nutrition. By taking a picture of your food, FoodPrint assesses its nutritional content and tells you what’s in it in real time, as well as easy and affordable ways to improve your nutritional intake. With FoodPrint, people are not just taking care of their own health; collectively, across potentially diverse populations, rich data is being gathered to help better understand and improve public health – something that’s especially useful for highlighting the nutrition challenges of at-risk communities. Every meal you enter is not just a personal learning experience about how to eat better, but it also helps your community address food insecurity. ## What it does FoodPrint is an app that allows users to take a picture of their meal, analyzes its nutritional content, and provides suggestions on how to improve the biggest gaps in nutrition using affordable and accessible ingredients. It is designed with people in food deserts in mind, who have limited access to fresh and affordable ingredients. Through the anonymous collection of these photos, it contributes to data on the types of foods consumed and the largest areas where nutrition is lacking in local communities. The data from this app contributes valuable insights for researchers working to address food insecurity in areas lacking local and community-specific context. ## How we built it 1. User Interface Design: Designed a user-friendly mobile app interface with screens for food scanning, meal calendar viewing and preferences/allergies input. Created a camera interface for users to take pictures of food. 2. User Personal Information: Developed a user profile system within the app. Allowed users to input their dietary preferences and allergies, storing this information for future recommendations. 3. Image Recognition: Used image recognition from OpenAI to extract the ingredients from the food picture. 4. Nutrition Data Processing: Processed and cleaned the extracted ingredients into different kinds of nutrients. Identified and categorized the nutrient types and provided a list of them. 5. Database Integration: Stored and managed food calendar data, user profiles, and recommendation data in a database. 6. Recommendation Engine: Implemented a recommendation engine that factors in user preferences and allergies. Using AI, the algorithm we built suggests nutritious and affordable meals based on the user's preferences and food history (calendar). Depending on complexity, integrated machine learning models that learn user preferences over time. 7. Integration Testing: Tested the integration of different components to ensure that the food recognition, user preferences, and recommendations work together. ## Challenges we ran into One challenge we ran into was coordinating our time well; delegating tasks early on turned out to be important. ## Accomplishments that we're proud of Implemented an image recognition system that identifies ingredients from images. Developed a user profile system to keep track of long-term dietary habits and changes.
Designed a user-friendly mobile app interface to enhance the overall user experience. We are also proud of the team-building and of how we grew as a team. ## What we learned We learned that it's important for everyone on the team to have a clear, shared vision of the project before moving forward. ## What's next for FoodPrint There are many ways that we can expand FoodPrint. Some directions we were thinking about are working with food banks to show where ingredients can be found, and working with research organizations or non-profits to perform more in-depth analysis of the research data to better understand the conditions of food security in communities. A potential expansion for FoodPrint includes enabling family members to share and view each other’s dietary data, fostering better awareness of their family members' nutritional habits and conditions. This feature could help families support each other in making healthier choices. Additionally, users across different communities would be able to learn from one another’s dietary patterns, offering insights into various cultural eating habits and solutions to food insecurity.
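A toy sketch of the recommendation step (step 6 above): compare the nutrients logged from a meal against daily targets and suggest affordable foods for the biggest gap. The targets, food lists, and allergy filter are illustrative values only.

```python
# Find the largest relative nutrient gap for the day and suggest affordable sources.
DAILY_TARGETS = {"protein_g": 50, "fiber_g": 28, "iron_mg": 18}

AFFORDABLE_SOURCES = {
    "protein_g": ["lentils", "eggs", "canned tuna"],
    "fiber_g": ["oats", "frozen vegetables", "beans"],
    "iron_mg": ["spinach", "fortified cereal", "chickpeas"],
}

def recommend(logged_nutrients, allergies=()):
    gaps = {k: DAILY_TARGETS[k] - logged_nutrients.get(k, 0) for k in DAILY_TARGETS}
    biggest = max(gaps, key=lambda k: gaps[k] / DAILY_TARGETS[k])
    options = [f for f in AFFORDABLE_SOURCES[biggest] if f not in allergies]
    return biggest, options

gap, foods = recommend({"protein_g": 35, "fiber_g": 10, "iron_mg": 12}, allergies=("eggs",))
print(f"Biggest gap: {gap}. Try: {', '.join(foods)}")
```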
## Inspiration The ITE\_McMaster challenge/Google Challenge. ## What it does It is a website that allows registered users to share their rides. ## How we built it Vue.js/CSS/Bootstrap/Netlify/Firebase/Google Maps API. ## Challenges we ran into We were learning while working on this project; implementing the Firebase integration was a challenge. ## Accomplishments that we're proud of We integrated Google Maps into our web app. ## What we learned Vue/Figma/Google Maps APIs. ## What's next for poolit Using Firebase to add phone/email authentication.
## Inspiration Despite being a global priority in the eyes of the United Nations, food insecurity still affects hundreds of millions of people. Even in the developed country of Canada, over 5.8 million individuals (>14% of the national population) are living in food-insecure households. These individuals are unable to access adequate quantities of nutritious foods. ## What it does Food4All works to limit the prevalence of food insecurity by minimizing waste from food corporations. The website addresses this by serving as a link between businesses with leftover food and individuals in need. Businesses with a surplus of food are able to donate food by displaying their offering on the Food4All website. By filling out the form, businesses have the opportunity to input the nutritional values of the food, the quantity of the food, and the location for pickup. Consumers, for their part, are able to see nearby donations on an interactive map. By filtering foods by their needs (e.g., high-protein), consumers are able to reserve the donated food they desire. Altogether, this works to cut down unnecessary food waste by providing surplus food to people in need. ## How we built it We created this project using a combination of multiple languages. We used Python for the backend, specifically for setting up the login system using Flask-Login. We also used Python for form submissions, where we took the input and allocated it to a JSON object which interacted with the food map. Secondly, we used TypeScript (compiled to JavaScript for deployment) and JavaScript’s Fetch API in order to interact with the Google Maps Platform. The two major APIs we used from this platform are the Places API and the Maps JavaScript API. These were responsible for creating the map, the markers with information, and an accessible form system. We used HTML/CSS and JavaScript alongside Bootstrap to produce the web design of the website. Finally, we used the QR Code API in order to get QR code receipts for the food pickups. ## Challenges we ran into One of the challenges we ran into was using the Fetch API. Since none of us were familiar with asynchronous polling, specifically in JavaScript, we had to learn it to make a functioning food inventory. Additionally, learning the Google Maps Platform was a challenge due to the comprehensive documentation and our lack of prior experience. Finally, putting front-end components together with back-end components to create a cohesive website proved to be a major challenge for us. ## Accomplishments that we're proud of Overall, we are extremely proud of the web application we created. The final website is functional and it was created to resolve a social issue we are all passionate about. Furthermore, the project we created solves a problem in a way that hasn’t been approached before. In addition to improving our teamwork skills, we are pleased to have learned new tools such as the Google Maps Platform. Last but not least, we are thrilled to have overcome the multiple challenges we faced throughout the process of creation. ## What we learned In addition to learning more about food insecurity, we improved our HTML/CSS skills through developing the website. To add on, we increased our understanding of JavaScript/TypeScript through the utilization of the APIs on the Google Maps Platform (e.g., the Maps JavaScript API and the Places API). These APIs taught us valuable JavaScript skills like operating the Fetch API effectively.
We also had to incorporate the Google Maps Autofill Form API and the Maps JavaScript API, which happened to be a difficult but engaging challenge for us. ## What's next for Food4All - End Food Insecurity There are a variety of next steps for Food4All. First of all, we want to eliminate the potential misuse of reserving food. One of our key objectives is to prevent privileged individuals from taking donations away from people in need. We plan to implement a method to verify the socioeconomic status of users. Proper implementation of this verification system would also be effective in limiting the maximum number of reservations an individual can make daily. We also want to add a way to incentivize businesses to donate their excess food. This can be achieved by partnering with corporations and marketing their business on our webpage. By doing this, organizations that donate will be seen as charitable and good-natured in the public eye. Lastly, we want to add a third option which would allow volunteers to act as delivery people. This would permit them to drop off items at the consumer’s household. Volunteers, where applicable, would be able to receive volunteer hours based on delivery time.
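The write-up above describes a Flask backend that turns donation-form submissions into JSON consumed by the map. As a rough illustration of that flow (not the team's actual code; the route names, field names, and `donations.json` file are hypothetical), a minimal sketch might look like this:

```python
# Minimal sketch of Flask routes that store donation-form input as JSON
# for a map front end to fetch. Field names and file path are hypothetical.
import json
from pathlib import Path

from flask import Flask, jsonify, request

app = Flask(__name__)
DATA_FILE = Path("donations.json")

@app.route("/donations", methods=["POST"])
def add_donation():
    form = request.form
    donation = {
        "business": form.get("business"),
        "food": form.get("food"),
        "quantity": form.get("quantity"),
        "nutrition": form.get("nutrition"),
        "lat": float(form.get("lat", 0)),
        "lng": float(form.get("lng", 0)),
    }
    donations = json.loads(DATA_FILE.read_text()) if DATA_FILE.exists() else []
    donations.append(donation)
    DATA_FILE.write_text(json.dumps(donations, indent=2))
    return jsonify(donation), 201

@app.route("/donations", methods=["GET"])
def list_donations():
    # The front end's Fetch API polling hits this endpoint to refresh map markers.
    donations = json.loads(DATA_FILE.read_text()) if DATA_FILE.exists() else []
    return jsonify(donations)

if __name__ == "__main__":
    app.run(debug=True)
```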
losing
## Inspiration Homelessness is a rampant problem in the US, with over half a million people facing homelessness daily. We want to empower these people to be able to have access to relevant information. Our goal is to pioneer technology that prioritizes the needs of displaced persons and tailor software to uniquely address the specific challenges of homelessness. ## What it does Most homeless people have basic cell phones with only calling and SMS capabilities. Using kiva, they can use their cell phones to leverage technologies previously accessible only through the internet. Users are able to text the number attached to kiva and interact with our intelligent chatbot to learn about nearby shelters and obtain directions to head to a shelter of their choice. ## How we built it We used freely available APIs such as Twilio and Google Cloud in order to create the beta version of kiva. We search for nearby shelters using the Google Maps API and communicate formatted results to the user’s cell phone using Twilio’s SMS API. ## Challenges we ran into The biggest challenge was figuring out how to best utilize technology to help those with limited resources. It would be unreasonable to expect our target demographic to own smartphones and be able to download apps off the app market like many other customers would. Rather, we focused on providing a service that would maximize accessibility. Consequently, kiva is an SMS chatbot, as this allows the most users to access our product at the lowest cost. ## Accomplishments that we're proud of We succeeded in creating a minimum viable product that produced results! Our current model allows homeless people to find a list of the nearest shelters and obtain walking directions. We built the infrastructure of kiva to be flexible enough to include additional capabilities (e.g., weather and emergency alerts), thus providing a service that can be easily leveraged and expanded in the future. ## What we learned We learned that intimately understanding the particular needs of your target demographic is important when hacking for social good. Often, it’s easier to create a product and then find people it might apply to, but this is less realistic in philanthropic endeavors. Most applications these days tend to be web focused, but our product is better targeted to people facing homelessness by using SMS capabilities. ## What's next for kiva Currently, kiva provides information on homeless shelters. We hope to be able to refine kiva to let users further customize their requests. In the future kiva should be able to provide information about other basic needs such as food and clothing. Additionally, we would love to see kiva become a crowdsourced information platform where people could mark certain places as shelters to improve our database and build a culture of alleviating homelessness.
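As a hedged sketch of the SMS flow described above (assuming the `twilio` and `googlemaps` Python client libraries; the keyword, radius, and message format are illustrative choices, not the team's actual code):

```python
# Sketch of an SMS webhook: a texted location comes in via Twilio, nearby shelters
# are looked up with the Google Maps Places API, and a formatted list is texted back.
# API keys and parameters are placeholders.
import os

import googlemaps
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)
gmaps = googlemaps.Client(key=os.environ["GOOGLE_MAPS_API_KEY"])

@app.route("/sms", methods=["POST"])
def sms_reply():
    query = request.form.get("Body", "")  # e.g. "5th and Main, Berkeley"
    reply = MessagingResponse()
    geocoded = gmaps.geocode(query)
    if not geocoded:
        reply.message("Sorry, we couldn't find that location. Try a street address.")
        return str(reply)
    loc = geocoded[0]["geometry"]["location"]
    places = gmaps.places_nearby(
        location=(loc["lat"], loc["lng"]),
        keyword="homeless shelter",
        radius=3000,
    )
    lines = [p["name"] + " - " + p.get("vicinity", "") for p in places.get("results", [])[:5]]
    reply.message("Nearby shelters:\n" + "\n".join(lines) if lines else "No shelters found nearby.")
    return str(reply)
```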
## Echocare **TABLE NUMBER 100** ## Inspiration In the U.S. and S.F., homelessness continues to be a persistent issue, affecting more than 650,000 individuals in 2023 alone. Despite the lack of stable housing, many people experiencing homelessness own mobile phones. Research indicates that as many as 94% of homeless individuals have access to a cellphone, with around 70% having smartphones. This is crucial because phones act as a lifeline for communication, health services, and access to support networks. Phones help users connect with essential services like medical care, job opportunities, and safety alerts. Unfortunately, barriers such as maintaining a charge or affording phone plans remain common issues for this population. In addition, the U.S. faces a huge problem of food wastage: 150,000 tonnes of food are wasted each year JUST in SF. With Echocare, we aim to kill these two problems with one stone. We leverage S.O.T.A. artificial intelligence to provide essential services to the homeless and to people suffering from food insecurity in a more accessible and intuitive manner. Additionally, we allow restaurants to donate and keep track of leftover food using an inventory tracker, which can be used to feed homeless people. ## What it does Echocare is an intuitive platform designed to assist users in locating and connecting with nearby care services, such as medical centers, pharmacies, or home care providers. The application uses voice input, interactive maps, and real-time search functionalities to help users find the services they need quickly and seamlessly. With user-friendly navigation and smart recommendations, Echocare empowers people to get help with minimal effort. In addition, it offers a separate platform for restaurants to keep track of leftover food and donate it at the end of each business day to homeless people. ## How we built it * Next.js for server-side rendering and static generation. * React to build interactive and modular UI components for the front end. * TypeScript for type safety and a robust backend. * Tailwind CSS for rapid, utility-first styling of the application. * Framer Motion for smooth and declarative animations. * GSAP (GreenSock Animation Platform) for high-performance animations. (Used for Echo) * Three.js for creating 3D graphics in the browser, adding depth and interactivity. (Echo has 3D graphics built into it) * Vercel Postgres to communicate with Neon DB (serverless Postgres). * Neon Database, a serverless PostgreSQL option, for robust backend storage of the restaurants' donated food items. It uses minimal compute and is good for developing low-latency applications. * Clerk to implement secure and seamless user authentication for managers of the restaurants who want to donate food. * Google Maps API to power the mapping functionality and location services (this was embedded with Echo to provide precise directions). * Google Places API for autocomplete suggestions and retrieving detailed place information. * Ant Design and Aceternity UI for building the forms on the food donation page and giving the landing page a clean look (inspired by the multicolored and vibrant lights of SF at night). * Axios for easily making API requests to external services (Google Maps and Places APIs). * Lucide React for all of the icons used in the application. * Vapi.ai for creating the one and only assistant. * Google Gemini Flash 1.5 to potentially assist with generating user-facing responses. * Groq 3.1 70b versatile (fine-tuned) to assist Vapi.ai with insights.
* Cartesia to provide hyper-realistic voice for users. * Deepgram for encoding. ## Challenges we ran into We found it very difficult to transcribe the conversations between the user and the Vapi.ai Echo agent; Rishit had to code for 16 hours straight to get it working. We also found the Google Gemini integration to be hard because the multimodal functionality wasn't easy to implement, especially with TypeScript, which isn't as well documented as Python. Finally, stitching the backend and frontend together on the food donation page also took a lot of time. ## Accomplishments that we're proud of * Getting 4 sponsored tools seamlessly integrated into the application * Using a grand total of 18 tools to build Echocare from the ground up * Finishing the entire product 10 hours before the deadline * Creating a product which can truly be used to help burdened communities thrive * Having a lot of fun and enjoying the process of building Echocare! ## What we learned TypeScript - We had never used it to build a project before, and through this hackathon we gained an understanding of how we can use it to build cohesive applications. Vapi.ai - Using the dashboard and integrating custom APIs into Vapi was a tricky operation, but we toughed it out and made it work. NeonDB - We used this DB and learned basic SQL queries to insert and get data from the DB, which we used to set up the donation page. Google Maps/Places API - Although they were relatively easy to implement, some time had to be spent initializing them. Groq, Cartesia - They were definitely tricky to implement with Vapi's dashboard. Gemini integration with TS - Although it was < 100 lines of code, we kept running into errors. It turned out that Gemini Pro Vision was deprecated and we had to use Gemini 1.5 Flash instead ;( ## What's next for Echocare We would like to train our Vapi RAG with more datasets from other cities in the United States. We would also like to improve the responsive design of our website.
## Inspiration We all know that nearly everyone in this country has a mobile phone, even the less fortunate. We wanted to help them by developing something useful. ## What it does Our team decided to make a web application which helps the less fortunate connect with valuable resources: find shelters and addiction therapy centers, connect with appropriate help in a moment of crisis, and help themselves by learning how to be financially stable, all within a few clicks of one another. ## How we built it We built this using HTML, CSS, and Bootstrap for the front end, and JavaScript, Firebase, and the Google Maps APIs for the back end. We were able to finish this project sooner than expected. ## Challenges we ran into We struggled while developing the custom maps for the user. Since it was our first time working with Firebase and the DOM (Document Object Model), we experienced difficulty making the front end and back end communicate with each other. ## Accomplishments that we're proud of We were able to make a working model of our initial idea, and we also had time to develop a Flutter app which has similar functionality to the web app. ## What we learned We gained invaluable experience working with APIs and Firebase. We are now confident about using them in the future. ## What's next for UpHelping
partial
# ThreatTrace AI: Your Solution for Swift Violence Detection ## ThreatTrace AI eliminates the need for constant human monitoring of camera footage, facilitating faster threat identification and enhanced safety. ## Inspiration In response to the rising weapon-related crime in Toronto, we developed a weapon detection tool using artificial intelligence to reduce the potential for violence. ## What It Does ThreatTrace AI is a peacekeeping platform aiming to utilize AI object detection to identify weapons and alert the authorities regarding potential threats and violence. Our ultimate vision was to monitor real-time security footage for violence detection, eliminating the need for human oversight across multiple screens. However, due to time constraints, we focused on training our machine learning model solely to detect weapons, namely pistols and guns, from images. ## How We Built It Our frontend utilizes Python's Flask library with HTML and CSS, while the backend is powered by TensorFlow and various other libraries and repositories. We trained our machine learning model with a specific dataset and organized selected images in a folder, which we iterate through to create a 'slideshow' on the frontend. ## Challenges We Ran Into Throughout the event, we encountered numerous challenges that shaped our development as a team. We started off as two separate groups of three people, but then one person from one team fell ill on the day of the event, and the other team was looking to form a team of four. However, since there were many teams of two or three looking for more members, one team had to split into a group of two and join another group of two to finally form a group of four. During our setup phase, our computers ran into a lot of trouble while trying to install the necessary packages for the machine learning model to function. Our original goal was to have three or all four of us train the learning model with datasets, but unfortunately we spent half of the time on setup, and only two members managed to set up the machine learning model by the end of the event. In the end, only one member managed to train the model, making this the greatest technical challenge we encountered at the event and slowing our progress considerably. ## Accomplishments That We're Proud Of We're proud of the progress made from utilizing machine learning APIs such as TensorFlow and object detection for the very first time. Additionally, two members had no experience with Flask, and yet we were still able to develop a functional frontend in conjunction with HTML and CSS. ## What We Learned We picked up a great deal of new technical knowledge about libraries, packages, and technologies such as TensorFlow, virtual environments, GitHub, the COCO API, and the Object Detection API. Since three of our members use Windows as their main operating system, we had to utilize both the Windows and bash terminals to set up our repository. Through our numerous trials, we also learned about the vast amount of time it takes to train a machine learning model on a dataset before being able to make use of it. Finally, the most important lesson we learned was team collaboration: each of us played to our strengths, and we used our abilities well to facilitate this project. ## What's Next for ThreatTrace AI Our goal is to continue training our learning model to recognize more weapons at different lighting levels and angles, so that our dataset becomes more refined over time.
Then we will transition to training the model on video footage. Ultimately, we will reach our original vision of a real-time video violence-detection AI.
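Since the backend is TensorFlow-based, here is a rough sketch of how a trained object-detection model might be run on a single image to flag weapons. The model path, class IDs, and threshold are assumptions for illustration, not the team's actual configuration:

```python
# Sketch: run a TF2 object-detection SavedModel on one image and flag weapon classes.
# Model directory, class IDs, and the 0.5 threshold are illustrative assumptions.
import numpy as np
import tensorflow as tf
from PIL import Image

WEAPON_CLASS_IDS = {1, 2}  # e.g. 1 = pistol, 2 = rifle in a custom label map

def detect_weapons(model_dir: str, image_path: str, threshold: float = 0.5):
    detect_fn = tf.saved_model.load(model_dir)
    image = np.array(Image.open(image_path).convert("RGB"))
    input_tensor = tf.convert_to_tensor(image, dtype=tf.uint8)[tf.newaxis, ...]
    detections = detect_fn(input_tensor)

    scores = detections["detection_scores"][0].numpy()
    classes = detections["detection_classes"][0].numpy().astype(int)
    hits = [
        (int(c), float(s))
        for c, s in zip(classes, scores)
        if s >= threshold and c in WEAPON_CLASS_IDS
    ]
    return hits  # list of (class_id, confidence) pairs worth alerting on

if __name__ == "__main__":
    print(detect_weapons("exported_model/saved_model", "frame.jpg"))
```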
## Inspiration In the aftermath of the recent mass shooting in Las Vegas, our prayers are with the 59 people who lost their lives and the 527 people who were badly injured. But are our prayers sufficient? In this technological age, with machine learning and other powerful AI algorithms being invented and used heavily in systems all over the world, we took this opportunity provided by Cal Hacks to build something that has the potential to have a social impact on the world by aiming to prevent such incidents of mass shootings. ## What it does? **Snipe** is a real-time recognition system to prevent the use of illegal guns for evil purposes. Firstly, the system analyses a real-time video feed provided by security cams or a cell phone, sampling frames and detecting whether there is a person holding a gun/rifle in the frames. This detects a suspicious person holding a gun in public and alerts the watching security. Most of the guns used for malicious intent are bought illegally, so this system can also be used to tackle that scenario (avoidance scenario). Secondly, it also detects the motion of the arm before someone is about to fire a gun. The system then immediately notifies the police and the officials via email and SMS, so that they can prepare to mobilize without delay and tackle the situation accordingly. Thirdly, the system recognizes the sound of a rifle/gun being fired and sounds an alarm in nearby locations to warn the people around and avoid the situation. This system is meant to help law enforcement be more efficient and help people avoid such situations. This is an honest effort to use our knowledge in computer science in order to create something to make this world a better place. ## How we built it? Our system relies on OpenCV to sample and process the real-time video stream for all machine learning components to use. 1) Image Recognition/Object Detection: This component uses the powerful Microsoft Cognitive Services API and Azure to classify the frames. We trained the model using our custom data images. The sampled frames are classified in the cloud, telling us whether there is a gun-bearing human in the image, with a probability distribution. 2) Arm Motion Recognition: The arm motion detection determines whether a person is about to fire a gun/rifle. It also uses Microsoft's Custom Vision cognitive API to detect the position of the arms. We use an optimized algorithm on the client side to detect whether the probability distribution returned from the API represents a gun-firing motion. 3) Sound Detection: Finally, to detect the sound of a rifle/gun being fired, we use NEC's sound recognition API. Once the program detects that the API returned true for the sound chunk, we sound an alarm to warn people. The entire application was built in C++. ## Challenges we ran into Machine learning algorithms need a lot of data for training in order to get highly accurate results. This being a limited-time hackathon, we found it hard to collect enough data to achieve the desired high accuracy of classification. But we managed to train the model reasonably well (around 80% accuracy in object detection, around 75% in arm motion detection) using the time we had to collect the required data. ## Accomplishments that we're proud of The issue of mass shootings is pressing globally. We are proud to have built a solution to it that has a practical application.
Our system may not be perfect, but if it can prevent even 1 in 5 shooting incidents, then we are on the right route to accomplishing our mission. The social impact that our product can have is inspiring for us, and hopefully it motivates other hackers to build applications that can truly have a positive impact on the world. ## What we learned? Hackathons are a great way to develop something cool and useful in a day or two. Thanks to CalHacks for hosting such a great event and helping us network with the industry's brightest and fellow hackers. Machine learning is the future of computing, and using it for our project was a great experience. ## What's next for Snipe? With proper guidance from our university's professors, we aim to improve the machine learning algorithms and get better results. In future work, we plan to include emotion and behavior detection in the system to further improve its ability to detect suspicious activity before a mishap.
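The application itself was written in C++, but the frame-sampling idea described above is easy to sketch. Here is a hedged Python version using OpenCV; the classifier endpoint, sampling rate, and response format are hypothetical placeholders rather than the real Microsoft Cognitive Services calls:

```python
# Sketch of sampling frames from a live feed and sending every Nth frame to a
# cloud classifier. Endpoint and response schema are placeholders; the team's
# real implementation used C++ with Microsoft Cognitive Services.
import cv2
import requests

ENDPOINT = "https://example.com/classify"  # hypothetical classifier endpoint
SAMPLE_EVERY = 30                          # roughly one frame per second at 30 fps

def monitor(camera_index: int = 0) -> None:
    cap = cv2.VideoCapture(camera_index)
    frame_count = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        frame_count += 1
        if frame_count % SAMPLE_EVERY:
            continue
        ok, jpg = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        resp = requests.post(
            ENDPOINT,
            files={"image": ("frame.jpg", jpg.tobytes(), "image/jpeg")},
            timeout=30,
        )
        if resp.json().get("gun_probability", 0.0) > 0.8:
            print("ALERT: possible firearm detected")  # real system notifies via email/SMS
    cap.release()

if __name__ == "__main__":
    monitor()
```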
## Inspiration We were inspired by the many instances of fraud in how donations are handled. It is heartbreaking to see how mismanagement can lead to victims of disasters not receiving the help that they desperately need. ## What it does TrustTrace introduces unparalleled levels of transparency and accountability to charities and fundraisers. Donors will now feel more comfortable donating to causes such as helping earthquake victims, since they will know how their money will be spent and where every dollar raised is going. ## How we built it We created a smart contract that allowed organisations to specify how much money they want to raise and how they want to segment their spending into specific categories. This is then enforced by the smart contract, which puts donors at ease as they know that their donations will not be misused and will go to those who are truly in need. ## Challenges we ran into The integration of smart contracts and the web development frameworks was more difficult than we expected, and we overcame it with lots of trial and error. ## Accomplishments that we're proud of We are proud of being able to create this in under 36 hours for a noble cause. We hope that this creation can eventually evolve into a product that can raise interest, trust, and participation in donating to humanitarian causes across the world. ## What we learned We learnt a lot about how smart contracts work, how they are deployed, and how they can be used to enforce trust. ## What's next for TrustTrace Expanding towards serving a wider range of fundraisers such as GoFundMes, Kickstarters, etc.
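The contract itself was written for the blockchain, but the bookkeeping it enforces (category caps set at creation, spends rejected once a category's cap or the funds raised would be exceeded) can be modeled in a few lines of plain Python. This is only an illustration of the idea, with hypothetical class and method names; the real enforcement happens on-chain:

```python
# Plain-Python model of the budget-enforcement logic the smart contract performs.
class Campaign:
    def __init__(self, goal: int, category_caps: dict[str, int]):
        self.goal = goal
        self.category_caps = dict(category_caps)
        self.raised = 0
        self.spent = {category: 0 for category in category_caps}

    def donate(self, amount: int) -> None:
        if amount <= 0:
            raise ValueError("donation must be positive")
        self.raised += amount

    def spend(self, category: str, amount: int) -> None:
        if category not in self.category_caps:
            raise ValueError("unknown spending category")
        if self.spent[category] + amount > self.category_caps[category]:
            raise ValueError("spend would exceed this category's cap")
        if sum(self.spent.values()) + amount > self.raised:
            raise ValueError("spend would exceed funds raised")
        self.spent[category] += amount

relief = Campaign(goal=100_000, category_caps={"food": 40_000, "shelter": 60_000})
relief.donate(50_000)
relief.spend("food", 10_000)    # allowed
# relief.spend("food", 45_000)  # would raise: exceeds the food cap
```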
partial
## Inspiration Politicians make a lot of money. Like, a lot. After a thorough analysis of how politicians apply their skill sets to faithfully "serve the general public", we realized that there's a hidden skill that many of them possess. It is a skill that hasn't been in the spotlight for some time. It is the skill which allowed US senator Richard Burr to sell 95% of his holdings in his retirement account just before a downturn in the market (thus avoiding $80,000 in losses and generating a profit upwards of $100k). This same skill allowed several senators, like Spencer Bachus, who had access to top-secret meetings discussing the 2008 stock market crash and its inevitability, to heavily short the market just before the crash, generating a 200% return per 1% drop in the NASDAQ. So, we decided that... senators know best! Our project is about outsider trading, which essentially means an outsider (you) gets to trade! It allows you to track and copy the live trades of whatever politician you like. ## Why the idea works and what problem it solves We have a term called "recognition by ignition", a simple set of words that describes how this unconventional approach works. Our system has the ability, by virtue of the data that is available to it, to prioritize the activity of senators previously engaged in suspicious trading activities. When these senators make a big move, everyone following them receives a message via our Infobip integration and knows about it. This temporarily acts as a catalyst for changes in the value of that stock, at best just enough to draw the scrutiny of financial institutions, essentially serving as an extra layer of safety checks, while the platform remains consistently trustworthy to trade on, ensured by wallet management and transactions through our Circle integration. ## What it does Outsider Trading gets the trading data of a large set of politicians, showing you their trading history and allowing you to select one or more politicians to follow. After depositing an amount into Outsider Trading, our tool either lets you manually assess the presented data and invest wherever you want, or automatically follows the actions of the politicians that you have followed. So, when they invest in stock X, our tool proportionally invests in the same stock for you. When they pull out of stock Y, your bot will pull out of that stock too! This latter feature can be simulated or followed through with actual funds to track the portfolio performance of a certain senator. ## How we built it We built our web app using ReactJS for the front end connected to a snappy Node.js backend, and a MySQL server hosted on RDS. Thanks in part to the STOCK Act of 2012, most trading data for US senators has to be made public information, so we used the BeautifulSoup library to web-scrape and collect our data. ## Challenges we ran into Naturally, 36 hours (realistically less) of coding involves tense sessions of figuring out how to do certain things. We saw the direct value that the services offered by Infobip and Circle could have on our project, so we got to work implementing them and, as expected, had to traverse a learning curve with the APIs. This job was made easier by the presence of mentors and good documentation online, which allowed us to integrate an SMS notification system and a system that sets up crypto wallets for any user who signs up.
Collaborating effectively is one of the most important parts of a hackathon, so we as a team learnt a lot more about effective and efficient version control and how to communicate, divide roles, and work in a software-focused development environment. ## Accomplishments that we're proud of * A complete front-end for the project was finished in due time * A fully functional back-end and database system to support our front-end * Infobip integration to set up effective communication with the customer. Our web app automatically sends an SMS when a senator you are following makes a trade. * Crypto wallets for payment, implemented with Circle! * A well-designed and effective database hosted on an RDS instance ## What we learned While working on this project we had to push ourselves outside of all of our comfort zones. Going into this project, none of us knew how to web-scrape, set up crypto wallets, create SMS/email notification systems, or work with RDS instances. Although some of these features may be bare bones, we are leaving this project with new knowledge and confidence in these areas. We learnt how to effectively build and scale a full-stack web app, and got invaluable experience in how to collaborate and version control as a team. ## What's next for Outsider Trading This is only the beginning for us; there's a lot more to come! * Increased Infobip integration with features like: 1) A weekly summary email of your portfolio (we have investment data and financial data on Circle that can simply be summarized with the API we use to make charts on the portfolio page, and then attached to an email through Infobip). 2) SMS 2.0 features can be used to let the user invest directly from their messaging app of choice. * Improved statistical summaries of your own trades and those of each senator, with models trained on trading datasets that can detect the likelihood of foul play in the market. * Zapier integration with Infobip to post updates about senator trades regularly to a live Twitter (X) page. * An iOS and Android native app, featuring all of our current features and more.
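As a hedged illustration of the scraping step mentioned above, a BeautifulSoup pass over a disclosure table might look like the sketch below. The URL and column layout are hypothetical placeholders; the real disclosure site's markup will differ:

```python
# Sketch of scraping a senator trade-disclosure table with BeautifulSoup.
# DISCLOSURE_URL and the column order are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

DISCLOSURE_URL = "https://example.gov/senate-trades"  # placeholder

def fetch_trades():
    html = requests.get(DISCLOSURE_URL, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    trades = []
    for row in soup.select("table tr")[1:]:  # skip the header row
        cells = [c.get_text(strip=True) for c in row.find_all("td")]
        if len(cells) < 4:
            continue
        senator, ticker, trade_type, amount = cells[:4]
        trades.append({
            "senator": senator,
            "ticker": ticker,
            "type": trade_type,  # e.g. "Purchase" or "Sale"
            "amount": amount,
        })
    return trades

if __name__ == "__main__":
    for trade in fetch_trades():
        print(trade)
```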
## Inspiration We constantly have friends asking us for advice about investing, or ask for investing advice ourselves. We realized how much easier a platform that allowed people to collaboratively make investments would make sharing information between people. We formed this project out of inspiration to solve a problem in our own lives. The word 'Omada' means group in Greek, and we thought it sounded official and got our message across. ## What it does Our platform allows you to form groups with other people, put your money in a pool, and decide which stocks the group should buy. We use a unanimous voting system to make sure that everyone who has money involved agrees to the investments being made. We also allow for searching up stocks and their graphs, as well as individual portfolio analysis. The way that the buying and selling of stocks actually works is as follows: let's say a group has two members, A and B. Person A has $75 on the app, person B has $25, and they agree to buy a stock costing $100. When they sell the stock, person A gets 75% of the revenue from selling the stock and person B gets 25%. Person A: $75, Person B: $25; buy stock for $100; stock increases to $200; sell stock; Person A: $150, Person B: $50. We use a proposal system in order to buy stocks. One person finds a stock that they want to buy with the group, and makes a proposal for the type of order, the amount, and the price they want to buy the stock at. The proposal then goes up for a vote. If everyone agrees to purchase the stock, then the order is sent to the market. The same process occurs for selling a stock. ## How we built it We built the web app using Flask, specifically to handle routing and so that we could use Python for the backend. We used BlackRock for the charts, and NASDAQ for live updates of the charts. Additionally, we used mLab with MongoDB and Azure for our databases, and Azure for cloud hosting. Our frontend is JavaScript, HTML, and CSS. ## Challenges we ran into We had a hard time initially with routing the app using Flask, as this was our first time using it. Additionally, BlackRock has an insane amount of data, so getting that organized, figuring out what we wanted to do with it, and processing it was challenging, but also really fun. ## Accomplishments that we're proud of We're proud that we got the service working as much as we did! We decided to take on a huge project, which could realistically take months to make in a workplace, but we got a lot of features implemented and plan on continuing to work on the project as time moves forward. None of us had ever used Flask, MongoDB, Azure, BlackRock, or Nasdaq before this, so it was really cool getting everything together and working the way it does. ## What's next for Omada We hope to polish everything off, add features we didn't have time to implement, and start using it for ourselves! If we are able to make it work, maybe we'll even publish it!
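The proportional payout rule in the worked example above is easy to express directly. A small sketch (the function name is illustrative, not Omada's actual code):

```python
# Sketch of Omada's proportional payout rule: sale proceeds are split
# according to each member's share of the original contribution.
def split_proceeds(contributions: dict[str, float], sale_price: float) -> dict[str, float]:
    total = sum(contributions.values())
    return {member: sale_price * amount / total for member, amount in contributions.items()}

# Matches the worked example: A puts in $75, B puts in $25, the $100 stock sells for $200.
print(split_proceeds({"A": 75, "B": 25}, 200))  # {'A': 150.0, 'B': 50.0}
```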
# *Inspiration* ## We wanted to create an **innovative** tool to **protect** and **shield** new investors who are trying to learn the ins and outs of the stock market. # *What it does* ## **The Stock Lock provides the user with financial guidance based on *real-market values*** # *Challenges* ## Converting our original C algorithm to C# **in order** for us to turn it into a fully-functioning application. # *Accomplishments* ## Our algorithm runs in **Θ(n)**! # *What we learned* ## We learned to simplify and generate a polished finished product by sticking to a more *grass-roots* approach. # *What's next?* ## The next step for this stock tool would be a mobile implementation, which would make it much more *versatile*.
partial
# Welcome to TrashCam 🚮🌍♻️ ## Where the Confusion of Trash Sorting Disappears for Good ### The Problem 🌎 * ❓ Millions of people struggle with knowing how to properly dispose of their trash. Should it go in compost, recycling, or garbage? * 🗑️ Misplaced waste is a major contributor to environmental pollution and the growing landfill crisis. * 🌐 Local recycling rules are confusing and inconsistent, making proper waste management a challenge for many. ### Our Solution 🌟 TrashCam simplifies waste sorting through real-time object recognition, turning trash disposal into a fun, interactive experience. * 🗑️ Instant Sorting: With TrashCam, you never have to guess. Just scan your item, and our app will tell you where it belongs—compost, recycling, or garbage. * 🌱 Gamified Impact: TrashCam turns eco-friendly habits into a game, encouraging users to reduce their waste through challenges and a leaderboard. * 🌍 Eco-Friendly: By helping users properly sort their trash, TrashCam reduces contamination in recycling and compost streams, helping protect the environment. ### Experience It All 🎮 * 📸 Snap and Sort: Take a picture of your trash and TrashCam will instantly categorize it using advanced object recognition. * 🧠 AI-Powered Classification: After detecting objects with Cloud Vision and COCO-SSD, we pass them to Gemini, which accurately classifies the items, ensuring they’re sorted into the correct waste category. * 🏆 Challenge Friends: Compete on leaderboards to see who can make the biggest positive impact on the environment. * ♻️ Learn as You Play: Discover more about what can be recycled, composted, or thrown away with each interaction. ### Tech Stack 🛠️ * ⚛️ Next.js & TypeScript: Powering our high-performance web application for smooth, efficient user experiences. * 🛢️ PostgreSQL & Prisma: Storing and managing user data securely, ensuring fast and reliable access to information. * 🌐 Cloud Vision API & COCO-SSD: Using state-of-the-art object recognition to accurately identify and classify waste in real time. * 🤖 Gemini AI: Ensuring accurate classification of waste objects to guide users in proper disposal practices. ### Join the Movement 🌿 TrashCam isn’t just about proper waste management—it’s a movement toward a cleaner, greener future. * 🌍 Make a Difference: Every time you sort your trash correctly, you help reduce landfill waste and protect the planet. * 🎯 Engage and Compete: By playing TrashCam, you're not just making eco-friendly choices—you're inspiring others to do the same. * 🏆 Be a Waste Warrior: Track your progress, climb the leaderboard, and become a leader in sustainable living.
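TrashCam's stack is TypeScript, but the classification stage described above (detected labels handed to Gemini for a compost/recycling/garbage decision) can be sketched in Python. The model name and prompt wording are assumptions for illustration:

```python
# Sketch of the second stage of the pipeline: labels from an object detector
# are passed to Gemini to pick a disposal bin. Model name and prompt are assumptions;
# TrashCam's real implementation is in TypeScript.
import os

import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

def classify_waste(labels: list[str]) -> str:
    prompt = (
        "You are a waste-sorting assistant. Given the detected object labels "
        f"{labels}, answer with exactly one word: compost, recycling, or garbage."
    )
    response = model.generate_content(prompt)
    return response.text.strip().lower()

print(classify_waste(["banana peel"]))   # expected: "compost"
print(classify_waste(["aluminum can"]))  # expected: "recycling"
```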
## Inspiration We were inspired by the lack of motivation for household participation in local recycling programs. Many households have the ability to participate in curbside recycling, and all have the ability to drop off recycling at a local recycling collection center. We thought that if we provided a financial incentive or some form of gamification, we could encourage users to recycle. We believe the government will assist us in this, as in the long run it makes their jobs easier and costs them less money than large-scale clean-ups. ## What it does The app takes pictures of garbage, recognizes objects that are garbage, classifies them, gives users instructions on how to safely dispose of or recycle the garbage, and gives rewards for participating. The app also gives users the option of uploading to Snapchat with captions and filters showing they recycled. ## How we built it We built the app by integrating various pieces together. We used various APIs to help the app run smoothly, such as Snapchat, Google Vision, and Camera Kit. When we got all the necessary data, we wrote the logic to classify it against Toronto's waste database and pass instructions back to our users. ## Challenges we ran into ## Accomplishments that we're proud of We used the Google Vision API for the first time and properly identified objects. We also spent a great deal of time on the Snapchat API, so we were happy we got that to work. ## What we learned We learned a lot about Android Studio and how to develop for Android devices, and we significantly improved our debugging skills, as the Android SDK came with a lot of problems. We had to refresh our memory of Git and use proper version control, as all four of us worked on different components and had to stay synced with each other. ## What's next for Enviro-bin Partner with the government for the rewards system and have them also respond to pings from our app to pick up major garbage at a given location.
## Inspiration We saw how Pokémon Go inspired a ton of people to walk and travel way more than they did before. We thought we could design an app to incentivize a culture of sustainability. We also feel that this application could be used by non-profit organizations or volunteers to find the most polluted areas in their community. We could even find the most optimal locations for trash cans! ## What it does Load the website on any internet device (it is not deployed, so picture taking works only on local machines, but phones can access the site), take a photo of a spot of trash, and get points for reporting it. Find a spot of trash near you on the website, take a photo of it, and get even more points for clearing it! By using the Google Cloud Vision API, we can gather information about the trash contained within an image. We can then give you points based on how polluted the location is. ## How we built it We used Vue.js for the front-end, Django for the back-end, and Google Cloud to perform image recognition tasks and host the PostgreSQL database. Web browser APIs help manage location data. ## Challenges we ran into Getting an image from the front-end web app, encoding it, sending it to the back-end, and then feeding it to the Google API took a very long time and lots of testing. There is also very limited functionality for capturing multiple objects from one static image and specifying what objects we are looking for in an image (to our understanding). Regardless, the machine vision API was immensely helpful. We also ran into challenges finding the best database for our project. We ran local ones in Docker images, which crashed, and had problems using some cloud providers. We then settled on Google's SQL engine, as it gave us a nice interface and made it very easy to access the database instance during development. ## Accomplishments that we're proud of Being able to present an easy-to-use interface for taking photos and passing the picture across multiple back-ends and APIs in a process that seems near real-time to the user. While we can't yet do it on the phone, we feel that we accomplished quite a bit. ## What we learned Lots of web programming! ## What's next for Where Da Trash At Getting the app to work on the phone; filtering points by geolocation, so that you can get points in, say, a 0.5-mile radius or any arbitrary distance a user wants; and making it look even better.
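A minimal sketch of the back-end step described above, in which a Django view receives a base64-encoded photo from the Vue front end and passes it to the Cloud Vision API, might look like this (the URL routing, payload field names, and scoring step are assumptions):

```python
# Sketch of a Django view that accepts a base64-encoded image and returns the
# labels Cloud Vision finds in it. Field names are illustrative placeholders.
import base64
import json

from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from google.cloud import vision

client = vision.ImageAnnotatorClient()

@csrf_exempt
def report_trash(request):
    payload = json.loads(request.body)
    image_bytes = base64.b64decode(payload["image"])  # data URL prefix already stripped
    response = client.label_detection(image=vision.Image(content=image_bytes))
    labels = [
        {"description": label.description, "score": label.score}
        for label in response.label_annotations
    ]
    # Points could then be awarded based on how much trash the labels suggest.
    return JsonResponse({"labels": labels, "lat": payload.get("lat"), "lng": payload.get("lng")})
```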
partial
## Inspiration While trying to decide on a project, we were joking about building an app to roast your friends. Saturday morning, after one idea failed, we decided to work on the bot as something fun until we found a 'real' idea. Eventually it became the real project. ## What it does @roastmebot will reply to any user that says "roast me" or the name of the bot. It replies with a roast that has been scraped from r/roastme on reddit, as well as a roast that has been generated through Markov chains. Additionally, a sentiment analysis is performed on each statement and their respective scores are printed. ## How we built it The bot is built using the 'slackbots' API in Node.js. A custom web scraper scrapes the top posts in the roastme subreddit for comments, finds all the comments with a score greater than 4, then sorts them by upvotes. The 'markov-chains' npm module was used to generate the custom roasts. One of our team members wrote a custom wrapper that uses Slack's API for mentions, because the slackbots API does not support mentioning. ## Challenges we ran into Many of our functions are asynchronous, so handling their successful (and expected) operation was a huge challenge. We definitely learned a lot about how callbacks work. Finding a library for the Markov chains that worked was difficult, and assembling the necessary data in the proper format (an array of single-dimension arrays containing roasts) was challenging. Implementing a mention system was difficult as well, due to the lack of support from our chosen bot-building API. Small changes to bot properties, such as the name, led to hard-to-debug errors as well. ## Accomplishments that we're proud of We turned a silly project that could have been called finished at the first message sent into something meaningful that took a lot of effort and has hilarious results. The Markov chain roasts are often harsher than the reddit roasts, leading to shocking, yet amusing, results. ## What we learned Nested callbacks are hell. Not every npm module is created equal. Reddit's denizens are incredibly witty and acerbic, making for very interesting roasts and generated roasts. According to our sentiment analysis, (very) sarcastic posts are generally about as positive as the (very) mean posts are negative. While the algorithm has trouble differentiating sincerity from sarcasm, this interesting correlation helps us notice it in the data. ## What's next for Roastbot Assuming that the Slack integration app store is OK with non-PC topics, we would love to submit it and allow other teams to roast each other. Another goal is setting up more user-friendly configuration, and also creating a custom command (/roast @John, for example) that will directly roast a user without involving the bot. Our scraper, Markov chain algorithm, and sentiment analysis are also not tied to Slack, so we could extend the project to Twitter, Facebook, etc. as well.
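The bot itself is Node.js, but the Markov-chain idea is simple enough to sketch in a few lines of Python. The two-line corpus here is a stand-in for the scraped r/roastme comments:

```python
# Plain-Python sketch of the Markov-chain roast generator. The real bot is Node.js
# and feeds the chain with comments scraped from r/roastme.
import random
from collections import defaultdict

def build_chain(sentences):
    chain = defaultdict(list)
    for sentence in sentences:
        words = sentence.split()
        for current, following in zip(words, words[1:]):
            chain[current].append(following)
    return chain

def generate(chain, start_word, max_words=20):
    word, output = start_word, [start_word]
    for _ in range(max_words - 1):
        options = chain.get(word)
        if not options:
            break
        word = random.choice(options)
        output.append(word)
    return " ".join(output)

corpus = [
    "you look like the before photo in every ad",
    "you look like you apologize to furniture when you bump into it",
]
chain = build_chain(corpus)
print(generate(chain, "you"))
```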
## TRY IT OUT [Slack invitation](https://join.slack.com/t/calhackssandbox/shared_invite/zt-1i96jhrkw-YsrwxVF3yRAeZzcp_YQJOQ) ## Inspiration Have you ever gotten a text and didn't know how to reply? Maybe you're too tired to come up with something clever, or perhaps you've left them on read for two weeks and don't know what to say. **Say no more.** Just send our co:here Natural-Language-Processing (NLP) powered slack bot the message to reply to and it will generate three completely original responses for you—a positive, negative, and random one—along with a suitable meme to go with the reply. ## What it does To activate the Say No More bot, users can send a direct message to user Say No More containing the contents of the text they’d like to reply to. The Slackbot then sends this information to our backend. The first stage in the backend pipeline uses the `gptd-instruct-tft` text generation model from co:here along with an associated prompt to generate replies to a given message. In order to generate a range of responses to the message, we designed the prompt to provide a one-line response with a positive, negative, and a random sentiment. This prompt specified both the general goal for the model, and provided several examples of desired performance. Different parameters were tuned including the temperature, training examples, and p value to output the most appropriate, yet interesting, responses. Next, we use co:here again to figure out which meme to associate with the generated text. We first found a database of the 100 most popular memes annotated with their descriptions. These descriptions are represented in vector space through co:here's text embedding model and cached. Given replies are also projected to vector space and compared with the meme embeddings to find the meme with the minimum L2 error. The resulting meme and reply are then put into a standard format and sent to the Slackbot. From there, the Slackbot sends the memes to the user as a series of direct messages. These memes can then be forwarded directly to the corresponding channel. ## Challenges we ran into Finding the similarity between a meme and a given reply was difficult. In addition, formatting the resulting reply and meme was difficult due to the different sizes of memes and different lengths of text. The Slack API was difficult to work with at first because of how spread out the documentation was, but turned out to be fine once we got an initial prototype working. ## Accomplishments that we're proud of It was great to be able to get all the different parts of the project working individually, as well as integrate everything together. ## What we learned We learned how to use the co:here API to generate text and learn text sentiment embeddings. We also learned how to use the Slack API to create an interactive bot which could respond to user events, like tagging the bot in a channel or sending the bot a direct message. ## What's next for Say No More? Why don’t we ask the Say No More bot? We would also like to expand into Twitter, Instagram, TikTok, and a separate website in order to allow more users to interact with Say No More! ### Query: What’s next for this project? ### Positive: More Research! ![alt text](https://raw.githubusercontent.com/anthonyzhou-1/Say-No-More/main/assets/Positive_small.png?token=GHSAT0AAAAAABZ7ZR4TBRBSRBOFQP5QL2Q6Y2ML6JQ) ### Negative: More important things like stopping the world from ending. 
![Alt text](https://raw.githubusercontent.com/anthonyzhou-1/Say-No-More/main/assets/Negative_small.png?token=GHSAT0AAAAAABZ7ZR4TBINKPRYXT73R27SEY2MMADQ) ### Random: Discovering new universes and proving the existence of god. ![alt text](https://raw.githubusercontent.com/anthonyzhou-1/Say-No-More/main/assets/Random_small.png?token=GHSAT0AAAAAABZ7ZR4TW7QPSKKID7TT5VEKY2ML6VQ)
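The meme-matching stage described above (embed the meme descriptions once, embed the generated reply, and take the minimum L2 distance) can be sketched roughly as follows. The co:here client calls shown assume the classic Python SDK and may differ slightly by version; the three meme descriptions are stand-ins for the cached database of 100:

```python
# Sketch of the meme-matching stage: embed meme descriptions once, then pick the
# meme whose embedding is closest (L2) to the generated reply's embedding.
import os

import cohere
import numpy as np

co = cohere.Client(os.environ["COHERE_API_KEY"])

meme_descriptions = [
    "Distracted boyfriend looking at another woman",
    "Drake approving one option and rejecting another",
    "Woman yelling at a confused cat at a dinner table",
]
meme_vectors = np.array(co.embed(texts=meme_descriptions).embeddings)

def best_meme(reply: str) -> str:
    reply_vec = np.array(co.embed(texts=[reply]).embeddings[0])
    distances = np.linalg.norm(meme_vectors - reply_vec, axis=1)
    return meme_descriptions[int(np.argmin(distances))]

print(best_meme("More important things like stopping the world from ending."))
```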
This project was developed with the RBC challenge in mind: developing the Help Desk of the future. ## What inspired us We were inspired by our motivation to improve the world of work. ## Background If we want technical support, we usually contact companies by phone, which is slow and painful for both users and technical support agents, especially when questions are obvious. Our solution is an online chat that responds to people immediately using our own bank of answers. This is a portable and scalable solution. ## Try it! <http://www.rbcH.tech> ## What we learned Using NLP, dealing with devilish CORS, implementing Docker successfully, struggling with Kubernetes. ## How we built it * Node.js for our servers (one server for our webapp, one for BotFront) * React for our front-end * Rasa-based Botfront, which is the REST API we are calling for each user interaction * We wrote our own Botfront database during the last day and night ## Our philosophy for delivery Hack, hack, hack until it works. Léonard was our webapp and backend expert, François built the DevOps side of things, Han solidified the front end, and Antoine wrote and tested the data for NLP. ## Challenges we faced Learning brand new technologies is sometimes difficult! Kubernetes (and CORS) brought us some pain... and new skills and confidence. ## Our code <https://github.com/ntnco/mchacks/> ## Our training data <https://github.com/lool01/mchack-training-data>
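Since Botfront sits on top of Rasa, the "REST API we are calling for each user interaction" is essentially Rasa's REST channel. A hedged sketch of that call from Python (the host, port, and sender ID are placeholders):

```python
# Sketch of posting a user message to the Rasa REST channel that Botfront exposes.
import requests

RASA_WEBHOOK = "http://localhost:5005/webhooks/rest/webhook"  # placeholder host/port

def ask_helpdesk(sender_id: str, message: str) -> list[str]:
    response = requests.post(
        RASA_WEBHOOK,
        json={"sender": sender_id, "message": message},
        timeout=30,
    )
    response.raise_for_status()
    # Rasa returns a list of bot messages; each may contain "text", images, buttons, etc.
    return [m.get("text", "") for m in response.json()]

print(ask_helpdesk("user-42", "How do I reset my online banking password?"))
```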
losing
## Inspiration Inspired by the learning incentives offered by Duolingo, and an idea from a real customer (Shray's 9-year-old cousin), we wanted to **elevate the learning experience by integrating modern technologies**, incentivizing students to learn better and teaching them about different school subjects, AI, and NFTs simultaneously. ## What it does It is an educational app offering two views, Student and Teacher. In the Student view, compete with others in your class through a leaderboard by solving questions correctly and earning points. If you get questions wrong, you have the chance to get feedback from Together.ai's Mistral model. Use your points to redeem cool NFT characters and show them off to your peers/classmates in your profile collection! For Teachers, manage students and classes and see how each student is doing. ## How we built it Built using TypeScript, React Native, and Expo, it is a quickly deployable mobile app. We also used Together.ai for our AI-generated hints and feedback, and CrossMint for verifiable credentials and managing transactions with Stable Diffusion-generated NFTs. ## Challenges we ran into We had some trouble deciding which AI models to use, but settled on Together.ai's API calls for their ease of use and flexibility. Initially, we wanted to do AI-generated questions, but understandably these had some errors, so we decided to use AI to provide hints and feedback when a student gets a question wrong. Using CrossMint and creating our Stable Diffusion NFT marketplace was also challenging, but we are proud of how we successfully incorporated it and allowed each student to manage their wallets and collections in a fun and engaging way. ## Accomplishments that we're proud of Using Together.ai and CrossMint for the first time, and implementing numerous features, such as a robust AI helper for any missed questions, and allowing users to buy and collect NFTs directly in the app. ## What we learned We learned a lot about NFTs, Stable Diffusion, how to efficiently prompt AIs, and how to incorporate all of this into an Expo React Native app. We also met a lot of cool people and sponsors at this event and loved our time at TreeHacks! ## What's next for MindMint: Empowering Education with AI & NFTs Our priority is to incorporate a spaced-repetition learning algorithm, similar to what Anki does, to tailor the learning curves of various students and help them understand difficult and challenging concepts efficiently. In the future, we would like to support more subjects and grade levels, and allow teachers to input questions for students to solve. Another interesting idea we had was to create a mini real-time interactive game for students to play among themselves, so they can encourage each other to participate.
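The hint-and-feedback step (sending a wrong answer to a Mistral model hosted on Together.ai) could be sketched as below. The app itself is TypeScript; this Python sketch uses Together's OpenAI-compatible chat completions endpoint, and the exact model ID and prompt wording are assumptions:

```python
# Sketch of requesting feedback on a wrong answer from a Mistral model hosted on
# Together.ai via its OpenAI-compatible chat completions endpoint.
# Model ID and prompt wording are assumptions.
import os

import requests

def get_hint(question: str, wrong_answer: str) -> str:
    response = requests.post(
        "https://api.together.xyz/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['TOGETHER_API_KEY']}"},
        json={
            "model": "mistralai/Mistral-7B-Instruct-v0.2",
            "messages": [
                {"role": "system", "content": "You are a friendly tutor for grade-school students."},
                {"role": "user", "content": f"Question: {question}\nStudent's answer: {wrong_answer}\n"
                                            "Explain the mistake briefly and give a hint, not the answer."},
            ],
            "max_tokens": 150,
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

print(get_hint("What is 3/4 + 1/8?", "4/12"))
```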
## Inspiration Our team focuses extensively on opportunities to ignite change through innovation. In this pursuit, we began to investigate the impact of sign-language glove technology on the hard of hearing community. We learned that despite previous efforts to enhance accessibility, feedback from deaf advocates highlighted a critical gap: earlier technologies often facilitated communication for hearing individuals with the deaf, rather than empowering the deaf to communicate on their terms. After discussing this problem with a friend of ours who faces disabilities related to her hearing, we realized that it impacts many people's daily lives, significantly affecting their ability to engage with those around them. By focusing on human-centered design and integrating feedback presented in numerous journals, we solve these problems by developing an accessible, easy-to-use interface that enables the hard of hearing and mute to converse seamlessly. Through the integration of a wearable component and a sophisticated LLM, we completely change the landscape of interpersonal communication for the hard of hearing. ## What it does Our solution consists of two components, a wearable glove and a mobile video call interface. The wearable glove is meant to be utilized by a deaf or hard of hearing individual when conversing with another person. This glove, fitted with numerous flex sensors and an Inertial Measurement Unit (IMU), can discern what gloss (the term for a word in ASL) the wearer is signing at any moment in time. From here, the data moves to the second component of the solution - the mobile video call interface. Here, the user's signs are converted into both text and speech during a call. The text is displayed on the screen while the speech is spoken aloud, including emotional cues picked up by the integrated computer vision model. This effectively helps users communicate with others, especially loved ones, in a manner that accurately represents their intent and emotions. This experience is one that is currently not offered anywhere else on the market. In tandem, both of these technologies enable us to understand body language, emotion, and signs from a user, and also help vocalize the feelings of a person who is hard of hearing. ## How we built it Two vastly different components call for drastically different approaches. However, we needed to ensure that these two approaches still stayed true to the same intent. We first began by identifying our design strategy, based on our problem statement and objective. From here, we moved forward with set goals and milestones. On the hardware side of things, we spent an extensive amount of time in the on-site lab fabricating our prototype. In order to ensure the validity of our design, we researched circuit diagrams and characteristics, ultimately building our own. We performed a variety of tests on this prototype, including practical use testing by taking it around campus while interacting with others. The glove withstood numerous handshakes and even a bit of rain! On the software side, we also had two problems to face - interfacing with the glove, and creating the mobile application. To interface with the glove, we began with the Arduino IDE for testing. After we ensured that our design was functional and gained test data, we moved to a Python implementation that sends sensed words up to an API, which can later be accessed by the mobile application. Moving to the mobile application, we utilized SwiftUI for our design.
From there, we used the Stream API to build a FaceTime-style infrastructure. We iterated between Figma designs and our prototype to best understand where we could increase capabilities and improve the user experience. ## Challenges we ran into This project was ambitious, and as such, was also chock full of complications. Initially, we faced extensive challenges on the hardware side. Due to the nature of the design, we have many components that are trying to draw power or ground from the same source. This added complexity to our manufacturing process, as we had to come up with an innovative solution for a sleek design that maintained functionality. Even after we found our first solution, our prototype was inconsistent due to manufacturing flaws. On the last day, 2 hours before the submission deadline, we completely disassembled and rebuilt our prototype using a new methodology. This proved to be successful, minimizing the issues seen previously and resulting in an amazing product. On the software side, we also pursued ambitious goals that didn't distinctly align with our team's expertise. Because of this, we faced great difficulty when troubleshooting the numerous errors we encountered in the initial implementation. This set us back quite extensively, but we were able to successfully recover. ## Accomplishments that we're proud of We are proud of the magnitude of success we were able to show in the short time frame of this hackathon. We came in knowing that we had ambitious and lofty goals, but were unsure if we would truly be able to achieve them. Thankfully, we completed this hackathon with a functional, viable MVP that clearly represents our goals and desires for this project. ## What we learned Because of the cross-disciplinary nature of this project, all of our team members got the opportunity to explore new spaces. Through collaboration, we all learned about these fields and technologies from and with each other, and how we can integrate them into our systems in the future. We also learned about best practices for manufacturing in general. Additionally, we were able to become more comfortable with SwiftUI and with creating our own APIs for our video calling component. These valuable skills shaped our experiences at TreeHacks and will stick with us for many years to come. ## What's next for Show and Tell - Capturing Emotion in Sign Language We hope to continue to pursue this idea and bring independence to the hard of hearing population worldwide. In a market that has underserved the deaf population, we see Show and Tell as the optimal solution for accessibility. In the future, we want to flesh out the hardware prototype further by investing in custom PCBs, streamlining the production process, and making it much more professional. Additionally, we want to build out functionality within the video calling app, adding as many helpful features as possible.
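On the glove side, the Python layer described above reads sensor values over serial and maps them to glosses before pushing the result to an API. Here is a heavily simplified sketch; the serial port, packet format, gloss templates, and endpoint are all hypothetical placeholders, not the team's actual pipeline:

```python
# Simplified sketch of the glove pipeline: read flex-sensor values over serial,
# match them to the nearest stored gloss template, and post the word to an API.
import requests
import serial  # pyserial

PORT = "/dev/ttyUSB0"                        # placeholder serial port
API_URL = "https://example.com/api/signs"    # placeholder endpoint

GLOSS_TEMPLATES = {                          # toy templates: five flex-sensor readings
    "HELLO": [120, 130, 125, 118, 122],
    "THANK-YOU": [300, 310, 295, 305, 290],
}

def nearest_gloss(reading):
    def distance(template):
        return sum((a - b) ** 2 for a, b in zip(reading, template))
    return min(GLOSS_TEMPLATES, key=lambda g: distance(GLOSS_TEMPLATES[g]))

def run():
    with serial.Serial(PORT, 9600, timeout=1) as link:
        while True:
            line = link.readline().decode(errors="ignore").strip()
            if not line:
                continue
            reading = [int(v) for v in line.split(",")[:5]]
            gloss = nearest_gloss(reading)
            requests.post(API_URL, json={"gloss": gloss})

if __name__ == "__main__":
    run()
```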
# Course Connection ## Inspiration College is often heralded as a defining time period to explore interests, define beliefs, and establish lifelong friendships. However, the vibrant campus life has recently become endangered, as it is becoming easier than ever for students to become disconnected. The previously guaranteed notion of discovering friends while exploring interests in courses is also becoming a rarity as classes adopt hybrid and online formats. The loss became abundantly clear when two of our members, who became roommates this year, discovered that they had taken the majority of the same courses despite never meeting before this year. We built our project to combat this problem and preserve the zeitgeist of campus life. ## What it does Our project provides a seamless tool for a student to enter their courses by uploading their transcript. We then automatically convert their transcript into structured data stored in Firebase. With all uploaded transcript data, we create a graph of the people they took classes with, the classes they have taken, and when they took each class. Using a Graph Attention Network and domain-specific heuristics, we calculate the student’s similarity to other students. The user is instantly presented with a stunning graph visualization of their previous courses and the course connections to their most similar students. From a commercial perspective, our app provides businesses the ability to use Checkbook to purchase access to course enrollment data. ## High-Level Tech Stack Our project is built on top of a couple of key technologies, including React (front end), Express.js/Next.js (backend), Firestore (real-time graph cache), Estuary.tech (transcript and graph storage), and Checkbook.io (payment processing). ## How we built it ### Initial Setup Our first task was to provide a method for students to enter their courses by uploading their transcript. We elected to utilize the ubiquitous nature of transcripts. Using Python, we parse a transcript and send the data to a Node.js server, which serves as a REST API endpoint for our front end. We chose Vercel to deploy our website. It was necessary to generate a large number of sample users in order to test our project. To generate the users, we needed to scrape the Stanford course library to build a wide variety of classes to assign to our generated users. In order to provide more robust tests, we built our generator to pick a certain major or category of classes, while randomly assigning classes from other categories for a probabilistic percentage of classes. Using this Python library, we are able to generate robust and dense networks to test our graph connection score and visualization. ### Backend Infrastructure We needed a robust database infrastructure in order to handle the thousands of nodes. We elected to explore two options for storing our graphs and files: Firebase and Estuary. We utilized the Estuary API to store transcripts and the graph “fingerprints” that represent a student's course identity. We wanted to take advantage of web3 storage, as this would allow students to permanently store their course identity and access it easily. We also made use of Firebase to store the dynamic nodes and connections between courses and classes. We distributed our workload across several servers. We utilized Nginx to deploy a production-level Python server that performs the graph operations described below, plus a development-level Python server.
We also had a Node.js server acting as a proxy and REST API endpoint, and Vercel hosted our front-end. ### Graph Construction Treating the Firebase database as the source of truth, we query it to get all user data, namely their usernames and which classes they took in which quarters. Taking this data, we constructed a graph in Python using NetworkX, in which each person and course is a node with a type label "user" or "course" respectively. In this graph, we then added edges between every person and every course they took, with the edge weight corresponding to the recency of their having taken it. Since we have thousands of nodes, building this graph is an expensive operation. Hence, we leverage Firebase's key-value storage format to cache this base graph in a JSON representation, for quick and easy I/O. When we add a user, we read in the cached graph, add the user, and update the graph. For all graph operations, the cache reduces latency from ~15 seconds to less than 1. We compute similarity scores between all users based on their course history. We do so as the sum of two components: node embeddings and domain-specific heuristics. To get robust, informative, and inductive node embeddings, we periodically train a Graph Attention Network (GAT) using PyG (PyTorch Geometric). This training is unsupervised, as the GAT aims to classify positive and negative edges. While we experimented with more classical approaches such as Node2Vec, we ultimately use a GAT as it is inductive, i.e. it can generalize to and embed new nodes without retraining. Additionally, with its attention mechanism, we better account for structural differences between nodes by learning more dynamic importance weighting in neighborhood aggregation. We augment the cosine similarity between two users' node embeddings with some more interpretable heuristics, namely a recency-weighted sum of classes in common over a recency-weighted sum over the union of classes taken. With this rich graph representation, when a user queries, we return the induced subgraph of the user, their neighbors, and the top k people most similar to them, who they likely have a lot in common with and whom they may want to meet! ## Challenges we ran into We chose a somewhat complicated stack with multiple servers. We therefore had some challenges with iterating quickly during development, as we had to manage all the necessary servers. In terms of graph management, the biggest challenges were in integrating the GAT and in maintaining synchronization between the Firebase graph and the cached graph. ## Accomplishments that we're proud of We're very proud of the graph component, both in its data structure and in its visual representation. ## What we learned It was very exciting to work with new tools and libraries. It was impressive to work with Estuary and see the surprisingly low latency. None of us had worked with Next.js. We were able to quickly ramp up to using it as we had React experience, and we were very happy with how easily it integrated with Vercel. ## What's next for Course Connections There are several different storyboards we would be interested in implementing for Course Connections. One would be course recommendation. We discovered that ChatGPT gave excellent course recommendations given previous courses. We developed some functionality but ran out of time for a full implementation.
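Below is a minimal Python sketch of the user-course graph and the recency-weighted heuristic described in the Graph Construction section above. It is an illustration only: the field names, weighting function, and sample data are assumptions rather than the team's exact implementation, and the GAT embedding component is omitted.

```python
# Sketch of the bipartite user-course graph plus a recency-weighted
# overlap-over-union similarity heuristic. Illustrative only.
import networkx as nx

def build_graph(users):
    """users: dict mapping username -> list of (course_id, quarters_ago)."""
    G = nx.Graph()
    for user, courses in users.items():
        G.add_node(user, type="user")
        for course, quarters_ago in courses:
            G.add_node(course, type="course")
            # More recently taken courses get a larger edge weight.
            G.add_edge(user, course, weight=1.0 / (1 + quarters_ago))
    return G

def similarity(G, u, v):
    """Recency-weighted classes in common over the union of classes taken."""
    cu = {c: G[u][c]["weight"] for c in G.neighbors(u)}
    cv = {c: G[v][c]["weight"] for c in G.neighbors(v)}
    shared = sum(min(cu[c], cv[c]) for c in cu.keys() & cv.keys())
    union = sum(max(cu.get(c, 0.0), cv.get(c, 0.0)) for c in cu.keys() | cv.keys())
    return shared / union if union else 0.0

users = {
    "alice": [("CS106A", 0), ("MATH51", 2)],
    "bob": [("CS106A", 1), ("PHYS41", 3)],
}
G = build_graph(users)
print(similarity(G, "alice", "bob"))
```

In the full system this heuristic would be summed with the cosine similarity of the GAT node embeddings before ranking the top-k matches.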
partial
This code allows the user to take photos of animals, and the app determines whether the photos are appealing enough to show off the animals' cuteness.
## Inspiration We took inspiration from the multitude of apps that help connect missing people with the loved ones searching for them and with others affected by natural disasters, especially flooding. We wanted to design a product that not only helped to locate those individuals, but also to rescue those in danger. Through the combination of these services, the process of recovering after natural disasters is streamlined and much more efficient than other solutions. ## What it does Spotted uses a drone to capture and send real-time images of flooded areas. Spotted then extracts human shapes from these images, maps the location of each individual onto a map, and assigns each victim a volunteer so that everyone in need of help is covered. Volunteers can see the location of victims in real time through the mobile or web app and are provided with the best routes for the recovery effort. ## How we built it The backbone of both our mobile and web applications is HERE.com's intelligent mapping API. The two APIs that we used were the Interactive Maps API, to provide a forward-facing client for volunteers to get an understanding of how an area is affected by flooding, and the Routing API, to connect volunteers to those in need via the most efficient route possible. We also used machine learning and image recognition to identify victims and where they are in relation to the drone. The mobile app was written in Java, and the web app was written with HTML, JS, and CSS. ## Challenges we ran into All of us had only a little experience with web development, so we had to learn a lot because we wanted to implement a web app that was similar to the mobile app. ## Accomplishments that we're proud of We are most proud that our app can collect and store data that is available for flood research and provide real-time assignments to volunteers in order to ensure everyone is covered in the shortest time. ## What we learned We learned a great deal about integrating different technologies, including Xcode. We also learned a lot about web development and the intertwining of different languages and technologies like HTML, CSS, and JavaScript. ## What's next for Spotted We think the future of Spotted is going to be bright! Certainly, it is tremendously helpful for its users, and at the same time, the program improves its own functionality as the available data increases. We might implement a machine learning feature to better utilize the data and predict the situation in target areas. What's more, we believe the accuracy of this prediction function will grow exponentially as the data size increases. Another important feature is that we will be developing optimization algorithms to provide the most efficient real-time solution for the volunteers. Other future development might include involvement with specific charity groups and research groups, and work on specific locations outside the US.
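As an illustration of the routing step described above, here is a rough Python sketch of requesting a volunteer-to-victim route from the HERE Routing API. The v8 REST endpoint and response fields reflect HERE's public documentation as we understand it; the API key, coordinates, and transport mode are placeholders, and the team's actual integration runs inside their apps rather than a standalone script.

```python
# Hedged sketch: fetch a route summary from the HERE Routing API (v8).
import requests

def get_route(api_key, volunteer, victim):
    params = {
        "transportMode": "pedestrian",
        "origin": f"{volunteer[0]},{volunteer[1]}",
        "destination": f"{victim[0]},{victim[1]}",
        "return": "summary",
        "apiKey": api_key,  # placeholder credential
    }
    resp = requests.get("https://router.hereapi.com/v8/routes", params=params)
    resp.raise_for_status()
    return resp.json()["routes"][0]["sections"][0]["summary"]

# summary = get_route("YOUR_HERE_KEY", (45.42, -75.69), (45.43, -75.70))
# print(summary["duration"], "seconds,", summary["length"], "meters")
```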
## Inspiration Picture this. Our team arrives by train from Ottawa. We eagerly await the big-city Toronto nightlife, and the food, of course! We get to a restaurant that seems interesting. And we wait... And wait... And waited some more. This sparked an idea: "Imagine a service that streamlines the sit-down restaurant experience." And so Ordr was born. ## What it does Ordr is a web and mobile app that aims to make the restaurant experience more efficient by using technology. We provide a service that customers can use when they walk into restaurants, streamlining the process of getting food to them. Ordr is a full end-to-end service for clients and businesses to streamline interactions. The client-side mobile app allows the user to find a restaurant, order, and pay all without leaving the app. The restaurant-facing side allows businesses to manage their restaurants as well as orders, payments, and more. ## How we built it Blood, sweat, tears, and no sleep! What our stack *actually* runs on: * Kotlin for our Android-native client-facing app, integrating various Google APIs and services such as Maps and Auth * React for the restaurant-facing side, integrating our API endpoints and handling user data * A Python-based back-end running on GCP ## Challenges we ran into Developing a complex API such as ours was not easy. Connecting all the data points through Google Cloud Platform and making all the data work together was perhaps the biggest challenge. ## Accomplishments that we're proud of Our mobile app uses the latest software and design practices, leading to an optimized and beautiful user experience following the latest trends in Material Design. Our entire stack also seamlessly integrates and dynamically updates whenever new data is called. ## What's next for Ordr Full implementation of our roadmap features, and an initial public test market in the Ottawa area!
winning
## Inspiration Our app idea brewed from a common shared stressor: networking challenges. Recognizing the lack of available mentorship and the struggle to form connections effortlessly, we envisioned a platform that seamlessly pairs mentors and students to foster meaningful connections. ## What it does mocha mentor is a web application that seamlessly pairs students and mentors based on their LinkedIn profiles. It analyzes user LinkedIn profiles, utilizes our dynamic backend structure and machine learning algorithm for accurate matching, and then pairs a mentor and student together. ## How we built it mocha mentor leverages a robust tech stack to enhance the mentor-student connection. MongoDB stores and manages profiles, while an Express.js server runs on the backend. This server also executes Python scripts which employ pandas for data manipulation and scikit-learn for our ML cosine-similarity-based matching algorithm, and it calls the LinkedIn API for profile extraction. Our frontend was entirely built with React.js. ## Challenges we ran into The hackathon's constrained timeframe led us to prioritize essential features. Additionally, other challenges we ran into were handling asynchronous events, errors integrating the backend and frontend, working with limited documentation, and running Python scripts efficiently from JavaScript. ## Accomplishments that we're proud of We are proud of developing a complex technical project with a diverse tech stack. Our backend was well designed and saved a lot of time when integrating with the frontend. With this year's theme of "Unlocking the Future with AI", we wanted to go beyond using a GPT backend; therefore, we utilized machine learning to develop our matching algorithm, which gave accurate matches. ## What we learned * The importance of good teamwork! * How to integrate Python scripts into our Express server * More about AI/ML and cosine similarity ## What's next for mocha mentor * Conduct outreach and incorporate community feedback * Further develop the UI * Expand by adding additional features * Improve efficiency in our algorithms
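A minimal sketch of the kind of cosine-similarity matching described above, assuming TF-IDF text features over profile text. The featurization, sample profiles, and pairing rule are illustrative assumptions; the team's actual pipeline pulls richer fields from LinkedIn.

```python
# Sketch: pair each student with the most similar mentor via cosine similarity.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

students = pd.DataFrame({"name": ["Ana"],
                         "profile": ["cs student interested in fintech and backend"]})
mentors = pd.DataFrame({"name": ["Raj", "Mei"],
                        "profile": ["backend engineer at a fintech startup",
                                    "designer focused on branding"]})

# Fit one vocabulary over all profiles, then split back into the two groups.
vec = TfidfVectorizer()
matrix = vec.fit_transform(pd.concat([students["profile"], mentors["profile"]]))
student_vecs = matrix[: len(students)]
mentor_vecs = matrix[len(students):]

scores = cosine_similarity(student_vecs, mentor_vecs)  # shape: (students, mentors)
best = scores.argmax(axis=1)
for i, m in enumerate(best):
    print(f"{students.name.iloc[i]} -> {mentors.name.iloc[m]} (score {scores[i, m]:.2f})")
```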
## Inspiration In 2020, Canada received more than 200,000 refugees and immigrants. The more immigrants and BIPOC individuals I spoke to, the more I realized that they were only aiming for employment opportunities as cab drivers, cleaners, dock workers, etc. This can be attributed to discriminatory algorithms that scrap their resumes and the lack of a formal network to engage and collaborate in. Corporate Mentors connects immigrants and BIPOC individuals, as mentees, with industry professionals who overcame similar barriers, as mentors. This promotion of inclusive and sustainable economic growth has the potential to create decent jobs and significantly improve living standards, and it can also aid in a seamless transition into Canadian society, thereby ensuring that no one gets left behind. ## What it does Corporate Mentors tackles the global rise of unemployment and the increasing barriers to mobility for marginalized BIPOC communities and immigrants, caused by racist and discriminatory machine learning algorithms and a lack of networking opportunities, by providing an innovative web platform that enables people to receive professional mentorship and access job opportunities that are available through networking. ## How we built it The software architecture model being used is the three-tiered architecture, where we are specifically using the MERN stack. MERN stands for MongoDB, Express, React, Node, after the four key technologies that make up the stack: React(.js) makes up the top (client-side/frontend) tier, Express and Node make up the middle (application/server) tier, and MongoDB makes up the bottom (database) tier. The system decomposition explains the relationship better below. The software architecture diagram below details the interaction of the varying components in the system. ## Challenges we ran into The mere fact that we didn't have a UX/UI designer on the team made us realize how difficult it was to create an easy-to-navigate user interface. ## Accomplishments that we're proud of We are proud of the matching algorithm we created to match mentors with mentees based on their educational qualifications, corporate experience, and desired industry. Additionally, we would also be able to monetize the website utilizing the freemium subscription model we developed if we stream webinar videos using Accedo. ## What's next for Corporate Mentors 1) The creation of a real mentor pool with experienced corporate professionals is the definite next step. 2) Furthermore, the development of the freemium model (4 hrs of mentoring every month) @ $60 per 6 months or $100 per 12 months. 3) Paid webinars (price determined by the mentor, with 80% going to them) and 20% taken as a platform maintenance fee. 4) Create chat functionality between mentor and mentee using Socket.io and add authorization to the website to limit access to the chats from external parties. 5) Create an area for the mentor and mentee to store and share files.
## Inspiration Food is a universal language. People bond and share memories over food. As teammates, we bonded over running to Codegen's free boba giveaways and rating the new DoorDash business model of home-cooked meals. Since food was able to bring us together, we thought: why isn't there a better way to learn about people's food preferences, and even meet people based on those preferences? All of us have been sharing our food reviews on apps like Beli, and we wanted to leverage AI technologies and other innovations to make it easier for others to meet based on their tastes in food. ## What it does BeliMatch, like DataMatch, helps match you to your most compatible partner through your food preferences and suggests a first-date location for both of you to enjoy a great time. After you submit the introduction form, we gather information like your top 25 restaurants on Beli, and we use this to run a matching between what you're looking for, your food preferences, and what others are looking for. At the end of the matching, we send emails to each user with personalized recommendations and their new friends / date matches. ## How we built it BeliMatch uses the React framework with a Firebase backend, plus our unique matching algorithm that leverages AI to generate the best matches. ## Challenges we ran into Creating the pipeline to get the relevant features from a user's Beli or Yelp profile through web scraping, and collecting detailed restaurant data, was more challenging than anticipated and was an unexpected technology hurdle we had to overcome. The matching also ended up being more difficult than anticipated due to rules around school matching and the preferences that users inputted into the app. ## Accomplishments that we're proud of Being able to actually provide matches that could potentially lead to friendships and more! We went around TreeHacks getting users to sign up for our service, and many were actually very excited about the idea and filled out our form. ## What we learned From a technical skills perspective, we learned how to use AI in matching algorithms and how to use Twilio to send out emails. Additionally, we learned some soft skills when going around pitching our idea and learned how to make people the most excited about it. ## What's next for BeliMatch BeliMatch will run our first batch of matchings at the end of TreeHacks, after receiving overwhelming interest from fellow hackers. Beyond that, we hope that BeliMatch will take off into a platform that helps connect people through our shared universal language -- food.
partial
## Inspiration One of our team members' father and grandfather were both veterans. When speaking with them about the biggest problem currently facing veterans, they were unanimous in describing the inefficiencies of the healthcare and assistance systems that are available to veterans. In this way, we took on the challenge of simplifying the entire veteran help process by streamlining appointment scheduling, granting digital resource access through a personal assistant UI, and adding additional frameworks for future integration by the VA, which so desperately needs to move into the digital age. ## What it does - Serves as a foundation for health assistance via digital access to physical, mental, and emotional help - Priority scheduler (based on severity) for the VA to utilize - Geolocation and voice commands for user accessibility - Provides instant resources for mental challenges such as depression, anxiety, and family issues - Allows veterans to access a network of applications, such as auto-generating healthcare vouchers and requesting care reimbursement ## How I built it We used Java on the mobile side to build a native Android application. Using Google Voice Recognition, we process user speech inputs with our own NLP system, which takes context into account to determine the response. We then push that response back out with TTS for a personal assistant experience. On the web side, we're using Django to run and host an SQLite database to which we send user data when they need to schedule a VA appointment. ## Challenges I ran into We couldn't find a suitable NLP API that understood our context, so we had to create our own. ## What's next for Assist We anticipate an opportunity for this app to grow in capabilities as new policies push for system reform. We developed not only several front-end capabilities and useful tools that are instantly usable, but also a framework for future integration into the VA's system for the automation and digitization of complicated paper processes.
## Inspiration Knowing that there are many students who cannot afford a good education due to the Covid-19 pandemic, our team thinks it is crucial to create a charity platform providing online tutors and video courses for free. ## What it does It is a charity platform providing online tutors and video courses for free. ## How we built it We used Figma for design and React for the front end. We also used AWS and Docker, and implemented APIs for video chat. ## Challenges we ran into Time management. ## Accomplishments that we're proud of We successfully created the video chat function without prior knowledge. And in general, it is exciting to form an idea and implement it in 36 hours! ## What we learned We learned more about back-end services. ## What's next for Student Division: Free Online Tutors for Minority Students We will finish building the platform, start hiring tutors, and market the product.
## Inspiration The best way to learn to code is usually through trial and error. As a team, we all know firsthand how hard it can be to maintain the proper standards, techniques, and security practices necessary to keep your applications secure. SQLidify is a teaching tool and a security tool all in one, with the goal of helping coders keep their applications secure. ## What it does SQLidify uses our own unique dataset and training model, which consists of over 250 labelled data entries, to identify SQL vulnerabilities in an application. To use it, simply paste your code into our website, where our machine learning model will identify vulnerabilities in your back-end code and then suggest strategies to fix these issues. ## How we built it We used a Flask, Python-based backend that handles API calls from a front end designed in React.js and Tailwind CSS. When called, our Python backend reads data from users and then sends the data to our AI model. At the same time, our own simplified natural language processing model identifies keywords in specific lines of code and sends these lines individually to our AI model. The model makes a prediction for each, and the predictions are compared to help improve reliability. If the predictions don't match, further instructions are sent to the user in order to improve our accuracy. The AI is designed using Cohere's classification workflow. We generated over 250 code snippets labeled as either vulnerable or safe. We have another model that is triggered if the code is determined to be vulnerable, which will then generate 3 appropriate options to resolve the vulnerabilities. ## Challenges we ran into We had trouble setting up Cohere and getting it to integrate with our application, but we were luckily able to resolve the issues in time to build our app. We also had a lot of trouble finding a dataset fit for our needs, so we ended up creating our own from scratch. ## Accomplishments that we're proud of Despite setbacks, we managed to integrate the AI, the React frontend, and the Flask backend all together in less than 24 hours. ## What we learned We all learned so much about machine learning, and Cohere in particular, since none of us had experience working with AI before McHacks. ## What's next for SQLidify Expansion. We hope to eventually integrate detection for other vulnerabilities such as buffer overflows and many more.
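For illustration, here is a hedged sketch of how code snippets could be sent to Cohere's classification workflow, assuming the v1 Python SDK's classify endpoint. The example snippets, labels, and key are made up placeholders standing in for the 250+ labelled entries the real model was trained on; the team's actual examples and any fine-tuned model may differ.

```python
# Hedged sketch: classify a code snippet as "vulnerable" or "safe"
# via Cohere's classify endpoint (assuming the v1 Python SDK).
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder credential

# Tiny stand-in for the 250+ labelled snippets used to train the real model.
examples = [
    cohere.ClassifyExample(text='cursor.execute("SELECT * FROM users WHERE id=" + user_id)', label="vulnerable"),
    cohere.ClassifyExample(text='query = "DELETE FROM logs WHERE day=\'" + day + "\'"', label="vulnerable"),
    cohere.ClassifyExample(text='cursor.execute("SELECT * FROM users WHERE id=%s", (user_id,))', label="safe"),
    cohere.ClassifyExample(text='cursor.execute("DELETE FROM logs WHERE day=?", (day,))', label="safe"),
]

def check_snippet(snippet: str) -> str:
    response = co.classify(inputs=[snippet], examples=examples)
    return response.classifications[0].prediction  # "vulnerable" or "safe"

# print(check_snippet('db.execute("SELECT * FROM orders WHERE name=" + name)'))
```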
losing
## Inspiration We wanted to find a way to incorporate music into our everyday lives in a way that is also applicable in many environments. First, we imagined how music could indicate fatigue in endurance tasks, such as weight-lifting, but we decided to demonstrate this functionality with varying sound. ## What it does Our product records an acceleration signal and recreates it in real time as an instrument-like sine wave based on movement. ## How we built it We built it by attaching an accelerometer to an Arduino, spending the majority of our time working out the math in the IDE to produce instrument-like tones. ## Challenges we ran into We were unfamiliar with most hardware and software applications, so we had to learn them ourselves before we could use them for our project. Sometimes, we did not know what was wrong and had to troubleshoot, which took a lot of patience and time. ## Accomplishments that we're proud of We are proud of our ability to bring our idea to life without changing it too much. ## What we learned We learned how to combine our skills as a team and utilize our strengths to create a product we are all proud of. ## What's next for M-Wrist One thing we can work on with M-Wrist is adding a function where users can remix their default sounds. Another thing we can add is Bluetooth functionality so that the sounds can be recorded and used to track movement patterns for health, as well as to make sounds for musical applications.
## Inspiration For this project, we wanted to work with hardware, and we wanted to experiment with processing sound or music on a microcontroller. From this came the idea of using a gyroscope and accelerometer as a means to control music. ## The project The Music Glove has an accelerometer and gyroscope module combined with a buzzer. The user rotates their hand to control the pitch of the music (cycling through a scale). We planned to combine this basic functionality with musical patterns (chords and rhythms) generated from a large table of scales stored in flash memory, but we were unable to make the gyroscope work properly with this code. ## Challenges We were initially very ambitious with the project. We wanted to have multiple speakers/buzzers to make chords or patterns that could be changed with one axis of hand rotation ("roll"), as well as a speaker to establish a beat that could be changed with the other axis of hand rotation ("pitch"). We also wanted to have a third input (in the form of a keypad with multiple buttons or a potentiometer) to change the key of the music. Memory constraints on the Arduino initially made it more appealing to use an MP3-playing module to hold different background rhythm patterns. However, this proved to be difficult to combine with the gyroscope functionality, especially considering the gyroscope module would sometimes crash inexplicably. It appears that the limited number of timers on an Arduino Uno/Nano (three in total) can make combining different functionalities into a single sketch difficult. For example, each tone being played at a set time requires its own timer, but serial communication often also requires a timer. We were also unable to add a third input (keypad) for changing the key separately. ## What we learned We learned how to use a gyroscope and manipulate tones with an Arduino. We learned how to use various libraries to create sounds and patterns without a built-in digital-to-analog converter on the Arduino. We also learned how to use a keypad and an MP3 module, and how to work around issues when combining different functionalities. Most of all, we had fun!
## Inspiration: ``` Sound is a precious thing. Unfortunately, some people are unable to experience it as a result of hearing loss or of being hearing impaired. However, we firmly believe that communication should be as simple as a flick of the wrist, and we aim to bring simplicity and ease to those affected by hearing loss and impairment. ``` ## What it does: The HYO utilizes hand gesture input and relays it to an Android-powered mobile device or PC. The unique set of gestures allows a user to select a desired phrase or sentence to communicate with someone using voiceover. ## How we built it: HYO = Hello + Myo. We used the C++ programming language to write the code for the PC to connect to the MYO, recognize the gestures in a multilevel menu, and output instructions. We developed the idea to create and deploy an Android app for portability and ease of use. ## Challenges we ran into: We encountered several challenges along the way while building our project. We spent large amounts of time troubleshooting code issues. The HYO was programmed using the C++ language, which is complex in its concepts. After long hours of continuous programming and troubleshooting, we were able to run the code and connect the MYO to the computer. ## Accomplishments that we're proud of: 1) Not sleeping and still working productively. 2) Working in a diverse group of four complete strangers and being able to collaborate with them and work towards a single goal of success despite belonging to different programs and having completely different skill sets. ## What we learned: 1) GitHub is an extremely valuable tool. 2) Learned new concepts in C++. 3) Gained experience working with the MYO armband and Arduino and Edison microcontrollers. 4) How to build an Android app. 5) How to host a website. ## What's next for HYO WORLD HYO should be optimized to fit the criteria of the average technology consumer. Hand gestures can be implemented to control apps via the MYO armband, a useful and complex piece of technology that can be programmed to recognize various gestures and convert them into instructions to be executed.
losing
## Inspiration Save the World is a mobile app meant to promote sustainable practices, one task at a time. ## What it does Users begin with a colorless Earth prominently displayed on their screens, along with a list of possible tasks. After completing a sustainable task, such as saying no to a straw at a restaurant, users obtain points towards their goal of saving this empty world. As points are earned and users level up, they receive lively stickers to add to their world. Suggestions for activities are given based on the time of day. Users can also connect with their friends to compete for the best scores and sustainability. Both the fun stickers and the friendly competition encourage heightened sustainability practices from all users! ## How I built it Our team created an iOS app with Swift. For the backend of tasks and users, we utilized a Firebase database. To connect these two, we used CocoaPods. ## Challenges I ran into Half of our team had not used iOS before this hackathon. We worked together to get past this learning curve and all contribute to the app. Additionally, we created a setup in Xcode for the wrong type of database at first. At that point, we made a decision to change the Xcode setup instead of creating a different database. Finally, we found that it is difficult to use CocoaPods in conjunction with GitHub, because every computer needs to do the pod init anyway. We carefully worked through this issue along with several other merge conflicts. ## Accomplishments that I'm proud of We are proud of our ability to work as a team even with the majority of our members having limited Xcode experience. We are also excited that we delivered a functional app with almost all of the features we had hoped to complete. We had some other project ideas at the beginning but decided they did not have a high enough challenge factor; the ambition worked out and we are excited about what we produced. ## What I learned We learned that it is important to triage which tasks should be attempted first. We attempted to prioritize the most important app functions and leave some of the fun features for the end. It was often tempting to try to work on exciting UI or other finishing touches, but having a strong project foundation was important. We also learned to continue to work hard even when the due date seemed far away. The first several hours were just as important as the final minutes of development. ## What's next for Save the World Save the World has some wonderful features that could be implemented after this hackathon. For instance, the social aspect could be extended to give users more points if they meet up to do a task together. There could also be forums for sustainability blog posts from users, as well as chat areas. Additionally, the app could recommend personal tasks for users and start to “learn” their schedule and most-completed tasks.
## Inspiration When thinking about how we could make a difference within local communities impacted by Covid-19, what came to mind were our frontline workers. Our doctors, nurses, grocery store workers, and Covid-19 testing volunteers have tirelessly been putting themselves and their families on the line. They are the backbone and heartbeat of our society during these past 10 months and counting. We want them to feel the appreciation and gratitude they deserve. With our app, we hope to bring moments of positivity and joy to our frontline workers during these difficult and trying times. Thank you! ## What it does Love 4 Heroes is a web app to support our frontline workers by expressing our gratitude for them. We want to let them know they are loved, cared for, and appreciated. In the app, a user can make a thank-you card, save it, and share it with a frontline worker. A user's card is also posted to the "Warm Messages" board, a community space where you can see all the other thank-you cards. ## How we built it Our backend is built with Firebase. The front-end is built with Next.js, and our design framework is Tailwind CSS. ## Challenges we ran into * Working with different time zones [a 12-hour time difference]. * We ran into tricky issues figuring out how to save our thank-you cards to a user's phone or laptop. * Persisting likes with Firebase and local storage ## Accomplishments that we're proud of * Our first hackathon + We're not in the same state, but came together to be here! + Some of us used new technologies like Next.js, Tailwind CSS, and Firebase for the first time! + We're happy with how the app turned out from a user's perspective + We liked that we were able to create our own custom card designs and logos, utilizing custom-made design textiles ## What we learned * New technologies: Next.js, Firebase * Managing time-zone differences * How to convert a DOM element into a .jpeg file * How to make a responsive web app * Coding endurance and mental focus * Good Git workflow ## What's next for love4heroes More cards, more love! Hopefully, we can share this with a wide community of frontline workers.
## Inspiration As university students, emergency funds may not be at the top of our priority list; however, when the unexpected happens, we are often left wishing that we had saved for an emergency when we had the chance. When we thought about this as a team, we realized that the feeling of putting a set amount of money away every time income rolls in may create feelings of dread rather than positivity. We then brainstormed ways to make saving money in an emergency fund more fun and rewarding. This is how Spend2Save was born. ## What it does Spend2Save allows the user to set up an emergency fund. The user inputs their employment status, baseline amount, and goal for the emergency fund, and the app creates a plan for them to achieve their goal! Users create custom in-game avatars that they can take care of. The user can unlock avatar skins, accessories, pets, etc. by "buying" them with funds they deposit into their emergency fund. The user will have milestones or achievements for reaching certain sub-goals, which also give them extra motivation if their emergency fund falls below the baseline amount they set up. Users will also be able to change their employment status after creating an account, in the case of a new job or career change, and the app will adjust their deposit plan accordingly. ## How we built it We used Flutter to build the interactive prototype of our Android application. ## Challenges we ran into None of us had prior experience using Flutter, let alone with mobile app development. Learning to use Flutter in a short period of time was easily the greatest challenge that we faced. We originally had more features planned, with an implementation of data being stored using Firebase, so having to compromise on our initial goals and focus our efforts on what was achievable in this time period proved to be challenging. ## Accomplishments that we're proud of This was the first mobile app we developed (as well as our first hackathon). ## What we learned This being our first hackathon, almost everything we did provided a learning experience. The skills needed to quickly plan and execute a project were put into practice and given opportunities to grow. Ways to improve efficiency and team efficacy can only be learned through experience in a fast-paced environment such as this one. As mentioned before, with all of us using Flutter for the first time, anything we did involving it was something new. ## What's next for Spend2Save There is still a long way for us to grow as developers, so the full implementation of Spend2Save will rely on our progress. We believe there is potential for such an application to appeal to its target audience, and so we have planned projections for the future of Spend2Save. These projections include, but are not limited to, plans such as integration with actual bank accounts at RBC.
partial
## Inspiration Inspired by the rise and proliferation of facial recognition technology, we decided to take the increasingly important and nuanced matter of security into our own hands. Swiper No Swiping provides users with the peace of mind of knowing their devices and data will always be safe, whether they are around curious hands or in large crowds. Our hedgehog logo mirrors the tale of the hedgehog and the fox, both in the comedic sense - as the popular character in Dora the Explorer was a fox named Swiper - and in a metaphorical sense - as the fox represents the versatility of big data and the hedgehog represents the strength and resilience of security. ## What it does Swiper No Swiping uses AWS Rekognition and multiple AWS services to lock the device when an unauthorized face is using it. The faces of close ones can be added to the accepted list, and the feature can be toggled on and off to provide our users with utmost versatility for their unique needs. Further, if the phone has been stolen and locked, the user will immediately receive an email notification through AWS SNS and will be able to log into our companion website for further support. On the website, users will be able to access their phone's location services to track down the phone and see updated photos of the potential thief taken by the front camera. Not only does this technology protect data and increase the chances of reclaiming a stolen phone, but its proliferation will also discourage phone theft. ## How we built it Our project represented a unique challenge of having equal importance in web and mobile development - which were integrated through Amazon Web Services. The core of our technology is based upon AWS Rekognition, AWS DynamoDB, AWS S3, AWS Cognito, and AWS SNS, which provided us the tools for image recognition, data and file storage, secure authentication, and detailed notifications, all of which are integral to the product. Then, we used Android Studio, Java, and XML to build the mobile app and connect the user to the tech. As for the website, we used JavaScript, HTML, and CSS to develop the content and styling, then used Node.js to retrieve values from AWS DynamoDB and AWS S3 and connect them with the rest of our project. ## Challenges we ran into From the initial moments of the hackathon, we ran into issues regarding authentication of users, multithreading of 4+ threads, and modification of services to behave as activities. Though one of our members has worked with AWS briefly in the past, this experience still created many puzzling and rewarding moments. As we began web development, further issues ensued as neither of us had experience with back-end development. As such, learning Node.js and using it to serve HTML and CSS and retrieve data from AWS was a very intensive and at times frustrating process. Furthermore, the nature of asynchronous functions resulted in difficulties that caused us to pivot our designs last minute. ## Accomplishments that we're proud of After this very tiring and fulfilling weekend, we are proud to have concluded with a project that consists of both a mobile app and a website, and with a slight disbelief at what complicated issues we have overcome and what simple matters we still do not understand. We're proud to have learned back-end development and gained versatility in our respective development skills. ## What's next for Swiper No Swiping To be entirely satisfied with a project made in 36 hours, or to be entirely satisfied with a project at all, is an unlikely occurrence.
We hope to expand the capabilities of Swiper No Swiping by creating an option that locks certain apps, such as photos or messages, rather than the entire phone, and we plan to leverage AWS's resources to scale the project and bring it to more users. Also, we are aware that the website may result in legal issues, so we plan to gain a better understanding of potential legal setbacks and restrictions so we can still deliver a product that protects our users. Furthermore, we wish to fine-tune the user interfaces to be more cohesive with each other and create a more enjoyable experience.
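As a concrete picture of the Rekognition check at the heart of the app, here is a minimal Python (boto3) sketch using the CompareFaces API. The region, file paths, threshold, and the lock_device_and_notify hook are placeholders; the actual app performs this flow through its AWS-integrated Android and web components rather than a local script.

```python
# Sketch: compare a front-camera frame against the owner's reference photo.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")  # placeholder region

def is_authorized(owner_image_path: str, camera_image_path: str, threshold: float = 90.0) -> bool:
    with open(owner_image_path, "rb") as src, open(camera_image_path, "rb") as tgt:
        response = rekognition.compare_faces(
            SourceImage={"Bytes": src.read()},
            TargetImage={"Bytes": tgt.read()},
            SimilarityThreshold=threshold,
        )
    # Any match above the threshold means the device can stay unlocked.
    return len(response["FaceMatches"]) > 0

# if not is_authorized("owner.jpg", "front_camera_frame.jpg"):
#     lock_device_and_notify()  # hypothetical app hook
```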
# SwipeShield Project Story ## Inspiration The idea for SwipeShield was born out of a growing concern for mobile security. With the rise in smartphone theft and unauthorized access, we wanted to create a solution that improves user authentication while maintaining convenience. The concept of using unique swipe patterns as a biometric measure intrigued us, leading us to explore how machine learning could be used to analyze these patterns effectively. ## What it does SwipeShield is a user verification system that analyzes swipe gestures on mobile devices to distinguish between legitimate users and potential imposters. By employing machine learning algorithms, our app identifies swipe patterns and provides real-time feedback on whether the last few swipes match the user's behavior, improving security without compromising user experience. ## How we built it We began by gathering swipe data using the Android Debug Bridge (ADB) to collect the necessary input for our machine learning model. After cleaning and organizing the data, we employed K-Means clustering to analyze swipe patterns and classify them into user and non-user categories. The project was developed using Python for the backend and Streamlit for the interactive web application, allowing us to visualize data and results in a user-friendly manner. ## Challenges we ran into One of the primary challenges was ensuring the accuracy of our machine learning model. Tuning the parameters for K-Means clustering took several iterations, as we needed to balance the sensitivity of the model with its ability to avoid false positives. Additionally, integrating the data collection from ADB into a seamless workflow proved difficult, requiring us to troubleshoot various technical issues along the way. ## Accomplishments that we're proud of We successfully built a functional prototype that demonstrates the potential of using swipe patterns for user verification. Our interactive web application showcases real-time analysis of swipe data, allowing users to visualize their swipe patterns and understand the clustering results. Furthermore, we are proud of our ability to work collaboratively as a team, leveraging each member's strengths to overcome obstacles. ## What we learned Throughout this project, we gained valuable insights into the process of collecting and analyzing biometric data. We deepened our understanding of machine learning algorithms, particularly in the context of unsupervised learning. Additionally, we learned the importance of user-centered design, ensuring that our application is both secure and user-friendly. ## What's next for SwipeShield Looking ahead, we plan to refine our model by incorporating more diverse datasets to improve its accuracy and reliability. We also aim to explore additional biometric authentication methods, such as touch pressure sensitivity and swipe speed, to improve security further. Our goal is to make SwipeShield a comprehensive security solution that adapts to users' unique behaviors while remaining effortless to use.
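A small Python sketch of the K-Means step described above: the owner's swipe features are clustered with scikit-learn, and a new swipe is judged by its distance to the nearest cluster center. The feature columns, sample values, and threshold rule are illustrative assumptions rather than SwipeShield's exact pipeline.

```python
# Sketch: cluster owner swipes, then flag swipes that fall far from the clusters.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [duration_ms, length_px, avg_speed, start_x, start_y]  (assumed features)
owner_swipes = np.array([
    [180, 420, 2.3, 120, 900],
    [200, 450, 2.2, 130, 880],
    [170, 400, 2.4, 110, 910],
    [190, 430, 2.3, 125, 905],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(owner_swipes)
# Derive a "normal" distance threshold from the owner's own swipes.
threshold = kmeans.transform(owner_swipes).min(axis=1).max() * 1.5

def looks_like_owner(swipe: np.ndarray) -> bool:
    distance = kmeans.transform(swipe.reshape(1, -1)).min()
    return distance <= threshold

print(looks_like_owner(np.array([185, 425, 2.3, 118, 895])))   # likely True
print(looks_like_owner(np.array([60, 900, 9.0, 500, 200])))    # likely False
```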
## Inspiration Ordering delivery and eating out is a major aspect of our social lives. But when healthy eating and dieting come into play, they interfere with our ability to eat out and hang out with friends. With a wave of fitness hitting our generation like a storm, we have to preserve our social relationships while allowing health-conscious people to feel at peace with their dieting plans. With NutroPNG, we enable these differences to be settled once and for all by allowing health freaks to keep up with their diet plans while still making restaurant eating possible. ## What it does The user has the option to take a picture or upload their own picture using the front end of our web application. With this input, the backend detects the foods in the photo and labels them through AI image processing using the Google Vision API. Finally, with the CalorieNinja API, these labels are sent to a remote database where we match them up to generate the nutritional contents of the food, and we display these contents to our users in an interactive manner. ## How we built it Frontend: Vue.js, Tailwind CSS Backend: Python Flask, Google Vision API, CalorieNinja API ## Challenges we ran into As many of us are first-year students, learning while developing a product within 24 hours was a big challenge. ## Accomplishments that we're proud of We are proud to implement AI in a capacity that assists people in their daily lives, and to hopefully allow this idea to improve people's relationships and social lives while still maintaining their goals. ## What we learned As most of our team are first-year students with minimal experience, we leveraged our strengths to collaborate effectively. As well, we learned to use the Google Vision API with cameras, and we are now able to do even more. ## What's next for NutroPNG * Calculate the sum of calories, etc. * Use image processing to estimate serving sizes * Implement the technology into prevalent nutrition trackers, e.g. Lifesum, MyPlate, etc. * Collaborate with local restaurant businesses
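To make the two-step pipeline concrete, here is a hedged Python sketch: Google Cloud Vision labels the photo, and the labels are then sent to the CalorieNinja nutrition endpoint. The CalorieNinja URL and header follow its public documentation as we understand it; credentials, file names, and the label handling are placeholders rather than the team's exact backend.

```python
# Hedged sketch: label a food photo, then look up nutrition facts for the labels.
import requests
from google.cloud import vision

def detect_food_labels(image_path: str):
    client = vision.ImageAnnotatorClient()  # uses GOOGLE_APPLICATION_CREDENTIALS
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [label.description for label in response.label_annotations]

def get_nutrition(labels, api_key: str):
    resp = requests.get(
        "https://api.calorieninjas.com/v1/nutrition",
        params={"query": " ".join(labels)},
        headers={"X-Api-Key": api_key},  # placeholder key
    )
    resp.raise_for_status()
    return resp.json()["items"]

# labels = detect_food_labels("plate.jpg")
# print(get_nutrition(labels, "CALORIENINJAS_KEY"))
```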
losing
## Inspiration In North America alone, there are over 58 million people of all ages experiencing limited hand mobility. Our team strives to better facilitate daily life for people who have the daunting task of overcoming their disabilities. We identified a disconcerting trend that rehabilitation options for people with limited motor function were too expensive. We challenged ourselves with developing an inexpensive therapeutic device to train fine motor skills, while also preserving normative routines within daily life. ## What it does Xeri is a custom input device. Xeri allows for the translation of simple gestures into commands for a computer. Xeri requires a user to touch a specific digit to their thumb in order to perform actions like right-click, left-click, scroll up, and scroll down. This touching action stimulates the muscle-mind connection, enhancing its ability to adapt to changes. With prolonged and consistent use of Xeri, patients will be able to see an improvement in their motor control. Xeri transforms the simple routine action of browsing the internet into a therapeutic medium that feels normal to the patient. ## How we built it Xeri is composed of an ESP32 Core v2, an MPU6050 gyroscope, four analog sensors, and four custom analog contacts. The ESP32 allows for a Bluetooth connection, the MPU allows for tracking of the hand, and the sensors and contacts allow for touch control. Xeri was developed in three prototype stages: P0, P1, and P2. In P0, we developed our custom analog sensors for our hands, created a rudimentary cardboard brace, and determined how to evaluate input. In P1, we replaced our cardboard brace with a simple glove, modified to fit the needs of the hardware. In P2, we incorporated all elements of our hardware onto our glove and fully calibrated our gyroscope. Ultimately, we created a device that allows therapeutic motion to be processed into computer input. ## Challenges we ran into Xeri's development was not a trouble-free experience. We first encountered issues developing our custom analog contacts. We had trouble figuring out the necessary capacitance for the circuit, but through trial and error, we eventually succeeded. The biggest setback we had to mitigate was integrating our gyroscope into our code. The gyroscope we were using was not only cheap, but the chip was also defective. Our only solution was to work around this damage by reverse-engineering the supporting libraries once again. ## Accomplishments that we're proud of Our greatest achievement by far was being able to create a fully operable glove that could handle custom inputs. Although Xeri is focused on hand mobility, the device could be implemented for a variety of uses. Xeri's custom-created analog contacts were another major achievement of ours due to their ability to measure an analog signal using a Red Bull can. One of our developers was very inspired by what we had built and spent some time researching and designing an Apple Watch app that enables the watch to function as a similar device. This implementation can be found in our GitHub for others to reference. ## What we learned During the development process of Xeri, it was imperative that we be innovative with the little hardware we were able to obtain. We learned how to read and reverse-engineer hardware libraries. We also discovered how to create our own analog sensors and contacts. Overall, this project was incredibly rewarding, as we truly developed a firm grasp of hardware devices. ## What's next for Xeri?
Xeri has a lot of room for improvement. Our first future development will be to make the device fully wireless, as we did not have a remote power source to utilize. Later updates would include a replacement of the gyro sensor, a slimmed-down, fully uniform version, the ability to change the resistance of each finger to promote muscle growth, and more custom inputs for greater accessibility.
Additional Demos: <https://youtu.be/Or4w_bi1oXQ> <https://youtu.be/VGIBWpB8NAU> <https://youtu.be/TrJp2rsnL6Y> ## Inspiration Helping those with impairments is a cause that is close to all of the team members of Brainstorm. The primary inspiration for this particular solution came from a recent experience of one of our team members, Aditya Bora. Aditya's mother works as a child psychiatrist in one of the largest children's hospitals in Atlanta, GA. On a recent visit to the hospital, Aditya was introduced to a girl who was paralyzed from the neck down due to a tragic motor vehicle accident. While implementing a brain-computer interface to track her eyes seemed like an obvious solution, Aditya learned that such technology had yet to become commercially available for those with disabilities such as quadriplegia. After sharing this story with the rest of the team and learning that others had also encountered a lack of assistive technology for those with compromised mobility, we decided that we wanted to create a novel BCI solution. Imagine the world of possibilities that could be opened for them if this technology allowed interaction with a person's surroundings by capturing inputs via neural activity. Brainstorm hopes to lead the charge for this innovative technology. ## What it does We built a wearable electrode headset that gets neural data from the brain and decodes it into 4 discrete signals. These signals represent the voltage corresponding to neuron activity in each electrode's surrounding area. In real time, the computer processes the electrical activity data through a decision tree model to predict the desired action of the user. This output can then be used for a number of different applications which we developed. These include: * RC Wheelchair Demo: This system was designed to demonstrate that control of a motorized wheelchair would be possible via a BCI. In order to control the RC car via the BCI inputs, microcontroller GPIO pins were connected to the remote control's input pins, allowing control of the RC system via the parsed output of the BCI rather than manual control. The microcontroller reads in the BCI input over the serial connection to the computer and activates the GPIO pins that correspond to control of the RC car. * Fan Demo: This was a simple circuit that leverages a DC motor to simulate a user controlling a fan or another device that can be controlled via a binary input (i.e., an on/off switch). ## Challenges we ran into One of our biggest challenges was in decoding intent from the frontal lobe. We struggled with defining the optimal placement of electrodes in our headset, and then with actually translating the electroencephalography data into discrete commands. We spent a lot of time reading relevant literature in neurotechnology, working on balancing signal-to-noise ratios, and applying signal processing and transform methods to better model and understand our data. ## What we learned We're proud of developing a product that gives quadriplegics the ability to move solely based on their thoughts, while also creating a companion platform that enables a smart home environment for quadriplegics. We believe that our novel algorithm, which is able to make predictions about a user's thoughts, can be used to make completing simple everyday tasks easier for those who suffer from impairments and to drastically improve their quality of life. ## What's next for Brainstorm We've built out our tech and hardware platform for operation across multiple devices and use-cases.
In the future, we hope our technology will have the ability to detect precise motor movements and understand the complex thoughts that occur in the human brain. We anticipate that unlocking the secrets of the human brain will be one of the most influential endeavors of this century.
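For illustration, here is a toy Python sketch of the real-time decision-tree step described above: simple per-channel statistics are computed over a window of the four electrode signals and classified into one of the discrete intents. The features, window size, labels, and random training data are made up; the team's actual preprocessing and feature extraction are more involved.

```python
# Toy sketch: window EEG-like voltages -> features -> decision tree intent.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def window_features(window: np.ndarray) -> np.ndarray:
    """window: (samples, 4 channels) of voltages -> simple per-channel stats."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

rng = np.random.default_rng(0)
# Pretend training data: 200 windows of 250 samples x 4 channels, 4 intents.
X = np.array([window_features(rng.normal(size=(250, 4))) for _ in range(200)])
y = rng.integers(0, 4, size=200)  # e.g. 0=left, 1=right, 2=forward, 3=stop (assumed labels)

clf = DecisionTreeClassifier(max_depth=5).fit(X, y)

new_window = rng.normal(size=(250, 4))
intent = clf.predict(window_features(new_window).reshape(1, -1))[0]
print("predicted intent:", intent)
```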
## Inspiration Around 40% of the lakes in America are too polluted for aquatic life, swimming, or fishing. Although children make up 10% of the world's population, over 40% of the global burden of disease falls on them. Environmental factors contribute to more than 3 million children under age five dying every year. Pollution kills over 1 million seabirds and 100 million mammals annually. Recycling and composting alone kept 85 million tons of waste from being dumped in 2010. Currently there are over 500 million cars in the world; by 2030 the number will rise to 1 billion, thereby doubling pollution levels. High-traffic roads have more concentrated levels of air pollution, so people living close to these areas have an increased risk of heart disease, cancer, asthma, and bronchitis. Inhaling air pollution takes away at least 1-2 years of a typical human life. 25% of deaths in India and 65% of deaths in Asia are a result of air pollution. Over 80 billion aluminium cans are used every year around the world. If you throw away aluminium cans, they can stay in that can form for 500 years or more. People aren't recycling as much as they should; as a result, the rainforests are being cut down at approximately 100 acres per minute. On top of this, with me being near the Great Lakes and Neeral being in the Bay Area, we have both seen not only tremendous amounts of air pollution, but also marine pollution and pollution in the great freshwater lakes around us. As a result, this inspired us to create this project. ## What it does The React Native app connects with the website Neeral made in order to create a comprehensive solution to this problem. There are five main sections in the React Native app: The first section is an area where users can collaborate by creating posts in order to reach out to others to meet up and organize events to reduce pollution. One example of this could be a passionate environmentalist who is organizing a beach trash pickup and wishes to bring along more people. With the help of this feature, more people would be able to learn about this and participate. The second section is a petitions section where users have the ability to support local groups or sign a petition in order to push for change. These petitions include placing pressure on large corporations to reduce carbon emissions and so forth. This allows users to take action effectively. The third section is the forecasts tab, where users are able to retrieve data regarding various pollution data points. This includes the ability for the user to obtain heat maps for air quality, pollution, and pollen levels, and to retrieve recommended procedures for not only the general public but also special-case scenarios, using APIs. The fourth section is a tips and procedures tab for users to be able to respond to certain situations. They are able to consult this guide and find the situation that matches theirs in order to find the appropriate action to take. This helps the end user stay calm during situations such as the one happening in California with dangerously high levels of carbon. The fifth section is an area where users are able to use machine learning to figure out whether they are in a place of trouble. In many instances, people do not know exactly where they are, especially when travelling or going somewhere unknown.
With the help of machine learning, the user is able to enter certain information regarding their surroundings, and the algorithm decides whether they are in trouble. The algorithm has 90% accuracy and is quite efficient. ## How I built it For the React Native part of the application, I will break it down section by section. For the first section, I simply used Firebase as a backend, which allowed a simple, easy, and fast way of retrieving and pushing data to cloud storage. This allowed me to spend time on other features, and due to my ever-growing experience with Firebase, this did not take too much time. I simply added a form which pushed data to Firebase; when you go to the home page, it refreshes and you can see that the cloud was updated in real time. For the second section, I used NativeBase in order to create my UI and found an assortment of petitions, which I then linked, adding images from their websites, in order to create the petitions tab. I then used expo-web-browser to deep-link the websites, opening the links in Safari from within the app. For the third section, I used breezometer.com's pollution API, air quality API, pollen API, and heat map APIs in order to create an assortment of data points, health recommendations, and visual graphics to represent pollution in several ways. The APIs also provided me with information such as the most common pollutant and the protocols that different age groups and people with certain conditions should follow. With this extensive API, there were many endpoints I wanted to add in, but not all were added due to lack of time. For the fourth section, it is very similar to the second section, as it is an assortment of links, proofread and verified to be truthful sources, so that the end user has a procedure to turn to in extreme emergencies. As we see horrible things happen, such as the wildfires in California, air quality becomes a serious concern for many, and as a result these procedures help the user stay calm and knowledgeable. For the fifth section, Neeral please write this one since you are the one who created it. ## Challenges I ran into API query bugs were a big issue, both in formatting the query and in mapping the data back into the UI. It took some time and made us run until the end, but we were still able to complete our project and goals. ## What's next for PRE-LUTE We hope to use this in areas where there is commonly much suffering due to extravagantly large amounts of pollution, such as Delhi, where seeing is practically difficult due to the amount of pollution. We hope to create a finished product and release it to the App Store and Play Store respectively.
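As an illustration of the forecasts section, here is a hedged Python sketch of fetching current air-quality data and health recommendations. The BreezoMeter endpoint, parameters, and response fields reflect its public v2 REST documentation as we understand it; the API key and coordinates are placeholders, and the app itself makes these calls from React Native rather than Python.

```python
# Hedged sketch: fetch current air quality and health recommendations.
import requests

def current_air_quality(lat: float, lon: float, api_key: str):
    resp = requests.get(
        "https://api.breezometer.com/air-quality/v2/current-conditions",
        params={
            "lat": lat,
            "lon": lon,
            "key": api_key,  # placeholder credential
            "features": "breezometer_aqi,health_recommendations",
        },
    )
    resp.raise_for_status()
    data = resp.json()["data"]
    return data["indexes"]["baqi"]["aqi"], data.get("health_recommendations", {})

# aqi, advice = current_air_quality(37.77, -122.42, "BREEZOMETER_KEY")
# print("AQI:", aqi)
# print(advice.get("general_population", "no advice returned"))
```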
partial
# seedHacks

Drone-mounted tree planter with a splash of ML magic!

## Description

**Our planet's in dire straits.** Over the past several decades, as residential, commercial, and industrial demands have skyrocketed across industries around the globe, deforestation has become a major problem facing humanity. Though we depend on trees and forests for so much, they seem to be one of our fastest-depleting natural resources. As our primary provider of oxygen and one of our biggest global carbon sinks, this is very dangerous news.

**seedHacks is a tool to help save the world.** Through the use of cloud-based image classification, live video feed, geospatial optimization, and robotic flight optimization, we've made seedHacks to facilitate reforestation at a rate and efficiency that people might not be able to offer. Using our tech, a drone collecting simple birds-eye images of a forest can compute the optimal positions for seeds to be planted, aiming to approach a desired forest density collected from the user. Once it has this map, planting them's just a robotics problem! Easy, right?

## How did we build?

We broke the project up into three main parts: video feed and image collection, image tagging and seed location optimization, and user interface/display. To tackle image/video, we landed on using pygame to set up a constant feed and collect images from that feed upon user request. We then send those captured images to a Microsoft cloud computing server, where we trained an object detection model using Azure's Custom Vision platform, which returns a tagged image with locations of the trees in the overhead image. Finally, we send the results to an optimization algorithm that utilizes all the free space possible, as well as some distance constraints, to fill the available space with as many trees as possible. All this was wrapped up in an elegant and easy-to-interpret UI that allows us to work together with the expertise of users to make the best end result possible!

## Technical Notes

* Azure Custom Vision was used to train an object detection model that could label "tree" and "trees". We used about 33 images found online to train this machine learning model, resulting in a precision of 82.5%.
* We used the Custom Vision API to send our aerial-view images of forests to the prediction endpoint, which returned predictions consisting of confidence level, label, and a bounding box.
* We then parsed the output of the object detection by creating a 2D numpy array in Python representing the original image. We filled indices of the array with 1's where pixels were labeled as "tree" or "trees" with at least 50% confidence. At the same time, we extracted the max width and height of the tree canopies to automate the process for users. Users can input a buffer, as a percentage, which increases the bounding box for tree growth based on the current density/present species; this is especially important if the roots of the tree need space to grow or the tree species is competitive.
* After the 2D array was filled with pre-existing trees, we iterated through the array to find places where new trees could be planted such that there was enough space for each tree to mature to its full canopy size. We labeled these indices with 2 to differentiate between existing trees and potential new trees. A rough sketch of this grid-filling step is shown below.
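Here is a minimal, simplified sketch of the 2D-array approach from the technical notes above: mark detected canopies in a grid, then scan for empty canopy-sized squares where a new tree could go. The bounding boxes, canopy size, and grid dimensions are illustrative only, not the values used in the actual pipeline.

```python
import numpy as np

def mark_existing_trees(grid: np.ndarray, boxes: list) -> None:
    """Mark detected canopies with 1. Each box is (row0, col0, row1, col1) in pixels."""
    for r0, c0, r1, c1 in boxes:
        grid[r0:r1, c0:c1] = 1

def propose_new_trees(grid: np.ndarray, canopy: int) -> np.ndarray:
    """Greedily mark plantable spots with 2 wherever a canopy-sized square is empty."""
    rows, cols = grid.shape
    for r in range(0, rows - canopy, canopy):
        for c in range(0, cols - canopy, canopy):
            window = grid[r:r + canopy, c:c + canopy]
            if not window.any():                      # no existing or proposed tree here
                grid[r:r + canopy, c:c + canopy] = 2
    return grid

# Tiny 20x20 "image" with one detected tree canopy.
grid = np.zeros((20, 20), dtype=int)
mark_existing_trees(grid, [(2, 2, 8, 8)])
propose_new_trees(grid, canopy=6)
print(int((grid == 2).sum() // (6 * 6)), "new trees proposed")
```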
## What did we learn?

First off, that selecting and training a good object detector can be complicated and mysterious, but definitely worth putting some time into. Though our initial models had promise, we needed to optimize for overhead forest views, which is not something many models are trained on.

Second, that keeping it simple is sometimes better for realizing ideas well. We were very excited to get our hands on a Jetson Nano and trick it out with AlwaysAI's amazing technologies, but we realized some time in that, because we didn't actually end up using the hardware and software to the fullest of their abilities, they might not be the best approach to our particular problems. So, we simplified!

Finally, that the applicability of cutting-edge environmental robotics carries a lot of promise going forward. With not too much time, we managed to develop a somewhat sophisticated system that could potentially have a huge impact - and we hope to be able to contribute more to the field in the future!

## What's next for seedHacks?

Next steps for our project would include:

* Further optimization of seed locations (a more technical approach using botanical/silvological expertise, etc.)
* Training the object detector to pick out individual trees and clusters of trees from an overhead view more and more reliably
* More training on burnt trees and forests
* Robotic pathfinding systems to automatically execute paths through a forest space
* Actuators on drones to make seed planting possible
* Generalizing to aquatic and other ecosystems
## Inspiration

Our inspiration for this project came from the recent spread of pests throughout the U.S., such as Japanese beetles, which have wreaked havoc on crops and home gardens. We personally have gardens at home where we grow plants like cucumbers and tomatoes, and were tired of seeing half-eaten leaves and destroyed plants.

## What it does

PestNet is a computer vision model that takes in videos and classifies up to 19 different pests throughout the video. Our web app creates an easy interface for users to upload their videos, on which we perform inference using our vision model. We provide a list of all pests that appeared on their crops or plants, as well as descriptions of how to combat those pests.

## How we built it

To build the computer vision model, we used Roboflow to develop a classification model from a checkpoint of a custom model pre-trained on the ImageNet dataset. We aggregated images from two datasets and performed data augmentations to create a combined dataset of about 15,000 images. After fine-tuning the model on our data, we achieved a validation accuracy of 93.5% over 19 classes.

To build the web app, we first sketched a wireframe with the intended behavior in Figma. We created it with a React frontend (TypeScript, Vite, Tailwind CSS, Toastify) and a Python backend (FastAPI). The backend contains both the code for the CV model and the Python processing that we use to clean the results of the CV model (a short sketch of this post-processing appears at the end of this write-up). The frontend contains an interactive carousel and a video player for the user to view the results themselves.

## Challenges we ran into

One challenge we ran into was integrating Roboflow into our computer vision workflow, since we had never used the platform and ran into some version control issues. Another challenge was integrating video uploads into our web app; it was difficult at first to get the types right for sending files across our REST API. A third challenge was processing the data from Roboflow and getting it into the right format for our web app. All in all, we learned that communication was key: with so many moving parts, it was important that each person working on a separate part of the project made clear what they had made, so we could connect it all together in the end.

## Accomplishments that we're proud of

We're proud of the web app we made! We're also proud of working together and not giving up. We spent at least the first quarter of the hackathon brainstorming ideas and constantly reviewing them. We were worried that we had wasted too much time thinking of a good plan, but it turned out that our final project used a little bit of all of our previous ideas, and we were able to hack together something by the end!

## What we learned

We learned how to easily and quickly prototype computer vision models with Roboflow, as well as how to perform inference using Roboflow's APIs. We also learned how to make a full-stack web app that incorporates AI models and designs from Figma. Most importantly, we learned how to brainstorm ideas, collaborate, and work under a time crunch :)

## What's next for PestNet

If we had more time, we would develop a more sophisticated computer vision model with more aggregated and labeled data. With the time constraints of the hackathon, we did not have time to manually find and label images that would be more difficult to classify (e.g. where bugs are smaller). We would also deploy the Roboflow model locally instead of using the Hosted Video Inference API, so that we could perform inference in real time.
Finally, we also want to add more features to the webapp, such as a method to jump to different parts of the video based on the labels.
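As referenced above, here is a minimal sketch of the kind of result cleaning the backend performs: collapsing per-frame predictions into a deduplicated, confidence-sorted list of pests. The prediction format shown here ({"class": ..., "confidence": ...}) is an assumption; the real Roboflow payload contains more fields.

```python
from collections import defaultdict

def summarize_predictions(frames: list, min_conf: float = 0.5) -> list:
    """Collapse per-frame predictions into one entry per pest class, keeping the best score."""
    best = defaultdict(float)
    for frame in frames:
        for pred in frame:
            conf = pred.get("confidence", 0.0)
            if conf >= min_conf:
                best[pred["class"]] = max(best[pred["class"]], conf)
    return [{"pest": name, "confidence": round(conf, 3)}
            for name, conf in sorted(best.items(), key=lambda kv: -kv[1])]

frames = [
    [{"class": "aphid", "confidence": 0.91}, {"class": "beetle", "confidence": 0.42}],
    [{"class": "aphid", "confidence": 0.88}, {"class": "beetle", "confidence": 0.73}],
]
print(summarize_predictions(frames))  # aphid first, then beetle
```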
## Inspiration

As is known to all, the wildfire issue has become more and more serious in recent years, especially in California. It is very hard to find out where a wildfire starts and where it is heading, because satellite monitoring has a half-day delay and wildfires often spread too quickly for firefighters to take action in time. So we built path planning software for drones to detect and image wildfires along efficient and safe paths. We want to help draw a better picture of humanity's future, and we are eager to devote ourselves to this spectacular world.

## What it does

It mainly deals with the collection of historical wildfire data in California. Using machine learning algorithms, the drones can then learn the most efficient and economical path to the place where a fire started, so that people can understand the wildfire situation better.

## How we built it

We collected historical wildfire data with the help of datasets provided by APIs like Google Earth and NASA's public open sources, visualized them, figured out the most vulnerable or most important sites, and assigned coordinates to them. We established a UAV path planning model using a machine learning algorithm and the data we gathered, and simulated the hardware side on an NVIDIA JetBot, combining software with hardware. We used the JetBot to stand in for a UAV in terms of motion control, image capture, and identification, since they share similar approaches, algorithms, sensors, and detectors.

## Challenges we ran into

When gathering data, we didn't know which dataset was the most representative: we had fire data for the past 24 hours, the past 7 days, and the past few years. There are lots of factors that can influence the path of drones, and it's hard to take wind, weather, and fire growth all into account. It was also hard to run demos or simulations, since we didn't have a drone that supports deep learning.

## Accomplishments that we're proud of

Convincing visualizations of historical wildfire data; a strong machine learning model that can figure out the most efficient UAV detection path when a wildfire happens; successfully assembling and setting up a ground robot car and teaching it basic motions and image processing, which map directly onto drone operation; consumed a lot of food and drink; enjoyed two days of an amazing TreeHacks; exchanged and sparked new ideas; made new friends......

## What we learned

Too much to cover everything we have learned. To name a few: some useful and powerful API tools, knowledge of computer vision, machine learning, and AR/VR, how to assemble and initialize a ground robot, cooperation and communication skills... and fun things like how to find a good place to sleep safe and sound when there is no bed, and that Stanford is not much worse than Berkeley as we had thought hh....

## What's next for Fire Drones!

Adding various factors like wind direction, wind speed, and vegetation area into the algorithm can make our path model more applicable. We can also build a nice user interface that shows burning areas and the paths of the drones. Images sent by drones can also be displayed and then used to train the deep learning model.
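The write-up does not spell out the exact path planner, so here is a hedged, generic Python sketch of grid-based planning toward a reported fire cell, where each cell carries a risk penalty (for example from historical fire data). This uses plain Dijkstra over a toy grid; it is an illustration of the idea, not the team's actual model.

```python
import heapq

def plan_path(risk, start, goal):
    """Dijkstra over a grid of per-cell risk scores; returns a list of (row, col) steps."""
    rows, cols = len(risk), len(risk[0])
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 + risk[nr][nc]          # unit step cost plus risk penalty
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, cell = [], goal
    while cell != start:
        path.append(cell)
        cell = prev[cell]
    return [start] + path[::-1]

risk = [[0, 0, 5], [0, 9, 5], [0, 0, 0]]             # toy risk map
print(plan_path(risk, (0, 0), (2, 2)))               # skirts around the high-risk cells
```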
partial
## Inspiration

Ever felt a shock when wondering where your monthly salary or pocket money went by the end of the month? When did you spend it? Where did you spend all of it? And why did you spend it? How do you save and avoid making the same mistake again?

There has been endless progress and technical advancement in how we deal with day-to-day financial dealings, be it through Apple Pay, PayPal, or now cryptocurrencies, as well as in the financial instruments that are crucial to creating wealth, such as investing in stocks, bonds, etc. But all of these amazing tools cater to a very small demographic of people. 68% of the world's population is still financially illiterate, and most schools do not discuss personal finance in their curriculum. To let these high-end technologies reach a larger audience, we need to work at the ground level and tackle the fundamental blocks around finance in people's mindsets. We want to use technology to elevate the world's consciousness around personal finance.

## What it does

Where's my money is an app that takes in financial jargon and simplifies it for you, giving you a taste of managing your money without suffering real losses, so that you can make wiser decisions in real life. It is a financial literacy app that teaches you A-Z about managing and creating wealth in a layman's, gamified manner. You start as a person who earns $1,000 monthly; as you complete each module you are hit with a set of questions that make you ponder how you would deal with different situations. After completing each module you are rewarded with bonus money, which can then be used in our stock exchange simulator. You complete courses, earn money, and build virtual wealth. Each quiz also captures data on your overall attitude toward finance: does it lean more toward saving or toward spending?

## How we built it

The project was not simple at all. Keeping in mind the various components of the app, we first created a fundamental architecture for how our app would function - shorturl.at/cdlxE

Then we took it to Figma, where we brainstormed and completed design flows for our prototype. Then we started working on the app:

**Frontend**

* React

**Backend**

* Authentication: Auth0
* Storing user data (courses completed by the user, info on stocks purchased, etc.): Firebase
* Stock price changes: based on real-time prices using a free-tier API (Alpha Vantage/Polygon)

## Challenges we ran into

The time constraint was our biggest challenge. The project was very backend-heavy and it was a big challenge to incorporate all the backend logic.

## What we learned

We researched the state of financial literacy, which helped us make a better product. We also learnt about APIs like Alpha Vantage that provide real-time stock data.

## What's next for Where's my money?

We are looking to complete the backend of the app to make it fully functional. We are also looking forward to adding more course modules for topics like crypto, taxes, insurance, mutual funds, etc.

Domain Name: learnfinancewitheaseusing.tech (Learn-finance-with-ease-using-tech)
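For illustration, here is a minimal Python sketch of fetching a real-time quote in the spirit of the free-tier Alpha Vantage API mentioned above. The GLOBAL_QUOTE endpoint and response field names follow Alpha Vantage's public documentation as I recall it, but should be verified; the key is a placeholder and this is not the app's actual backend code (which is JavaScript).

```python
import requests

ALPHA_VANTAGE_KEY = "YOUR_API_KEY"  # placeholder

def latest_price(symbol: str) -> float:
    """Fetch the latest traded price for a symbol via Alpha Vantage's GLOBAL_QUOTE endpoint."""
    resp = requests.get(
        "https://www.alphavantage.co/query",
        params={"function": "GLOBAL_QUOTE", "symbol": symbol, "apikey": ALPHA_VANTAGE_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    quote = resp.json().get("Global Quote", {})
    return float(quote.get("05. price", 0.0))   # field name per Alpha Vantage's response format

print(latest_price("AAPL"))
```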
## Inspiration

Everybody struggles with their personal finances. Financial inequality in the workplace is particularly prevalent among young females: on average, women in Ontario make 88 cents for every dollar a man makes. This is why it is important to encourage women to become more conscious of their spending habits. Even though budgeting apps such as Honeydue or Mint exist, they depend heavily on self-motivation from users.

## What it does

Our app is a budgeting tool that targets young females with useful incentives to boost self-motivation for their financial well-being. The app features simple scale graphics visualizing the user's financial balancing act. By balancing the scale and achieving their monthly financial goals, users are given various rewards, such as discount coupons or small cash vouchers based on their interests. Users are free to set their goals on their own terms and follow through with them. The app reinforces good financial behaviour by providing gamified experiences with small incentives.

The app will be provided to users free of charge. As with any free service, anonymized user data will be shared with marketing and retail partners for analytics. Discount offers and other incentives could lead to better brand awareness and more spending from our users for participating partners. The customized reward is an opportunity for targeted advertising.

## Persona

Twenty-year-old Ellie Smith works two jobs to make ends meet. The rising cost of living makes it difficult for her to maintain her budget. She heard about this new app called Re:skale that provides personalized rewards just for achieving her budget goals. She signed up after answering a few questions and linking her financial accounts to the app. The app provided a simple balancing-scale animation for immediate visual feedback on her financial well-being, and it frequently offered words of encouragement and useful tips to maximize her chance of success. She especially loves how she can set her goals and follow through on her own terms. The personalized rewards were sweet, and she managed to save on a number of essentials such as groceries. She is now on a 3-month streak with a chance to get better rewards.

## How we built it

We used: React, Node.js, Firebase, HTML & Figma

## Challenges we ran into

We had a number of ideas but struggled to define the scope and topic for the project.

* Different design philosophies made it difficult to maintain a consistent and cohesive design.
* Sharing resources was another difficulty due to the digital nature of this hackathon.
* On the development side, there were technologies that were unfamiliar to over half of the team, such as Firebase and React Hooks. It took a lot of time to understand the documentation and implement it in our app.
* Additionally, resolving merge conflicts proved to be more difficult. The time constraint was also a challenge.

## Accomplishments that we're proud of

* The use of harder technologies, including Firebase and React Hooks
* On the design side, it was great to create a complete prototype of the vision of the app.
* Being some members' first hackathon, the time constraint was a stressor, but with the support of the team they were able to feel more comfortable despite the lack of time.

## What we learned

* We learned how to meet each other's needs in a virtual space.
* The designers learned how to merge design philosophies.
* How to manage time and work with others who are on different schedules.

## What's next for Re:skale

Re:skale can be rescaled to include people of all genders and ages.

* Closer integration with other financial institutions and credit card providers for better automation and prediction
* A physical receipt scanner feature for payments made outside debit and credit cards

## Try our product

This is the link to a prototype app <https://www.figma.com/proto/nTb2IgOcW2EdewIdSp8Sa4/hack-the-6ix-team-library?page-id=312%3A3&node-id=375%3A1838&viewport=241%2C48%2C0.39&scaling=min-zoom&starting-point-node-id=375%3A1838&show-proto-sidebar=1>

This is a link for a prototype website <https://www.figma.com/proto/nTb2IgOcW2EdewIdSp8Sa4/hack-the-6ix-team-library?page-id=0%3A1&node-id=360%3A1855&viewport=241%2C48%2C0.18&scaling=min-zoom&starting-point-node-id=360%3A1855&show-proto-sidebar=1>
## 📚🙋‍♂️Domain Name📚🙋‍♂️

**DOMAIN.COM: AUTINJOY-WITH.TECH**

## 💡Inspiration💡

Parents know that travelling can be an eye-opening experience for kids, but they also know that travelling even with one kid can be overwhelming, involving lots of bathroom stops and the need to manage boredom, motion sickness, and more. When parents have an autistic kid, they have to consider additional factors when travelling, because when places with sensory overload, closed spaces, or constant movement stress out an autistic child, the child can have what's called an "autistic meltdown" - a temper tantrum that is very difficult to manage. So what if they had an app that could help them avoid triggering these meltdowns?

## ❓What it does❓

For parents and the caring adults they trust, like a babysitter, the app allows them to:

* journal when autistic meltdowns happen and how they manifested
* visualize this data in a structured way to understand when and how meltdowns happen
* find out whether businesses and attractions they plan to visit are unexpectedly crowded or have areas that could trigger meltdowns
* contact destinations like airports, hotels, and museums in advance in case accommodations can be made

For kids, the app helps them:

* communicate their needs; and
* journal their thoughts and feelings

For businesses, the app allows them to:

* alert users, when they are searched for, that they unexpectedly have large crowds
* proactively inform users of areas with potential triggers like sensory overload from loud noises or bright lights; and
* receive anonymous feedback from users when kids have a meltdown, along with any relevant environmental factors, so that they might decide to put up a caution sign for visitors or indicate this info somewhere

## 🏗️How we built it🏗️

We built our app using HTML, CSS, and JavaScript for the dashboard and Figma for the mobile app.

1. We use HTML/CSS/JS for building the dashboard.
2. We use Auth0 for the login and signup page.
3. We use Twilio for multi-factor authentication and for connecting SMS from doctors or social groups.
4. We use Figma for the mobile application.
5. We use Google Maps for localization and for locating restaurants or stops along the journey.
6. We use Google Cloud for the deployment of the dashboard.

## 🚧Challenges we ran into🚧

* Managing a team in different time zones
* Limited coding ability on the team

## 🏅🏆Accomplishments that we are proud of🏅🏆

We worked on an idea that has real-world application! We are happy to have devised a solution to a critical problem that affects such a vast number of people with no available methods or resources to assist them. We learned a lot while working on the project, not just technically but also in time management. We are proud we could complete the project and deliver a beautiful, fully functional hack this weekend.

## 📚🙋‍♂️What we learned📚🙋‍♂️

We learned a lot about HTML and CSS design while completing the dashboard, and we pushed our UI/UX capabilities in Figma to make it look aesthetic and attractive for users. We managed our time well and implemented the ideas we had brainstormed. We also learnt the critical use of backend services like Twilio and Auth0 for authentication. Google Maps and Google Cloud made the application more user-friendly and helped us learn about deployment. The themes of the website were chosen to make it as visually attractive as possible for autistic kids to like and enjoy.

## 💭What's next for Autinjoy💭

1. User testing and feedback from real users
2. Deployment at a larger scale for social good and travel ease
3. Auto-generating the travel itinerary using AI and machine learning
4. Making a weekly schedule for managing therapy sessions
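As a small illustration of the Twilio SMS piece listed under "How we built it", here is a minimal Python sketch using Twilio's standard helper library. The account SID, auth token, and phone numbers are placeholders, and the message flow (alerting a trusted contact) is an assumed example, not the app's actual backend.

```python
from twilio.rest import Client

# Placeholders: substitute real credentials and verified phone numbers.
ACCOUNT_SID = "ACXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
AUTH_TOKEN = "your_auth_token"
FROM_NUMBER = "+15550001111"

def notify_contact(to_number: str, child_name: str) -> str:
    """Send a short SMS alert to a trusted contact (doctor, babysitter, support group)."""
    client = Client(ACCOUNT_SID, AUTH_TOKEN)
    message = client.messages.create(
        body=f"Autinjoy alert: {child_name} may need support right now.",
        from_=FROM_NUMBER,
        to=to_number,
    )
    return message.sid   # Twilio's message identifier, useful for logging

# notify_contact("+15552223333", "Alex")
```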
winning
## Inspiration

While we were brainstorming ideas, we realized that two of our teammates are international students from India, also from the University of Waterloo, whose inspiration for this project was based on what they had seen in real life. Noticing how impactful access to proper healthcare can be, and how much this depends on your socioeconomic status, we decided on creating a healthcare kiosk that can be used by people in developing nations. By designing an interface that focuses heavily on images, it can be understood by those who are illiterate, as is the case in many developing nations, and can bypass language barriers. This application is the perfect combination of all of our interests and allows us to use tech for social good by improving accessibility in the healthcare industry.

## What it does

Our service, Medi-Stand, is targeted towards residents of developing regions, who will have the opportunity to monitor their health through regular self-administered check-ups. By creating healthy citizens, Medi-Stand has the potential to curb the spread of infectious diseases before they become a bane to society, and to build a more productive society. Healthcare reforms are becoming more and more necessary for third-world nations to progress economically and move towards developed economies that place greater emphasis on human capital. We have also included supply-side policies and government injections through systems that streamline this process by creating a database and eliminating all paperwork, making the entire experience smoother for both patients and doctors. This service will be available to patients through kiosks placed near local communities to save their time and keep their health in check.

The first time you use this system in a government-run healthcare facility, you create a profile and upload health data that is currently on paper or in emails all over the interwebs. Once this information has been entered manually into the database, it can be accessed later using the system we've developed. Over time, the data can be inputted automatically using sensors on the kiosk and by the doctor during consultations, but this depends on 100% compliance.

## How I built it

In terms of the UX/UI, this was designed using Sketch. Two members of the team began by creating mock-ups on various sheets of paper, brainstorming customer requirements for a healthcare system of this magnitude and which features we would be able to implement in a short period of time. After hours of deliberation and of finding ways to present this, we decided to create a simple interface with 6 different screens that a user would be faced with. By choosing basic icons, a font that can be understood by those with dyslexia, and accessible colours (i.e. ones that can be distinguished even by the colour blind), we successfully created a user interface that can be easily understood by a large population.

In terms of the backend, we wanted to build the doctor's side of the app so that they could access patient information. It was written in Xcode and connects to a Firebase database, which holds the patient's information and simply displays it visually in an iPhone simulator. The database entries were fetched in JSON notation using requests.
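To make the "fetched in JSON notation using requests" step concrete, here is a minimal Python sketch against the Firebase Realtime Database REST API, which exposes any node as JSON via a `.json` URL. The database URL, node names, and fields below are placeholders, and a real deployment would add authentication and security rules.

```python
import requests

FIREBASE_URL = "https://your-project-id-default-rtdb.firebaseio.com"  # placeholder project URL

def fetch_patient(patient_id: str) -> dict:
    """Read one patient record as JSON from the Realtime Database REST interface."""
    resp = requests.get(f"{FIREBASE_URL}/patients/{patient_id}.json", timeout=10)
    resp.raise_for_status()
    return resp.json() or {}       # Firebase returns null for missing nodes

record = fetch_patient("demo-patient-001")
print(record.get("name"), record.get("last_checkup"))
```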
In terms of the Arduino hardware, we used a Grove temperature sensor V1.2 along with a Grove base shield to read values from the sensor and display them on the screen. The device has a detectable range of -40 to 150 °C and an accuracy of ±1.5 °C.

## Challenges I ran into

When designing the product, one of the challenges we chose to tackle was accessibility. We had trouble understanding how we could make a healthcare product more accessible. Oftentimes, healthcare products exist only on the doctor's side, and patients simply take home their prescription and hold the doctors to an unnecessarily high level of expectation. We wanted both sides of this interaction to understand what was happening, which is where the app came in. After speaking to our teammates, it was clear that many people from lower-income households in a developing nation such as India are not able to access hospitals due to the high costs, and cannot use other sources to obtain this information due to accessibility issues. I spent lots of time researching how to make this a user-friendly app and what principles other designers had incorporated into their apps to make them accessible. By doing so, we lost lots of time focusing more on accessibility than on overall design. Though we adhered to the challenge requirements, this may have come at the cost of a more positive user experience.

## Accomplishments that I'm proud of

For half the team, this was their first hackathon. Having never experienced the thrill of designing a product from start to finish, being able to turn an idea into more than just a set of wireframes was an amazing accomplishment that the entire team is proud of. We are extremely happy with the UX/UI that we were able to create given that this is only our second time using Sketch, especially the fact that we learned how to link screens and use transitions to create a live demo. In terms of the backend, this was our first time developing an iOS app, and the fact that we were able to create a fully functioning app that could be demoed on our phones was a pretty great feat!

## What I learned

We learned the basics of front-end and back-end development as well as how to make designs more accessible.

## What's next for MediStand

Integrate the various features of this prototype. How can we make this a global hack? MediStand is a private company that can begin to sell its software to governments (as these are the people who focus on providing healthcare). Finding more ways to make this product more accessible.
## Inspiration

Every year hundreds of thousands of preventable deaths occur due to the lack of first aid knowledge in our societies. Many lives could be saved if the right people were in the right places at the right times. We aim to connect people by giving them the opportunity to help each other in times of medical need.

## What it does

It is a mobile application aimed at connecting members of our society in times of urgent medical need. Users can sign up as respondents, which allows them to be notified when people within a 300-meter radius are having a medical emergency. This way, users can receive first aid prior to the arrival of an ambulance or healthcare professional, greatly increasing their chances of survival. This application fills the gap between making the 911 call and having the ambulance arrive.

## How we built it

The app is Android native and relies heavily on the Google Cloud Platform. User registration and authentication are done through Firebase Authentication. Additionally, user data, locations, help requests, and responses are all communicated through the Firebase Realtime Database. Lastly, the Firebase ML Kit was used to provide text recognition for the app's registration page: users can take a picture of their ID and their information is extracted automatically.

## Challenges we ran into

There were numerous challenges in handling the flow of data through the Firebase Realtime Database and providing the correct data to authorized users.

## Accomplishments that we're proud of

We were able to build a functioning prototype! Additionally, we were able to track and update user locations in a MapFragment, and we ended up doing and implementing things that we had never done before.
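The actual app is Android native, but the 300-meter proximity check it describes is easy to illustrate in a language-agnostic way. Here is a minimal Python sketch using the haversine formula; the respondent data structure and field names are illustrative only.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def nearby_respondents(emergency, respondents, radius_m=300):
    """Return the respondents located within radius_m of the emergency."""
    return [r for r in respondents
            if haversine_m(emergency["lat"], emergency["lon"], r["lat"], r["lon"]) <= radius_m]

respondents = [{"id": "a", "lat": 43.4723, "lon": -80.5449},
               {"id": "b", "lat": 43.4800, "lon": -80.5300}]
print(nearby_respondents({"lat": 43.4725, "lon": -80.5450}, respondents))  # only "a" is close enough
```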
## Inspiration

We wanted to make the world a better place by giving patients control over their own data and allowing easy and intelligent access to patient data.

## What it does

We have built an intelligent medical record solution that provides insights on reports saved in the application and makes personally identifiable information anonymous before saving it in our database.

## How we built it

We used the Amazon Textract service as the OCR to extract text from images. We then use Amazon Comprehend Medical to redact (mask) sensitive information (PII) before using the Groq API to extract inferences that explain the medical document to the user in layman's terms. We used React, Node.js, Express, DynamoDB, and Amazon S3 to implement our project.

## Challenges we ran into

## Accomplishments that we're proud of

We were able to fulfill most of our objectives in this brief period of time. We also got to talk to a lot of interesting people and were able to bring our project to completion despite a team member not showing up. We also got to learn about a lot of cool stuff that companies like Groq, Intel, Hume, and You.com are working on, and we had fun talking to everyone.

## What we learned

## What's next for Pocket AI
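Here is a hedged Python (boto3) sketch of the OCR-then-redact pipeline described above: Textract's synchronous text detection followed by Comprehend Medical's PHI detection, with detected spans masked by their entity type. The region, file name, and mask format are assumptions, error handling is omitted, and the real backend is Node.js rather than Python.

```python
import boto3

textract = boto3.client("textract", region_name="us-east-1")
cm = boto3.client("comprehendmedical", region_name="us-east-1")

def extract_and_redact(image_bytes: bytes) -> str:
    """OCR a report image with Textract, then mask PHI spans found by Comprehend Medical."""
    ocr = textract.detect_document_text(Document={"Bytes": image_bytes})
    text = " ".join(b["Text"] for b in ocr["Blocks"] if b["BlockType"] == "LINE")

    entities = cm.detect_phi(Text=text)["Entities"]
    # Replace PHI spans from the end of the string so earlier offsets stay valid.
    for ent in sorted(entities, key=lambda e: e["BeginOffset"], reverse=True):
        text = text[:ent["BeginOffset"]] + f"[{ent['Type']}]" + text[ent["EndOffset"]:]
    return text

# with open("report.jpg", "rb") as f:
#     print(extract_and_redact(f.read()))
```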
winning
## Inspiration

With the recent COVID-19 pandemic, musicians around the globe don't have a reliable way to reach their audience and benefit from what they love to do: perform. I have personally seen my music teacher suffer from this, because his business performing at weddings, concerts, and other public events has been put on hold due to COVID-19. Because of this, we wanted to find a way for musicians, and any other live artist, to be able to perform for a live audience and make money as if it were a real concert.

## What it does

WebStage is an online streaming platform where performers can perform for their audience. Users can make accounts with their email or with Google sign-in. Once they do so, they can create their own "stages" with set times to perform, and they can also buy tickets and go to the stages of other musicians.

## How we built it

We used Figma to design and prototype the site. For the front end, we used HTML/CSS/JS. For the back end, we used Node.js with Express.js for the server and WebRTC for the live streaming. We also used Firebase and Cloud Firestore.

## Challenges we ran into

Figuring out how to live-stream, establishing chat functionality, ensuring that stages would be visible on the site with proper pop-up messages when needed, saving usernames, verifying authentication tokens, and tweaking the CSS to make sure everything fit on screen.

## Accomplishments that we're proud of

Everything, from the design to the functionality of features like the live streaming. It's amazing that we were able to build the platform in such a short period of time.

## What we learned

We learned how to use WebRTC, Firebase authentication, and Cloud Firestore. We also grew much more comfortable collaborating with others while working towards a strict deadline.

## What's next for WebStage

We plan to add various payment options so that users would have to buy tickets (if the streamer charges for them); for the purposes of the hackathon, we elected not to implement this feature in the current build. Other features, such as saving streams as videos, are possibilities as well.
## Inspiration

In the "new normal" that COVID-19 has caused us to adapt to, our group found that a common challenge was deciding where it was safe to go to complete trivial daily tasks, such as grocery shopping or occasionally eating out. We were inspired to create a mobile app using a database created completely by its users - a program where anyone could rate how well these "hubs" were following COVID-19 safety protocol.

## What it does

Our app allows users to search for a "hub" using a Google Maps API and write a review by rating prompted questions on a scale of 1 to 5 regarding how well the location enforces public health guidelines. Future visitors can then see these reviews and add their own, contributing to a collective safety rating.

## How I built it

We collaborated using GitHub and Android Studio, and incorporated both a Google Maps API and an integrated Firebase API.

## Challenges I ran into

Our group unfortunately faced a number of unprecedented challenges, including losing a team member mid-hack due to an emergency situation, working across 3 different time zones, and additional technical difficulties. However, we pushed through and came up with a final product we are all very passionate about and proud of!

## Accomplishments that I'm proud of

We are proud of how well we collaborated through adversity, despite having never met each other in person before. We were able to tackle a prevalent social issue and come up with a plausible solution that could help bring our communities together, worldwide, similar to how our diverse team was brought together through this opportunity.

## What I learned

Our team brought a wide range of skill sets to the table, and we were able to learn a lot from each other because of our various experiences. From a technical perspective, we improved our Java and Android development fluency. From a team perspective, we improved our ability to compromise and adapt to unforeseen situations. For 3 of us, this was our first ever hackathon, and we feel very lucky to have found such kind and patient teammates that we could learn a lot from. The amount of knowledge they shared in the past 24 hours is insane.

## What's next for SafeHubs

Our next steps for SafeHubs include personalizing the user experience through profiles and integrating it into our community. We want SafeHubs to be a new way of staying connected (virtually) to look out for each other and keep our neighbours safe during the pandemic.
## What it does

MusiCrowd is an interactive, democratic music streaming service that allows individuals to vote on which songs they want to play next (i.e., if three people add three different songs to the queue, the song at the top will be the one with the most upvotes). This system was built with the intention of allowing entertainment venues (pubs, restaurants, socials, etc.) to be inclusive, letting everyone interact with the entertainment portion of the venue. The system has room administrators and users in the rooms. Administrators host a room where users can join via a code to start a queue. The administrator is able to play, pause, skip, and delete any songs they wish. Users can choose a song to add to the queue and upvote, downvote, or leave no vote on a song in the queue.

## How we built it

Our team used Node.js with Express to write a server and REST API and connect to a Mongo database. The MusiCrowd application first authorizes with the Spotify API, then queries music and controls playback through the Spotify Web SDK. The backend of the app is used primarily to serve the site and hold an internal song queue, which is exposed to the front end through various endpoints.

The front end of the app was written in JavaScript with React.js. The web app has two main modes, user and admin. As an admin, you can create a 'room', administrate the song queue, and control song playback. As a user, you can join a 'room', add song suggestions to the queue, and upvote / downvote others' suggestions. Multiple rooms can be active simultaneously, and each room continuously polls its respective queue, rendering the queued songs sorted from most to least popular. When a song ends, the internal queue pops the next song (the one with the most votes) and sends a request to Spotify to play it. A QR code reader was added to allow easy access to active rooms: users can point their phone camera at the code to link directly to the room.

## Challenges we ran into

* Deploying the server and front-end application, and getting both sides to communicate properly.
* React state mechanisms, particularly managing all possible voting states from multiple users simultaneously.
* React search boxes.
* Familiarizing ourselves with the Spotify API.
* Allowing anyone to query Spotify search results and add song suggestions / vote without authenticating through the site.

## Accomplishments that we're proud of

Our team is extremely proud of the MusiCrowd final product. We were able to build everything we originally planned and more. The following are the accomplishments we are most proud of:

* An internal queue and voting system
* Frontloading the development & working hard throughout the hackathon > 24 hours of coding
* A live deployed application accessible by anyone
* Learning Node.js

## What we learned

Garrett learned javascript :) We learned all about React, Node.js, the Spotify API, web app deployment, managing a data queue and voting system, web app authentication, and so so much more.

## What's next for Musicrowd

* Authenticate and secure routes
* Add IP/device tracking to prevent multiple votes via browser refresh
* Drop songs that fall below a certain threshold of votes or activity
* Allow TV mode to show current song information and display the upcoming queue with current vote counts
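The real queue lives in the Node.js backend, but the voting behaviour it describes is easy to sketch in a language-agnostic way. Here is a minimal Python illustration of a vote-sorted queue that pops the most-upvoted song when the current one ends; the data layout is an assumption, not the service's actual implementation.

```python
class VotingQueue:
    """Songs sorted by net votes; ties go to the song that was added first."""

    def __init__(self):
        self._songs = []      # each entry: {"uri": str, "votes": int, "order": int}
        self._counter = 0

    def add(self, uri: str) -> None:
        self._songs.append({"uri": uri, "votes": 0, "order": self._counter})
        self._counter += 1

    def vote(self, uri: str, delta: int) -> None:
        for song in self._songs:
            if song["uri"] == uri:
                song["votes"] += delta
                return

    def pop_next(self):
        """Remove and return the song with the most votes (oldest wins ties)."""
        if not self._songs:
            return None
        best = max(self._songs, key=lambda s: (s["votes"], -s["order"]))
        self._songs.remove(best)
        return best["uri"]

q = VotingQueue()
q.add("spotify:track:a")
q.add("spotify:track:b")
q.vote("spotify:track:b", +1)
print(q.pop_next())   # spotify:track:b plays first
```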
partial
## Inspiration

The internet is filled with user-generated content, and it has become increasingly difficult to manage and moderate all of the text that people are producing on a platform. Large companies like Facebook, Instagram, and Reddit leverage their massive scale and abundance of resources to aid in their moderation efforts. Unfortunately for small to medium-sized businesses, it is difficult to monitor all the user-generated content being posted on their websites. Every company wants engagement from their customers or audience, but they do not want bad or offensive content to ruin their image or the experience of other visitors. However, hiring someone to moderate or building an in-house program is too difficult for these smaller businesses to manage.

Content moderation is a heavily nuanced and complex problem. It's unreasonable for every company to implement its own solution. What's needed is a robust plug-and-play solution that adapts to the needs of each specific application.

## What it does

That is where Quarantine comes in. Quarantine acts as an intermediary between an app's client and server, scanning the bodies of incoming requests and "quarantining" those that are flagged. Flagging is performed automatically, using both pretrained content moderation models (from Azure and Moderation API) and an in-house machine learning model that adapts to meet the needs of the application's particular content. Once a piece of content is flagged, it appears in a web dashboard, where a moderator can either allow or block it. The moderator's labels are continuously used to fine-tune the in-house model. Together, the in-house model and the pretrained models form a robust meta-model.

## How we built it

Initially, we built an aggregate program that takes in a string and runs it through the Azure moderation and Moderation API services. After combining the results, we compare them with our machine learning model's output to make sure no other potentially harmful posts make it through our identification process. That data is then stored in our database.

We built a clean, easy-to-use dashboard for the grader using React and Material UI. It pulls the flagged items from the database and displays them on the dashboard. Once a decision is made by the person, it is sent back to the database and the case is resolved.

We incorporated this entire pipeline into a REST API through which our customers can pass their input and then review the flagged items on our website. Users of our service don't have to change their code: they simply prefix their own API endpoints with our URL. Requests that aren't flagged are instantly forwarded along.

## Challenges we ran into

Developing the in-house machine learning model and getting it to run on the cloud proved to be a challenge, since the parameters and size of the in-house model are in constant flux.

## Accomplishments that we're proud of

We were able to make a super easy-to-use service: a company can add Quarantine with less than one line of code. We're also proud of our adaptive content model that constantly updates based on the latest content blocked by moderators.

## What we learned

We learned how to successfully integrate an API with a machine learning model, database, and front end. We had learned each of these skills individually before, but we had to figure out how to combine them all.
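To illustrate the aggregation idea behind the meta-model, here is a minimal Python sketch that combines scores from the external moderation services with the in-house model and flags content for human review when any score crosses a threshold. The three scoring functions are stubs standing in for the real API calls and model, and the threshold is an arbitrary example.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    flagged: bool
    scores: dict

def azure_score(text: str) -> float:           # stub for the Azure content moderation call
    return 0.2
def moderation_api_score(text: str) -> float:  # stub for the Moderation API call
    return 0.1
def in_house_score(text: str) -> float:        # stub for the adaptive in-house model
    return 0.9

def moderate(text: str, threshold: float = 0.7) -> Verdict:
    """Flag the text for moderator review if any model's score meets the threshold."""
    scores = {
        "azure": azure_score(text),
        "moderation_api": moderation_api_score(text),
        "in_house": in_house_score(text),
    }
    return Verdict(flagged=max(scores.values()) >= threshold, scores=scores)

print(moderate("example user comment"))   # flagged by the in-house stub in this toy run
```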
## What's next for Quarantine

We have plans to take Quarantine even further by adding customization to how items are flagged and handled. Certain locations are known to route a lot of spam, so we could analyze the regions that harmful user-generated content comes from. We are also keen on monitoring the activity streams of individual users, as well as tracking requests in relation to each other (to detect mass spamming). Furthermore, we are curious about adding the surrounding context of the content, since it may help the grader's decisions. We're also hoping to leverage the data we accumulate from content moderators to help monitor content across apps using shared labeled data behind the scenes. This would make Quarantine more valuable to companies as it monitors more content.
## Inspiration

College students are busy, juggling classes, research, extracurriculars, and more. On top of that, creating a to-do list and schedule can be overwhelming and stressful. Personally, we used Google Keep and Google Calendar to manage our tasks, but these tools require constant maintenance and force the scheduling and planning onto the user.

Several tools, such as Motion and Reclaim, help business executives optimize their time and maximize productivity. After talking to our peers, we realized college students are not solely concerned with maximizing output. Instead, we value our social lives, mental health, and work-life balance. With so many scheduling applications centered around productivity, we wanted to create a tool that works **with** users to maximize happiness and health.

## What it does

Clockwork consists of a scheduling algorithm and a full-stack application. The scheduling algorithm takes in a list of tasks and events, as well as individual user preferences, and outputs a balanced and doable schedule. Tasks include a name, description, estimated workload, dependencies (either a start date or a previous task), and a deadline. The algorithm first traverses the graph to augment nodes with additional information, such as the eventual due date and total hours needed for linked sub-tasks. Then, using a greedy algorithm, Clockwork matches your availability with the closest task sorted by due date. After creating an initial schedule, Clockwork finds how much free time is available and creates modified schedules that satisfy user preferences such as workload distribution and weekend activity. (A rough sketch of this greedy approach appears at the end of this write-up.)

The website allows users to create an account and log in to their dashboard. On the dashboard, users can quickly create tasks using both a form and a graphical user interface. Due dates and dependencies between tasks can be easily specified. Finally, users can view tasks due on a particular day, abstracting away the scheduling process and reducing stress.

## How we built it

The scheduling algorithm is a greedy algorithm implemented in Python using object-oriented programming and Matplotlib. The backend server is built with Python, FastAPI, SQLModel, and SQLite, and tested using Postman. It can accept asynchronous requests and uses a type system to safely interface with the SQL database. The website is built using functional ReactJS, TailwindCSS, React Redux, and the uber/react-digraph GitHub library. In total, we wrote about 2,000 lines of code, split 2/1 between JavaScript and Python.

## Challenges we ran into

The uber/react-digraph library, while popular on GitHub with ~2k stars, has little documentation and some broken examples, making development of the website GUI more difficult. We used an iterative approach to incrementally add features and debug the various bugs that arose. We initially struggled to set up CORS between the frontend and backend for the authentication workflow. We also spent several hours formulating the best approach for the scheduling algorithm and pivoted a couple of times before reaching the greedy solution presented here.

## Accomplishments that we're proud of

We are proud of finishing several aspects of the project. The algorithm required complex operations to traverse the task graph and augment nodes with downstream due dates. The backend required learning several new frameworks and creating a robust API service. The frontend is highly functional and supports multiple methods of creating new tasks.
We also feel strongly that this product has real-world usability, and we are proud of validating the idea during YHack.

## What we learned

We both learned more about Python and object-oriented programming while working on the scheduling algorithm. Using the react-digraph package was also a good exercise in reading documentation and source code to leverage an existing product in an unconventional way. Finally, thinking about the applications of Clockwork helped us better understand our own needs within the scheduling space.

## What's next for Clockwork

Aside from polishing the several components worked on during the hackathon, we hope to integrate Clockwork with Google Calendar to allow for time blocking and a more seamless user interaction. We also hope to increase personalization and allow all users to create schedules that work best with their own preferences. Finally, we could add a metrics component to the project that helps users improve their time blocking and more effectively manage their time and energy.
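As referenced above, here is a rough Python sketch of the greedy idea (sort tasks by due date, then fill available hours earliest-deadline-first). It is an illustration of the approach described in "What it does", not the team's actual implementation, and it ignores dependencies and preference balancing for brevity.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    hours: int
    due_day: int          # index of the day by which the task must be done

def greedy_schedule(tasks, daily_free_hours):
    """Assign task hours to days, earliest deadline first. Returns {day: [task names]}."""
    schedule = {day: [] for day in range(len(daily_free_hours))}
    remaining = list(daily_free_hours)
    for task in sorted(tasks, key=lambda t: t.due_day):
        hours_left = task.hours
        for day in range(min(task.due_day + 1, len(remaining))):
            while hours_left > 0 and remaining[day] > 0:
                schedule[day].append(task.name)       # one entry per scheduled hour
                remaining[day] -= 1
                hours_left -= 1
        if hours_left > 0:
            print(f"Warning: could not fully schedule {task.name}")
    return schedule

tasks = [Task("PSET 3", 4, due_day=2), Task("Essay draft", 3, due_day=1)]
print(greedy_schedule(tasks, daily_free_hours=[2, 3, 4]))
```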
## Inspiration

Our inspiration was to build something that we would want to use, and could use immediately after the event ended. The two core principles behind Verify are that we want information fast, and we want to know whether it is biased. This means we don't want to leave our browser, don't want to scroll past ads, and don't want to look up every site we visit. We knew that if we could create a project that fit these parameters, we (and others as well) would use it daily.

## What it does

Verify is a Chrome extension that analyzes the article you are currently looking at and produces the site's political leaning/bias, the article's sentiment, and a concise summary of the article, all only a click away. Previously analyzed site results are cached in Chrome to save precious seconds when you visit again (clearable in settings).

## How we built it

Our back end was a **Node** server deployed via **Standard Library** that integrated with the **Google Learning API**, the **Article Outliner API** from Diffbot.com, an **NLP summarizer API** from DeepAI.org, and <https://mediabiasfactcheck.com/> for the bias analysis, by scraping their website (please add an API, mediabiasfactcheck!). Our front end was a classic **Chrome extension** built using **JavaScript**, **CSS**, **HTML**, and some non-reciprocated love. The saving grace of our project was **Standard Library**: the smart code editor, automated package imports and installations, and one-click deploy saved hours in creating and hosting our analysis endpoint, and allowed us to reach our stretch goal of sentiment analysis.

## Challenges we ran into

The biggest challenge we faced was finding a good source for checking website bias. No APIs were available, but we found a great, reputable website with thousands of entries, so we ended up scraping that site for our results. The usual challenge of setting up and deploying a server without being forced into certain libraries was removed by **Standard Library's** no-hassle deployment.

## Accomplishments that we're proud of

We are proud that our project has immediate, practical uses, and that we know we will use it for months or years to come.

## What we learned

We learned the power of good prototyping tools (**Standard Library**) and gained more experience with **Google Cloud Services**, along with our first look into **Chrome extension** development.

## What's next for Verify

We plan on using our project ourselves. We will look into a wide release of the extension. As the server costs and API calls are only pennies per user per month, revenue models could include advertising, subscriptions, or a small up-front fee (or burning VC funding). We can also add quality-of-life features such as the full article in an easily copyable format and connectivity features.
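For illustration, here is a minimal Python sketch of the summarization step in the spirit of the DeepAI NLP summarizer mentioned under "How we built it". The endpoint path, header name, and response field follow DeepAI's public API pattern as I recall it and should be verified against current docs; the actual project made this call from a Node server, not Python.

```python
import requests

DEEPAI_KEY = "YOUR_DEEPAI_KEY"  # placeholder

def summarize(article_text: str) -> str:
    """Return a concise summary of the article text via DeepAI's summarization endpoint."""
    resp = requests.post(
        "https://api.deepai.org/api/summarization",   # assumed endpoint
        data={"text": article_text},
        headers={"api-key": DEEPAI_KEY},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("output", "")              # assumed response field

# print(summarize(open("article.txt").read()))
```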
winning
# ClarifEye

A mobile application designed to aid the visually impaired by verbally identifying their surroundings as well as reading aloud text on objects or documents.

## Inspiration

From the very beginning our team knew that we wanted to design something that would help improve people's lives. While brainstorming and discussing possible project ideas, we soon found ourselves gravitating towards the field of computer vision as it was a topic we were all interested in exploring. Through further discussion, one of our team members shared their personal experience about their grandfather who struggled, daily, with being visually impaired, which guided our team to what we were going to try and create. The idea of *ClarifEye* was born.

## About The Project

*ClarifEye* is specifically designed with accessibility for the visually impaired in mind. We spoke with multiple mentors who have had personal experience with the needs of visually impaired people and how they use mobile phones, so that we could best accommodate those needs when developing our application. As a result, *ClarifEye* places emphasis on audio and haptic cues to aid the user as they interact with the app through extremely simple swiping or tapping gestures.

*ClarifEye* utilizes the power of Google's Cloud Vision API to analyze the current image from the user's camera and synthesizes an audio description of that image through our original algorithm. Additionally, the application provides a text/document reading service to its users using similar computer vision technology. Try the app out for yourself or check out our demo videos at our GitHub repo.

## Simple Controls

```
Swipe Left  = Change the mode to Read Document Mode
Swipe Right = Change the mode to Describe Image Mode
Swipe Down  = Stop the audio
Single Tap  = Take a photo, and perform the currently selected function
```

## What We Learned

While working on this project our team learned a variety of new and useful skills, such as:

* Using Google Cloud APIs, specifically the Google Cloud Vision API.
* Parsing a JSON object.
* Building an Android app.
* Interfacing with the camera of a mobile phone and designing a custom camera UI.

## Future Work

*ClarifEye* has a lot of possible improvements to be made. One improvement in particular that we would like to try is to utilize the power of machine learning with a more detailed and customized dataset to provide better descriptions to the user. Another useful feature would be to support multiple languages.
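The app itself is Android, but the describe-image flow is easy to sketch server-side. Here is a hedged Python example using the Google Cloud Vision client library to label an image and build a simple spoken-style sentence from the labels; the phrasing logic is an invented stand-in for ClarifEye's original description algorithm, and credentials are assumed to be configured via GOOGLE_APPLICATION_CREDENTIALS.

```python
from google.cloud import vision

def describe_image(path: str, max_labels: int = 5) -> str:
    """Label an image with Cloud Vision and build a simple spoken-style description."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    labels = client.label_detection(image=image).label_annotations[:max_labels]
    names = [label.description.lower() for label in labels]
    if not names:
        return "I could not recognize anything in this image."
    if len(names) == 1:
        return f"This image appears to contain {names[0]}."
    return "This image appears to contain " + ", ".join(names[:-1]) + ", and " + names[-1] + "."

# print(describe_image("photo.jpg"))
```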
## Inspiration

We discussed different ideas for what we wanted to do, but knew that we would like to build an Android app without using Android Studio. We set out to make an app for people who are hard of seeing, to help them navigate their environments.

## What it does

Currently it is able to tell the user the path they should take to avoid collisions with others, and it includes face recognition, which is not fully integrated at this moment. Using AR depth estimation with only one camera, we were able to obtain a point cloud of the scene and make decisions about where it is safe to walk.

## How we built it

We looked for APIs for face recognition as well as depth mapping using the phone's camera with ARCore and Unity.

## Challenges we ran into

Everything related to Unity was a challenge, as it is very difficult to find good support for the APIs we wanted to use (free APIs).

## Accomplishments that we're proud of

Being able to tell the user their path and avoid collisions while allowing them to meet people they know.

## What we learned

## What's next for ARHelpMe
## Inspiration

Our team focuses primarily on implementing innovative technologies to spark positive change. In line with these ideals, we decided to explore the potential impact of text-reading software that could provide an additional layer of accessibility for blind and deafblind individuals. Through these efforts, we discovered that despite previous attempts to solve similar problems, solutions were often extremely expensive and incongruous. According to many visually challenged advocates, these technologies often lacked real-world application and were specific to online texts or readings, which limited their opportunities in everyday tasks like reading books or scanning over menus. Upon further research into affected groups, we discovered that there is additionally a large population of people who are both deaf and blind, which stops them from utilizing any form of auditory input as an alternative and significantly obstructs their means of communication. Employing a very human-centered design rooted in various personal accounts and professional testimony, we were able to develop a universal design that lets the visually and dual-sensory impaired experience the world through a new lens. By creating a handheld text-to-braille and speech generator, we are able to revolutionize the prospects of interpersonal communication for these individuals.

## What it does

This solution utilizes a single piece with two modules: a video camera to decipher text, and a set of solenoids that imitates a standard Grade 2 braille grid. This portable accessory is intended to be used by a visually impaired or deafblind individual when they attempt to analyze a physical text. This three-finger supplement, equipped with a live-action camera and sensitive solenoid components, uses a live camera feed to discern the wording of a physical text. The scanned text is then passed to an AI application to clean it up for either auditory or tactile output in the form of TTS or braille. The text is then adapted into an audio format through a web application or into the classic six-dot braille cells. Users are given a brief moment to make sense of each braille letter before the system automatically iterates through the remainder of the text. This technology effectively provides users with an alternative method of receiving information that isn't ordinarily accessible to them, granting a more authentic and amplified understanding of the world around them. In this unique application of these technologies, those who are hard of seeing and/or dual-sensory impaired receive a more genuine appreciation for texts.

## How we built it

As our project required two extremely different pieces, we decided to split up our resources in hopes of tackling both problems at the same time. Regardless, we needed to set firm goals and plan out our required resources and timeline, which helped us stay on schedule and formulate a final product that fully utilized our expertise.

In terms of hardware, we were somewhat limited for the first half of the hackathon as we had to purchase many of our materials and were unable to complete much of this work until later. We started by identifying a potential circuit design and creating a rigid structure to house our components. From there, we spent a large amount of time actually implementing our theoretical circuit and applying it to our housing model, in addition to cleaning the whole product up.
For software, we mostly had problems with connecting each of the pieces after building them out. We first created an algorithm that could take a camera feed and produce a coherent string of text. This text would then be run through an AI cleanup step that could make sense of any garbled recognition. Finally, the cleaned text would either be read out loud via text-to-speech or be compared against a braille dictionary to create a binary code dictating the on/off states of our solenoids. Lastly, we prototyped our product and tested it to see what we could improve in our final implementation, to both increase efficiency and decrease latency. ## Challenges we ran into This project was extremely technical and ambitious, which meant it was plagued with difficulties. As a large portion of the project relied on hardware and on integrating complementary materials into a cohesive product, there were countless problems throughout the building phase. We often had incompatible parts, whether cables, voltage inputs/outputs, or even sizing and scaling issues, so we were constantly scrambling to alter, scavenge, and adapt materials for uncommon use cases. Even our main board didn't produce enough power, leading to the unusual use of a dehumidifier charger and balled-up aluminum foil as a makeshift power bank. All of these mechanical complexities, combined with a difficult software side, led to an innovative, reworked solution that stayed effective in practice. These modifications continued until just hours before the submission deadline, when we revamped the entire physical end of our project to make use of newly acquired materials and a more efficient modeling technique. These last-second improvements gave our product a more polished and capable edge, making for a more impactful and satisfying design. Software-wise, we also strove to uncover underappreciated features of our various APIs and tools, which often didn't line up with our team's strengths. As we had to build out an effective product while simultaneously troubleshooting the software side, we ran into plenty of setbacks and struggles. Regardless, we were able to overcome these adversities and produce an impressive result. ## Accomplishments that we're proud of We are proud that we were able to overcome the various difficulties that arose throughout our process and still achieve the level of success that we did, even given such a short timeframe. Our team came in with some members having never done a hackathon before, and we set extremely ambitious goals that we were unsure we could meet. However, we were able to work effectively as a team to develop a final product that clearly represents our initial intentions for the project. ## What we learned Thanks to the many cutting-edge sponsors and new technological constraints, our whole team was able to draw on new, more effective tools to increase the efficiency and quality of our product. Through careful planning and consistent collaboration, we experienced the future of software and deepened our technical knowledge both within our fields and across specializations, owing to the cross-disciplinary nature of this project. Additionally, we became more flexible about what materials we needed to build out our hardware, and we especially made use of new TTS technologies to amplify the impact of our project. In the future, we intend to continue developing these crucial skills that we gained at Cal Hacks 11.0, working towards a more accessible future.
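To make the text-to-braille step concrete, here is a minimal sketch of the mapping from recognized characters to 6-dot cells and solenoid states. The partial letter table, the `set_solenoid` stand-in, and the dwell time are illustrative assumptions rather than our actual driver code.

```python
# Minimal sketch: map recognized text onto 6-dot braille cells and drive the solenoids.
# set_solenoid() is a hypothetical stand-in for the real GPIO/driver call on the device.
import time

# Dots are numbered 1-6 in a standard braille cell (two columns of three dots).
BRAILLE = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5}, "i": {2, 4}, "j": {2, 4, 5},
    # ...the remaining letters, numbers, and Grade 2 contractions would be added here
}

def set_solenoid(dot: int, raised: bool) -> None:
    """Hypothetical hardware call; would toggle one solenoid pin up or down."""
    print(f"dot {dot}: {'up' if raised else 'down'}")

def display_text(text: str, dwell_seconds: float = 1.5) -> None:
    """Step through the text, raising the matching solenoids for each letter."""
    for char in text.lower():
        cell = BRAILLE.get(char, set())   # unknown characters render as a blank cell
        for dot in range(1, 7):
            set_solenoid(dot, dot in cell)
        time.sleep(dwell_seconds)         # give the reader a moment to feel the letter

display_text("hi")
```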
## What's next for Text to Dot We would like to work on integrating a more refined design to the hardware component of our project. Unforeseen circumstances with the solenoid led to our final design needing to be adjusted beyond the design of the original model, which could be rectified in future iterations.
losing
## Inspiration We created this app to address a problem that our creators were facing: waking up in the morning. As students, the stakes of oversleeping can be very high. Missing a lecture or an exam can set you back days or greatly hurt your grade. It's too easy to sleep past your alarm. Even if you set multiple alarms, you can simply turn them all off, knowing that there is no human intention behind each one. It's almost as if we've forgotten that we're supposed to get up after our alarm goes off! In our experience, what really jars you awake in the morning is another person telling you to get up. Now, suddenly there is consequence and direct intention behind each call to wake up. Wakey simulates this in an interactive alarm experience. ## What it does Users sync their alarm up with their trusted peers to form a pact each morning to make sure that each member of the group wakes up at their designated time. One user sets an alarm code with a common wakeup time associated with it. The user's peers can use this alarm code to join their alarm group. Everybody in the alarm group will experience the same alarm in the morning. After each user hits the button when they wake up, they are sent to a soundboard interface, where they can hit buttons to try to wake those who are still sleeping with real-time sound effects. Each time one user in the server hits a sound effect button, that sound registers on every device, including their own, to provide auditory feedback that they have indeed successfully sent a sound effect. Ultimately, users exit the soundboard to leave the live alarm server and go back to the home screen of the app. They can finally start their day! ## How we built it We built this app using React Native as the frontend, Node.js as the server, and Supabase as the database. We created files for the different screens that users interact with in the front end, namely the home screen, goodnight screen, wakeup screen, and the soundboard. The home screen is where they set an alarm code or join using someone else's alarm code. The "goodnight screen" is the screen the app stays on while the user sleeps. While it is active, it displays the current time, when the alarm is set to go off, who else is in the alarm server, and a warm message: "Goodnight, sleep tight!". Each one of these screens went through its own UX design process. We also used Socket.io to establish connections between those in the same alarm group. When a user sends a sound effect, it goes to the server, which relays it to all the users in the group. As for the backend, we used Supabase as a database to store the users, alarm codes, current time, and the wake-up times. We connected the front and back end, and the app came together. All of this was tested on our own phones using Expo. ## Challenges we ran into We ran into many difficult challenges during the development process. It was all of our first times using React Native, so there was a little bit of a learning curve in the beginning. Furthermore, incorporating sockets into the project proved to be very difficult because it required a lot of planning and experimenting with the server/client relationships. The alarm ringing also proved to be surprisingly difficult to implement. If the alarm was left to ring, the "goodnight screen" would continue ringing and would not terminate. Many of React Native's tools, like setInterval, didn't seem to solve the problem. This was a problematic and recurring issue.
Secondly, the database in Supabase was also quite difficult and time-consuming to connect, but in the end, once we set it up, using it simply entailed brief SQL queries. Thirdly, setting up the front end proved quite confusing and problematic, especially when it came to adding alarm codes to the database. ## Accomplishments that we're proud of We are super proud of the work that we've done developing this mobile application. The interface is minimalist yet attention-grabbing when it needs to be, namely when the alarm goes off. The hours of debugging, although frustrating, were very satisfying once we finally got the app running. Additionally, we greatly improved our understanding of mobile app development. Finally, the app is also just amusing and fun to use! It's a cool concept! ## What we learned As mentioned before, we greatly improved our understanding of React Native, as for most of our group, this was the first time using it for a major project. We learned how to use Supabase and Socket.io. Additionally, we improved our general JavaScript and user experience design skills as well. ## What's next for Wakey We would like to put this app on the iOS App Store and the Google Play Store, which would take more extensive and detailed testing, especially around how the app will run in the background. Additionally, we would like to add some other features, like a leaderboard for who gets up most immediately after their alarm goes off, who sends the most sound effects, and perhaps other ways to rank the members of each alarm server. We would also like to add customizable sound effects, where users can record themselves or upload recordings that they can add to their soundboards.
## Inspiration Nowadays, paying for knowledge has become more acceptable to the public, and people are more willing to pay for truly insightful, cutting-edge, and well-structured knowledge or curricula. However, current centralized video content platforms (like YouTube, Udemy, etc.) take too large a share of the profits from content producers (research has shown that content creators usually only receive 15% of the value their content creates), and the value generated from a video is not distributed in a timely manner. In order to tackle this unfair value distribution, we built the decentralized platform EDU.IO, where video content is backed by its digital asset as an NFT (copyright protection!) and fractionalized into tokens, creating direct connections between content creators and viewers/fans (no middlemen anymore!) and maximizing the value of the content made by creators. ## What it does EDU.IO is a decentralized educational video streaming platform & fractionalized NFT exchange that empowers the creator economy and redefines knowledge value distribution via smart contracts. * As an educational hub, EDU.IO is a decentralized platform of high-quality educational videos on disruptive innovations and hot topics like the metaverse, 5G, IoT, etc. * As a booster of the creator economy, once a creator uploads a video (or course series), it is minted as an NFT (with copyright protection) and fractionalized into multiple tokens. Our platform conducts a mini-IPO for each piece of content they produce - a bid for its fractionalized NFTs. The value of each video token is determined by the number of views over a certain time interval, and token owners (who can be creators as well as viewers/fans/investors) can advertise the content they own to increase its value, and trade these tokens to earn money or make other investments (more liquidity!!). * By the end of the week, the value generated by each video NFT is distributed via smart contracts to the copyright / fractionalized NFT owners of each video. Overall, we're hoping to build an ecosystem with more engagement between viewers and content creators, and our three main target users are: * 1. Instructors or content creators: their video content gets copyright protection via NFTs, and they get fairer value distribution and more liquidity compared to using large centralized platforms. * 2. Fans or content viewers: they can directly interact with and support content creators, and the fee is sent directly to the copyright owners via smart contract. * 3. Investors: a lower barrier to investment, where everyone can own just a fragment of a piece of content. People can also bid or trade on a secondary market. ## How we built it * Frontend in HTML, CSS, SCSS, Less, React.JS * Backend in Express.JS, Node.JS * ELUV.IO for minting video NFTs (ETH-based) and for playing streaming videos with high quality & low latency * CockroachDB (a distributed SQL DB) for storing structured user information (name, email, account, password, transactions, balance, etc.) * IPFS & Filecoin (distributed protocol & data storage) for storing video/course previews (decentralization & anti-censorship) ## Challenges we ran into * Transition from design to code * CockroachDB has an extensive & complicated setup that requires other extensions and stacks (like Docker) during the setup phase, which caused a lot of problems locally on different computers.
* IPFS initially had setup errors, as we had no access to the given ports → we modified the original access files to use different ports instead. * There was an error in Eluv.io's documentation, but the Eluv.io mentor was very supportive :) * The merging process was difficult when we attempted to put all the features (frontend, IPFS+Filecoin, CockroachDB, Eluv.io) into one ultimate full-stack project, as we had worked separately and locally. * Sometimes we found the documentation hard to read and understand - for a lot of the problems we encountered, the docs/forums say DO this rather than RUN this, where the guidance is not specific enough, so we had to spend a lot of extra time researching & debugging. Also, since not a lot of people are familiar with the API, it was hard to find the exact issues we faced. Of course, the staff were very helpful and solved a lot of problems for us :) ## Accomplishments that we're proud of * Our idea! Creative, unique, revolutionary. DeFi + Education + Creator Economy * Learned new technologies like IPFS, Filecoin, Eluv.io, and CockroachDB in one day * Successful integration of each member's work into one big full-stack project ## What we learned * More in-depth knowledge of cryptocurrency, IPFS, and NFTs * Different APIs and their functionalities (strengths and weaknesses) * How to combine different subparts with different functionalities into a single application * How to communicate efficiently with team members whenever there is a misunderstanding or difference of opinion * Make sure we know what is going on within the project through active communication, so that when we detect a potential problem, we solve it right away instead of waiting until it produces more problems * Different hashing methods that are currently popular in the "crypto world", such as multihash with CIDs and IPFS's own hashing system, all of which went beyond our prior knowledge of SHA-256 * The awesomeness of NFT fragmentation - we believe it has great potential in the future * The concept of a decentralized database, which is directly opposite to the centralized data-bank structure most of the world is using ## What's next for EDU.IO * Implement NFT fragmentation (fractionalized tokens) * Improve the trading and secondary market by adding more features, like more graphs * Smart contract development in Solidity for value distribution based on the fractionalized tokens people own * Formulation of more complete rules and regulations - the current trading prices of fractionalized tokens are based on auction transactions, and eventually we hope this can become a free secondary market (just like the stock market)
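To make the weekly value distribution described under "What it does" concrete, here is a small off-chain Python illustration of the pro-rata split that the smart contract would enforce; the holder names and token counts are made up for the example.

```python
# Illustrative only: split one video's weekly value among fractionalized-token holders
# in proportion to their holdings. On EDU.IO this logic would live in a smart contract.

def distribute(video_value: float, holdings: dict[str, int]) -> dict[str, float]:
    total_tokens = sum(holdings.values())
    return {owner: video_value * amount / total_tokens for owner, amount in holdings.items()}

holdings = {"creator": 600, "fan_1": 250, "investor_1": 150}   # 1,000 fractional tokens in total
weekly_value = 120.0                                           # value earned from this week's views
print(distribute(weekly_value, holdings))
# {'creator': 72.0, 'fan_1': 30.0, 'investor_1': 18.0}
```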
## **INSPIRATION** One of our co-founders, Razi Syed visited Frankfurt, Germany alone when he was in Grade 10. Excited for the week-long trip, he barely got any sleep... except for when he was on the train heading to the airport to catch his flight back to Canada. He fell asleep, missed his flight and had to pay over $600 for a new flight. To make sure this never happens to him or anyone else ever again, we decided to create a mobile app called WakeMe. ## **WHAT IT DOES** A user sets a location (intersection, address, landmark, etc.) as their destination and can happily go on about their business, whether that means sleeping, reading, or listening to music. The app then sets up an alarm that will notify the user when they are within a 500m radius of their destination. Location data collected from the phone enables the app to know where exactly the user is at any given time (with user-authorization, of course) and activates the alarm when they are within the aforementioned radius. This is perfect for commuters and travelers who use public transit and are busy or in a foreign country. Bus timings can easily get delayed due to various factors such as accidents, inclement weather or simply due to rush hour traffic, but the distance to a location will never change. ## **HOW WE BUILT IT** We decided to implement our mobile app using Java in Android Studio. In the code, we utilized a few API's such as Google Maps' Distance Matrix API as well as Google's Geofencing API. These tools would enable us to work with location services, providing us with real-time updates of the user's location, therefore allowing us to notify the user as soon as they enter the geofence of their chosen destination. As for the UI, we brainstormed and created a few draft designs of how we thought the app should look. After many iterations and team decision-making, we agreed on our favourite design and implemented it. ## **CHALLENGES** One major challenge we faced was deciding on our project goals. We had many different, ambitious ideas being suggested but we also acknowledged our time constraints and realized that we should get started sooner rather than later. Some time after starting, we revisited some of our initial ideas and considered modifying our chosen idea. However, upon further research and discussion amongst all the team members, we decided to continue working on our originally selected idea. Another challenge we faced was extracting data from unfamiliar sources, but that was quickly resolved by using a workaround strategy that was much more familiar to the team. ## **ACCOMPLISHMENTS** Through all the ups and downs, all the pivoting of ideas, and all the first-timers on our team, we finished the event with a working prototype and a creative project under our belts while simultaneously learning a whole lot. ## **WHAT WE LEARNED** Everything has a workaround. We realized that Android App Development is not as difficult as it may sound to get started with, but also that it has a high-skill-ceiling with plenty to learn and master. We also learned that no matter how different your team members' skillsets may be, as long as you stick together and persevere, anything can be done. ## **WHAT'S NEXT FOR WakeMe** If we were to continue this endeavour beyond the scope of this hackathon, one major feature that we would like to implement is social media interaction. Among other ideas, one of our main considerations would be to allow the user's friends to be able to view the user's current location as it updates in real-time. 
This would ensure that both the user and their friends know when they have arrived at their chosen destination.
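As a minimal sketch of the proximity check at the heart of WakeMe (kept independent of the Google Geofencing API the app actually relies on), the haversine formula gives the distance between the user's current location and the saved destination, and the alarm fires once that distance drops inside the 500 m radius; the coordinates below are arbitrary examples.

```python
# Haversine-based "am I within 500 m of my stop?" check.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi, dlam = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def should_ring(current, destination, radius_m=500):
    return haversine_m(*current, *destination) <= radius_m

# Roughly 400 m apart in downtown Toronto, so the alarm would fire.
print(should_ring((43.6532, -79.3832), (43.6565, -79.3850)))
```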
winning
## Inspiration We all know the feeling. Playing "Where's Waldo", and no matter where you look, you can't find the dude at all. Well we thought, what could be more frustrating than finding out exactly how hard you did look. And what could we learn from this information along the way? ## What it does **Still Can't Find Waldo** uses Adhawk's Eye-Tracking glasses to see where you were really looking while playing "Where's Waldo." After finding Waldo, we replay to you, live, the journey you took along the way and how dire it got at times. We'll highlight your vision path on the page so you can see how many times you glanced over ol' pal Waldo without realizing it and provide some insight into how much of the page you actually searched. ## How we built it We used Adhawk's Eye-Tracking glasses to track users' gaze while they were playing. By leveraging Socket.io, we were able to pipeline the data from the hardware through our Flask backend and over to our React Frontend. ## Challenges we ran into Creating this data pipeline was difficult. Additionally, figuring out how to project our gaze onto a screen without the usage of a front-facing camera was exceptionally tedious as well. ## Accomplishments that we're proud of We finished. ## What's next for Still Can't Find Waldo? There are many more behaviours that could be analyzed such as pupil dilation and we're curious if there are correlations that can be found from the Where's Waldo case study.
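As a minimal sketch of that pipeline (assuming a hypothetical "gaze" event carrying normalized screen coordinates from the eye tracker, rather than AdHawk's real data format), a Flask-SocketIO server can receive each sample and re-broadcast it so the React frontend can draw the gaze path on the page.

```python
# Sketch of the gaze relay: a hardware-side client emits "gaze" samples, the server fans them out.
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

@socketio.on("gaze")
def relay_gaze(sample):
    # sample is assumed to look like {"x": 0.42, "y": 0.77, "t": 1699999999.1}
    # Broadcast to every other connected client (i.e., the React frontend).
    emit("gaze", sample, broadcast=True, include_self=False)

if __name__ == "__main__":
    socketio.run(app, port=5000)
```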
## Inspiration In 2019, close to 400 people in Canada died due to distracted drivers. They also cause an estimated $102.3 million worth of damages. While some modern cars have pedestrian collision avoidance systems, most drivers on the road lack the technology to audibly cue them when they may be distracted. We want to improve driver attentiveness and safety in high-pedestrian areas such as crosswalks, parking lots, and residential areas. Our solution is more economical and easier to implement than installing a full suite of sensors and software onto an existing car, which makes it perfect for users who want peace of mind for themselves and their loved ones. ## What it does EyeWare combines the eye-tracking capabilities of the **AdHawk MindLink** with the onboard camera to provide pedestrian tracking and driver attentiveness feedback through audio cues. We label each pedestrian based on their risk of collision: GREEN for seen pedestrians, YELLOW for unseen far pedestrians, and RED for unseen near pedestrians. A warning chime is played when a pedestrian gets too close, to alert the driver. ## How we built it Using the onboard camera of the **AdHawk MindLink**, we pass video to our **OpenCV** machine learning pipeline to detect and classify people (pedestrians) based on their perceived distance from the user and assign them unique IDs. Simultaneously, we use the **AdHawk Python API** to track the user's gaze and cross-reference it against the detected pedestrians to determine whether the driver has seen each pedestrian. ## Challenges we ran into The biggest challenge was integrating complex, cross-domain systems in little time. Over the last 36 hours, we learned and developed applications using the MindLink glasses and the AdHawk API, utilized object detection with **YOLO** (You Only Look Once) machine learning algorithms, and used **SORT** object tracking algorithms. ## Accomplishments that we're proud of We got two major technologies working during this hackathon: the eye tracking and the machine learning! Getting the **MindLink** to work with **OpenCV** was groundbreaking and enabled our machine learning algorithm to run directly on **MindLink's** camera stream. Though the process was technically complex and complicated, we are proud that we were able to accomplish it in the end. We're also incredibly proud of the beautiful pitch deck. Our domain.com domain name (eyede.tech) is also a pun on "ID tech", "eye detect", and "I detect". ## What we learned In this hackathon, we had the privilege of working with the AdHawk team to bring the MindLink eye-tracking glasses to life. We learned so much about eye-tracking APIs and their technologies as a whole. We also worked extensively with object detection algorithms like YOLO, OpenCV for image processing, and the SORT algorithm for temporally consistent object tracking. ## What's next for EyeWare Our platform is highly portable, can be easily retrofitted to existing vehicles, and can be used by drivers around the world. We are excited to partner with OEMs and make this feature more widely available to people. Furthermore, we were limited by the mobile compute power available at the hackathon, so removing that limit will also enable us to use more powerful object detection and tracking algorithms like DeepSORT.
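As a rough illustration of the risk-labelling step only (a sketch, not our exact code): the pedestrian boxes are assumed to come from the YOLO + SORT pipeline and the gaze point from the AdHawk API, and using bounding-box height as a stand-in for perceived distance is a simplifying assumption made for the example.

```python
# Label one tracked pedestrian as GREEN (seen), YELLOW (unseen, far), or RED (unseen, near).
def label_pedestrian(box, gaze, frame_height, near_ratio=0.5):
    """box = (x1, y1, x2, y2) in pixels; gaze = (gx, gy) in pixels."""
    x1, y1, x2, y2 = box
    gx, gy = gaze
    seen = x1 <= gx <= x2 and y1 <= gy <= y2          # the driver's gaze landed on this person
    near = (y2 - y1) / frame_height >= near_ratio     # a tall box roughly means a close pedestrian
    if seen:
        return "GREEN"
    return "RED" if near else "YELLOW"                # RED would also trigger the warning chime

print(label_pedestrian((300, 100, 380, 600), gaze=(500, 400), frame_height=720))  # RED
```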
## Inspiration The inspiration for this project was a group-wide understanding that trying to scroll through a feed while your hands are dirty or in use is near impossible. We wanted to create a computer program to allow us to scroll through windows without coming into contact with the computer, for eating, chores, or any other time when you do not want to touch your computer. This idea evolved into moving the cursor around the screen and interacting with a computer window hands-free, making boring tasks, such as chores, more interesting and fun. ## What it does HandsFree allows users to control their computer without touching it. By tilting their head, moving their nose, or opening their mouth, the user can control scrolling, clicking, and cursor movement. This allows users to use their device while doing other things with their hands, such as doing chores around the house. Because HandsFree gives users complete **touchless** control, they’re able to scroll through social media, like posts, and do other tasks on their device, even when their hands are full. ## How we built it We used a DLib face feature tracking model to compare some parts of the face with others when the face moves around. To determine whether the user was staring at the screen, we compared the distance from the edge of the left eye and the left edge of the face to the edge of the right eye and the right edge of the face. We noticed that one of the distances was noticeably bigger than the other when the user has a tilted head. Once the distance of one side was larger by a certain amount, the scroll feature was disabled, and the user would get a message saying "not looking at camera." To determine which way and when to scroll the page, we compared the left edge of the face with the face's right edge. When the right edge was significantly higher than the left edge, then the page would scroll up. When the left edge was significantly higher than the right edge, the page would scroll down. If both edges had around the same Y coordinate, the page wouldn't scroll at all. To determine the cursor movement, we tracked the tip of the nose. We created an adjustable bounding box in the center of the users' face (based on the average values of the edges of the face). Whenever the nose left the box, the cursor would move at a constant speed in the nose's position relative to the center. To determine a click, we compared the top lip Y coordinate to the bottom lip Y coordinate. Whenever they moved apart by a certain distance, a click was activated. To reset the program, the user can look away from the camera, so the user can't track a face anymore. This will reset the cursor to the middle of the screen. For the GUI, we used Tkinter module, an interface to the Tk GUI toolkit in python, to generate the application's front-end interface. The tutorial site was built using simple HTML & CSS. ## Challenges we ran into We ran into several problems while working on this project. For example, we had trouble developing a system of judging whether a face has changed enough to move the cursor or scroll through the screen, calibrating the system and movements for different faces, and users not telling whether their faces were balanced. It took a lot of time looking into various mathematical relationships between the different points of someone's face. Next, to handle the calibration, we ran large numbers of tests, using different faces, distances from the screen, and the face's angle to a screen. 
To counter the last challenge, we added a box feature to the window displaying the user's face to visualize the distance they need to move to move the cursor. We used the calibrating tests to come up with default values for this box, but we made customizable constants so users can set their boxes according to their preferences. Users can also customize the scroll speed and mouse movement speed to their own liking. ## Accomplishments that we're proud of We are proud that we could create a finished product and expand on our idea *more* than what we had originally planned. Additionally, this project worked much better than expected and using it felt like a super power. ## What we learned We learned how to use facial recognition libraries in Python, how they work, and how they’re implemented. For some of us, this was our first experience with OpenCV, so it was interesting to create something new on the spot. Additionally, we learned how to use many new python libraries, and some of us learned about Python class structures. ## What's next for HandsFree The next step is getting this software on mobile. Of course, most users use social media on their phones, so porting this over to Android and iOS is the natural next step. This would reach a much wider audience, and allow for users to use this service across many different devices. Additionally, implementing this technology as a Chrome extension would make HandsFree more widely accessible.
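For readers who want to see the landmark comparisons from "How we built it" in code, here is a condensed sketch assuming the standard dlib 68-point landmark model; the pixel thresholds are illustrative placeholders rather than the calibrated, user-adjustable values described above.

```python
# Per-frame decisions from dlib landmarks: looking-at-camera check, scroll direction, click.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # model file must be downloaded

def frame_decisions(gray):
    faces = detector(gray)
    if not faces:
        return {"looking": False}                     # no face -> reset / do nothing
    pts = predictor(gray, faces[0])
    p = lambda i: (pts.part(i).x, pts.part(i).y)

    left_gap = p(36)[0] - p(0)[0]                     # left eye corner to left face edge
    right_gap = p(16)[0] - p(45)[0]                   # right face edge to right eye corner
    looking = abs(left_gap - right_gap) < 25          # big asymmetry -> user looked away

    tilt = p(16)[1] - p(0)[1]                         # right face edge higher (smaller y) -> scroll up
    scroll = "up" if tilt < -15 else "down" if tilt > 15 else None

    click = p(66)[1] - p(62)[1] > 18                  # inner-lip gap -> mouth open -> click
    return {"looking": looking, "scroll": scroll if looking else None, "click": click}

# Feed it grayscale webcam frames, e.g. frame_decisions(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
```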
losing
## Inspiration We recognized that many individuals are keen on embracing journaling as a habit, but hurdles like the "all or nothing" mindset often hinder their progress. The pressure to write extensively or perfectly every time can be overwhelming, deterring potential journalers. Consistency poses another challenge, with life's busy rhythm making it hard to maintain a daily writing routine. The common issue of forgetting to journal compounds the struggle, as people find it difficult to integrate this practice seamlessly into their day. Furthermore, the blank page can be intimidating, leaving many uncertain about what to write and causing them to abandon the idea altogether. In addressing these barriers, our aim with **Pawndr** is to make journaling an inviting, effortless, and supportive experience for everyone, encouraging a sustainable habit that fits naturally into daily life. ## What it does **Pawndr** is a journaling app that connects with you through text and voice. You receive conversational prompts delivered to your phone, sparking meaningful reflections wherever you are and making journaling more accessible and fun. Simply reply to our friendly messages with your thoughts or responses to our prompts, and watch your personal journey unfold. Your memories are safely stored, easily accessible through our web app, and beautifully organized. **Pawndr** is able to transform your daily moments into a rich tapestry of self-discovery. ## How we built it The front-end was built using React.js. We built the backend using FastAPI and used MongoDB as our database. We deployed our web application and API to a Google Cloud VM using nginx and uvicorn. We utilized Infobip to build our primary user interaction method. Finally, we made use of OpenAI's GPT-3 and Whisper APIs to power organic journaling conversations. ## Challenges we ran into Our user stories required us to use 10-digit phone numbers for SMS messaging via Infobip. However, Canadian regulations blocked any live messages we sent using the Infobip API. Unfortunately, this was a niche problem that the sponsor reps could not help us with (we still really appreciate all of their help and support!! <3), so we pivoted to a WhatsApp interface instead. ## Accomplishments that we're proud of We are proud of being able to quickly problem-solve and pivot to a WhatsApp interface when we hit the SMS difficulties. We are also proud of being able to integrate our project into an end-to-end working demo, allowing hackathon participants to experience our project vision. ## What we learned We learned how to deploy a web app to a cloud VM using nginx. We also learned how to use Infobip to interface with WhatsApp Business and SMS. We learned about the various benefits of journaling, the common barriers to journaling, and how to make journaling rewarding, effortless, and accessible to users. ## What's next for Pawndr We want to implement more channels to allow our users to journal with us on any platform of their choice (SMS, Messenger, WhatsApp, WeChat, etc.). We also hope to offer more comprehensive sentiment analysis visualization, including plots of mood trends over time.
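As a rough sketch of the journaling flow (not the production code), here is a minimal FastAPI webhook that turns an incoming message into a reflective reply; the request schema is a hypothetical stand-in for Infobip's real payload, voice notes would be transcribed with Whisper before this step, and an OpenAI chat model stands in for the GPT-3 API we used at the time.

```python
# Hypothetical inbound-message webhook: receive a journal entry, reply with a gentle prompt.
from fastapi import FastAPI
from pydantic import BaseModel
from openai import OpenAI

app = FastAPI()
llm = OpenAI()  # reads OPENAI_API_KEY from the environment

class IncomingMessage(BaseModel):   # assumed shape, not Infobip's actual schema
    sender: str
    text: str

@app.post("/webhook/whatsapp")
async def journal_entry(msg: IncomingMessage):
    reply = llm.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a warm journaling companion. Reflect back "
             "what the user shared and ask one gentle follow-up question."},
            {"role": "user", "content": msg.text},
        ],
    )
    # The real app would also persist the entry and reply to MongoDB here.
    return {"reply": reply.choices[0].message.content}
```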
## Inspiration When you think of the word “nostalgia”, what’s the first thing you think of? Maybe it’s the first time you tried ice cream, or the last concert you’ve been to. It could even be the first time you’ve left the country. Although these seem vastly different and constitute unique experiences, all of these events tie to one key component: memory. Can you imagine losing access to not only your memory, but your experiences and personal history? Currently, more than 55 million people worldwide suffer from dementia, with 10 million new cases every year. Many therapies have been devised to combat dementia, such as reminiscence therapy, which uses various stimuli to help patients recall distant memories. Inspired by this, we created Remi, a tool to help dementia victims remember what’s important to them. ## What it does We give users the option to sign up as a dementia patient, or as an individual signing up on behalf of a patient. Contributors to a patient's profile (friends and family) can then add to the profile descriptions of any memory they have relating to the patient. After this, we use the Cohere API to piece together a personalized narration of the memory; this helps patients remember all of their most heartening past memories. We then use an old-fashioned-style answering machine, created with an Arduino, and with the click of a button patients can listen to their past memories read to them by a loved one. ## How we built it Our back-end was created using Flask, where we utilized the Kintone API to store and retrieve data from our Kintone database. It also processes the prompts from the front-end using the Cohere API to generate our personalized memory message; as well, it takes in button inputs from an Arduino UNO to play the text-to-speech. Lastly, our front-end was built using ReactJS and TailwindCSS for seamless user interaction. ## Challenges we ran into As it was our first time setting up a Flask back-end and using Kintone, we ran into a couple of issues. With our back-end, we had trouble setting up our endpoints to successfully POST and GET data from our database. We also ran into trouble setting up the API token with Kintone to integrate it into our back-end. There were also several hardware limitations with the Arduino UNO, which was probably not the most suitable board for this project. The inability to receive wireless data or handle large audio files was a drawback, but we found workarounds that would allow us to demo properly, such as using USB communication. ## Accomplishments that we're proud of * Utilizing frontend, backend, hardware, and machine learning libraries in our project * Delegating tasks and working as a team to put our project together * Learning a lot of new tech and going to many workshops! ## What we learned We learned a lot about setting up databases and web frameworks! ## What's next for Remi Since there were many drawbacks with the hardware, future developments of the project would most likely switch to a board with higher bandwidth and Wi-Fi capabilities, such as a Raspberry Pi. We also wanted to add a feature where contributors to a patient's memory library could record their voices, and our text-to-speech AI would mimic them when narrating a story. Reminiscence therapy involves more than narrating memories, so we wanted to add more sensory features, such as visuals, to invoke memories and nostalgia for the patient as effectively as possible.
For example, contributors could submit a picture of their memories along with their description, which could be displayed on an LCD attached to the answering machine design. On the business end, we hope to collaborate with health professionals to see how we can further help dementia victims.
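To illustrate the narration step from "How we built it", here is a minimal sketch using the classic Cohere generate endpoint; the prompt wording, model choice, and example memory are illustrative rather than the exact ones used in the project.

```python
# Turn a contributor's raw memory description into a short narration for the patient.
import cohere

co = cohere.Client("YOUR_COHERE_API_KEY")

def narrate_memory(patient_name: str, contributor: str, description: str) -> str:
    prompt = (
        f"Rewrite the following memory as a short, warm story told to {patient_name} "
        f"by their loved one {contributor}, in the second person:\n\n{description}\n\nStory:"
    )
    response = co.generate(model="command", prompt=prompt, max_tokens=200, temperature=0.7)
    return response.generations[0].text.strip()

print(narrate_memory("Ada", "her daughter May",
                     "We baked apple pie every fall and always burned the first one."))
```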
# Project Memoir ## Inspiration The genesis of Memoir stemmed from a shared longing to reconnect with the simpler, joyful moments of our pasts. We were inspired by the universal experience of nostalgia - be it the excitement of trading Pokémon cards or the freedom of playground adventures. Our goal was to create a digital space where these memories could be not just revisited, but shared and celebrated as collective human experiences. ## What We Learned This journey has been full of learning. We delved deep into the realms of app development, particularly in building intuitive and elegant user interfaces and ensuring data security with Auth0. We also explored the complexities of creating dynamic, visually appealing node graphs, using a BIRCH clustering algorithm, for our 'Clouds' feature. On a broader level, we learned about the power of shared experiences in building communities and the deep emotional connections that a platform like Memoir can foster. ## How We Built It Memoir's development was a multi-stage process: 1. **Conceptualization**: We began with brainstorming sessions to refine our vision for Memoir. 2. **Design**: Focusing on user experience, we designed an engaging interface. 3. **Development**: Our tech stack included React, MongoDB, and Flask. We integrated Auth0 for secure user authentication. 4. **Clouds Implementation**: The development of our unique 'Clouds' feature was both challenging and rewarding, requiring innovative data visualization techniques. 5. **Testing and Feedback**: We conducted multiple rounds of user testing, refining Memoir's design based on user response. ## Challenges We Faced The process wasn't always smooth. One major challenge was optimizing the 'Clouds' feature for both performance and visual impact. We also faced hurdles in ensuring user data security, a paramount concern for us, which led us to choose Auth0. Balancing simplicity with functionality in our app design was a constant learning curve. Through perseverance and collaboration, we overcame these challenges, emerging with a product we are truly proud of. ## Conclusion Memoir is more than just an app; it's a testament to the shared threads of past memories that bind us. As we look to the future, we aim to expand our features, enhance user engagement, and continue to provide a safe, vibrant space for reliving cherished memories. Our journey with Memoir is just beginning, and we are excited to see how it continues to evolve and touch lives. ---
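As a small illustration of how the 'Clouds' grouping could work with BIRCH, here is a sketch that clusters a few toy memory posts; the TF-IDF embedding is an assumption made purely for the example, since the real feature pipeline isn't detailed above.

```python
# Cluster memory posts into "clouds" with BIRCH; posts sharing a label share a cloud.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import Birch

memories = [
    "Trading Pokemon cards at recess",
    "Opening a booster pack of Pokemon cards",
    "Building sandcastles at the beach every summer",
    "Family road trips to the beach",
]

vectors = TfidfVectorizer().fit_transform(memories).toarray()
labels = Birch(n_clusters=2, threshold=0.5).fit_predict(vectors)

for memory, label in zip(memories, labels):
    print(label, memory)   # memories with the same label would be drawn in the same cloud
```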
winning
## Inspiration We wanted to make customer service a better experience for both parties. Arguments often arise during phone calls due to misunderstanding, so we wanted to create an app where voice can be analyzed and a course of action is suggested. ## What it does Our program is able to record a conversation and display the emotional state of the person speaking. This mobile app allows users, specifically people working in the customer service department, to understand the caller better. As the customer explains the situation, the sadness, joy, fear, anger, and disgust in his/her voice are calculated and displayed in a graph. Afterwards, a summary of the entire conversation is given, along with the strongest emotional tone, language tone, and social tone. The user has the ability to see the specific details of what the strongest tone consisted of. Lastly, a course of action is suggested according to the combination of tones. ## How we built it We began building our app using traditional paper prototyping and user research. We then designed screens, logos, and icons with Photoshop. Afterwards, we used Keynote to display interactive data to showcase emotion. Lastly, we used InVision to prototype the app. For the backend, we attempted to use the IBM Watson Tone Analyzer API to help detect communication tones from written text. ## Challenges we ran into The challenges we ran into were slow Wi-Fi, a lack of technology, and only basic coding knowledge. ## Accomplishments that we're proud of We are proud of finishing a prototype as a two-person team in a demanding situation. We were able to design the user interface and user experience and tackle steep learning curves in a short amount of time. The mobile application is beautifully designed and captures our mission of helping the customer service field through understanding and listening. ## What we learned We learned the importance of the design process: ideate, research, prototype, and iterate. We learned programs such as Principle and developed our skills in Photoshop, Keynote, Sketch, and InVision. We attended the Intro to APIs workshop and learned about HTTP requests and how to use them to interact with APIs. ## What's next for Emotionyze We are going to continue working with the IBM Watson Tone Analyzer API to further develop the detection of communication tones from written text. We are also going to work on the overall user experience and user interface to create an effective mobile application.
## Inspiration At the heart of our creation lies a profound belief in the power of genuine connections. Our project was born from the desire to create a safe space for authentic conversations, fostering empathy, understanding, and support. In a fast-paced, rapidly changing world, it is important for individuals to identify their mental health needs and receive the proper support. ## What it does Our project is an innovative chatbot leveraging facial recognition technology to detect and respond to users' moods, with a primary focus on enhancing mental health. By providing a platform for open expression, it aims to foster emotional well-being and create a supportive space for users to freely articulate their feelings. ## How we built it We used OpenCV (cv2) to analyze live video and its built-in cascade-based face detection to place a bounding box around each face it identifies. Then, using the DeepFace library with a pre-trained model for identifying emotions, we assigned each detected face an emotion in real time. Using this information, we used Cohere's large language model to generate a message corresponding to the emotion the individual is feeling. The LLM is also used for the back-and-forth chat between the user and the bot. All of this information is displayed on the website, which is built using Flask for the backend and HTML, CSS, React, and Node.js for the frontend. ## Challenges we ran into The largest challenge we encountered was the front-end portion of the project. Our group lacked experience with developing websites, and we ran into many issues connecting the back-end to the front-end. ## Accomplishments that we're proud of Creating the bounding boxes around the faces, incorporating a machine learning model into the image processing, and attempting to work on the front end for the first time with no experience. ## What we learned We learned how to create a project with a team, divide up tasks, and combine them into one cohesive project. We also learned new technical skills (e.g., how to use APIs, machine learning, and front-end development). ## What's next for My Therapy Bot Design a more visually appealing and interactive webpage to display our chatbot. Ideally, it would include a live video feed of the user with bounding boxes and emotions while chatting. It would also be nice to train our own machine learning model to determine expressions.
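A condensed sketch of the detection loop described above (illustrative, not our exact project code): OpenCV's built-in Haar cascade boxes each face and DeepFace's pretrained model labels its dominant emotion, which is what would then steer the chatbot's reply.

```python
# Live emotion labelling: Haar cascade for face boxes, DeepFace for the dominant emotion.
import cv2
from deepface import DeepFace

cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # Recent DeepFace versions return a list of result dicts, one per detected face.
        result = DeepFace.analyze(frame[y:y + h, x:x + w],
                                  actions=["emotion"], enforce_detection=False)
        emotion = result[0]["dominant_emotion"]
        cv2.putText(frame, emotion, (x, y - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("My Therapy Bot", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```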
## Inspiration Knowtworthy is a startup that all three of us founded together, with the mission to make meetings awesome. We have spent this past summer at the University of Toronto’s Entrepreneurship Hatchery’s incubator executing on our vision. We’ve built a sweet platform that solves many of the issues surrounding meetings, but we wanted a glimpse of the future: entirely automated meetings. So we decided to challenge ourselves and create something that the world has never seen before: sentiment analysis for meetings while transcribing and attributing all speech. ## What it does While we focused on meetings specifically, as we built the software we realized that the applications for real-time sentiment analysis are far more varied than initially anticipated. Voice transcription and diarisation are very powerful for keeping track of what happened during a meeting, but sentiment can be used anywhere from the boardroom to the classroom to a psychologist’s office. ## How I built it We felt a web app was best suited for software like this so that it can be accessible to anyone at any time. We built the frontend on React, leveraging Material UI, React-Motion, Socket.IO, and Chart.js. The backend was built on Node (with Express) as well as Python for some computational tasks. We used gRPC, Docker, and Kubernetes to launch the software, making it scalable right out of the box. For all relevant processing, we used Google Speech-to-Text, Google diarization, Stanford Empath, scikit-learn, and GloVe (for word vectors). ## Challenges I ran into Integrating so many moving parts into one cohesive platform was a challenge to keep organized, but we used Trello to stay on track throughout the 36 hours. Audio encoding was also quite challenging, as we ran up against some limitations of JavaScript while trying to stream audio in the correct and acceptable format. Apart from that, we didn’t encounter any major roadblocks, but we were each working for almost the entire 36-hour stretch as there were a lot of features to implement. ## Accomplishments that I'm proud of We are super proud of the fact that we were able to pull it off, as we knew this was a challenging task to start and we ran into some unexpected roadblocks. There is nothing else like this software currently on the market, so being first is always awesome. ## What I learned We learned a whole lot about integration on both the frontend and the backend. We prototyped before coding, introduced animations to improve the user experience, learned far too much about how computers store numbers (:p), and did a whole lot of stuff in real time. ## What's next for Knowtworthy Sentiment Knowtworthy Sentiment aligns well with our startup’s vision for the future of meetings, so we will continue to develop it and make it more robust before integrating it directly into our existing software. If you want to check out our stuff you can do so here: <https://knowtworthy.com/>
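To make the sentiment step concrete, here is a minimal sketch that scores speaker-attributed snippets with Empath; the transcript lines, the chosen categories, and the assumption that diarized text has already come back from Google Speech-to-Text are all illustrative.

```python
# Score each attributed utterance against a few Empath lexical categories.
from empath import Empath

lexicon = Empath()
CATEGORIES = ["positive_emotion", "negative_emotion", "trust", "nervousness"]

transcript = [
    ("Speaker 1", "I think this launch plan is great and I'm confident we can hit the date."),
    ("Speaker 2", "I'm worried the timeline is too aggressive and we'll miss something."),
]

for speaker, utterance in transcript:
    scores = lexicon.analyze(utterance, categories=CATEGORIES, normalize=True)
    print(speaker, scores)   # per-utterance scores like these would feed the live charts
```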
losing
## Triangularizer Takes your simple mp4 videos and makes them hella rad!
## Inspiration Are you stressed about COVID-19? You're not alone. Over 50% of adults in the US reported their mental health was [negatively affected due to worry and stress over the new coronavirus](https://www.healthline.com/health-news/covid-19-lockdowns-appear-to-be-causing-more-cases-of-high-blood-pressure). City lockdowns and the implementation of COVID-19 measures affected many industries, and many lost their jobs during this period. We have long known that campfires and fires in a hearth help us to relax, [as fire played a key role in human evolution and was closely linked to human psychology](https://www.scotsman.com/news/uk-news/relaxing-fire-has-good-your-health-1520224). Here, we are presenting a solution that helps people relax while generating an extra source of income. ## What it does CryptoChill fixes the main problem that people tend to have with the classic Yulelog (virtual fireplace) - namely, that it does not actually generate heat. CryptoChill provides users with a way to relax and get warm while engaging in a warm (no pun intended!) community via live chat where users can talk to each other. CryptoChill generates heat by using the user's computer to mine cryptocurrency, thus warming the user, and in exchange, CryptoChill subsidizes the user's electricity costs with a revenue split between the user and the platform. This enables the user to stay warm in an environmentally friendly way, as on average electric heating is many times cleaner than gas or oil. ### Features 1. Users can view the description of the site on the landing page 2. Users give their consent for cryptocurrency mining by signing up/logging in 3. After logging in, users can search & browse popular Yulelog/fireplace videos, or enter a URL of their favorite video. They can save videos as favorites. 4. They can browse their saved videos on the favorites page 5. Each page allows users to listen to music from Spotify as they watch the Yulelog videos. 6. When a user plays a video from the carousel of videos, cryptocurrency will be generated in the background. 7. Users can also create a chat room where other users can join and talk among themselves, listen to music, and watch the Yulelog/fireplace videos together. ## How we built it * UI design using Figma * Front-end development using HTML, CSS, JavaScript * Back-end development using Node.js, Express, and EJS * Real-time communication using Socket.io * Client-side CPU cryptocurrency mining via Webminepool * User creation and authentication using Firebase * Application hosting through Google Cloud Platform's App Engine ## Challenges we ran into We are an international team spread across 3 time zones. Not all of us are awake at the same time, and it was a challenge to find a meeting time that suited everyone. We overcame the challenge through effective communication and planning via Discord, Zoom, and Trello. ## Accomplishments that we're proud of We built a full application and completed what we envisioned at the beginning! ## What we learned We learnt a lot about implementing live chat using Socket.io, authentication using Firebase, and integrating the front-end and back-end. We also learned how to incorporate cryptocurrency mining into our application! ## What's next for CryptoChill * Link to a cryptocurrency wallet * Choice of algorithm & cryptocurrency * GPU mining to increase payouts and heat output via GPU.js
## Inspiration Some of our team members are hobbyist DJs, playing shows for college parties and other local events. As smaller-scale DJs, we've always dreamed of having amazing live visuals for our sets, at the same quality as the largest EDM festivals; however, the cost, effort, and skillset made this clearly infeasible for us, and infeasible for any musicians but the higher-profile ones who have the budget to commission lengthy visualizations and animations from third-party digital artists. That was a few years ago; recently, since generative vision models in ML have reached such a high quality of output, and open-source models (namely, stable diffusion) allow for intricate, transformative engineering at the level of model internals, we've realized our music live visuals dream has become possible. We've come up with an app that takes a song and a prompt, and generates a music video in real-time that's perfectly synced to the music. The app will allow DJs and other musicians to have professional-level visuals for their live shows without the need for expensive commissions or complex technical setups. With our app, any musician can have access to stunning visuals that will take their performances to the next level. ## What it does Mixingjays is an innovative video editing tool powered by machine learning that's designed specifically for creating stunning visual accompaniments to music. Our app allows clients to upload their music tracks and specify various parameters such as textures, patterns, objects, colors, and effects for different sections of the song. Our system then takes these specifications and translates them into prompts for our advanced ML stable diffusion model, which generates a unique video tailored to the uploaded music track. Once the video is generated, the user can easily edit it using our intuitive and user-friendly interface. Our app provides users with a range of editing options, from basic graphical edits (similar to popular photo editing apps like Instagram and VSCO) to advanced generative edits that are handled by our ML pipeline. The user experience of Mixingjays is similar to that of well-known video editing software like Adobe Premiere or Final Cut, but with the added power of our machine learning technology driving the video outputs. Our app provides users with the ability to create professional-quality music videos with ease and without the need for extensive technical knowledge. Whether you're a professional musician or an amateur DJ, Mixingjays can help you create stunning visuals that will take your music to the next level while giving musicians full creative control over the video. ## How we built it One of our baseline innovations is generating video from stable diffusion and engineering on the model internals of stable diffusion. As a standalone, stable diffusion outputs static images based on natural language prompts. Given a list of images that stable diffusion outputs from several prompts, we're able to generate video between the images by interpolating between images in CLIP latent space. A detail of stable diffusion is that it actually computes its output in a latent space, a compressed coordinate system that the model abstractly uses to represent images and text; the latent vector in latent space is then decoded into the image we see.
This latent space is well-behaved in Euclidean space, so we can generate new image outputs and smooth transitions between images, strung into a video, by linearly interpolating between the latent vectors of two different images before decoding every interpolated vector. If the two images we interpolate between are from two different prompts with the same seed of stable diffusion, or the same prompt across two different seeds of stable diffusion, the resulting interpolations and video end up being semantically coherent and appealing.[1] A nice augmentation to our interpolation technique is perturbing the latent vector in coordinate space, making a new latent vector anywhere in a small Lp-norm ball surrounding the original vector. (This resembles the setting of adversarial robustness in computer vision research, although our application is not adversarial.) As a result, given a list of stable diffusion images and their underlying latent vectors, we can generate video by latent-space-interpolating between the images in order. We generate videos directly from the music, which relies on a suite of algorithms for music analysis. The demucs model from Facebook AI Research (FAIR) performs "stem separation," or isolation of certain individual instruments/elements in a music track; we pass our music track into the model and generate four new music tracks of the same length, containing only the vocals, drums, bass, and melodic/harmonic instruments of the track, respectively. With OpenAI Whisper, we're able to extract all lyrics from the isolated vocal track, as well as bucket lyrics into corresponding timestamps at regular intervals. With more classical music analysis algorithms, we're able to: * convert our drum track and bass track into timestamps for the rhythm/groove of the track * and convert our melody/harmony track into a timestamped chord progression, essentially the formal musical notation of harmonic information in the track. These aspects of music analysis feed directly into our stable diffusion and latent-space-interpolation pipeline. In practice, the natural language prompts into stable diffusion usually consist of long lists of keywords specifying: * objects, textures, themes, backgrounds * physical details * adjectives influencing tone of the picture * photographic specs: lighting, shadows, colors, saturation/intensity, image quality ("hd", "4k", etc) * artistic styles Many of these keywords are exposed to the client at the UI level of our application, as small text inputs, sliders, knobs, dropdown menus, and more. As a result, the client retains many options for influencing the video output, independently of the music. These keywords are concatenated with natural language encapsulations of musical aspects: * Lyrics are directly added to the prompts. * Melodic/harmonic information and chord progressions map to different colors and themes, based on different chords' "feel" in music theory. * Rhythm and groove are added to the interpolation system! We speed up, slow down, and alter our trajectory of movement through latent space in time with the rhythm. The result is a high-quality visualizer that incorporates both the user's specifications/edits and the diverse analyzed aspects of the music track. Theoretically, with good engineering, we'd be able to run our video generation and editing pipelines very fast, essentially in real time!
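A pared-down sketch of this interpolation idea, using the Hugging Face diffusers library as a stand-in for our exact pipeline (the model ID, prompts, step counts, and frame count are all illustrative): two prompts are denoised to their final latents from the same seed, we lerp between those latents, and only the cheap VAE decoder runs per frame.

```python
# Interpolate between the final denoised latents of two prompts and decode each frame.
import torch
from diffusers import StableDiffusionPipeline
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)

def final_latent(prompt, seed=42):
    gen = torch.Generator(device).manual_seed(seed)
    # output_type="latent" skips decoding and returns the denoised latent tensor
    return pipe(prompt, generator=gen, num_inference_steps=25, output_type="latent").images

def decode(latent):
    with torch.no_grad():
        img = pipe.vae.decode(latent / pipe.vae.config.scaling_factor).sample
    img = (img / 2 + 0.5).clamp(0, 1).permute(0, 2, 3, 1).cpu().numpy()[0]
    return Image.fromarray((img * 255).astype("uint8"))

a = final_latent("neon jellyfish drifting through a dark ocean, hd, 4k")
b = final_latent("glowing geometric jellyfish made of stained glass, hd, 4k")

frames = [decode(torch.lerp(a, b, t)) for t in torch.linspace(0, 1, steps=24)]
frames[0].save("transition.gif", save_all=True, append_images=frames[1:], duration=80)
```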
Because interpolation occurs at the very last layer of the stable diffusion model internals, and video output from it depends only on a simple decoder instead of the entire stable diffusion model, video generation from interpolation is very fast and the runtime bottleneck depends only on stable diffusion. We generate one or two dozen images from stable diffusion per video, so by generating each image in parallel on a wide GPU array, as well as incorporating stable diffusion speedups, we're able to generate the entire video for a music track in a few seconds! ## Challenges we ran into Prompt generation: Generating good-quality prompts is hard. We tried different ways of prompting the model to see their effect on the quality of the generated images and output video, and discovered we needed more description of the song in addition to the manually inputted prompt. This motivated us to look into music analysis. Music Analysis: Our music analysis required pulling together a lot of disparate libraries and systems for digital signal processing that aren't typically used together, or popular enough to be highly supported. The challenge here was wrangling these libraries into a single system without the pipeline crumbling to obscure bugs. Stable Diffusion: The main challenge in using stable diffusion for multiple image generation is the overhead in cost, compute, and time. Primitive implementations of stable diffusion took 30s to generate an image, which made things hard for us since we need to generate multiple images for each song and prompt pair. The Modal workshop at TreeHacks was very helpful for navigating this issue. We used Modal containers and stubs to bake our functions into images so that they run only when called, freeing the GPUs for other functions otherwise. It also helped us parallelise our code, which made things faster. However, since it is new, it was difficult to get it to recognise other files, add data types to function methods, etc. Interpolation: This was by far the hardest part. Interpolation is not only slow but also hard to implement or reproduce. After extensive research and trials with different libraries, we used Google Research's frame-interpolation method. It is implemented in TensorFlow, in contrast to the rest of our code, which was more PyTorch-heavy. In addition, it is slow and scales exponentially with the number of images. A 1-minute video took 10 minutes to generate. Given the 36-hour time limit, we had to generate the videos in advance for our app, but there is tons of scope to make it faster. Integration: The biggest bottlenecks came about when trying to generate visuals for arbitrary input mp3 files. Because of this, we had to reduce our expectation of on-the-fly, real-time rendering of any input file to a set of input files that we generated offline. Render times ranged anywhere from 20 seconds to many minutes, which means that we're still a little bit removed from bringing this forward as a real application for users around the world to toy with. Real-time video editing capabilities also proved difficult to implement given the lack of time for building substantial infrastructure, but we see this as a next step in building out a fully functional product that anyone can use! ## Accomplishments that we're proud of We believe we've made significant progress towards a finalized, professional-grade application for our purposes.
We showed that the interpolation idea produces video output that is high-quality, semantically coherent, granularly editable, and closely linked to the concurrent musical aspects of rhythm, melody, harmony, and lyrics. This suggests our video output could be genuinely usable in professional live performances. Our hackathon result is a more rudimentary proof-of-concept, but our results lead us to believe that continuing this project in a longer-term setting would lead to a robust, fully-featured offering that directly plugs into the creative workflow of hobbyist and professional DJs and other live musical performers. ## What we learned We found ourselves learning a ton about the challenges of implementing cutting-edge AI/ML techniques in the context of a novel product. Given the diverse set of experiences that each of our team members brings, we also learned a ton through cross-team collaboration between scientists, engineers, and designers. ## What's next for Mixingjays Moving forward, we have several exciting plans for improving and expanding our app's capabilities. Here are a few of our top priorities: Improving speed and latency: We're committed to making our app as fast and responsive as possible. That means improving the speed and latency of our image-to-video interpolation process, as well as optimizing other parts of the app for speed and reliability. Video editing: We want to give our users more control over their videos. To that end, we're developing an interactive video editing tool that will allow users to regenerate portions of their videos and choose which generation they want to keep. Additionally, we're exploring the use of computer vision to enable users to select, change, and modify objects in their generated videos. Script-based visualization: We see a lot of potential in using our app to create visualizations based on scripts, particularly for plays and music videos. By providing a script, our app could generate visualizations that offer guidance on stage direction, camera angles, lighting, and other creative elements. This would be a powerful tool for creators looking to visualize their ideas in a quick and efficient manner. Overall, we're excited to continue developing and refining our app to meet the needs of a wide range of creators and musicians. With our commitment to innovation and our focus on user experience, we're confident that we can create a truly powerful and unique tool for music video creation. ## References [1] Seth Forsgren and Hayk Martiros, "Riffusion - Stable Diffusion for real-time music generation," 2022. <https://riffusion.com/about>
losing
## Inspiration Public speaking is greatly feared by many, yet it is a part of life that most of us have to go through. Despite this, the options for preparing for presentations effectively are *greatly limited*. Practicing with others is good, but that requires someone willing to listen to you for potentially hours. Talking in front of a mirror could work, but it does not live up to the real environment of a public speaker. As a result, public speaking is dreaded not only for the act itself, but also because it's *difficult to feel ready*. If there were an efficient way of ensuring you aced a presentation, the negative connotation associated with presentations would no longer exist. That is why we have created Speech Simulator, a VR web application used to practice public speaking. With it, we hope to alleviate the stress that comes with speaking in front of others. ## What it does Speech Simulator is an easy-to-use VR web application. Simply log in with Discord, import your script into the site from any device, then put on your VR headset to enter a 3D classroom, a common location for public speaking. From there, you are able to practice speaking. Behind you is a board containing your script, split into slides, emulating a real PowerPoint-style presentation. Once you have run through your script, you may exit VR, where you will find results based on the application's recording of your presentation. From your talking speed to how many filler words you said, Speech Simulator will provide you with stats based on your performance as well as a summary of what you did well and how you can improve. Presentations can be attempted again and are saved to our database. Additionally, any adjustments to the presentation templates can be made using our editing feature. ## How we built it Our project was created primarily using the T3 stack. The stack uses **Next.js** as our full-stack React framework. The frontend uses **React** and **Tailwind CSS** for component state and styling. The backend utilizes **NextAuth.js** for login and user authentication and **Prisma** as our ORM. Type safety across the whole application was ensured using **tRPC**, **Zod**, and **TypeScript**. For the VR aspect of our project, we used **React Three Fiber** for rendering **Three.js** objects, **React XR** for the headset interactions, and **React Speech Recognition** for transcribing speech to text. The server is hosted on Vercel and the database on **CockroachDB**. ## Challenges we ran into Despite completing the project, we ran into numerous challenges during the hackathon. The largest problem was the connection between the web app on the computer and the VR headset. As they were two separate web clients, it was very challenging to coordinate our site's workflow between the two devices. For example, if a user finished their presentation in VR and wanted to view the results on their computer, how would this be accomplished without the user manually refreshing the page? After weighing websockets against polling, we went with polling + a queuing system, which allowed each client to know what to display (a rough sketch of this pattern follows below). We decided to use polling because it enables a serverless deployment, and we concluded that we did not have enough time to set up websockets. Another challenge we ran into was the 3D configuration of the application. As none of us had real experience with 3D web applications, it was a daunting task to work with meshes and various geometries. However, after a lot of trial and error, we were able to put together a VR solution for our application.
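As a rough, language-agnostic illustration of the polling-plus-queue pattern mentioned above (our actual implementation lives in the Next.js/tRPC backend; the endpoint names and queue shape here are hypothetical):

```python
# Minimal sketch of the polling + queue idea (hypothetical names; not our tRPC code).
from collections import defaultdict, deque
from flask import Flask, jsonify, request

app = Flask(__name__)
events = defaultdict(deque)  # per-user event queues, e.g. "presentation_finished"

@app.post("/events/<user_id>")
def push_event(user_id):
    # The VR client pushes an event when the presentation ends.
    events[user_id].append(request.get_json())
    return jsonify(ok=True)

@app.get("/events/<user_id>")
def poll_events(user_id):
    # The desktop client polls this every few seconds and updates its view
    # (e.g. navigates to the results page) when an event arrives.
    queue = events[user_id]
    return jsonify(events=[queue.popleft() for _ in range(len(queue))])

if __name__ == "__main__":
    app.run(port=5000)
```

The desktop client simply calls the GET endpoint every few seconds, which is what makes a serverless deployment straightforward compared to holding open WebSocket connections.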
## What we learned This hackathon provided us with a wealth of experience and lessons. Although each of us learned a lot on the technical side of this hackathon, there were many other takeaways from this weekend. As this was most of our group's first 24-hour hackathon, we learned to manage our time effectively in a day's span. With a tight time limit and a fairly large project, this hackathon also improved our communication skills and the overall cohesion of our team. However, we did not just learn from our own experiences, but also from others. Viewing everyone's creations gave us insight into what makes a project meaningful, and we gained a lot from looking at other hackers' projects and their presentations. Overall, this event provided us with an invaluable set of new skills and perspectives. ## What's next for VR Speech Simulator There are a ton of ways that we believe we can improve Speech Simulator. The first and potentially most important change is the appearance of our VR setting. As this was our first project involving 3D rendering, we had difficulty adding colour to our classroom. This reduced the immersion we originally hoped for, so improving our 3D environment would allow the user to practice more realistically. Furthermore, as public speaking implies speaking in front of others, large improvements can be made by adding human models into the VR scene. On the other hand, we also believe that we can improve Speech Simulator by adding more functionality to the feedback it provides to the user. From hand gestures to tone of voice, there are so many ways of differentiating the quality of a presentation that could be added to our application. In the future, we hope to add these new features and further elevate Speech Simulator.
## Inspiration Recently, character experiences powered by LLMs have become extremely popular. Platforms like Character.AI, boasting 54M monthly active users and a staggering 230M monthly visits, are a testament to this trend. Yet, despite these figures, most experiences in the market offer text-to-text interfaces with little variation. We wanted to take chatting with characters to the next level. Instead of a simple, standard text-based interface, we wanted an intricate visualization of your character as a 3D model viewable in your real-life environment; actual low-latency, immersive, realistic spoken dialogue with your character; and a really fun, dynamic (generated on-the-fly) 3D graphics experience - seeing objects appear as they are mentioned in conversation - a novel innovation only made possible recently. ## What it does An overview: CharactAR is a fun, immersive, and **interactive** AR experience where you get to speak your character’s personality into existence, upload an image of your character or take a selfie, pick their outfit, and bring your custom character to life in an AR world, where you can chat using your microphone or type a question, and even have your character run around in AR! As an additional super cool feature, we compiled, hosted, and deployed the open-source OpenAI Shap-E model (ourselves, on NVIDIA A100 GPUs from Google Cloud) to do text-to-3D generation, meaning your character is capable of generating 3D objects (mid-conversation!) and placing them in the scene. Imagine the Terminator generating robots, or a marine biologist generating fish and other wildlife! Our combination of these novel technologies makes experiences like those possible! ## How we built it ![flowchart](https://i.imgur.com/R5Vbpn6.png) *So how does CharactAR work?* To begin, we built <https://charactar.org>, a web application that utilizes AssemblyAI (state-of-the-art speech-to-text) to do real-time speech-to-text transcription. Simply click the “Record” button, speak your character’s personality into existence, and click the “Begin AR Experience” button to enter your AR experience. We used HTML, CSS, and JavaScript to build this experience, bought the domain using GoDaddy, and hosted the website on Replit! In the background, we’ve already used OpenAI Function Calling, a novel OpenAI product offering, to choose a voice for your custom character based on the original description that you provided (a rough sketch of this step appears below). Once we have the voice and description for your character, we’re ready to jump into the AR environment. The AR platform that we chose is 8th Wall, an AR deployment platform built by Niantic, which focuses on web experiences. Because of this emphasis on the web, any device can use CharactAR, from mobile devices to laptops or even VR headsets (yes, really!). In order to power our customizable character backend, we employed the Ready Player Me avatar generation SDK, providing us with a responsive UI that enables our users to create any character they want, from taking a selfie, to uploading an image of their favorite celebrity, or even just choosing from a predefined set of models. Once the model is loaded into the 8th Wall experience, we then use a mix of OpenAI (Character Intelligence), InWorld (Microphone Input & Output), and ElevenLabs (Voice Generation) to create an extremely immersive character experience from the get-go.
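A minimal sketch of how the voice-selection step via OpenAI Function Calling might look, assuming the `openai` Python SDK (the app itself runs in JavaScript, and the function name, voice list, and model here are illustrative):

```python
# Hypothetical sketch: ask the model to pick a preset voice for the character.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "set_character_voice",          # illustrative name
        "description": "Choose a preset voice that fits the character description.",
        "parameters": {
            "type": "object",
            "properties": {
                "voice": {"type": "string",
                          "enum": ["warm_narrator", "gruff_hero", "bubbly_sidekick"]},
            },
            "required": ["voice"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": "Character: a wise, soft-spoken marine biologist."}],
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "set_character_voice"}},
)
args = json.loads(resp.choices[0].message.tool_calls[0].function.arguments)
print(args["voice"])  # e.g. "warm_narrator"
```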
We animated each character using the standard Ready Player Me animation rigs, and you can even see your character move around in your environment by dragging your finger on the screen. Each time your character responds to you, we make an API call to our own custom-hosted Shap-E API, which runs on Google Cloud on an NVIDIA A100. A short prompt based on the conversation between you and your character is sent to OpenAI’s novel text-to-3D model to be turned into a 3D object that is automatically inserted into your environment. For example, if you are talking with Barack Obama about his time in the White House, our Shap-E API will generate a 3D object of the White House, and it’s really fun (and funny!) in game to see what Shap-E will generate. ## Challenges we ran into One of our favorite parts of CharactAR is the automatic generation of objects during conversations with the character. However, the addition of these objects also led to an unfortunate spike in triangle count, which quickly builds up lag. So when designing this pipeline, we worked on reducing unnecessary detail in model generation. One of these methods is choosing the number of inference steps prior to generating 3D models with Shap-E. The other is to compress the generated 3D model, which ended up being more difficult to integrate than expected. At first, we generated the 3D models in the .ply format, but realized that .ply files are a nightmare to work with in 8th Wall. So we decided to convert them into .glb files, which would be more efficient to send through the API and better to include in AR. The .glb files could get quite large, so we used Google’s Draco compression library to reduce file sizes by 10 to 100 times. Getting this to work required quite a lot of debugging and dependency resolution, but it was awesome to see it functioning. Below, we have “banana man” renders from our hosted Shap-E model. ![bananaman_left](https://i.imgur.com/9i94Jme.jpg) ![bananaman_right](https://i.imgur.com/YJyRLKF.jpg) *Even after transcoding the .glb file with Draco compression, the banana man still stands gloriously (1 MB → 78 KB).* Although 8th Wall made development much more streamlined, AR development as a whole still has a way to go, and here are some of the challenges we faced. There were countless undefined errors with no documentation, many of which took hours of debugging to overcome. Working with the animated Ready Player Me models and the .glbs generated by our OpenAI Shap-E model posed a lot of challenges with model formats and dynamically generated models, which required lots of reading up on 3D file formats. ## Accomplishments that we're proud of There were many small challenges in each of the interconnected portions of the project, and we are proud to have persevered through the bugs and roadblocks. The satisfaction of small victories, like seeing our prompts come to life in 3D or seeing the character walk around our table, always invigorated us to keep on pushing. Running AI models is computationally expensive, so it made sense for us to offload this work to Google Cloud’s servers. This allowed us to access the powerful A100 GPUs, which made Shap-E model generation thousands of times faster than would be possible on CPUs. This also provided a great opportunity to work with FastAPI to create a convenient and extremely efficient endpoint that takes a prompt and returns a compressed 3D representation of the query (a skeleton of this kind of service is sketched below).
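A minimal skeleton of what such a prompt-to-.glb service can look like with FastAPI; the `prompt_to_glb` helper stands in for the actual Shap-E sampling and Draco compression steps and is purely illustrative:

```python
# Hypothetical skeleton of a text-to-3D endpoint (not the production service).
from fastapi import FastAPI
from fastapi.responses import Response
from pydantic import BaseModel

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    inference_steps: int = 32  # fewer steps -> coarser but faster meshes

def prompt_to_glb(prompt: str, inference_steps: int) -> bytes:
    """Placeholder for Shap-E sampling + Draco-compressed .glb export."""
    raise NotImplementedError

@app.post("/generate")
def generate(req: GenerateRequest) -> Response:
    glb_bytes = prompt_to_glb(req.prompt, req.inference_steps)
    # model/gltf-binary is the standard MIME type for .glb payloads.
    return Response(content=glb_bytes, media_type="model/gltf-binary")
```

Returning raw `model/gltf-binary` bytes keeps the client side simple, since the AR scene only needs to fetch a URL and load the result as a .glb asset.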
We integrated AssemblyAI's real-time transcription services to transcribe live audio streams with high accuracy and low latency. This capability was crucial for our project as it allowed us to convert spoken language into text that could be further processed by our system. The WebSocket API provided by AssemblyAI was secure, fast, and effective in meeting our requirements for transcription. The function calling capabilities of OpenAI's latest models were an exciting addition to our project. Developers can now describe functions to these models, and the models intelligently output a JSON object containing the arguments for those functions. This feature enabled us to integrate GPT's capabilities seamlessly with external tools and APIs, offering a new level of functionality and reliability. For enhanced user experience and interactivity between our website and the 8th Wall environment, we leveraged the URLSearchParams interface. This allowed us to send the information of the initial character prompt seamlessly. ## What we learned For the majority of the team, it was our first AR project using 8th Wall, so we learned the ins and outs of building with AR, the A-Frame library, and deploying a final product that can be used by end-users. We also had never used Assembly AI for real-time transcription, so we learned how to use websockets for Real-Time transcription streaming. We also learned so many of the intricacies to do with 3D objects and their file types, and really got low level with the meshes, the object file types, and the triangle counts to ensure a smooth rendering experience. Since our project required so many technologies to be woven together, there were many times where we had to find unique workarounds, and weave together our distributed systems. Our prompt engineering skills were put to the test, as we needed to experiment with countless phrasings to get our agent behaviors and 3D model generations to match our expectations. After this experience, we feel much more confident in utilizing the state-of-the-art generative AI models to produce top-notch content. We also learned to use LLMs for more specific and unique use cases; for example, we used GPT to identify the most important object prompts from a large dialogue conversation transcript, and to choose the voice for our character. ## What's next for CharactAR Using 8th Wall technology like Shared AR, we could potentially have up to 250 players in the same virtual room, meaning you could play with your friends, no matter how far away they are from you. These kinds of collaborative, virtual, and engaging experiences are the types of environments that we want CharactAR to enable. While each CharactAR custom character is animated with a custom rigging system, we believe there is potential for using the new OpenAI Function Calling schema (which we used several times in our project) to generate animations dynamically, meaning we could have endless character animations and facial expressions to match endless conversations.
## Inspiration One of our team members underwent speech therapy as a child, and the therapy helped him gain a sense of independence and self-esteem. In fact, over 7 million Americans, ranging from children with gene-related diseases to adults who suffer from stroke, live with some sort of speech impairment. We wanted to create a solution that could help amplify the effects of in-person treatment by giving families a way to practice at home. We also wanted to make speech therapy accessible to everyone who cannot afford the cost or time to seek institutional help. ## What it does BeHeard makes speech therapy interactive, insightful, and fun. We present a hybrid text and voice assistant visual interface that guides patients through voice exercises. First, we have them say sentences designed to exercise specific nerves and muscles in the mouth. We use deep learning to identify mishaps and disorders on a word-by-word basis, and show users where exactly they could use more practice. Then, we lead patients through mouth exercises that target those neural pathways. They imitate a sound and mouth shape, and we use deep computer vision to display the desired lip shape directly on their mouth. Finally, when they are able to hold the position for a few seconds, we celebrate their improvement by showing them wearing fun augmented-reality masks in the browser. ## How we built it * On the frontend, we used Flask, Bootstrap, Houndify, and JavaScript/CSS/HTML to build our UI. We used Houndify extensively to navigate around our site and process speech during exercises. * On the backend, we used two Flask servers that split the processing load, with one running the server IO with the frontend and the other running the machine learning. * On the algorithms side, we used deep\_disfluency to identify speech irregularities and filler words and used the IBM Watson speech-to-text (STT) API for a more raw, fine-resolution transcription. * We used the tensorflow.js deep learning library to extract 19 points representing the mouth of a face. With exhaustive vector analysis, we determined the correct mouth shape for pronouncing basic vowels and gave real-time guidance for lip movements. To increase motivation for the user to practice, we even incorporated AR to draw the desired lip shapes on users’ mouths, and reward them with fun masks when they get it right! ## Challenges we ran into * It was quite challenging to smoothly incorporate voice into our platform for navigation, while also being sensitive to the fact that our users may have trouble with voice AI. We help those who are still improving gain competence and feel at ease by creating a chat bubble interface that reads messages to users, and also accepts text and clicks. * We also ran into issues finding the balance between getting noisy, unreliable STT transcriptions and transcriptions that autocorrected our users’ mistakes. We ended up employing a balance of the Houndify and Watson APIs. We also adapted a dynamic programming solution to the Longest Common Subsequence problem to create the most accurate and intuitive visualization of our users’ mistakes (a small sketch of this alignment idea appears below). ## Accomplishments that we are proud of We’re proud to be one of the first easily accessible digital solutions that we know of that both conducts interactive speech therapy and deeply analyzes our users’ speech to surface insights. We’re also really excited to have created a pleasant and intuitive user experience given our time constraints.
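To make the transcript-comparison idea concrete, here is a minimal sketch of a Longest Common Subsequence dynamic program over word lists (illustrative only, not our exact visualization code):

```python
# Illustrative LCS over words: align what the user said against the target sentence.
def lcs_words(said: list[str], target: list[str]) -> list[str]:
    n, m = len(said), len(target)
    # dp[i][j] = length of the LCS of said[:i] and target[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if said[i - 1] == target[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack to recover the matched words; everything else is a likely mistake.
    matched, i, j = [], n, m
    while i > 0 and j > 0:
        if said[i - 1] == target[j - 1]:
            matched.append(said[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return matched[::-1]

print(lcs_words("she sell sea shells".split(), "she sells sea shells".split()))
# ['she', 'sea', 'shells'] -> "sells" was missed and can be highlighted
```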
We’re also proud to have implemented a speech practice program that involves mouth shape detection and correction that customizes the AR mouth goals to every user’s facial dimensions. ## What we learned We learned a lot about the strength of the speech therapy community, and the patients who inspire us to persist in this hackathon. We’ve also learned about the fundamental challenges of detecting anomalous speech, and the need for more NLP research to strengthen the technology in this field. We learned how to work with facial recognition systems in interactive settings. All the vector calculations and geometric analyses to make detection more accurate and guidance systems look more natural was a challenging but a great learning experience. ## What's next for Be Heard We have demonstrated how technology can be used to effectively assist speech therapy by building a prototype of a working solution. From here, we will first develop more models to determine stutters and mistakes in speech by diving into audio and language related algorithms and machine learning techniques. It will be used to diagnose the problems for users on a more personal level. We will then develop an in-house facial recognition system to obtain more points representing the human mouth. We would then gain the ability to feature more types of pronunciation practices and more sophisticated lip guidance.
winning
## Inspiration In the Fintech industry, there are currently many services that offer financial institutions an analysis of whether or not a client is at risk of defaulting on a potential loan. Many of these programs that predict loan defaulting are only available to the institutions, leaving customers in the dark. Instead, we wanted to develop a program that would predict loan defaulting and, if a loan was not approved, provide the user with potential causes and next steps to improve their chances of getting a loan next time. ## What it does The LoanSimple program uses the random forest algorithm to predict whether or not a user will be approved for a loan. If the user is not approved for a loan, LoanSimple will give the user a couple of reasons why (depending on their inputs) and some steps on how to improve their application the next time around so they can secure a loan. ## How we built it We used HTML and CSS to develop the front end of the LoanSimple program. For the backend, Python was used to create the trained machine learning model. To interface the frontend with the backend, Python Flask was used to deploy the model (a minimal sketch of the training step appears at the end of this writeup). ## Challenges we ran into The initial dataset we used to train our machine learning model was very imbalanced. As a result, our initial model was overfitting, so we had to find a more balanced dataset to train our model. ## Accomplishments that we're proud of 1. Our machine learning model has an accuracy of 87% ## What we learned 1. Learned how to set up, train and test a basic machine learning model 2. Learned about how to develop web apps using Flask 3. Learned more about the financial industry and what factors are considered when analyzing the risk of a loan ## What's next for LoanSimple 1. Extend LoanSimple and train the model with data that is more representative of the Canadian market.
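For reference, a minimal sketch of the random-forest training step described in "How we built it" above; the CSV file and column names are placeholders, not our actual dataset:

```python
# Illustrative training sketch; "loans.csv" and its columns are placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("loans.csv")
X = df[["income", "credit_score", "loan_amount", "debt_to_income"]]
y = df["approved"]  # 1 = approved, 0 = not approved

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Feature importances give a first cut at "why was I declined?" explanations.
for name, score in zip(X.columns, model.feature_importances_):
    print(f"{name}: {score:.2f}")
```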
## Inspiration As full-time college students who are also working part-time, it is a daily struggle to keep our schedules organized while managing our academic assignments and personal responsibilities. The demands of our schedules often lead us to prioritize tasks over essential self-care. To help college students like us strike a balance between their academic or work responsibilities and their self-care routine, our team came up with a scheduling program that utilizes AI to help us stay organized. ## What it does The program guides users to input their daily routines, such as wake-up time, mealtimes, and bedtime, forming the foundation for a personalized schedule. Users can add tasks by providing details like title, description, and estimated time, tailoring the schedule to their needs. The AI then generates a balanced schedule, integrating tasks with essential self-care activities like sleep and meals. This approach promotes efficient time management, helping users improve both productivity and well-being while maintaining a sustainable balance between work and self-care. ## How we built it First, we came up with a design in Figma that served as our outline of the website before adding any scripts or functions. We then began writing code using backend tools such as Node.js and Express.js, as well as frontend tools such as HTML, CSS, and JavaScript. After that, we set up a Google Cloud server and created an API key, allowing us to use Gemini Pro in our code (a short sketch of this kind of call appears at the end of this writeup). Using our varied skillsets, we produced a website built on HTML, CSS, and JavaScript, powered by Google's Gemini AI and capable of reading user input and generating schedules. ## Challenges we ran into We had a few problems implementing Gemini AI in our project, but the one that kept persisting was crafting the prompt that would make Gemini AI fulfill our intended purpose of creating an effective and organized schedule for users. ## Accomplishments that we're proud of As all of us are beginners, we came to the hackathon without expecting much. So we're proud of the fact that we learned so much in a mere 36 hours and managed to build a program that works the way we wanted it to work. ## What we learned We were all really proud of what we accomplished in these 36 hours. Throughout the experience, we learned how to bond as a team and pick up each other's skills when it comes to coding. This event was really impactful for us, as it taught us how to utilize AI in our programs and how to use APIs, JavaScript, and Express. ## What's next for Schedulify We would like to add features that give users more customization of their preferred schedule. We would also like to implement the Google Calendar API so that our website can cross-reference the generated schedule with Google Calendar. We also plan to bring Schedulify to mobile devices, allowing for more accessibility.
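For illustration, a minimal sketch of a Gemini Pro call of the kind described above, using the `google-generativeai` Python SDK (the app itself calls Gemini from its Node.js backend, and the prompt here is illustrative):

```python
# Illustrative Gemini Pro call; the real app makes an equivalent call from Node.js.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # from the Google Cloud console
model = genai.GenerativeModel("gemini-pro")

prompt = (
    "Build a one-day schedule. Wake up 7:30, breakfast 8:00, bed 23:00. "
    "Tasks: 'CS homework' (2h), 'gym' (1h), 'grocery run' (45m). "
    "Keep meals and at least 7.5 hours of sleep, and return the plan as a list "
    "of 'HH:MM-HH:MM: activity' lines."
)
response = model.generate_content(prompt)
print(response.text)
```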
## Inspiration Music is a universal language, and we recognized Spotify Wrapped to be one of the most anticipated times of the year. Realizing that people have an interest in learning about their own music taste, we created ***verses*** to not only allow people to quiz themselves on their musical interests, but also quiz their friends to see who knows them best. ## What it does A quiz that challenges you to answer questions about your Spotify listening habits, allowing you to share with friends and have them guess your top songs/artists by answering questions. It creates a leaderboard of the friends who have taken your quiz, ranking them by the scores they obtained. ## How we built it We built the project using React.js, HTML, and CSS. We used the Spotify API to get data on the user's listening history, top songs, and top artists, as well as to enable the user to log into ***verses*** with their Spotify account. JSON was used for user data persistence, and Figma was used as the primary UX/UI design tool. ## Challenges we ran into Implementing the Spotify API was a challenge, as we had no previous experience with it. We had to seek out mentors for help in order to get it working. Designing a user-friendly UI was also a challenge. ## Accomplishments that we're proud of We took a while to get the backend working, so we only had a limited amount of time to work on the frontend, but we managed to get it very close to our original Figma prototype. ## What we learned We learned more about implementing APIs and making mobile-friendly applications. ## What's next for verses So far, we have integrated ***verses*** with the Spotify API. In the future, we hope to link it to more music platforms such as Apple Music. We also hope to create a leaderboard for players' friends to see which one of their friends can answer the most questions about their music taste correctly.
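As a small illustration of the kind of Spotify Web API request described in "How we built it" above (shown in Python for brevity; the app itself makes these calls from its React frontend, and the access token is assumed to come from the OAuth login flow):

```python
# Illustrative call to Spotify's "top tracks" endpoint; the token comes from OAuth.
import requests

ACCESS_TOKEN = "user-access-token-from-oauth"  # placeholder

resp = requests.get(
    "https://api.spotify.com/v1/me/top/tracks",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"time_range": "short_term", "limit": 5},  # roughly the last 4 weeks
)
resp.raise_for_status()

for i, track in enumerate(resp.json()["items"], start=1):
    artists = ", ".join(a["name"] for a in track["artists"])
    print(f"{i}. {track['name']} - {artists}")  # quiz questions are built from these
```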
losing
## Inspiration We love playing the game and were disappointed that there wasn't a nice web implementation of the game that we could play with each other remotely. So we fixed that. ## What it does Allows between 5 and 10 players to play Avalon over the web app. ## How we built it We made extensive use of Meteor and forked a popular game called [Spyfall](https://github.com/evanbrumley/spyfall) to build it out. This game had a very basic subset of rules that were applicable to Avalon. Because of this, we added a lot of the functionality we needed on top of Spyfall to make the Avalon game mechanics work. ## Challenges we ran into Building realtime systems is hard. Moreover, using a framework like Meteor that makes a lot of things easy by black-boxing them is also difficult by the same token. So a lot of the time we struggled to make things work that simply couldn't be done within the context of the framework we were using. We also ended up starting the project over multiple times, since we realized we were going down a path in which it was impossible to build the application. ## Accomplishments that we're proud of It works. It's crisp. It's clean. It's responsive. It's synchronized across clients. ## What we learned Meteor is magic. We learned how to use a lot of its more magical client synchronization features to deal with race conditions and the difficulties of making a realtime application. ## What's next for Avalon Fill out the different roles, add a chat client, and integrate a video chat feature.
## What is 'Titans'? VR gaming shouldn't just be a lonely, single-player experience. We believe that we can elevate the VR experience by integrating multiplayer interactions. We imagined a mixed VR/AR experience where a single VR player's playing field can be manipulated by 'Titans' -- AR players who can plan out the VR world by placing specially designed tiles -- blocking the VR player from reaching the goal tile. ## How we built it We had three streams of development/design to complete our project: the design, the VR experience, and the AR experience. For design, we used Adobe Illustrator and Blender to create the assets that were used in this project. We had to be careful that our tile designs were recognizable by both human and AR standards, as the tiles would be used by the AR players to lay out the environment the VR players would be placed in. Additionally, we pursued a low-poly art style with our 3D models, in order to reduce design time in building intricate models and to complement the retro/pixel style of our eventual AR environment tiles. For the VR side of the project, we chose to build a Unity VR application targeting Windows and Mac with the Oculus Rift. One of our most notable achievements here is a custom terrain tessellation and generation engine that mimics several environmental biomes represented in our game, as well as the integration of a multiplayer service powered by Google Cloud Platform. The AR side of the project uses Google's ARCore and the Google Cloud Anchors API to seamlessly stream anchors (the tiles used in our game) to other devices playing in the same area. ## Challenges we ran into Hardware issues were one of the biggest time-drains in this project. Setting up all the programs -- Unity and its libraries, Blender, etc. -- took up the initial hours following the brainstorming session. The biggest challenge was our Alienware MLH laptop resetting overnight. This was a frustrating moment for our team, as we were in the middle of testing our AR features, such as the compatibility of our environment tiles. ## Accomplishments that we're proud of We're proud of the consistent effort and style that went into the game design: from the physical environment tiles to the 3D models, we tried our best to create a pleasant-to-look-at game style. Our game world generation is something we're also quite proud of. The fact that we were able to develop an immersive world that we can explore via VR is quite surreal. Additionally, we were able to accomplish some form of AR experience where the phone recognizes the environment tiles. ## What we learned All of our teammates learned something new: multiplayer in Unity, ARCore, Blender, and more. Most importantly, we learned about the various technical and planning challenges involved in AR/VR game development. ## What's next for Titans AR/VR We hope to eventually connect the AR portion and VR portion of the project together the way we envisioned: where AR players can manipulate the virtual world of the VR player.
## Inspiration The inspiration for our project stems from the increasing trend of online shopping and the declining foot traffic in physical stores. Our goal was to provide a unique and engaging experience for customers, encouraging them to visit physical stores and rediscover the joy of in-person shopping. We wanted to create an interactive and entertaining shopping experience that would entice customers to visit stores more frequently and foster a deeper connection between them and the store's brand. ## What it does Our project is an AR scavenger hunt experience that gamifies the shopping experience. The scavenger hunt encourages customers to explore the store and discover new products they may have otherwise overlooked. As customers find specific products, they can earn points which can be redeemed for exclusive deals and discounts on future purchases. This innovative marketing scheme not only provides customers with an entertaining experience but also incentivizes them to visit stores more frequently and purchase products they may have otherwise overlooked. ## How we built it To create the AR component of our project, we used Vuforia and Unity, two widely used platforms for building AR applications. The Vuforia platform allowed us to create and track image targets, while Unity was used to design the 3D models for the AR experience. We then integrated the AR component into an Android application by importing it as a Gradle project. Our team utilized agile development methodologies to ensure efficient collaboration and problem-solving throughout the development process. ## Challenges we ran into One of the challenges we faced was integrating multiple APIs and ensuring that they worked together seamlessly. Another challenge was importing the AR component and creating the desired functionality within our project. We also faced issues with debugging and resolving technical errors that arose during the development process. ## Accomplishments that we're proud of Despite the challenges we faced, we were able to achieve successful teamwork and collaboration. Despite forming the team later than other groups, we were able to effectively communicate and work together to bring our project to fruition. We are proud of the end result, which was a polished and functional AR scavenger hunt experience that met our objectives. ## What we learned We learned how difficult it is to truly ship out software, and we are grateful to have joined the hackathon. We gained a deeper understanding of the importance of project planning, effective communication, and collaboration among team members. We also learned that the development process can be challenging and unpredictable, and that it requires perseverance and problem-solving skills. Additionally, participating in the hackathon taught us valuable technical skills such as integrating APIs, creating AR functionality, and importing projects onto an Android application. ## What's next for Winnur Looking forward, we plan to incorporate Computer Vision technology into our project to prevent potential damage to our product's packaging. We also aim to expand the reach of our AR scavenger hunt experience by partnering with more retailers and enhancing the user interface and experience. We are excited about the potential for future development and growth of Winnur.
partial
## Inspiration 💡 Buying, selling, and trading physical collectibles can be a rather tedious task, and this has become even more apparent with the recent surge of NFTs (Non-Fungible Tokens). The global market for physical collectibles was estimated to be worth $372 billion in 2020. People have an innate inclination to collect, driving the acquisition of items such as art, games, sports memorabilia, toys, and more. However, considering the world's rapid shift towards the digital realm, there arises a question about the sustainability of this market in its current form. At its current pace, it seems inevitable that people may lose interest in physical collectibles, gravitating towards digital alternatives due to the speed and convenience of digital transactions. Nevertheless, we are here with a mission to rekindle the passion for physical collectibles. ## What it does 🤖 Our platform empowers users to transform their physical collectibles into digital assets. This not only preserves the value of their physical items but also facilitates easy buying, selling, and trading. We have the capability to digitize various collectibles with verifiable authenticity, including graded sports/trading cards, sneakers, and more. ## How we built it 👷🏻‍♂️ To construct our platform, we utilized [NEXT.js](https://nextjs.org/) for both frontend and backend development. Additionally, we harnessed the power of the [thirdweb](https://thirdweb.com/) SDK for deploying, minting, and trading NFTs. Our NFTs are deployed on the Ethereum L2 [Mumbai](https://mumbai.polygonscan.com/) testnet. `MUMBAI_DIGITIZE_ETH_ADDRESS = 0x6A80AD071932ba92fe43968DD3CaCBa989C3253f MUMBAI_MARKETPLACE_ADDRESS = [0xedd39cAD84b3Be541f630CD1F5595d67bC243E78](https://thirdweb.com/mumbai/0xedd39cAD84b3Be541f630CD1F5595d67bC243E78)` Furthermore, we incorporated the Ethereum Attestation Service to verify asset ownership and perform KYC (Know Your Customer) checks on users. `SEPOLIA_KYC_SCHEMA = 0x95f11b78d560f88d50fcc41090791bb7a7505b6b12bbecf419bfa549b0934f6d SEPOLIA_KYC_TX_ID = 0x18d53b53e90d7cb9b37b2f8ae0d757d1b298baae3b5767008e2985a5894d6d2c SEPOLIA_MINT_NFT_SCHEMA = 0x480a518609c381a44ca0c616157464a7d066fed748e1b9f55d54b6d51bcb53d2 SEPOLIA_MINT_NFT_TX_ID = 0x0358a9a9cae12ffe10513e8d06c174b1d43c5e10c3270035476d10afd9738334` We also made use of CockroachDB and Prisma to manage our database. Finally, to view all NFTs in 3D 😎, we built a separate platform that's soon-to-be integrated into our app. We scrape the internet to generate all card details and metadata and render it as a `.glb` file that can be seen in 3D! ## Challenges we ran into 🏃🏻‍♂️🏃🏻‍♂️💨💨 Our journey in the blockchain space was met with several challenges, as we were relatively new to this domain. Integrating various SDKs proved to be a formidable task. Initially, we deployed our NFTs on Sepolia, but encountered difficulties in fetching data. We suspect that thirdweb does not fully support Sepolia. Ultimately, we made a successful transition to the Mumbai network. We also faced issues with the PSA card website, as it went offline temporarily, preventing us from scraping data to populate our applications. ## Accomplishments that we're proud of 🌄 As a team consisting of individuals new to blockchain technology, and even first-time deployers of smart contracts and NFT minting, we take pride in successfully integrating web3 SDKs into our application. 
Moreover, users can view their prized possessions in **3-D!** Overall, we're proud that we managed to deliver a functional minimum viable product within a short time frame. 🎇🎇 ## What we learned 👨🏻‍🎓 Through this experience, we learned the value of teamwork and the importance of addressing challenges head-on. In moments of uncertainty, we found effective solutions through open discussions. Overall, we have gained confidence in our ability to deliver exceptional products as a team. Lastly, we learned to have fun and build things that matter to us. ## What's next for digitize.eth 👀👀👀 Our future plans include further enhancements such as: * Populating our platform with a range of supported NFTs for physical assets. * Taking a leap of faith and deploying on Mainnet. * Deploying our NFTs on other chains, e.g., Solana. Live demo: <https://bl0ckify.tech> GitHub: <https://github.com/idrak888/digitize-eth/tree/main> + <https://github.com/zakariya23/hackathon>
## Inspiration 34.2 million people are unpaid and unsupported caregivers for seniors, 48% of whom are of working age. These people could be working throughout the day and find it difficult to manage their care receiver's medications. Care receivers can become reliant on their caregiver, as they often forget to take their medication. This is where our application fills the gap, making the lives of caregivers and their network less stressful by providing more information about the status of the care receiver's prescriptions. ## What it does Our solution is an at-home prescription management service for caregivers of seniors that provides real-time updates to a family network across platforms. It uses Sonr's peer-to-peer interaction and real-time network to provide quick updates to all family members while still offering a smooth and beginner-friendly user experience. Hosting our information on the blockchain ensures the security and safety of our patients' data. ## How we built it We built our app with Sonr's Flutter SDK in order to connect our device to the Sonr Network and take advantage of its decentralized design. We developed against the iOS simulator, on the advice of the Sonr team, since it offered a better development experience. We also used Sonr's login protocol to make account login and creation seamless, as users just need to sign in with their Sonr credentials. We implemented Sonr's Schema and Document functionality in order to save important prescription data on the Sonr decentralized network. ## Challenges we ran into We had dependencies on Sonr's API, which was unable to support the Android emulator at this point in time. We also discovered the limitations of our computing resources, which caused connections to take a long time. ## Accomplishments that we're proud of Managing to achieve our goal of creating a working prototype that integrates with Sonr's API is one of our proudest accomplishments. We also managed to identify an addressable problem and ideate a unique solution to it, which we were able to build in 24 hours. We were not familiar with Sonr before this; however, with the help of the mentors, we learnt a lot and were able to apply Sonr's technology to our problem. ## What we learned We found Sonr's technology very intriguing and exciting, especially as we move towards Web3 today. We were very keen on exploring how we might make full use of the technology in our mission to improve the lives of caregivers today. After discussing with the Sonr mentors, we were enlightened on how it works and were able to use it in our application. ## What's next for CalHacks - Simpillz For future use cases, we hope to develop more support for an expanded network of users, which could include medical professionals and nurses, to help keep track of the patient's prescription meds as well. We are also looking into IoT devices to track prescription intake by the seniors and automate the tracking process, so that there will be fewer dependencies on the caregiver.
## Inspiration As college students who graduated high school in the last year or two, we know first-hand the sinking feeling you experience when you open an envelope after your graduation and see a gift card to a clothing store you'll never set foot in. Instead, you can't stop thinking about the latest generation of AirPods that you wanted to buy. Well, imagine a platform where you could trade your unwanted gift card for something you would actually use... you would actually be able to get those AirPods, without spending money out of your own pocket. That's where the idea of GifTr began. ## What it does Our website serves as a **decentralized gift card trading marketplace**. A user who wants to trade their own gift card for a different one can log in and connect to their **Sui wallet**. Following that, they will be prompted to select their gift card company and cash value. Once they have confirmed that they would like to trade the gift card, they can browse through options of other gift cards "on the market", and if they find one they like, send a request to swap. If the other person accepts the request, a trustless swap is initiated without the use of an intermediary escrow, and the swap is completed. ## How we built it In simple terms, the first party locks the card they want to trade, at which point a lock and a key are created for the card. They can request a card held by a second party, and if the second party accepts the offer, both parties swap gift cards and corresponding keys to complete the swap. If a party wants to tamper with their object, they must use their key to do so. The single-use key would then be consumed by the smart contract, and the trade would not be possible. Our website was built in three stages: the smart contract, the backend, and the frontend. **The smart contract** hosts all the code responsible for automating a trustless swap between the sender and the recipient. It **specifies conditions** under which the trade will occur, such as the assets being exchanged and their values. It also has **escrow functionality**, responsible for holding the cards deposited by both parties until swap conditions have been satisfied. Once both parties have undergone **verification**, the **swap** will occur if all conditions are met, and if not, the process will terminate. **The backend** acts as a bridge between the smart contract and the frontend, allowing for **communication** between the code and the user interface. The main way it does this is by **managing all data**, which includes all the user accounts, their gift card inventories, and more. Anything that the user does on the website is communicated to the Sui blockchain. This **blockchain integration** is crucial so that users can initiate trades without having to deal with the complexities of blockchain. **The frontend** is essentially everything the user sees and does, or the UI. It begins with **user authentication**, such as the login process and connection to the Sui wallet. It allows the user to **manage transactions** by initiating trades, entering attributes of the asset they want to trade, and viewing trade offers. This is all done through React to ensure *real-time interaction*, so that new offers are seen and updated without refreshing the page. ## Challenges we ran into This was **our first step into the field** of Sui blockchain and Web3 entirely, so we found it to be really informative, but also really challenging.
The first step we had to take to address this challenge was to begin learning Move through some basic tutorials and set up a development environment. Another challenge was the **many aspects of escrow functionality**, which we addressed by embedding many tests within our code. For instance, we had to test that once an object was created, it would actually lock and unlock, and also that if the second shared party stopped responding or an object was tampered with, the trade would be terminated. ## Accomplishments that we're proud of We're most proud of the look and functionality of our **user interface**, as user experience is one of our most important focuses. We wanted to create a platform that was clean and easy to use and navigate, which we did by maintaining a sense of consistency throughout our website and keeping basic visual hierarchy principles in mind when designing it. Beyond this, we are also proud of pulling off a project that relies so heavily on the **Sui blockchain**, when we entered this hackathon with absolutely no knowledge about it. ## What we learned Though we've designed a very simple trading project implementing Sui blockchain, we've learnt a lot about the **implications of blockchain** and the role it can play in daily life and cryptocurrency. The two most important aspects to us are decentralization and user empowerment. Even on such a simple level, we're now able to understand how a dApp can reduce reliance on third-party escrows and automate these processes through a smart contract, increasing transparency and security. Through this, the user also gains more ownership over their own financial activities and decisions. We're interested in further exploring DeFi principles and Web3 in our future as software engineers, and perhaps even applying them in our own lives when we day trade. ## What's next for GifTr Currently, GifTr only facilitates the exchange of gift cards, but we are intent on expanding this to allow users to trade their gift cards for Sui tokens in particular. This would encourage our users to shift from traditional banking systems to a decentralized system, and give them access to programmable money that can be stored more securely, integrated into smart contracts, and used in instant transactions.
partial
## Inspiration What time is it? Well, unfortunately, people think Millennials and Gen Zs can't read analogue clocks, so why not just make some AR glasses that can do it for you! We heard a call for "Useless Inventions" so we thought, "what is the most overcomplicated, unnecessary, yet still fun idea for a project?" After a session of brainstorming, we settled on AR glasses that turn an analogue clock into a digital clock. Like the true engineers we are, why would we need a cell phone or watch to tell the time, when we could just create the most geeky, and overengineered way to tell the time with a single glance. ## What it does NextTime is our take on augmented reality glasses. The whole premise of NextTime is that it turns an analogue clock into a digital clock. While wearing the AR glasses, a user can look directly at any analogue clock. Using a camera built into an Arduino on the left side of the glasses, the Arduino detects if an analogue clock is within a specified border range on the camera. In the event that an analogue clock is detected, the Arduino will then send a signal, prompting the current time to be printed on a small display on the right side of the glasses. The text is printed as a mirror image of itself. The lens in front of the glasses then reflects the display back to the user in the correct orientation. ## How we built it We used an AI Thinker ESP32 Arduino as the main board on the glasses themselves, it is an Arduino compatible board with an attached camera and Wi-Fi. It is powered by a 2000mAh LiPo battery connected through a 5v boost converter. The Arduino connects to a wireless hotspot and runs a web server which streams the camera feed and listens for get requests. A python script is run on a desktop computer which takes the video feed from the camera and does real-time opencv processing on it to detect if the user is looking at a clock. If a clock is detected near the center of the user's field of view, a request is sent by the desktop back to the Arduino which indicates a clock is being viewed. The Arduino then pulls the current time from time.nist.gov and displays it on the 1.3" OLED display connected via SPI. ## Challenges we ran into The major challenge we faced was at the beginning of our project. Essentially, the compiling and uploading time for all of our code written using the Arduino IDE was extremely slow. Each time we needed to edit code, it would take at least 5 minutes to re-upload to the Arduino. This made debugging and testing difficult to complete, since we didn't account for the extra time needed for this stage. This is the main reason why we spent over a quarter of hacking time trying to get the display to correctly turn off, when we initially thought it would only take an hour at most. We really struggled to overcome this, but eventually after trial and error, we optimized our debugging process by being very selective in the lines of code we were changing, resulting in as few re-uploads as possible to save time. ## Accomplishments that we're proud of We are proud that we managed to actually make usable augmented reality glasses with fun and mildly comedic functionality. Our time management was amazing and the hardware components worked so much better than we expected. We all agree that the sole reason that we wanted to participate in this hackathon was to learn and have fun, which, in our opinions, was achieved with flying colours. ## What we learned We learned that programming Arduinos is a lot harder than it looks. 
There are a lot of libraries available to use, but not all of them do what you want them to, even if you try. We now have a better understanding of how to work with them and integrate them into a project. Additionally, we learned that you should plan the overall design of your project near the beginning, instead of attempting to attach hardware randomly where there really isn't space for it. This would've saved us the headache of discovering that the placement of wires and batteries wasn't suitable. Design plays a large role in the appeal of a project, like it or not. You could have the coolest project features ever, but if it doesn't fit together into one cohesive and well-thought-out product, no one will pay attention. ## What's next for NextTime There are many areas of improvement for NextTime. Ideally, we would like the hardware to be much lighter, as currently it is very heavy to wear. The extra weight causes the glasses to slip down the user's face, running the risk of them falling off and, in turn, being damaged. Also, we would like to make this a detachable add-on rather than a semi-permanent modification to a pair of glasses. This could be done by creating housing for the hardware that can clip onto the arms of glasses. The main software feature we would like to add is analogue clock hand detection. Currently, the Arduino only detects whether an analogue clock is present and displays the current time based on internet time. We could add the ability to detect where the hands of the clock are and display the time shown on the clock (correct or not), rather than always showing the current time.
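As a footnote to the desktop-side clock detection described in "How we built it" above, one simple way to flag a clock-like circle near the centre of the frame is OpenCV's Hough circle transform. This sketch shows that idea, not necessarily the exact detector we used; the stream URL, endpoint, and thresholds are placeholders:

```python
# Illustrative clock-in-view check; the stream URL and thresholds are placeholders.
import cv2
import requests

STREAM_URL = "http://192.168.1.50:81/stream"  # hypothetical ESP32-CAM MJPEG stream
NOTIFY_URL = "http://192.168.1.50/clock"      # hypothetical "clock seen" endpoint

cap = cv2.VideoCapture(STREAM_URL)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.medianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 5)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=200,
                               param1=100, param2=40, minRadius=40, maxRadius=300)
    if circles is not None:
        h, w = gray.shape
        for x, y, r in circles[0]:
            # Only count circles near the centre of the wearer's view.
            if abs(x - w / 2) < w * 0.2 and abs(y - h / 2) < h * 0.2:
                requests.get(NOTIFY_URL, timeout=1)  # tell the ESP32 to show the time
                break
cap.release()
```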
## Inspiration You haven't seen your friends in real life for a long, long time. Now that COVID restrictions are loosening, you decide that it's a perfect time for the squad to hang out again. You don't care where you'll all be meeting up; you just want a chance to speak with them in person. Let WhereYouApp decide where your next meetup location should be. ## What it does WhereYouApp allows you to create Hangouts and invite your friends to them. All invitees can mark dates when they're unavailable and then vote on a hangout location. A list of hangout locations is automatically generated based on proximity to all invitees as well as when the locations are open. Once votes are finalized, it generates a travel plan for all of your friends, allowing for options such as carpooling. ## How we built it NestJS for the backend, storing information on CockroachDB. OpenStreetMap for geographic calculations. SolidJS for the frontend, displaying the map information with Leaflet. ## Challenges we ran into ## Accomplishments that we're proud of ## What we learned ## What's next for WhereYouApp
## Inspiration The idea was to help people who are blind to be able to discreetly gather context during social interactions and general day-to-day activities. ## What it does The glasses take a picture and analyze it using Microsoft, Google, and IBM Watson's vision recognition APIs to try to understand what is happening. They then form a sentence and let the user know. There's also a neural network at play that distinguishes between the two dens and can tell who is in the frame. ## How I built it We took an RPi camera and extended the length of its cable. We then made a hole in the lens of the glasses and fit the camera in there. We added a touch sensor to discreetly control the camera as well. ## Challenges I ran into The biggest challenge we ran into was natural language processing: trying to piece together a human-sounding sentence that describes the scene. ## What I learned I learnt a lot about the different vision APIs out there and about creating/training your own neural network. ## What's next for Let Me See We want to further improve our analysis and reduce our analysis time.
losing
## Inspiration Every musician knows that moment of confusion, that painful silence as onlookers shuffle awkwardly while you frantically turn the page of the sheet music in front of you. While large solo performances may have people in charge of turning pages, for larger-scale ensemble works this obviously proves impractical. At this hackathon, inspired by the discussion around technology and music at the keynote speech, we wanted to develop a tool that could aid musicians. Seeing AdHawk's MindLink demoed at the sponsor booths ultimately gave us a clear vision for our hack. MindLink, a deceptively ordinary-looking pair of glasses, can track the user's gaze in three dimensions, recognize events such as blinks, and even display the user's view through an external camera. Blown away by the possibilities and opportunities this device offered, we set out to build a hands-free tool that simplifies working with digital sheet music. ## What it does Noteation is a powerful sheet music reader and annotator. All the musician needs to do is upload a PDF of the piece they plan to play. Noteation then displays the first page of the music and waits for eye commands to turn to the next page, providing a simple, efficient and, most importantly, stress-free experience for the musician as they practice and perform. Noteation also enables users to annotate the sheet music, just as they would on printed sheet music, and there are touch controls that allow the user to select, draw, scroll and flip as they please. ## How we built it Noteation is a web app built using React and TypeScript. Interfacing with the MindLink hardware was done in Python using AdHawk's SDK, with Flask and CockroachDB linking the frontend with the backend. ## Challenges we ran into One challenge we came across was deciding how best to let the user turn the page using eye gestures. We tried building regression-based models on the eye-gaze data stream to predict when to turn the page and built applications using Qt to study the effectiveness of these methods. Ultimately, we decided to turn the page using right and left wink commands, as this was the most reliable technique and it also preserved the musicians' autonomy, allowing them to flip back and forth as needed. Strategizing how to structure the communication between the frontend and backend was also a challenging problem to work on, as it is important that there is low latency between receiving a command and turning the page. Our solution using Flask and CockroachDB provided us with a streamlined and efficient way to communicate the data stream, as well as detailed logs of all events. ## Accomplishments that we're proud of We're so proud that we managed to build a functioning tool that we genuinely believe is useful. As musicians, this is something we've legitimately thought would be useful in the past, and being granted access to pioneering technology to make it happen was super exciting, all while working with a piece of cutting-edge hardware that we had zero experience using before this weekend. ## What we learned One of the most important things we learnt this weekend was the set of best practices to use when collaborating on a project in a time crunch. We also learnt to trust each other to deliver on our sub-tasks and helped where we could. The most exciting thing we learnt while picking up these cool technologies is that in tech the opportunities are endless and the impact, limitless.
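As a rough illustration of the frontend-backend bridge described above, here is a minimal Flask endpoint that accepts wink events and reports which page to show. The event payload and field names are assumptions made for this sketch, not AdHawk's actual SDK output, and the CockroachDB event logging used in the real project is omitted:

```python
# Minimal sketch of the page-turn bridge: a Flask endpoint that takes wink
# events (field names are assumptions, not AdHawk's actual payload) and tells
# the React frontend which page to display.
from flask import Flask, jsonify, request

app = Flask(__name__)
current_page = {"value": 1}

@app.route("/eye-event", methods=["POST"])
def eye_event():
    event = request.get_json(force=True)          # e.g. {"type": "wink", "eye": "right"}
    if event.get("type") == "wink":
        step = 1 if event.get("eye") == "right" else -1
        current_page["value"] = max(1, current_page["value"] + step)
    return jsonify(page=current_page["value"])    # frontend reads this to flip the page

if __name__ == "__main__":
    app.run(port=5000)
```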
## What's next for Noteation: Music made Intuitive Some immediate features we would like to add to Noteation are enabling users to save the PDF with their annotations and adding a landscape mode where pages can be displayed two at a time. We would also really like to explore more features of MindLink and allow users to customize their control gestures. There's even the possibility of expanding the feature set beyond just changing pages, especially for non-classical musicians who might have other electronic devices they would like to control. The possibilities really are endless and are super exciting to think about!
## Bringing your music to life, not just to your ears but to your eyes 🎶 ## Inspiration 🍐 Composing music through scribbling notes or drag-and-dropping from MuseScore couldn't be more tedious. As pianists ourselves, we know the struggle of trying to bring our impromptu improvisation sessions to life without forgetting what we just played or having to record ourselves and write out the notes one by one. ## What it does 🎹 Introducing PearPiano, a cute little pear that helps you pair the notes to your thoughts. As a musician's best friend, Pear guides pianists through an augmented simulation of a piano where played notes are directly translated into a recording and stored for future use. Pear can read both single notes and chords played on the virtual piano, allowing playback of your music with cascading tiles for full immersion. Seek musical guidance from Pear by asking, "What is the key signature of C-major?" or "Tell me the notes of the E-major diminished 7th chord." To fine tune your compositions, use "Edit mode," where musicians can rewind the clip and drag-and-drop notes for instant changes. ## How we built it 🔧 Using Unity Game Engine and the Oculus Quest, musicians can airplay their music on an augmented piano for real-time music composition. We used OpenAI's Whisper for voice dictation and C# for all game-development scripts. The AR environment is entirely designed and generated using the Unity UI Toolkit, allowing our engineers to realize an immersive yet functional musical corner. ## Challenges we ran into 🏁 * Calibrating and configuring hand tracking on the Oculus Quest * Reducing positional offset when making contact with the virtual piano keys * Building the piano in Unity: setting the pitch of the notes and being able to play multiple at once ## Accomplishments that we're proud of 🌟 * Bringing a scaled **AR piano** to life with close-to-perfect functionalities * Working with OpenAI to synthesize text from speech to provide guidance for users * Designing an interactive and aesthetic UI/UX with cascading tiles upon recording playback ## What we learned 📖 * Designing and implementing our character/piano/interface in 3D * Emily had 5 cups of coffee in half a day and is somehow alive ## What's next for PearPiano 📈 * VR overlay feature to attach the augmented piano to a real one, enriching each practice or composition session * A rhythm checker to support an aspiring pianist to stay on-beat and in-tune * A smart chord suggester to streamline harmonization and enhance the composition process * Depth detection for each note-press to provide feedback on the pianist's musical dynamics * With the up-coming release of Apple Vision Pro and Meta Quest 3, full colour AR pass-through will be more accessible than ever — Pear piano will "pair" great with all those headsets!
## Inspiration We thought Adhawk's eye tracking technology was super cool, so we wanted to leverage it in a VR game. However, since Adhawk currently only has a Unity SDK, we thought we would demonstrate a way to build eye-tracking VR games for the web using WebVR. ## Our first game You, a mad scientist, want to be able to be in two places at once. So, like any mad scientist, you develop cloning technology that allows you to inhabit your clone's body. But the authorities come in and arrest you and your clone for violating scientific ethics. Now, you and your clone are being held in two separate prison cells. Luckily, it seems like you should be able to escape by taking control of your clone. But, you can only take control of your clone by **blinking**. Seemed like a good idea at the time of developing the cloning technology, but it *might* prove to be a little annoying. Blink to change between you and your clone to solve puzzles in both prison cells and break out of prison together! ## How we built it We extracted the blinking data from the Adhawk Quest 2 headset using the Adhawk Python SDK and routed it into a Three.js app that renders the rooms in VR. ## Challenges we ran into Setting up the Quest 2 headset to even display WebVR data took a lot longer than expected. ## Accomplishments that we're proud of Combining the Adhawk sensor data with the Quest 2 headset and WebVR to tell a story we could experience and explore! ## What we learned Coming up with an idea is probably the hardest part of a hackathon. During the ideation process, we learned a lot about the applications of eye tracking in both VR and non-VR settings. Coming up with game mechanics specific to eye tracking input had our creative juices flowing; we really wanted to use eye tracking as its own special gameplay elements and not just as a substitute to other input methods (for example, keyboard or mouse). And VR game development is a whole other beast. ## What's next for eye♥webvr We want to continue developing our game to add more eye tracking functionality to make the world more realistic, such as being able to fixate on objects to be able to read them, receive hints, and "notice" things that you would normally miss if you didn't take a second glance.
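A rough sketch of the relay idea described above, under the assumption that blink events arrive in a Python process and need to reach the Three.js page over a WebSocket. The blink source is mocked here rather than reproducing the Adhawk SDK calls:

```python
# Sketch of the relay: push blink events from Python to the Three.js/WebVR
# page over a WebSocket. The blink source is a stand-in generator; in the
# real project the events came from the Adhawk Python SDK.
import asyncio
import json
import random

import websockets  # pip install websockets

async def fake_blink_stream():
    """Mock event source; replace with real blink detections."""
    while True:
        await asyncio.sleep(random.uniform(1.0, 4.0))
        yield {"type": "blink"}

async def handler(ws, path=None):
    async for event in fake_blink_stream():
        await ws.send(json.dumps(event))   # browser swaps bodies on "blink"

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()             # run forever

asyncio.run(main())
```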
winning
## Inspiration Good vision is one of the key ingredients for building an AI. Although we see the power of deep learning and convolutional neural networks every day across various platforms, we thought it would be nice to see how an embodied AI does when given tasks that require strong object detection performance. ## What it does When faced with various objects, the GoPiGo robot listens to a user's object of choice, detects it, and moves toward that object. ## How we built it We used Inception, the pre-trained CNN model designed at Google, to detect objects and their locations in the camera feed. ## Challenges we ran into Setting up the GoPiGo and fine-tuning the model with custom hyperparameters as we saw fit. ## What's next for Envision Publishing the project as open source.
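As a hedged illustration of the detection step only (not the GoPiGo motion control, and not necessarily the exact Inception setup the team used), classifying a camera frame with a Keras InceptionV3 stand-in might look like this:

```python
# Sketch of the recognition step: classify a camera frame with a pre-trained
# Inception network and check whether the user's requested object shows up.
import numpy as np
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input, decode_predictions)
from tensorflow.keras.preprocessing import image

model = InceptionV3(weights="imagenet")

def top_labels(img_path, k=3):
    img = image.load_img(img_path, target_size=(299, 299))   # Inception input size
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    preds = model.predict(x)
    return [(label, float(score)) for _, label, score in decode_predictions(preds, top=k)[0]]

# e.g. drive toward the object if the requested label appears in the frame:
# if any(label == requested for label, _ in top_labels("frame.jpg")): robot.forward()
```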
## Inspiration The future of entertainment and UX will be personalized experiences highly tailored to the target user. Imagine a future where the show you watch "just gets it," or the experience of an app "just feels right all the time." To get there, we need a deep understanding of the emotional landscape of people watching shows and using products. However, during user testing it's tough to observe all the nuanced behaviors that participants exhibit. We were inspired to design software that not only helps take notes and synthesize them, but also captures the nuance of human emotion and attention by cross-comparing expressible behaviors (facial expressions, tone of voice, and length of pauses) against the content being studied. ## What it does Sentimetrix leverages AI across multiple vectors (many axes) to provide richer user research data and more nuanced insights. When engagement metrics are flat, our tool can help businesses pinpoint why. With the analysis and organized aggregation of data, we can improve the efficiency of sentiment gathering at scale. We designed a time series that maps facial expressions, tone of voice, and eye tracking at 10-second intervals. This multi-factor approach empowers streaming services, content creators, distributors, and film studios to gain a comprehensive understanding of user experiences and uncover the "why" behind engagement levels, bridging the gap between high and low engagement. ## How we built it We used a Python Flask backend to manage and store user interview data, which included time series analysis over user events, facial expressions, and dialogue. Facial expressions were analyzed via a CNN (convolutional neural network) to classify webcam images of participants into moods. We leveraged RoBERTa to perform sentiment analysis over user dialogue (a sketch of this step appears at the end of this write-up). Finally, to partition the interview timeline into designated tasks/events, we utilized GPT. ## Challenges we ran into Our dual screen-capture functionality: capturing screenshots of the participant's screen and its embedded iframes, as well as photos of participants from the webcam, to perform eye tracking and facial expression analysis. Designing a system to deal with asynchronous processing of long-running machine learning and classification tasks proved to be much more complicated than originally expected, but it was a necessary hurdle to overcome with regards to scale. ## Accomplishments that we're proud of A fully functioning backend that supports data persistence and can be completely hosted, with all API endpoints working correctly. Endpoints include: starting a new interview and generating an interview ID, receiving image data from the client and processing it, ending an interview and updating state, receiving VTT text transcript data post-interview and processing it, and retrieving and parsing interview data from persistent storage. All integration with AI and ML models is fully completed and working properly: image-based facial sentiment analysis and classification is finished, we're able to use AI to partition transcripts and timestamps into relevant events, and we're able to run sentiment analysis on text transcripts. ## What we learned Prioritization and planning of features was a big struggle for us throughout this hackathon: we had an ambitious MVP and even more ambitious extensions, without enough prior planning to indicate the actual quantity of work required to implement all of the features.
Also, when it came to some vital components of the design, specifically capturing participant screens, our team made a lot of assumptions about how much functionality the browser would allow us, without enough research. Delegation was also a skill we worked on and improved throughout this process. With one codebase and multiple developers, it takes active effort to make sure we're getting the most productivity out of every team member. ## What's next for Sentimetrix
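To illustrate the dialogue-sentiment step mentioned under "How we built it", here is a minimal sketch that scores transcript segments with a RoBERTa sentiment model. The checkpoint name is one publicly available option and the segment format is an assumption, not Sentimetrix's actual schema:

```python
# Rough sketch: score 10-second transcript chunks with a RoBERTa sentiment
# model from the Hugging Face hub (checkpoint choice is illustrative).
from transformers import pipeline

sentiment = pipeline("sentiment-analysis",
                     model="cardiffnlp/twitter-roberta-base-sentiment-latest")

def score_segments(segments):
    """segments: list of {"start": float, "text": str} taken from a VTT transcript."""
    results = []
    for seg in segments:
        label = sentiment(seg["text"][:512])[0]          # crude truncation for long chunks
        results.append({"start": seg["start"],
                        "label": label["label"],
                        "score": round(label["score"], 3)})
    return results

print(score_segments([{"start": 0.0, "text": "This checkout flow is confusing."},
                      {"start": 10.0, "text": "Oh nice, that worked right away!"}]))
```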
## Inspiration We love the playing the game and were disappointed in the way that there wasnt a nice web implementation of the game that we could play with each other remotely. So we fixed that. ## What it does Allows between 5 and 10 players to play Avalon over the web app. ## How we built it We made extensive use of Meteor and forked a popular game called [Spyfall](https://github.com/evanbrumley/spyfall) to build it out. This game had a very basic subset of rules that were applicable to Avalon. Because of this we added a lot of the functionality we needed on top of Spyfall to make the Avalon game mechanics work. ## Challenges we ran into Building realtime systems is hard. Moreover, using a framework like Meteor that makes a lot of things easy by black boxing them is also difficult by the same token. So a lot of the time we struggled with making things work that happened to not be able to work within the context of the framework we were using. We also ended up starting the project over again multiple times since we realized that we were going down a path in which it was impossible to build that application. ## Accomplishments that we're proud of It works. Its crisp. Its clean. Its responsive. Its synchronized across clients. ## What we learned Meteor is magic. We learned how to use a lot of the more magical client synchronization features to deal with race conditions and the difficulties of making a realtime application. ## What's next for Avalon Fill out the different roles, add a chat client, integrate with a video chat feature.
losing
A "Tinder-fyed" Option for Adoption! ## Inspiration All too often, I hear of friends or family flocking to pet stores or specialized breeders in the hope of finding the exact new pet that they want. This is especially true when an animal rises to the forefront of the pop culture scene. Many new pet owners adopt for the wrong reasons, later become overwhelmed, and we see these pets enter the shelter system with a slim chance of getting out. ## What it does PetSwipe is designed to bring the ease and variety of pet stores and breeders to local shelter adoption. Users sign up with a profile and, based on its contents, are presented with available pets from local shelters, one at a time. Each animal is presented with its name, sex, age, and breed if applicable, along with a primary image. The individual can choose to accept or reject each potential pet. From there, the activity is loaded into a database, where the shelter can pull the information for users who clicked accept on a pet and email those whose profiles best suit the pet for an in-person meet and greet. The shelters can effectively accept or reject these users from their end. ## How I built it A browsable API was built on a remote server with Python, Django, and the Django REST Framework. We built a website using PHP, JavaScript, and HTML/CSS with Bootstrap, where users can create an account, configure their preferences, and browse and swipe through current pets in their location. These interactions are saved to the databases accessible through our API. ## Challenges I ran into The PetFinder API came with a whole host of challenges. From unnecessary MD5 hashing to location restrictions, this API is actually in beta and requires quite a bit of patience to use. ## What I learned Attractive, quick front-end website design paired with a remote server was new for most of the team. We feel PetSwipe could become a fantastic way to promote the "Adopt, Don't Shop" motto of local rescue agencies. As long as the responsibilities of pet ownership are conveyed to users, paired with the adopter vetting performed by the shelters, a lot of lovable pets could find their way back into caring homes!
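As an illustration of the swipe-recording endpoint described above, a Django REST Framework sketch might look like the following. The model and field names are assumptions rather than PetSwipe's actual schema, and the snippet assumes it lives inside a configured Django app:

```python
# Illustrative DRF sketch of recording swipes in the browsable API;
# names and fields are hypothetical, not the real PetSwipe schema.
from django.db import models
from rest_framework import routers, serializers, viewsets

class Swipe(models.Model):
    user_id = models.IntegerField()
    pet_id = models.IntegerField()
    accepted = models.BooleanField()                  # True = swiped right
    created_at = models.DateTimeField(auto_now_add=True)

class SwipeSerializer(serializers.ModelSerializer):
    class Meta:
        model = Swipe
        fields = ["id", "user_id", "pet_id", "accepted", "created_at"]

class SwipeViewSet(viewsets.ModelViewSet):
    queryset = Swipe.objects.all()
    serializer_class = SwipeSerializer                # shelters later filter accepted=True

router = routers.DefaultRouter()
router.register("swipes", SwipeViewSet)               # exposes /swipes/ in the browsable API
```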
## Inspiration In today's world, public speaking is one of the greatest skills any individual can have. From pitching at a hackathon to simply conversing with friends, being able to speak clearly, be passionate and modulate your voice are key features of any great speech. To tackle the problem of becoming a better public speaker, we created Talky. ## What it does It helps you improve your speaking skills by giving you suggestions based on what you said to the phone. Once you finish presenting your speech to the app, an audio file of the speech is sent to a Flask server running on Heroku. The server analyzes the audio file by examining pauses, loudness, accuracy and how fast the user spoke. In addition, the server does a comparative analysis with past data stored in Firebase, and then returns the performance of the speech. The app also provides community functionality, which allows the user to check out other people's audio files and view community speeches. ## How we built it We used Firebase to store the users' speech data. Having past data allows the server to do a comparative analysis and inform users whether they have improved or not. The Flask server uses several Python audio libraries to extract meaningful patterns: the SpeechRecognition library to extract the words, Pydub to detect silences and SoundFile to find the length of the audio file. On the iOS side, we used Alamofire to make the HTTP requests to our server to send data and retrieve a response. ## Challenges we ran into Everyone on our team was unfamiliar with the properties of audio, so discovering the nuances of waveforms in particular, and the information they provide, was a challenging and integral part of our project. ## Accomplishments that we're proud of We successfully recognize the speeches and extract parameters from the sound file to perform the analysis. We successfully provide the users with an interactive bot-like UI. We successfully bridge iOS to the Flask server with efficient connections. ## What we learned We learned how to upload audio files properly and process them using Python libraries. We learned to utilize Azure voice recognition to perform speech-to-text operations. We learned fluent UI design using dynamic table views. We learned how to analyze audio files from different perspectives and give an overall judgment of the performance of the speech. ## What's next for Talky We added the community functionality, though it is still basic. In the future, we can expand this functionality and add more social aspects to the existing app. Also, the current version focuses only on audio. In the future, we can add video files to enrich the post libraries and support video analysis, which will be promising.
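A condensed sketch of the analysis step described above, using the same libraries (SpeechRecognition, Pydub, SoundFile); the thresholds here are made-up defaults rather than Talky's tuned values:

```python
# Sketch of the server-side speech analysis: duration, pauses, loudness,
# speaking rate, and a transcript.
import soundfile as sf
import speech_recognition as sr
from pydub import AudioSegment
from pydub.silence import detect_silence

def analyze_speech(wav_path):
    duration = sf.info(wav_path).duration                      # length in seconds

    audio = AudioSegment.from_wav(wav_path)
    pauses = detect_silence(audio, min_silence_len=700,        # illustrative thresholds
                            silence_thresh=audio.dBFS - 16)

    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        text = recognizer.recognize_google(recognizer.record(source))

    words_per_minute = len(text.split()) / (duration / 60)
    return {"duration_s": round(duration, 1),
            "pause_count": len(pauses),
            "avg_loudness_dbfs": round(audio.dBFS, 1),
            "words_per_minute": round(words_per_minute, 1),
            "transcript": text}
```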
## Inspiration We are currently living through one of the largest housing crises in human history. As a result, more Canadians than ever before are seeking emergency shelter to stay off the streets and find a safe place to recuperate. However, finding a shelter is still a challenging, manual process, where no digital service exists that lets individuals compare shelters by eligibility criteria, find the nearest one they are eligible for, and verify that the shelter has room in real-time. Calling shelters in a city with hundreds of different programs and places to go is a frustrating burden to place on someone who is in need of safety and healing. Further, we want to raise the bar: people shouldn't be placed in just any shelter, they should go to the shelter best for them based on their identity and lifestyle preferences. 70% of homeless individuals have cellphones, compared to 85% of the rest of the population; homeless individuals are digitally connected more than ever before, especially through low-bandwidth mediums like voice and SMS. We recognized an opportunity to innovate for homeless individuals and make the process for finding a shelter simpler; as a result, we could improve public health, social sustainability, and safety for the thousands of Canadians in need of emergency housing. ## What it does Users connect with the ShelterFirst service via SMS to enter a matching system that 1) identifies the shelters they are eligible for, 2) prioritizes shelters based on the user's unique preferences, 3) matches individuals to a shelter based on realtime availability (which was never available before) and the calculated priority and 4) provides step-by-step navigation to get to the shelter safely. Shelter managers can add their shelter and update the current availability of their shelter on a quick, easy to use front-end. Many shelter managers are collecting this information using a simple counter app due to COVID-19 regulations. Our counter serves the same purpose, but also updates our database to provide timely information to those who need it. As a result, fewer individuals will be turned away from shelters that didn't have room to take them to begin with. ## How we built it We used the Twilio SMS API and webhooks written in express and Node.js to facilitate communication with our users via SMS. These webhooks also connected with other server endpoints that contain our decisioning logic, which are also written in express and Node.js. We used Firebase to store our data in real time. We used Google Cloud Platform's Directions API to calculate which shelters were the closest and prioritize those for matching and provide users step by step directions to the nearest shelter. We were able to capture users' locations through natural language, so it's simple to communicate where you currently are despite not having access to location services Lastly, we built a simple web system for shelter managers using HTML, SASS, JavaScript, and Node.js that updated our data in real time and allowed for new shelters to be entered into the system. ## Challenges we ran into One major challenge was with the logic of the SMS communication. We had four different outgoing message categories (statements, prompting questions, demographic questions, and preference questions), and shifting between these depending on user input was initially difficult to conceptualize and implement. 
Another challenge was collecting the distance information for each of the shelters and sorting between the distances, since the response from the Directions API was initially confusing. Lastly, building the custom decisioning logic that matched users to the best shelter for them was an interesting challenge. ## Accomplishments that we're proud of We were able to build a database of potential shelters in one consolidated place, which is something the city of London doesn't even have readily available. That itself would be a win, but we were able to build on this dataset by allowing shelter administrators to update their availability with just a few clicks of a button. This information saves lives, as it prevents homeless individuals from wasting their time going to a shelter that was never going to let them in due to capacity constraints, which often forced homeless individuals to miss the cutoff for other shelters and sleep on the streets. Being able to use this information in a custom matching system via SMS was a really cool thing for our team to see - we immediately realized its potential impact and how it could save lives, which is something we're proud of. ## What we learned We learned how to use Twilio SMS APIs and webhooks to facilitate communications and connect to our business logic, sending out different messages depending on the user's responses. In addition, we taught ourselves how to integrate the webhooks to our Firebase database to communicate valuable information to the users. This experience taught us how to use multiple Google Maps APIs to get directions and distance data for the shelters in our application. We also learned how to handle several interesting edge cases with our database since this system uses data that is modified and used by many different systems at the same time. ## What's next for ShelterFirst One addition to make could be to integrate locations for other basic services like public washrooms, showers, and food banks to connect users to human rights resources. Another feature that we would like to add is a social aspect with tags and user ratings for each shelter to give users a sense of what their experience may be like at a shelter based on the first-hand experiences of others. We would also like to leverage the Twilio Voice API to make this system accessible via a toll free number, which can be called for free at any payphone, reaching the entire homeless demographic. We would also like to use Raspberry Pis and/or Arduinos with turnstiles to create a cheap system for shelter managers to automatically collect live availability data. This would ensure the occupancy data in our database is up to date and seamless to collect from otherwise busy shelter managers. Lastly, we would like to integrate into municipalities "smart cities" initiatives to gather more robust data and make this system more accessible and well known.
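The matching logic described above (eligibility filter, preference scoring, real-time availability, and travel time) can be sketched in a few lines. The real backend is Node/Express; this is an illustrative Python version with assumed field names and weights:

```python
# Illustrative sketch of the shelter-matching decisioning (weights and field
# names are assumptions, not the actual ShelterFirst logic).
def match_shelter(user, shelters, travel_minutes):
    """Pick the best shelter with space that the user is eligible for.

    user: {"eligibility": set, "preferences": set}
    shelters: list of {"name", "criteria": set, "tags": set, "beds_free": int}
    travel_minutes: {shelter name -> minutes, e.g. from the Directions API}
    """
    candidates = []
    for s in shelters:
        if s["beds_free"] <= 0:
            continue                                   # real-time availability check
        if not user["eligibility"] >= s["criteria"]:
            continue                                   # eligibility criteria
        preference_hits = len(user["preferences"] & s["tags"])
        score = preference_hits * 10 - travel_minutes[s["name"]]
        candidates.append((score, s["name"]))
    return max(candidates)[1] if candidates else None

user = {"eligibility": {"adult"}, "preferences": {"pet-friendly"}}
shelters = [{"name": "A", "criteria": {"adult"}, "tags": {"pet-friendly"}, "beds_free": 2},
            {"name": "B", "criteria": {"adult", "veteran"}, "tags": set(), "beds_free": 5}]
print(match_shelter(user, shelters, {"A": 25, "B": 10}))   # -> "A"
```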
partial
## Inspiration Isaac Park, a freshman from South Korea, noticed that back home they had super convenient all-in-one transportation apps, but when he came here, he realized there's nothing like that. It was frustrating having to jump between different apps for shuttles, trains, and figuring out routes. That's where the idea for UniRoute came from: a way to make commuting easier for university students by putting everything in one place. ## What it does UniRoute is designed to make students' commutes smoother. Instead of juggling multiple apps, it combines shuttle and train info into one easy-to-use platform. The app shows shuttle times, connects you to train schedules, and helps you plan your trips so you don't have to deal with delays or missed rides. It's all about saving time and avoiding the headache of switching between different apps when you're just trying to get around. ## How we built it We built UniRoute using Apple's MapKit API, which let us integrate maps and transportation info into the app. As first-time hackers and newbies to mobile app development, we hit a lot of bumps along the way. Initially, we tried using the Google Maps API, but after spending hours trying to fix dependency issues, we just couldn't get it to work. In the end, we switched to Apple's API, which worked out much better for us. ## Challenges we ran into Being new to this, pretty much everything was a challenge! We had trouble setting up the right dependencies and figuring out how to handle complex route calculations. We ended up using static data in the app just to get something working in time, but that helped us push through and build a prototype. ## Accomplishments that we're proud of Despite all the challenges, we're really proud that we made an actual working app. It might not be perfect yet, but we learned a ton, and getting the app up and running, even with static data, feels like a big win for us as first-time hackers. ## What we learned We learned a lot about mobile app development, especially how hard it can be to combine different transportation data into one place. We also got a much better handle on how APIs like Apple's MapKit work and how to manage time under pressure. ## What's next for UniRoute Next up, we want to actually implement the algorithms we had planned for real-time data and fully functional route calculations. We'll also refine the app's design and make it more user-friendly, so students can rely on it to make their commutes easier every day. The dream is to publish the app on the App Store and make an impact on many students' lives.
## Inspiration As college students, we both agreed that fun trips definitely make college memorable. However, we are sometimes hesitant about initiating plans because of the low probability of plans making it out of the group chat. Our biggest problem was being able to find unique but local activities to do with a big group, and that inspired the creation of PawPals. ## What it does Based on a list of criteria, we prompt OpenAI's API to generate the perfect itinerary for your trip, breaking down hour by hour a schedule of suggestions to have an amazing trip. This relieves some of the underlying stresses of planning trips, such as research and decision-making. ## How we built it On the front-end, we utilized Figma to develop a UI design that we then translated into React.js to develop a web app. We connected it to a Python back-end built with FastAPI, which integrates SQLite and the OpenAI API into our web application. ## Challenges we ran into Some challenges we ran into stemmed from our lack of experience in full-stack development. We did not have the strongest technical skill sets needed; thus, we took this time to explore and learn new technologies in order to accomplish the project. Furthermore, we recognized how long the OpenAI API takes to generate text, which led us to plan on streaming the generated text in the future using Vercel's SDK. ## Accomplishments that we're proud of As a team of two, we were very proud of our final product and took pride in our work. We came up with a plan and vision that we truly dedicated ourselves to accomplishing. Even though we have not had much experience in creating a full-stack application or developing a Figma design, we decided to take the leap in exploring these domains, going out of our comfort zone, which allowed us to truly learn a lot from this experience. ## What we learned We learned the value of coming up with a plan and solving the problems that arise creatively. There is always a solution to a goal, and we found that we simply need to find it. Furthermore, we both developed ourselves technically as we explored new domains and technologies, pushing our knowledge to greater breadth and depth. We were also able to meet many people along the way who provided advice and guidance on paths and technologies to look into. These people accelerated the learning process and truly added a lot to the hackathon experience. Overall, this hackathon was an amazing learning experience for us! ## What's next for PawPals We hope to deploy the site on Vercel and develop a mobile version of the app. We believe that with streaming, we can achieve faster response times. Furthermore, with more work on connecting with friends, budgeting, and schedule adjusting, we will be able to create an application that can truly shift the travel planning industry!
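A bare-bones sketch of the itinerary endpoint described above, wiring FastAPI to the OpenAI API; the prompt wording, request fields, and model name are placeholders rather than PawPals' actual values:

```python
# Illustrative FastAPI endpoint that asks the OpenAI API for an itinerary.
from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

class TripRequest(BaseModel):
    city: str
    group_size: int
    budget: str
    interests: list[str]

@app.post("/itinerary")
def make_itinerary(req: TripRequest):
    prompt = (f"Plan an hour-by-hour one-day itinerary in {req.city} for "
              f"{req.group_size} friends on a {req.budget} budget who enjoy "
              f"{', '.join(req.interests)}.")
    completion = client.chat.completions.create(
        model="gpt-4o-mini",                      # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return {"itinerary": completion.choices[0].message.content}
```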
## Inspiration We as a team shared the same interest in knowing more about machine learning and its applications. Upon looking at the challenges available, we were immediately drawn to Innovation Factory and their challenges, and thought of potential projects revolving around that category. We started brainstorming and went through over a dozen design ideas for how to implement a solution related to smart cities. By looking at the different information received from the camera data, we landed on the idea of using the raw footage itself to look for what we would call a distress signal, in case anyone felt unsafe in their current area. ## What it does We have defined a signal that, if performed in front of the camera, a machine learning algorithm can detect, notifying authorities that they should check out this location, whether to catch a potentially suspicious suspect or simply to be present and keep civilians safe. ## How we built it First, we collected data from the Innovation Factory API and inspected the code carefully to understand what each part does. After putting the pieces together, we were able to extract video footage from the camera nearest to us. A member of our team ventured off in search of the camera itself to collect different kinds of poses to later be used in training our machine learning model. Eventually, due to compilation issues, we had to scrap the training algorithm we made and went with a similar pre-trained model to accomplish the basics of our project. ## Challenges we ran into Using the Innovation Factory API, the fact that the cameras are located very far away, the machine learning algorithms unfortunately being an older version that would not compile with our code, and finally the frame rate of the playback footage when running the algorithm over it. ## Accomplishments that we are proud of Ari: Being able to go above and beyond what I learned in school to create a cool project. Donya: Getting to know the basics of how machine learning works. Alok: Learning how to deal with unexpected challenges and look at them as a positive change. Sudhanshu: The interesting scenario of posing in front of a camera while being directed by people recording me from a mile away. ## What I learned Machine learning basics, Postman, different ways to maximize playback time on the footage, and many more major and minor things we were able to accomplish this hackathon, all with little or incomplete information. ## What's next for Smart City SOS Hopefully working with Innovation Factory to grow our project, as well as inspiring individuals with a similar passion and desire to create change.
losing
## Inspiration Travelling past drains with heaps of food waste on my way to school was the first time I noticed the irony of the situation: in India, while 40% of food goes down the drain every single day, 194 million citizens go to sleep hungry. At the same time, India also produces 2 million tons of e-waste annually. **So, in order to tackle both of these issues of food waste & e-waste simultaneously, I came up with the idea to build Syffah.** ## What it does Syffah, which stands for **'Share Your Food, Feed All Humanity'**, is a web application with two functions: 1. It allows users to donate old refrigerators, which they would otherwise throw away, to be set up outside residential societies. People who have leftover food to donate can keep it in these community refrigerators so that the poor and homeless can access the food. 2. It also allows users like event caterers, restaurants, hotels and individuals to donate their leftover food by posting it, so that NGOs and hunger relief organizations can directly contact them, collect the food and distribute it amongst the needy. ## How we built it * I started off by analyzing the problem at hand, followed by deciding the functionality and features of the app. * I also created a business plan, including a revenue model, marketing strategy and SWOT analysis. * I then went ahead with the wireframing and mock-ups and worked on the UX/UI. * This was followed by building the front-end of the web app, for which I used HTML, CSS and JavaScript, plus some PHP for one part of the backend. * However, I wasn't able to develop the back-end in the given time as I was not familiar with the languages (I had planned on using Google Cloud services - Firebase). * Hence, the project is unfinished. ## Challenges we ran into * This was my first hackathon, so I had some difficulties * Was not able to develop the back-end in time; trying to learn back-end development in a short period of time was challenging, but it was still fun trying * Enhancing the user interface ## Accomplishments that we're proud of * The main pages of the website are responsive. * The UX and UI of the web page, keeping it as simple as possible ## What we learned * Learnt about hackathons * Learnt the basics of Google Firebase and cloud services * Learnt UX/UI concepts such as cards, color schemes, typography, etc. ## What's next for Project SYFFAH * Creating a fully functional prototype of the web app * Developing the backend using Google Firebase * Also developing an Android app alongside the web app * Conducting groundwork and research for deploying these services * Conducting market analysis, surveys, and alpha and beta testing to improve the application overall
## Inspiration Coming from a strong background in retail food sales, one of our members saw the sad reality of how much food is wasted daily by companies. According to goal 12 of the United Nations's 17 Sustainable Development Goals, the average person wastes 120 kilograms of food a year. Many companies are throwing out this much food A DAY. The reality is that these companies do not allocate enough time to go through and find expiring food before it goes bad, so it simply goes in the trash. We wanted to find an easier way for the employees of these companies to find these food items before they expire so that they can be donated or sold cheaper to people instead of wasted. With the understanding that the majority of these companies employ Android-based handheld scanner devices, our approach centred on developing a native Android app to simplify the integration process. ## What it does Our project starts with an Android application allowing users to scan a barcode with their camera. Then they can select a date from a calendar or make the process easier by using text-to-speech. Then the user can use a web browser to select a date and print off a report of barcodes that expire on that date. They can then use their handhelds that they already use for other things to scan that barcode and go check if they still have that item in stock. ## How we built it For the database we wanted to try out a technology we had never used so we decided to try out Redis. We used the Redis cloud to generate a NoSQL database. For adding and pulling data from the DB we used the Redis JSON module since we knew we would be able to work with the JSON format when it came to making the reports. We decided not to try and do more than we can handle so we decided to focus on just developing it for android instead of multi-platform. We used Android Studio and Kotlin to write the Android app. We used CameraX to implement the camera functionality in the app, and we used Google ML Kit to read and detect barcodes. For the web app, we used Python to set up our web server. We imported Redis, Flask and python barcode. Flask would help us set up the webserver and the python-barcode plugin would allow us to convert the UPC from the DB to an actual scalable image. ## Challenges we ran into It was both our first time learning about Redis so we had to learn how to use it. Originally we weren't using the RedisJSON module but we realized afterwards the json format would make it easier for us so we had to switch over. We also wanted to look into hosting our web app on Google Cloud, but we had trouble running the web server properly once uploaded. The Python barcode plugin does not seem to be working properly once we uploaded everything. After some thought, we decided that for the scope of our application, it actually makes more sense for the webserver to be hosted locally over a company's intranet anyways since the Redis authentication information is currently stored locally and would be less secure online. ## Accomplishments that we're proud of We are happy that we were able to get the barcode and voice-to-text functionally to work with our smartphones. We are able to scan and add dates to our database with no major issues. This was also our first time setting up a web server using Python, we mostly focused on technologies like Java spring boot and other languages, so we were proud that we were able to set up the server and have basic report generating working in such a short amount of time. 
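A small sketch of the report route described above, pulling expiring items from Redis with the JSON module and rendering barcodes with python-barcode. The key layout and field names are assumptions, and Code128 stands in for whatever UPC symbology the handhelds actually scan:

```python
# Illustrative Flask report route; Redis key layout and item fields are
# hypothetical, not WasteWatcher's actual schema.
import barcode
import redis
from flask import Flask, Response

app = Flask(__name__)
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

@app.route("/report/<date>")                      # e.g. /report/2022-01-30
def report(date):
    item_ids = r.smembers(f"expiry:{date}")       # assumed per-date index of item ids
    svgs = []
    for item_id in item_ids:
        item = r.json().get(f"item:{item_id}")    # RedisJSON document with a "upc" field
        code = barcode.get("code128", item["upc"])
        svgs.append(code.render().decode())       # SVG markup for the printable report
    return Response("<hr/>".join(svgs), mimetype="text/html")
```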
## What we learned We learned about Redis and some of the things it can do. We were impressed with the ease of use it offers and how quickly we could get everything set up and working. We also continued to improve our knowledge of building apps with Android. Implementing the camera and ML kit functionality in a time crunch forced us to focus a lot on bug fixing and helped us see errors we could avoid going forward. We also learned more about Python development. Our programs did not put much focus on Python, so most of what we did with Python was brand new to us. We now know how to set up simple web apps using Python. ## What's next for WasteWatcher We would like to continue updating the Android app. We would like to implement user authentication. Currently, a company would just set up a different database for each location, but using user accounts we could make that easier. We would also like to keep upgrading the reports' functionality. We would still like to get it working on the cloud. We would also like to add more ways to generate reports, such as date ranges, so it can add more value and use for a company to help incentivize use.
## Inspiration *"According to Portio Research, the world will send 8.3 trillion SMS messages this year alone – 23 billion per day or almost 16 million per minute. According to Statistic Brain, the number of SMS messages sent monthly increased by more than 7,700% over the last decade."* The inspiration for TextNet came from the crazy mobile internet data rates in Canada and throughout North America. The idea was to let anyone with an SMS-enabled device access the internet! ## What it does TextNet exposes the following internet primitives through basic SMS: 1. Business and restaurant recommendations 2. Language translation 3. Directions between locations by bike/walking/transit/driving 4. Image content recognition 5. Search queries 6. News updates TextNet can be used by anyone with an SMS-enabled mobile device. Are you *roaming* in a country without access to internet on your device? Are you tired of paying steep mobile data prices? Are you living in an area with a poor or no data connection? Have you gone over your monthly data allowance? TextNet is for you! ## How we built it TextNet is built using the Stdlib API with Node.js and a number of third-party APIs. The Stdlib endpoints connect with Twilio's SMS messaging service, allowing two-way SMS communication with any mobile device. When a user sends an SMS message to our TextNet number, their request is matched with the most relevant internet primitive supported, parsed for important details, and then routed to an API. These APIs include Google Cloud Vision, Yelp Business Search, Google Translate, Google Directions, and Wolfram Alpha. Once data is received from the appropriate API, it is formatted and sent back to the user over SMS. This data flow provides a form of text-only internet access to offline devices. ## Challenges we ran into Challenge #1 - We arrived at HackPrinceton at 1am Saturday morning. Challenge #2 - Stable SMS data flow between multiple mobile phones and internet API endpoints. Challenge #3 - Getting Google .json credential files working with our Stdlib environment. Challenge #4 - Sleep deprivation ft. car and desks. Challenge #5 - Stdlib error logging. ## Accomplishments that we're proud of We managed to build a basic offline portal to the internet in a weekend. TextNet has real-world applications and is built with exciting technology. We integrated an image content recognition machine learning algorithm which, given an image over SMS, will return a description of its contents! Using the Yelp Business Search API, we built a recommendation service that can find all of the best Starbucks near you! Two of our planned team members from Queen's University couldn't make it to the hackathon, yet we still managed to complete our project and we are very proud of the results (only two of us) :) ## What we learned We learned how to use Stdlib to build a serverless API platform. We learned how to interface SMS with the internet. We learned *all* about async/await and modern JavaScript practices. We learned about recommendation, translation, maps, search query, and image content analysis APIs. ## What's next for TextNet Finish integrating P2P payments using Stripe. ## What's next for HackPrinceton HackPrinceton was awesome! Next year, it would be great if the team could arrange better sleeping accommodations. The therapy dogs were amazing. Thanks for the experience!
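The routing flow described above (match the SMS to an internet primitive, parse it, dispatch to an API, format the reply) boils down to a small dispatcher. The real service is Node.js on Stdlib with Twilio; this is an illustrative Python sketch with stub handlers:

```python
# Language-agnostic sketch of the SMS routing idea; handlers are stubs that
# stand in for the Google/Yelp/Wolfram Alpha API calls.
def translate(text):   return f"[translation of] {text}"
def directions(text):  return f"[directions for] {text}"
def recommend(text):   return f"[top places near] {text}"
def search(text):      return f"[answer to] {text}"

ROUTES = [
    ("translate", translate),
    ("directions", directions),
    ("recommend", recommend),
]

def handle_sms(body: str) -> str:
    """Match the incoming SMS to an internet primitive and build the reply."""
    lowered = body.lower()
    for keyword, handler in ROUTES:
        if lowered.startswith(keyword):
            return handler(body[len(keyword):].strip())
    return search(body)        # default: treat the message as a search query

print(handle_sms("translate bonjour mes amis"))
print(handle_sms("best pizza in princeton"))
```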
losing
## Inspiration We considered who needed connectivity the most and how we could assist them in attaining it. From our research, we learned that elderly loneliness is a large issue that a significant portion of the population is facing. We are aiming to enable lonely and isolated users to connect with others like them, as well as with others who would volunteer to share their time. The name BackFence is derived from the term “back-fence talk”, which refers to neighbours talking with one another over a fence in spite of being separated. ## What it does BackFence is an Android app where users create profiles with usernames and a list of interest tags, and retain and grow friend lists as they encounter others who share similar interests. Friend lists allow users to keep the connections they make and access the connections they already have. The UI is friendly for users who may not have much experience with technology, and there are easily accessible help pages to guide them. ## What we learned We learned a lot about Android app development and Android Studio. We took the time to learn extensively about how voice and video communication APIs like Sinch can aid in tasks such as this. We also learned about the potential of cloud and authentication technologies like Firebase for connecting users with one another, and of combining them with Sinch to feed video streams for real-time connections.
## Inspiration The loneliness epidemic is a real thing. You don't get meaningful engagement with others just by liking and commenting on Instagram posts; you get meaningful engagement by having real conversations, whether it's a text exchange, a phone call, or a Zoom meeting. This project was inspired by the idea of reviving weak links in our network, as described in *The Defining Decade*: "Weak ties are the people we have met, or are connected to somehow, but do not currently know well. Maybe they are the coworkers we rarely talk with or the neighbor we only say hello to. We all have acquaintances we keep meaning to go out with but never do, and friends we lost touch with years ago. Weak ties are also our former employers or professors and any other associations who have not been promoted to close friends." ## What it does This web app helps bridge the divide between wanting to connect with others and actually connecting with them. In our MVP, the web app brings up a card with information on someone you are connected to. Users can swipe right to show interest in reconnecting or swipe left if they are not interested. In this way, the process of finding people to reconnect with is gamified. If both people show interest in reconnecting, you are notified and can now connect! And if one person isn't interested, the other person will never know ... no harm done! ## How we built it The web app was built using React and deployed with Google Cloud's Firebase. ## Challenges we ran into We originally planned to use Twitter's API to aggregate data and recommend matches for our demo, but getting the developer account took longer than expected. After getting a developer account, we realized that we didn't use Twitter all that much, so we had no data to display. Another challenge we ran into was that we didn't have a lot of experience building web apps, so we had to learn on the fly. ## Accomplishments that we're proud of We came into this hackathon with little experience in web development, so it's amazing to see how far we have been able to progress in just 36 hours! ## What we learned React! Also, we learned how to publish a website and how to access APIs! ## What's next for Rekindle Since our product is an extension or application within existing social media, our next steps would be to partner with Facebook, Twitter, LinkedIn, or other social media sites. Afterward, we would develop an algorithm to aggregate a user's connections on a given social media site and optimize the card-swiping feature to recommend the people you are most likely to connect with.
## Inspiration BThere emerged from a genuine desire to strengthen friendships by addressing the subtle challenges in understanding friends on a deeper level. The team recognized that nuanced conversations often go unnoticed, hindering meaningful support and genuine interactions. In the context of the COVID-19 pandemic, the shift to virtual communication intensified these challenges, making it harder to connect on a profound level. Lockdowns and social distancing amplified feelings of isolation, and the absence of in-person cues made understanding friends even more complex. BThere aims to use advanced technologies to overcome these obstacles, fostering stronger and more authentic connections in a world where the value of meaningful interactions has become increasingly apparent. ## What it does BThere is a friend-assisting application that utilizes cutting-edge technologies to analyze conversations and provide insightful suggestions for users to connect with their friends on a deeper level. By recording conversations through video, the application employs Google Cloud's facial recognition and speech-to-text APIs to understand the friend's mood, likes, and dislikes. The OpenAI API generates personalized suggestions based on this analysis, offering recommendations to uplift a friend in moments of sadness or providing conversation topics and activities for neutral or happy states. The backend, powered by Python Flask, handles data storage using Firebase for authentication and data persistence. The frontend is developed using React, JavaScript, Next.js, HTML, and CSS, creating a user-friendly interface for seamless interaction. ## How we built it BThere involves a multi-faceted approach, incorporating various technologies and platforms to achieve its goals. The recording feature utilizes WebRTC for live video streaming to the backend through sockets, but also allows users to upload videos for analysis. Google Cloud's facial recognition API identifies facial expressions, while the speech-to-text API extracts spoken content. The combination of these outputs serves as input for the OpenAI API, generating personalized suggestions. The backend, implemented in Python Flask, manages data storage in Firebase, ensuring secure authentication and persistent data access. The frontend, developed using React, JavaScript, Next.js, HTML, and CSS, delivers an intuitive user interface. ## Accomplishments that we're proud of * Successfully integrating multiple technologies into a cohesive and functional application * Developing a user-friendly frontend for a seamless experience * Implementing real-time video streaming using WebRTC and sockets * Leveraging Google Cloud and OpenAI APIs for advanced facial recognition, speech-to-text, and suggestion generation ## What's next for BThere * Continuously optimizing the speech to text and emotion analysis model for improved accuracy with different accents, speech mannerisms, and languages * Exploring advanced natural language processing (NLP) techniques to enhance conversational analysis * Enhancing user experience through further personalization and more privacy features * Conducting user feedback sessions to refine and expand the application's capabilities
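As a hedged illustration of the mood-reading step, here is a small sketch using the Cloud Vision face detection API; the likelihood thresholding and mood labels are simplifications rather than BThere's actual logic:

```python
# Sketch of reading a friend's mood from one webcam frame with Cloud Vision.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def dominant_mood(image_bytes: bytes) -> str:
    response = client.face_detection(image=vision.Image(content=image_bytes))
    if not response.face_annotations:
        return "no face detected"
    face = response.face_annotations[0]
    likelihoods = {
        "happy": face.joy_likelihood,
        "sad": face.sorrow_likelihood,
        "angry": face.anger_likelihood,
        "surprised": face.surprise_likelihood,
    }
    # Likelihood is an enum from UNKNOWN (0) to VERY_LIKELY (5); pick the strongest.
    mood, strength = max(likelihoods.items(), key=lambda kv: kv[1])
    return mood if strength >= vision.Likelihood.POSSIBLE else "neutral"
```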
losing