Research
In the course of our research we discovered many tools that will help us achieve the project goal. This document contains the research done for both [Dance] and [Dance Console]; it is therefore the [Dance Project] Research. For the conclusions drawn from this research, read the complete Dance Project Description and its accompanying documents.
- Technical
- Market
Technical
Question - How do we get data from arenas to build the arena layout and map the devices?
Answer - In our research we found many [API]s that expose a lot of information, but they are all very limited and expensive.
We studied integrations with Ticketmaster, SeatGeek, Mappedin, Seats.io, Mapwize, Steerpath, Indoor Google Maps. Here you can see pros and cons of each one:
| API | Pros | Cons |
|---|---|---|
| Ticketmaster | It has many event details, and we can use them to check how many seats were bought, among other information. | If our client doesn’t have a Channel Partner key, we can’t get the ticket information. The Section Map doesn’t return much data; it only returns an image, which could be used to design our arena if we implement artificial intelligence someday. |
| SeatGeek | Their API is very detailed. There is information about ticket pricing as well as information about the venue, like coordinates. | Their API documentation says they have no plans to expose individual ticket listings via the API, so we can’t get any ticket information. We couldn’t find any information about the arena area or any kind of drawing. |
| Mappedin | It has pre-built solutions and a great UI. | We can’t use our own UI with this one. They communicate only through their own [SDK]s for iOS, Android and Web (a JavaScript package), with no REST API. |
| Seats.io | It can be used to draw an arena and sell the tickets. | We can’t use our own UI with this one; we can only display their UI through embedded HTML, and we can’t save arena data into our [database]. They charge per seat because the API was created for selling tickets. |
| Mapwize | It has good support for [Flutter]. It’s a good tool for designing indoor mapping. | We can’t use our own UI with this one. The API is great for designing maps of malls, but it may be harder to use for designing arenas. |
| Steerpath | | It has terrible documentation; we couldn’t figure out which data we could get from their API. |
| Indoor Google Maps | It has great integration with [Flutter] and it’s easy to use if the arena is already mapped. | We would need to email them a floor plan if an arena isn’t already mapped. |
Because of the [API]s’ limitations, we will use a service we created to design custom arenas (similar to Seats.io) and, in some cases, Indoor Google Maps. We will also have pre-built models for well-known arenas.
Our first approach for the prototype is to relate colors with time and sections; we will implement GPS later, because the first version of our arena design service doesn’t yet relate coordinates with sections. Once the coordinate system is ready, we will relate GPS location with time and colors. For the prototype we can use the solution we discussed: letting the user input the section he is currently in.
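To make the relation between sections, time and colors concrete, here is a minimal sketch of how the prototype data could be modeled in [Flutter]/Dart; the names (ColorCue, SectionSequence) are hypothetical and only illustrate the idea:

```dart
import 'package:flutter/material.dart';

/// One color change in the pre-built show, keyed by time.
class ColorCue {
  final Duration offset; // time from the start of the show
  final Color color;     // color to display from that moment on
  const ColorCue(this.offset, this.color);
}

/// The timed color sequence the designer built for one arena section.
class SectionSequence {
  final String sectionId;    // e.g. the section the user typed in
  final List<ColorCue> cues; // ordered by offset
  const SectionSequence(this.sectionId, this.cues);

  /// The cue active at [elapsed], or null before the first cue.
  ColorCue? cueAt(Duration elapsed) {
    ColorCue? current;
    for (final cue in cues) {
      if (cue.offset <= elapsed) current = cue;
    }
    return current;
  }
}
```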
Question - How do we get the ticket and seating data?
Answer - For the prototype, we decided to let the user input his own location. For the MVP we can use another platform we will build that sells tickets; through its API we will be able to get all the data we need.
Question - Research and document [API]s that could be used to enhance the product.
Answer - As mentioned, we found many APIs that could be useful while developing the application; many are already listed under the previous question. We also found [API]s for music information:
| Music API | Description |
|---|---|
| YouTube | We can get a list of music and use a player inside our app to get the notes and decibels. |
| Spotify | There is a [plugin] that helps with the integration with their API. There is also a package we can use to play a song from a URI, but it requires user authentication. |
| Audd | We can use it to recognize the music playing live and relate it to the pre-built sequence. This API also gives us links to preview the songs. |
| Deezer | We can use it to search for music and play. |
| iTunes | We can use it to search for music and play a preview. |
| lastFM | We can use it to search for music and play the full track. |
| ACRCloud | Music recognition. |
| Gracenote | Music recognition. |
| Dejavu | Free, open-source music recognition. Downside: you need to host your own [database]. |
We also found [API]s for social media, camera, device location, augmented reality, filters (DeepAR and Banuba), flashlight, audio playback, and machine learning (Cloudmersive, PixLab, Apache PredictionIO, Sentence Clustering API and wit.ai).
Question - What’s the best approach for the application: web or mobile?
Answer - To start we will develop a mobile application that runs on tablets. In the MVP we will enable the web capabilities and make the necessary UI adaptations.
Question - How will we store, manage and process data? And which [database]/[server] provider should we use?
Answer - This is a big one and so we will break this down into additional questions:
Question - The first question we should answer is: do we need an [SQL] or [NoSQL] [database]?
Answer - We will use a [NoSQL] [database] ([Firebase]) because it is the far superior option even with its downsides. Keeping the [database] consistent requires extra work since we need duplicated data, but this is not a problem in [Cloud Firestore] because [Firebase] provides many mechanisms to keep the data clean. However, once we outgrow Firestore and move to our own [server], implementing those functions to keep the data organized will be a big job.
For the prototype we will use [Cloud Firestore] and switch to our own [server] hosted on [Google Cloud Platform] once we start working on the [MVP]; at that point we will implement all the necessary protocols to keep the data from getting messy (see the [Cloud Firestore] and [Google Cloud Platform] pricing pages for costs).
Question - The second question we need to answer is: are we willing to put a lot of effort into the backend? A solid [backend] requires special care with security, infrastructure, etc. I prefer to put more effort into the [frontend] and use a [BaaS], but that can get expensive over time.
Answer - We should focus on the [frontend] and, most importantly, on making sure the project functions as a whole; therefore, any [BaaS] we can use relieves us from having to focus on yet another problem. This leaves me with the same conclusion as in the questions above: we start the prototype on the well-established and reliable [BaaS] [Cloud Firestore] provided by [Firebase] and switch over to our own servers and data management for the [MVP] in order to lower costs.
Question - How will our [database] be structured? How much information should it hold?
Answer - This is answered in the Data Structure section.
Question - What are the inputs and outputs of each system (apps, boards, etc.)?
Answer - We are not going to integrate with boards or any third parties in the prototype.
Question - How do we connect each user in the stadium to the data that will be sent from the console (which is in that very same stadium)? The designer will be able to create something like a “room” for that specific show; will each user need to connect to this “room”, regardless of the stadium?
Answer - The user doesn't need to check in; the app will check whether the user's GPS location is near the arena's location.
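A minimal sketch of that check, assuming the geolocator [Flutter] plugin (v7+ static API); the arena coordinates and the 500 m radius are made-up values for illustration:

```dart
import 'package:geolocator/geolocator.dart';

const arenaLat = 40.7505;   // hypothetical arena latitude
const arenaLng = -73.9934;  // hypothetical arena longitude
const radiusMeters = 500.0; // hypothetical "near the arena" threshold

/// True when the phone's GPS position is within the arena radius,
/// so no manual check-in is needed.
Future<bool> isUserAtArena() async {
  final position = await Geolocator.getCurrentPosition();
  final distance = Geolocator.distanceBetween(
      position.latitude, position.longitude, arenaLat, arenaLng);
  return distance <= radiusMeters;
}
```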
Question - How can we connect the [DMX] (lighting controller) with the designer/user (if needed)?
Answer - We will not use DMX in the prototype.
Question - How do we create the mesh of phones? How can we know how far apart the phones are from each other in every direction?
Answer - We will use input from the user to select which section he is currently in, and based on that he will receive the combination of colors that the designer built for that section.
For the prototype we are not going to relate the distance between phones to anything.
Question - How will we synchronize the lights with the live show?
Answer - We can use the existing solutions we found to identify the notes and decibel levels in each part of the music; this way we can synchronize the lights. But this is experimental and we are not sure it’s the best approach. The first solution we will implement is to let the designer synchronize the lights during the show by adding or removing time from the sequence.
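A minimal sketch of that first solution, where the designer nudges the pre-built sequence forward or backward during the show; ShowClock and its members are hypothetical names:

```dart
/// Tracks where in the pre-built sequence the show currently is,
/// including the designer's live adjustment.
class ShowClock {
  final DateTime showStart;
  Duration designerOffset = Duration.zero; // adjusted live from the Console

  ShowClock(this.showStart);

  /// Nudge the whole sequence, e.g. nudge(const Duration(milliseconds: 500)).
  void nudge(Duration delta) => designerOffset += delta;

  /// The position in the pre-built sequence that should be playing now.
  Duration get sequencePosition =>
      DateTime.now().difference(showStart) + designerOffset;
}
```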
Market Research
Question - Who are the main competitors and how do they work?
Answer - We have found only one direct competitor, DeviceMesh; however, there are many indirect competitors in the form of LED wristbands: CrowdLED and Xyloband are the two big players in the industry, with many small vendors around the world.
User Research
We are relying on information provided by the client (Michael) and on basic assumptions, although proper user research is strongly recommended.
Question - Who is the primary user of the [Dance Console]?
Answer - We are assuming that the primary user is the designer that will build the show.
Question - Who is the primary user of the [Dance] app?
Answer - We are assuming that the primary users will be frequent concertgoers aged 15 to 35, roughly 25% male and 75% female. We assume that only 15 to 25 percent of one-time concertgoers will use the app.
[Dance]
Technological Research
Question - How to control the screen brightness?
Answer - There is a plugin called Screen that lets us control the screen brightness. It has a function we can use to set the brightness from the lowest level to the highest (0 to 1). The user needs to grant the app permission for this, but it’s easy to ask, explaining that it is an essential feature.
Question - How to keep the screen on?
Answer - To keep the screen on, we can use the Wakelock plugin or the same Screen plugin from the previous question. These [plugins] let us manage the phone’s screen so we can prevent it from turning off. Both [plugins] require the user’s permission.
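A minimal sketch covering both questions, assuming the Screen plugin discussed above (the Wakelock plugin offers equivalent enable/disable calls):

```dart
import 'package:screen/screen.dart';

/// Called once the show starts: full brightness, screen kept awake.
/// Both calls require the user to have granted the relevant permission.
Future<void> prepareForShow() async {
  await Screen.setBrightness(1.0); // brightness on a 0.0-1.0 scale
  Screen.keepOn(true);             // prevent the phone from sleeping
}
```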
Question - Can an app turn the phone screen on and off?
Answer - No, because there are only [plugins] to keep the screen awake. This is for security reasons: if a screen turns off, it is probably locked with a password. But if we prevent the screen from sleeping, turning it back on is not something to worry about.
If we need to turn the screen off because of battery issues, we can disable the [plugin] that keeps the screen awake and the screen will eventually turn off.
Question - If we use 3G to connect all the phones, how are we going to deal with individual bad connections?
Answer - It may be an expensive idea, but if the event organization could provide Wi-Fi to all users, individual bad connections wouldn’t be such a big problem.
Other connection possibilities were also found, such as mesh networks, which are a promising solution.
Mesh networks connect many devices without using the internet. They rely on several technologies, but Bluetooth is the main one. We found several options for embedding this technology in our application:
| Mesh Network | Description |
|---|---|
| P2pkit | It can connect two devices via Bluetooth, but reading their documentation we didn’t find a method to send a message to more than one device. |
| Bridgefy | It has methods to communicate with more than one device. It doesn’t support [Flutter] yet, but it supports iOS and Android natively, which we can bridge to from Flutter. Their prices are good. |
| Nearby Devices | It’s a plugin that allows devices to communicate with each other using infrastructure Wi-Fi networks, peer-to-peer Wi-Fi and Bluetooth Personal Area Networks (PAN). The plugin has some bugs, and to get the data flow we need we would have to develop it ourselves. |
The Bridgefy solution is a promising way to eliminate the [server] in the future. We plan to use Firebase for all the app’s needs first and then, once there is enough time and budget, implement Bridgefy.
To deal with latency caused by the phones’ hardware we can improve the software implementation, reducing latency as much as possible. The first step is to choose a proper state manager and follow best practices. Our suggestion is to use reactive programming, so we need a state manager that provides high performance and reactivity; state managers that could fit our project include BLoC, MobX, or even Flutter’s native setState and ChangeNotifier.
We also need to use Flutter components carefully, as some are expensive in terms of performance. For instance, the Opacity widget is very expensive, and if we need opacity for some reason it is better to consider using transparent colors instead. These performance concerns will be handled by the developers; we will make sure the user has a great experience.
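To illustrate the Opacity concern, here is a generic [Flutter] example (not project code) contrasting the two approaches:

```dart
import 'package:flutter/material.dart';

// More expensive: Opacity composites its subtree into a separate layer.
Widget expensive() => Opacity(
      opacity: 0.5,
      child: Container(color: Colors.blue),
    );

// Cheaper: bake the transparency into the color itself.
Widget cheap() => Container(color: Colors.blue.withOpacity(0.5));
```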
Question - Will the public at the show have to download an application?
Answer - We know that downloading an app is a lot of effort for someone attending a concert, but a possible solution is to email users beforehand, informing them that there will be a light show they can be part of and asking them to download the app before the event.
Considering that we have to work with GPS, and given the policies of both Android and iOS, it’s a better approach to focus on mobile applications; this way we have more control over the devices.
For the prototype the user will have a mobile application, but for the [MVP] we can try [PWA]s, Instant Apps and other technologies that simplify user acquisition.
Question - Research and document [API]s that could be used to enhance the product.
Answer - As mentioned, we found many APIs that could be useful while developing the application; most of them are already listed under the same question in the section above. We also found [API]s for chord information and [plugins] to measure noise levels and recognize notes, as follows:
Chord Finder Software/[API]s/[Plugins]
- Sonic Visualiser: a free, open-source, cross-platform desktop application for music audio visualisation, annotation and analysis. It has a [plugin] for chord recognition, which I tested by playing some simple chords on guitar (I don’t think it would work well at a concert); it was fairly accurate, although it is a desktop, non-live application, meaning it records first and finds the chords afterwards, so it is not a “ready-to-go” API.
- Scales Chords Finder: a live chord sound finder that is still in beta, so it is not very accurate.
iOS/Android package for measuring decibels
Notes recognition links
| Name | Description |
|---|---|
| Beethoven | An audio processing library that provides an interface to solve pitch detection problems of music signals. Available for Swift. |
| Aubio | A tool designed for the extraction of annotations from audio signals. It performs pitch detection, beat tapping and/or MIDI stream production from live audio. |
| Music Sheet Transcriber | An interesting article about a music notation application made by Haohui. Available on the web. |
| Detecting piano notes | A Web Audio API experiment by David Gilbertson that detects which piano key is being played. |
| OMR-Datasets | A repository that contains useful information and links about Optical Music Recognition tasks. |
- https://dsp.stackexchange.com/questions/10364/note-recognition-software
- https://link.medium.com/wIXWpo8Gedb
iOS/Android package for detecting notes
We also found [plugins] for file picking, for QR codes (qr_flutter and qrcode), for battery level (battery, battery_info and battery_indicator), for screen control (brightness included) and for device vibration.
Question - What kind of data are we going to receive from the Dance Console?
Answer - We will only receive colors related to time, notes and decibels. Firebase will receive the user’s information (where the phone is) and respond with only the important data (colors). This is because we can’t rely on devices with low memory to process all the data, so we delegate that control to our [database]. The smartphone will only have to control the screen brightness and handle other predicted situations.
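A hedged sketch of the receiving side, assuming the cloud_firestore plugin; the collection and field names (shows, sections, cues) are hypothetical:

```dart
import 'package:cloud_firestore/cloud_firestore.dart';

/// Streams the color cues for one section of one show. The phone only
/// receives the already-computed cues; all heavy processing stays in the
/// database, as described above.
Stream<List<Map<String, dynamic>>> watchSectionCues(
    String showId, String sectionId) {
  return FirebaseFirestore.instance
      .collection('shows')
      .doc(showId)
      .collection('sections')
      .doc(sectionId)
      .snapshots()
      .map((doc) =>
          List<Map<String, dynamic>>.from(doc.data()?['cues'] ?? []));
}
```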
Question - Should we calculate in which section the user is inserted in the client-side or server-side?
First we will have the user input his location, but once we have a designed arena with coordinates we will have two options:
- The app receives from the API a list of polygons with coordinates and a color associated with each, and calculates the closest polygon on the client side;
- The app sends the user’s location to the API and receives the colors for the closest polygon, calculated on the server side.
Answer - It’s better to keep the logic on the [server] side: the app should only query [Firebase] with the coordinates and receive the color sequence for that specific location. For that we can use Cloud Functions.
Processing on the server side is better because we must consider phones with the lowest processing power; the application sends the coordinates to the API and receives a color, or a list of colors with a time schedule for each. It also follows the pattern of separating the [backend] from the [frontend], which is more standardized. For a moment I thought the client side would consume less mobile data, but I realized it could consume more, depending on the number of sections: if there are too many of them, it would consume more data than the server-side option.
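A hedged sketch of the chosen server-side flow as seen from the client, assuming the cloud_functions plugin and a hypothetical Cloud Function named getColorsForLocation:

```dart
import 'package:cloud_functions/cloud_functions.dart';

/// Sends the phone's coordinates and receives only the color schedule for
/// the closest section; the polygon matching happens on the server.
Future<List<dynamic>> fetchColorsFor(double lat, double lng) async {
  final callable =
      FirebaseFunctions.instance.httpsCallable('getColorsForLocation');
  final result = await callable.call(<String, dynamic>{
    'lat': lat,
    'lng': lng,
  });
  return result.data as List<dynamic>;
}
```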
Question - The user probably needs at least 10% battery to get home after the concert; should we turn off the screen when the battery is nearly drained, or is reducing the brightness enough?
Answer - We can lower the screen brightness when the battery drops to around half and turn the screen off when it reaches 10%. We can use one of the battery-level [plugins] mentioned above to check the level.
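A minimal sketch of that policy, assuming the battery and screen [plugins] mentioned above; the half-battery and 10% thresholds come from the answer, while the 0.3 brightness value is made up:

```dart
import 'package:battery/battery.dart';
import 'package:screen/screen.dart';

/// Dims the screen around half battery and lets it sleep at 10%,
/// so the user keeps enough charge to get home.
Future<void> applyBatteryPolicy() async {
  final level = await Battery().batteryLevel; // 0-100
  if (level <= 10) {
    Screen.keepOn(false); // stop forcing the screen awake
  } else if (level <= 50) {
    await Screen.setBrightness(0.3); // dimmed but still visible
  } else {
    await Screen.setBrightness(1.0);
  }
}
```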
Question - When the user loses network, will the app reduce the brightness and change the screen to a neutral color like black?
Answer - We can use a connectivity [plugin] to check whether there is an internet connection, and with that information adjust the brightness or change the screen color.
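A hedged sketch, assuming the connectivity [plugin]; showNeutralScreen and resumeShow are hypothetical app callbacks:

```dart
import 'package:connectivity/connectivity.dart';

/// Falls back to a neutral (e.g. black) dimmed screen while offline and
/// resumes the show when the connection returns.
void watchConnection({
  required void Function() showNeutralScreen,
  required void Function() resumeShow,
}) {
  Connectivity().onConnectivityChanged.listen((ConnectivityResult result) {
    if (result == ConnectivityResult.none) {
      showNeutralScreen();
    } else {
      resumeShow();
    }
  });
}
```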
Question - What is the average GPS precision in meters using a 3G/4G connection?
Answer - GPS has its pros and cons. Location accuracy ranges from 10 to 100 meters on most devices depending on signal strength; the best average accuracy we can expect is within 4 meters. The signal is more accurate in urban areas, and somewhat less so in rural areas depending on the geography of the place. The biggest problem with 3G/4G networks is geographical coverage: 3G is very slow compared to 4G, and if the user’s mobile network falls back to 3G there could be problems with the GPS, so it’s better to use Wi-Fi in most cases. There are apps like Strava, used for tracking runs, that load the map and use GPS offline; we can try this approach.
Question - Since the GPS location updates frequently, what’s the best way to make fewer requests to Firebase (considering that its pricing depends on how many requests we make)?
Answer - We discussed storing the data in cache and first checking whether the user’s current location is close to the last one; if it’s close enough, we don’t make another request. Another option is to wait some time before making another request, i.e. not making a request on every GPS update.
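A minimal sketch combining both ideas, assuming the geolocator [plugin]; the 20 m and 30 s thresholds are made-up values:

```dart
import 'package:geolocator/geolocator.dart';

Position? _lastQueried; // position used for the last Firebase request
DateTime _lastRequest = DateTime.fromMillisecondsSinceEpoch(0);

/// True only when the user actually moved and the cooldown has elapsed,
/// so we don't pay for a Firebase request on every GPS update.
bool shouldQueryFirebase(Position current) {
  const minMoveMeters = 20.0;                // "close enough" radius
  const minInterval = Duration(seconds: 30); // wait between requests

  final moved = _lastQueried == null ||
      Geolocator.distanceBetween(
              _lastQueried!.latitude, _lastQueried!.longitude,
              current.latitude, current.longitude) >
          minMoveMeters;
  final cooledDown = DateTime.now().difference(_lastRequest) > minInterval;

  if (moved && cooledDown) {
    _lastQueried = current;
    _lastRequest = DateTime.now();
    return true; // make the (billed) request
  }
  return false; // reuse the cached color sequence
}
```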
Question - What can we do to deal with transition zones (the ones between one section and the other)?
Answer - We have two options:
- Give them a neutral color (which I don’t think is a good idea);
- Get the color of the closest section, as sketched below.
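A minimal sketch of the second option, assuming each section exposes a centroid; a real implementation would test against the section polygons themselves:

```dart
import 'dart:math';

class Section {
  final String id;
  final double lat, lng; // hypothetical centroid of the section polygon
  const Section(this.id, this.lat, this.lng);
}

/// Picks the section whose centroid is nearest; over an arena-sized area
/// a flat Euclidean distance on coordinates is a fair approximation.
Section closestSection(double lat, double lng, List<Section> sections) {
  double d(Section s) => sqrt(pow(s.lat - lat, 2) + pow(s.lng - lng, 2));
  return sections.reduce((a, b) => d(a) <= d(b) ? a : b);
}
```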
Question - Can we capture the audio from a mixer to the phone or computer?
Answer - Yes; some hardware adapters may be necessary, but it’s possible.
Question - Can we capture the audio from a single instrument on the mixer?
Answer - That’s a tough question. Since there are many kinds of mixers, some digital and some analog, this would of course be easier on a digital mixer (most clients would probably use digital mixers anyway), but it’s possible on an analog one if we use a special protocol, because the output audio signal on those mixers carries all the instruments mixed together. It would also be necessary to use the headphone output and filter the instruments with some adaptations.
Question - What’s the battery impact of keeping the microphone always on?
Answer - We found an article explaining that it depends a lot on the phone’s hardware; Figure 4 in that article shows the time it takes to drain the whole battery. This still needs to be tested experimentally; a possible option is to have only one audio receiver close to the stage.
Conclusions
We are considering three options: displaying colours based on decibels, on notes, or on a pre-built sequence. We will start with a pre-built sequence made by the designer inside the Dance [Console]. There will be information about a song (notes, decibels and time), and based on those three parameters the designer will decide which colour to display. The first parameter available to the designer will be time; as development continues we will add the others.
During the show the colours will be displayed based on time and adjusted inside the Control section of the [Dance Console]. As development continues, notes and decibels will be processed using one receiver next to the stage. This information will be sent to all devices, first using [Firebase] as the server and later through a mesh network such as Bridgefy.
During our research we found packages for note and decibel recognition, and the Bridgefy solution looks like a great option for mesh networks.
We found many music APIs that let us play a song so we can extract all the information (notes and decibels). At some point we can also let the user upload a song to be analyzed. We know that the decibel information we extract in advance will be very different from a live show, but we are thinking of storing a decibel percentage for each part of the song; then we only need to know the highest decibel level at the live show and can map each part to that percentage.
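A minimal sketch of that decibel-percentage idea; the function names are hypothetical and the mapping is exactly the proportion described above:

```dart
/// Stored ahead of time: each part of the song as a percentage of the
/// song's own peak decibel level.
double toPercentage(double partDb, double songPeakDb) =>
    partDb / songPeakDb * 100;

/// Used at the show: maps a stored percentage onto the live peak level
/// measured in the venue.
double liveLevelFor(double percentage, double livePeakDb) =>
    percentage / 100 * livePeakDb;
```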