VIS 160B Project
Week 1:
Worked on fixing lighting and researching best practices for baked lighting in moving VR environments.
Week 2:
This was the preliminary setup for the audio analysis that was applied to the cubes. It initializes multiple variables that drive the actual frequency analysis and apply it to instanced static meshes, which can be seen in the photo below. The photo above shows:
A Sequence node that runs each item in the order needed for the analysis to work correctly.
Setting the sound variable to an actual audio file in order to get its duration, giving a usable variable for checking the percentage played and other values.
Spawning multiple copies of an instanced object to utilize in the actual audio analysis. This required inputting the Number of Cubes (equal to the number of frequencies being analyzed by the NRT) and Spacing (at the time, the distance between the cubes), which were then run through a for-each loop, spawned at different transform locations, and added to an array that is later used to adjust their sizes to match corresponding frequencies.
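The spawn loop can be sketched in plain C++ (the actual work happens in a node-based Blueprint, so names like `SpawnSoundCubes` and the choice of spacing along a single axis are illustrative assumptions, not the real node names):

```cpp
#include <vector>

// Hypothetical stand-ins for the Blueprint's transform and spawned mesh.
struct Vec3 { float X, Y, Z; };
struct SoundCube { Vec3 Location; };

// Spawn NumCubes cubes spaced evenly along one axis and collect them in
// an array, mirroring the Blueprint for-each loop: the array is what
// later lets each cube be scaled to its matching frequency band.
std::vector<SoundCube> SpawnSoundCubes(int NumCubes, float Spacing) {
    std::vector<SoundCube> Cubes;
    Cubes.reserve(NumCubes);
    for (int i = 0; i < NumCubes; ++i) {
        Cubes.push_back({ Vec3{ i * Spacing, 0.f, 0.f } });
    }
    return Cubes;
}
```

The key design point is keeping the spawned references in an array, since the NRT analysis later needs to address each cube by index.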
The final node group is where the audio analysis begins. The audio duration, playback percentage, and Sound Cubes array are fed into the Synesthesia scale cubes node, which is shown below.
Everything done in the photo above is mostly preparing variables for the actual audio analysis. The difficult part is doing anything after all of these have run. The Synesthesia audio analysis plugin is NRT (non-real-time), meaning everything is created and run all in one go.
The above photo shows the NRT audio analysis portion of the blueprint. It’s relatively straightforward once all the variables are fed into the macro.
The audio analysis only works if it has the position in the audio in seconds, which is why the music duration and percentage played are used to calculate it. This is set to a variable, Position in Audio.
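The calculation itself is a single multiplication; a minimal sketch, assuming the percentage played is expressed as a 0–1 fraction:

```cpp
// Position in Audio: track duration multiplied by the fraction of the
// track played so far. PercentPlayed is assumed to be in [0, 1].
float PositionInAudio(float DurationSeconds, float PercentPlayed) {
    return DurationSeconds * PercentPlayed;
}
```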
Save Frequency Band Strengths is where the audio analysis occurs, with results saved to an array of integers that are later applied as transforms to the sound cubes. The position in audio and the ConstantQNRT variable, Synesthesia Analysis, are fed into the NRT audio analysis node, which analyzes a set of frequency bands (this is why the number of cubes and bands must match) and adds their strengths as integers to an array.
The last portion sets the cubes to the frequency strengths using a for-each loop over the array of sound cubes. Scale is based on the frequency strength and is applied to the sound cubes using a Set World Scale 3D node.
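The scaling step can be sketched like this (plain C++, with `Cube` and `BaseScale` as hypothetical stand-ins for the Blueprint's static mesh instances and scale factor):

```cpp
#include <algorithm>
#include <vector>

// Stand-in for a spawned sound cube; only its uniform scale matters here.
struct Cube { float Scale = 1.f; };

// Drive each cube's scale from the strength of its matching frequency
// band, pairing cube i with band i -- this is why the cube count and
// band count must match.
void ScaleCubesToBands(std::vector<Cube>& Cubes,
                       const std::vector<float>& BandStrengths,
                       float BaseScale = 1.f) {
    const std::size_t Count = std::min(Cubes.size(), BandStrengths.size());
    for (std::size_t i = 0; i < Count; ++i) {
        Cubes[i].Scale = BaseScale * BandStrengths[i];
    }
}
```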
This is a good way to do audio analysis when working in VR, as it does not take up too much processing power and allows the program to keep running at a higher frame rate. However, issues do start to arise when attempting to manipulate the objects in the actual game space. Because everything done to the sound cubes happens at the very start of the program, manipulating the instanced objects afterward is extremely difficult. It also makes having multiple instances of the same blueprint running at the same time impossible.
Week 3 - 7:
I spent these four weeks making the program run infinitely. I have decided to do all the things in Unreal with VR that one really should not be doing in Unreal.
I took some tips right out of the endless runner platform and decided to move away from having the player move and more towards having the world move around the player. This began to pose some interesting and extremely frustrating issues!
Reading back through my Week 2 post, you will see I mentioned that it is difficult to manipulate the sound cubes because they are initialized and created as the program starts. This becomes a major issue when you need the entire world, including the audio visuals, to move around the player. However, if I were able to do this, I would save a ton of processing power that I could put towards other things.
I first decided to get the environment working as an endless simulation as that was not dependent on the audio visuals.
This was extremely straightforward, albeit a little annoying to get running correctly. I made the environment the user moves through into a chunk that blocks their extended view on the left and right. The train then blocks the front and back view for the most part, leaving the viewer about 100 degrees of visibility out the sides and keeping them from seeing chunks spawn. These chunks were identical and gave the illusion of moving infinitely through space even though the user is not the one moving.
I created the endless runner by making the chunks actors, then using an InterpToMovement node. This node gave me the ability to:
Have them move from point A to point B
Specify the speed of this movement by using a global variable which determines the time in seconds it will take to complete the movement
Create as many chunks as I needed in order to hide the spawning from view
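A rough sketch of what the interpolated chunk movement looks like, assuming a simple linear interpolation from point A to point B over the global duration variable (the real InterpToMovement component handles this internally; this is just the underlying math):

```cpp
// Stand-in for a world position.
struct Vec3 { float X, Y, Z; };

// Linearly interpolate a chunk from A to B over DurationSeconds,
// clamping at the end point. The global timing variable maps to
// DurationSeconds, so one value controls the speed of every chunk.
Vec3 ChunkPosition(Vec3 A, Vec3 B, float DurationSeconds, float ElapsedSeconds) {
    float T = ElapsedSeconds / DurationSeconds;
    if (T > 1.f) T = 1.f; // hold at point B once the movement completes
    return { A.X + (B.X - A.X) * T,
             A.Y + (B.Y - A.Y) * T,
             A.Z + (B.Z - A.Z) * T };
}
```

Driving every chunk from one duration value is what keeps them moving in lockstep and makes the seam between chunks invisible.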
There are still issues to iron out with having different chunks spawn in succession after certain amounts of time, but those are later-stage issues. For now I have to focus on getting the sound cubes moving in unison with the environment.
This is the endless runner capabilities from inside the train.
This is the endless runner capabilities from outside the train.
I mentioned a few times before that manipulating the sound cubes after spawning them in is difficult. Here is where I figured out how to add movement to them.
It was a relatively simple solution that required changing how the for-each loops work in Unreal. I tried a few different methods of movement, like adding the spawned cubes to an array and then manipulating them after the fact; however, every method like that caused a major uptick in performance issues.
I then moved to having the cubes initialize with movement. This would keep processing usage the same but would lock me into a single look for the audio visuals until I triggered another event. I had limited success at the start with spawning the cubes with movement: I could get them all to spawn, but only cube number 54 of an array of 55 would have movement. The issue was how Blueprints run for-each loops without any delay between iterations.
The method I was using to move the cubes was the same as the environment's. I took the array of already spawned sound cubes (spawned at the actor's world location in a set square area) and fed them into a for-each loop, which took each array element and gave it InterpToMovement. It worked for one cube! This confirmed the issue was how Blueprints run for-each loops and the need to add a delay.
I created my own For Each with Delay macro. This is mostly identical to the standard Blueprint for-each, with a delay added between each index that can be controlled externally. It can be seen below.
I then added my newly created For Each with Delay and set the delay to the length in seconds of the chunks' InterpToMovement node divided by the number of sound cubes/frequency bands. This allowed me to control the spacing of the cubes using the same global variable that controls the timing, number of cubes, and frequency bands. Each element of the sound cube array was then fed into the Add InterpToMovement Component node, which is similar to the chunks' setup but can be added to multiple objects in succession. This simple combination of nodes let the audio analysis objects move with the chunks while only sacrificing some processing to the movement of the sound cubes in the first 21 seconds, which is negligible.
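The timing math behind the macro can be sketched as follows (`ChunkTravelSeconds` is a stand-in for the global timing variable; with 55 cubes and a 21-second chunk travel time, the last cube starts moving just before the first chunk resets):

```cpp
// Delay between loop indices: chunk travel time divided by the number
// of sound cubes (which equals the number of frequency bands).
float DelayPerCube(float ChunkTravelSeconds, int NumCubes) {
    return ChunkTravelSeconds / NumCubes;
}

// Time (in seconds after begin play) at which cube i receives its
// InterpToMovement component -- the cubes are staggered so their
// spacing matches the chunk movement.
float MovementStartTime(int CubeIndex, float ChunkTravelSeconds, int NumCubes) {
    return CubeIndex * DelayPerCube(ChunkTravelSeconds, NumCubes);
}
```

Deriving both the delay and the cube spacing from the one global variable is what keeps the visualizer in sync with the chunks without any extra tuning.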
This shows the audio analysis static meshes moving with the chunks using the above blueprints.
The above video shows the audio analysis static meshes moving with the chunks from an aerial view. This shows them spawning in all at once then individually having movement added to them. It can be seen here that the for each loop only runs for a total of 21 seconds as once the first chunk resets, all of the audio analysis static meshes have had movement added to them!
I then began trying to have multiple instances of the audio analysis objects running at the same time. So far I am having little to no luck with that, but I believe it has something to do with analyzing audio using the same ConstantQNRT variable. I will be testing it with a new variable and an entirely new blueprint. If and when I get that working, I should be able to create multiple instances of audio analysis that can be swapped out at any point using volumes.
I am also part way through creating different environments for the user in the train to move through. This is proving somewhat difficult with the endless runner; however, I am close to a solution. I switched my InterpToMovement structure for the chunks to initialize an array of actors (the different varieties of chunks) that then have their visibility switched after existing for a set amount of time. The only issue this poses is the annoyance of having to individually set the timing for these rather than being able to create a global variable that can be controlled in the editor.
Once I have the different environments added, the only things left will be polishing the audio visualizers to create some more interest, then dealing with lighting and some object/environment filler if there is time.
Week 8: