Last time we looked at a rudimentary – although in some senses complicated – spatial sound implementation for our ABB IRB 6620 industrial robot model inside HoloLens. It was rudimentary because we added a single sound at the root of the robot, and complicated because we then had to track the status of each of the robot’s parts and only turn the sound off once all of them had stopped.
In this post we’re going to look at the second of the three design options we saw last time:
- A single sound is assigned to our robot
  - When the robot stops completely, so does the sound
- The same sound is assigned to each of the robot’s parts
  - When each part stops moving, so does the sound for that part
- A different sound is assigned to each of the robot’s parts
  - When each part stops moving, so does the sound for that part
It was simple enough inside Unity to copy & paste the AudioSource values from the root of the robot to each of the parts: Unity has a handy “Paste Component As New” option in the Inspector, which saves a great deal of time and effort.
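Of course, if clicking through six parts by hand gets tedious, the same copy can be scripted. The helper below is only a hypothetical sketch (the CopyAudioToParts name is mine, not something in the project): attached to the robot’s root alongside the original AudioSource, it copies that source’s key settings onto every part that has a Rotate component and then disables the root source, since playback is driven per part from Rotate anyway.

using UnityEngine;

// Hypothetical helper (not part of the original project): copies the key
// settings of the root AudioSource onto each part that has a Rotate
// component, then disables the root source so only the parts emit sound.
public class CopyAudioToParts : MonoBehaviour
{
    void Awake()
    {
        var rootSource = GetComponent<AudioSource>();
        if (rootSource == null)
            return;

        foreach (var part in GetComponentsInChildren<Rotate>())
        {
            // Skip the root itself if it happens to carry a Rotate component
            if (part.gameObject == gameObject)
                continue;

            var source = part.gameObject.GetComponent<AudioSource>();
            if (source == null)
                source = part.gameObject.AddComponent<AudioSource>();

            // Copy the settings that matter for the spatial buzz
            source.clip = rootSource.clip;
            source.loop = rootSource.loop;
            source.volume = rootSource.volume;
            source.pitch = rootSource.pitch;
            source.spatialize = rootSource.spatialize;
            source.spatialBlend = rootSource.spatialBlend;
            source.rolloffMode = rootSource.rolloffMode;
            source.minDistance = rootSource.minDistance;
            source.maxDistance = rootSource.maxDistance;
        }

        // The root should no longer make any noise of its own
        rootSource.enabled = false;
    }
}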
Then we simply had to make sure the root AudioSource is disabled (or removed) and that the code uses the AudioSource attached to the part itself rather than relying on our root-attached Buzz.cs component. Here’s the updated Rotate.cs file:
using UnityEngine;

public class Rotate : MonoBehaviour
{
    // Parameters provided by Unity that will vary per object
    public int partNumber = 0;        // Part number to help identify when all are stopped
    public float speed = 50f;         // Speed of the rotation
    public Vector3 axis = Vector3.up; // Axis of rotation
    public float maxRot = 170f;       // Maximum angle of rotation (to constrain movement)
    public float minRot = -170f;      // Minimum angle of rotation (if == max then unconstrained)
    public bool isFast = false;       // Flag to allow speed-up on selection
    public bool isStopped = false;    // Flag to allow stopping

    // Internal variable to track overall rotation (if constrained)
    private float rot = 0f;

    // The AudioSource attached to this part (rather than the robot's root)
    private AudioSource audioSource;

    void Start()
    {
        audioSource = GetComponent<AudioSource>();
    }

    // Resume the part's movement and restart its sound
    public void StartPart()
    {
        isStopped = false;
        if (audioSource)
            audioSource.Play();
    }

    // Reverse the direction of rotation
    public void ReversePart()
    {
        speed = -speed;
    }

    // Stop the part's movement and silence its sound
    public void StopPart()
    {
        isStopped = true;
        if (audioSource)
            audioSource.Stop();
    }

    // Toggle between stopped and moving, reversing direction on restart
    public void TogglePart()
    {
        isStopped = !isStopped;
        if (audioSource)
        {
            if (isStopped)
            {
                audioSource.Stop();
            }
            else
            {
                ReversePart();
                audioSource.Play();
            }
        }
    }

    void Update()
    {
        if (isStopped)
            return;

        // Calculate the rotation amount as speed x time
        // (may get reduced to a smaller amount if near the angle limits)
        var locRot = speed * Time.deltaTime * (isFast ? 2f : 1f);

        // If we're constraining movement (via min & max angles)...
        if (minRot != maxRot)
        {
            // Then track the overall rotation
            if (locRot + rot < minRot)
            {
                // Don't go below the minimum angle
                locRot = minRot - rot;
            }
            else if (locRot + rot > maxRot)
            {
                // Don't go above the maximum angle
                locRot = maxRot - rot;
            }
            rot += locRot;

            // And reverse the direction if we're at a limit
            if (rot <= minRot || rot >= maxRot)
            {
                speed = -speed;
            }
        }

        // Perform the rotation itself
        transform.Rotate(axis, locRot);
    }
}
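As an aside, because each part now owns its sound, driving the whole robot from elsewhere is just a matter of walking the parts and calling the public methods. A hypothetical usage sketch (RobotController and robotRoot are my names, not part of the project):

using UnityEngine;

// Hypothetical usage sketch: stop or restart every part (and therefore
// every part's sound) in one go by iterating the Rotate components.
public class RobotController : MonoBehaviour
{
    public GameObject robotRoot;   // assumed reference to the robot's root object

    public void StopAllParts()
    {
        foreach (var part in robotRoot.GetComponentsInChildren<Rotate>())
            part.StopPart();
    }

    public void StartAllParts()
    {
        foreach (var part in robotRoot.GetComponentsInChildren<Rotate>())
            part.StartPart();
    }
}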
I was pleasantly surprised by how well it worked. While I’d expected having six simultaneous audio sources playing to be an issue, it actually just led to the combined sound being louder and somehow more realistic: as you stop individual parts, they stop making noise, and stopping the last part causes the robot to fall silent.
Here it is in action. I’ve tried to give a sense of the spatial nature of the sound, which may or may not come through. At least you should be able to see how the various parts’ sounds get combined.
So technically HoloLens can clearly manage the load associated with having multiple spatial AudioSources attached to a model in this way – at least in this particular usage scenario. That means technical feasibility (for my purposes) is proven, and the question of whether to attach a single sound or multiple ones becomes a design decision. Which is great.
Now that I’ve tried it, I’m leaning towards the current approach being the preferred option. If we need further differentiation between the individual parts, we could also tweak the values of each AudioSource while keeping the same base audio file, of course.
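To give an idea of what that could look like, the sketch below nudges each part’s pitch and volume based on its Rotate.partNumber while reusing the same clip. It’s purely hypothetical (the PartSoundVariation name and the scaling factors are mine), not something I’ve added to the project:

using UnityEngine;

// Hypothetical sketch (not in the project): differentiate the parts by ear
// while keeping the same base clip, by nudging each part's pitch and volume
// based on its Rotate.partNumber. Attach alongside Rotate and AudioSource.
public class PartSoundVariation : MonoBehaviour
{
    void Start()
    {
        var rotate = GetComponent<Rotate>();
        var source = GetComponent<AudioSource>();
        if (rotate == null || source == null)
            return;

        // Arbitrary scaling: spread pitches a little either side of normal
        // (e.g. part 1 -> 0.94, part 6 -> 1.14) and vary volume slightly.
        source.pitch = 0.9f + 0.04f * rotate.partNumber;
        source.volume = Mathf.Clamp01(0.6f + 0.05f * rotate.partNumber);
    }
}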
But I’m still going to try separate audio files for each part, to see how that behaves. I’m now fairly sure it will also prove feasible – and potentially very interesting – but it will clearly take more design effort; without that, the experience may prove overwhelming or confusing. We’ll see!