This post will talk about how to build a reflection matrix about a given axis using simple projection equations. This matrix can be useful in a variety of gameplay mechanics, from simple physics that bounces bullets off walls to ray-tracing techniques that bounce rays off meshes from a light source. The image below gives an overview of what we are trying to achieve.
Variables in play
The following variables are in play:
b = incident vector
b’ = reflected vector
q = axis along which the reflection should happen
p = projection vector of b along q
For input, we just need the single axis, which is a normalized vector. It should be normalized to make the calculations easier and so that the resulting matrix does not modify the magnitude of the vectors it operates on. For those short on time who can't read through the explanation, here are the equations:
Reflection matrix: I - 2qqᵀ
Reflected vector: b’ = b - 2(qqᵀ)b
There’s also a code snippet at the bottom.
Reflection Matrix generation
Given the incident vector (b), we want to reflect it about the plane defined by the normal vector q to produce the reflected vector (b’). We restrict the axis vector (q) to be normalized so that qᵀq equals 1, which keeps the calculations simple. The general idea is that we need to subtract a mystery vector from b (twice) to land at vector b’. This vector happens to be the projection (p) of vector b onto our axis vector q. Adding -p to b puts us on the plane perpendicular to q; adding one more -p gives us the reflected vector.
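Written out, the projection and the reflection are:
\[ p = (q \cdot b)\,q = q\,(q^{T} b) = (q q^{T})\,b \]
\[ b' = b - 2p = (I - 2\,q q^{T})\,b \]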
The image below should summarize all the equations in play that are going to be translated to the code.
//NOTE: This code is not optimized.
// Mat3x3 and Vec3 are my custom structures
inline Mat3x3 makeReflectionMatrix(const Vec3& axis)
{
Mat3x3 refMat = Mat3x3::Identity();
Vec3 normAxis = axis;
normAxis.normalize();
for(int i = 0; i < 3; ++i)
{
for(int j = 0; j < 3; ++j)
{
float val = refMat.get(i, j); // get ith row and jth col element
val = val - (2 * normAxis.buffer[i] * normAxis.buffer[j]);
refMat.set(i, j, val);
}
}
return refMat;
}
Conclusion
Hope this helped someone in some capacity. As always, please do visit again.
In this post, I want to note down the bare minimum requirements for rendering something to the screen using the Vulkan API, and talk about how a Vulkan RenderPass relates to SubPasses and the Framebuffer. I assume there are already valid physical and logical devices created that support queue(s) with graphics capabilities.
RenderPass
Render-pass defines a “contract” of attachments. These attachments are shared across the sub-passes, Frame-buffer and also the Fragment shader. “VkRenderPassCreateInfo” is the structure to be populated to construct a renderpass.
It takes in an array of attachments, sub-passes and dependencies among those sub passes.
The attachments are all image-views. No Buffers are allowed!
The way to interpret the attachments is set up using “VkAttachmentDescription“. This establishes the load and store operations, which define how the image data is treated at the start and end of the pass.
When finalLayout is specified in the structure, the image is transitioned to the required layout once the render-pass ends.
The last point above is quite interesting, as it lets us transition an image to a different layout automatically and pass it along to another render-pass if we choose to, without performing a manual layout transition.
Sub-Passes
Each Render pass can have one or more sub-passes. This post is about making sense of how resources are shared, so I’ll ignore everything else.
Render-pass declares the sub-passes and how they interact with each other.
If sub-passes use any resources, they must be referenced from the attachments declared in the Render-Pass by an unsigned integer index.
The color attachments defined in the sub-pass form a contract with the outputs of the Fragment shader. The values are written to those attachments after passing through the blending stage.
A Graphics Pipeline operates on a single sub-pass within the render-pass. Therefore, the attachments defined within the sub-pass form a direct contract with the Fragment Shader.
Frame-Buffer
The Frame-buffer is the final piece of the puzzle where it all comes together.
Up to this point, we have only been talking about descriptions of attachments. Nowhere have we passed the actual resources that will be consumed!
The GPU-Backed image resources are passed to the framebuffer.
These directly map to the resources defined while creating the render-pass.
So, if you consider the render-pass a declaration of resources, the Framebuffer would be their definition.
The Framebuffer also declares the render dimensions. This is just a declaration though; the actual render area is defined in VkRenderPassBeginInfo, which the spec requires to fit within the dimensions defined by the Framebuffer!
Conclusion
With a simple swapchain attachment declared in the render pass, referenced by the sub-pass and defined in the Frame-buffer, we can issue vkCmdBeginRenderPass with a clear color value and see a solid color on the screen within the defined render area!
This post is an attempt to elucidate and demystify the “World to Local Matrix”. This is one concept that you will have to face sooner or later during your game development adventures. Typically, game engines expose a helper function like “InverseTransformMatrix” or something similar to make this easy. But it's important to understand what's going on behind the scenes to fully appreciate it. I will not go into the actual calculation of inverse matrices in this post, but rather how to visualize them when transforming a point from world to local space. A short recap helps to establish a few things:
Basis vectors are a set of vectors that define a coordinate space. (1, 0) and (0, 1) represent X and Y axes for 2D coordinate space.
A point in a coordinate space is a linear combination of basis vectors.
World space has basis vectors that sit in the Identity matrix.
This diagram should demo the idea of points being a combination of basis vectors. It's vital that this piece of information is completely understood. If we want to represent a point, no matter the basis, it is going to be a linear combination of the basis vectors under consideration.
A New Basis
Now that we have established that a point is a linear combination of basis vectors, let's set up a new 2D basis that's rotated 45 degrees from the world-space basis and scaled up. This is how our new basis looks, along with its matrix representation:
Going by the rules set above that a point is a linear combination of basis vectors, (1, 0) in local space of A’ would be (4, 4) in the world space (space of A). This is basically the X-Axis of our new basis.
The vector (4, 4) is one unit in our new basis along X’
The vector (-4, 4) is one unit in our new basis along Y’
X’ and Y’ are effectively scaled up. Their magnitude is no longer 1.
X’ and Y’ are rotated by 45 degrees from the default basis vectors.
I’ve intentionally chosen slightly weird vectors to act as our new basis; the properties above follow from that choice. Let's take another example and try to work out a local position. Note: the bracket notation is used to represent column matrices/vectors.
Let P = (4, 0) be a point in world-space and we want to represent that in local space of A’.
Let P’ = (a, b) be the point we require in local-space of A’.
Since P’ should be a linear combination of vectors of A’, we can represent it as follows
a (4, 4) + b (-4, 4) = (4, 0)
On solving the above equation and back-substitution, we get a = 0.5 and b = -0.5.
It’s that simple, and those are the values that we want. (a, b) represents a point in the local space of A’ whose world-space counterpart is (4, 0)! When written in matrix notation, we get a familiar local-to-world matrix representation, except in this case the unknowns are on the left-hand side and we are solving two equations to obtain them.
On moving things around a little bit, we land on a more convenient format for calculating (a, b), as shown in the picture above. This is simply multiplying the world-space point by the inverse matrix. Let's make sense of the inverse matrix next!
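Concretely, with our new basis vectors as the columns of a matrix (the numeric inverse here just shows the end result; the next section makes sense of it):
\[ \begin{bmatrix} 4 & -4 \\ 4 & 4 \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 4 \\ 0 \end{bmatrix} \quad\Rightarrow\quad \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 4 & -4 \\ 4 & 4 \end{bmatrix}^{-1} \begin{bmatrix} 4 \\ 0 \end{bmatrix} = \frac{1}{32}\begin{bmatrix} 4 & 4 \\ -4 & 4 \end{bmatrix} \begin{bmatrix} 4 \\ 0 \end{bmatrix} = \begin{bmatrix} 0.5 \\ -0.5 \end{bmatrix} \]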
The Inverse
What does it mean to represent a point in local space? The new basis that I chose is quite odd. Its X-axis is (4, 4) and its Y-axis is (-4, 4) when represented in world-space coordinates. We want some way to represent that (4, 4) as (1, 0) in the new basis and (-4, 4) as (0, 1). This shouldn't be magical; it should fall out of what we already have, and all we have are the basis vectors.
By linear combination of basis vectors of A’, we must transform X’ (4, 4) to (1, 0)
By linear combination of basis vectors of A’, we must transform Y'(-4, 4) to (0, 1)
In doing this, we would have our target representation where (4, 4) is transformed to (1, 0) and (-4, 4) is transformed to (0, 1).
Expanding on the example in the paragraph above, I’ve worked a part of it out by hand. This is just to demo the idea of the inverse matrix. Note that I’ve done it using a hacky approach, since we know both the local and world coordinates of a couple of points; there are obviously better ways to calculate the inverse. This is basically our World to Local Matrix!
This is a bit mind-bending, but try to think of the inverse matrix as representing a different coordinate space whose basis vectors are the ones we obtained from the calculation above. Let us represent this new basis by T.
In this new basis, (4, 4) in local space of T is (1, 0) in world space.
In this new basis, (-4, 4) in local space of T is (0, 1) in world space.
This transformation essentially scales and rotates the world-space basis vectors. They are scaled and rotated such that (4, 4) is transformed to (1, 0) and (-4, 4) is transformed to (0, 1). And this is exactly the transformation we needed, reduced to a simple local-to-world style matrix calculation, except that the point being transformed is in world space rather than in the local space of T! If you can swallow this, you've pretty much mastered the inverse. Whack your head around it a little bit. Draw some pictures. Make a mess! If you think about it, transforming (4, 4) with T gives (1, 0), which represents a single unit in the X-direction inside our basis A’.
Therefore, the inverse is basically a set of basis vectors that carries the offset required to transform A’ to the identity. Once we have the matrix that maintains this offset, any point in world space, when transformed with T (the inverse of A’), is expressed in the local space of A’.
I hope this post helped someone; check back later to see what's new. That's all for now.
The aim of this post is to demonstrate a fun way of visualizing the dot product. While doing that, I also want to demo how overlapping lights look, using shaders in Unity. By the end of this post, you should be able to whip up a shader that looks something like the picture below. The visible conical areas originating from the cuboids represent the “valid” region of the dot product as set up in the code, which can also be thought of as “field of view” regions. The cone angle and distance are exposed to the editor for tweaking.
If you wish to see a video of this, please check out my YouTube video here.
Setting Up
To prepare for dot product visualization, as always, we will need to setup the scene and scripts. Let’s go over them sequentially:
Create a new unity scene with a plane(I named it playground) at position: (0, 0, 0) and rotation: (0, 0, 0)
Create 3 cubes and set the scale to (0.05, 0.05, 0.3). These will serve as the point of origin for the lights to visualize dot products.
A C# script named Playground.cs. This will be attached to the plane we create in the first step
(Optional) An editor script for Playground.cs if you want changes to be reflected without running the game.
An unlit shader named DP_Visualizer (Right click -> Create -> Shader -> Unlit Shader). Clear out any generated code and keep just the bare minimum that compiles and returns a black color from its fragment function.
Create a material and attach the shader created in the step above.
Attach this material to the plane created in step-1.
The image below will give an overview of my setup.
Let’s start coding!
In Playground.cs, we expose following variables:
GameObjects: This is to hold the 3 cubes that we have in the scene.
Color: The same number of colors as exposed game objects. This determines the color of the positive dot product region.
Distance: This determines the distance of the colored conical shape
Angle: This is the actual angle we use to calculate the dot product.
We also need to keep track of the local position and orientation of the cube GameObjects to send them to the shader. We track local positions so that everything is relative to the plane's point of view. We send the updated values to the shader in the Update() function. The entire code for Playground.cs is as follows:
//Playground.cs
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Playground : MonoBehaviour
{
[SerializeField]
public GameObject[] m_gameObjects = new GameObject[3];
[SerializeField]
public Vector4[] m_colors = new Vector4[3];
[SerializeField]
public float m_Distance = 1;
[SerializeField]
public float m_Angle = 10;
[HideInInspector]
public Vector4[] m_localPositions = new Vector4[3];
[HideInInspector]
public Vector4[] m_directions = new Vector4[3];
MeshRenderer m_MeshRenderer;
public void Update()
{
UpdateShaderValues();
}
public void UpdateShaderValues()
{
m_MeshRenderer = GetComponent<MeshRenderer>();
for (int i = 0; i < 3; i++)
{
Vector3 pos = m_gameObjects[i].transform.position;
pos.y = transform.position.y;
m_localPositions[i] = transform.InverseTransformPoint(pos);
m_localPositions[i].w = 0;
m_directions[i] = transform.InverseTransformDirection(m_gameObjects[i].transform.forward);
m_directions[i].w = 0;
}
m_MeshRenderer.sharedMaterial.SetVectorArray("_LocalPositions", m_localPositions);
m_MeshRenderer.sharedMaterial.SetVectorArray("_Directions", m_directions);
m_MeshRenderer.sharedMaterial.SetVectorArray("_Colors", m_colors);
m_MeshRenderer.sharedMaterial.SetFloat("_Distance", m_Distance);
m_MeshRenderer.sharedMaterial.SetFloat("_Angle", m_Angle);
m_MeshRenderer.sharedMaterial.SetVector("_ObjScale", transform.localScale);
}
}
The next part is optional, but it will be very helpful if you want to check the results without playing the game every time. Create a PlaygroundEditor.cs script and put it inside an “Editor” folder in your Unity project. When our plane's inspector GUI updates, we force-update the shader values.
//PlaygroundEditor.cs
using System.Collections;
using System.Collections.Generic;
using UnityEditor;
using UnityEngine;
[CustomEditor(typeof(Playground))]
public class PlaygroundEditor : Editor
{
Playground m_playground;
public override void OnInspectorGUI()
{
base.OnInspectorGUI();
m_playground = ((Playground)target);
m_playground.UpdateShaderValues();
}
}
Time To Shade!!
This is where the whole thing comes together. Let's first set up the parameters that will be passed from our C# script. They are the following:
float4 _LocalPositions[3]; // Local positions of cubes wrt Plane
float4 _Directions[3]; // Forward direction of cubes in scene
float4 _Colors[3]; // Colors assigned to cubes
float4 _ObjScale; // Scale of plane to account for skewing
float _Distance; // How far to check
float _Angle; // Represents valid region that should be colored
In v2f struct, we keep track of the vertex’s local position, which we will use to compare against the local position of the Cubes in the scene.
We finally set up the vertex and fragment shaders as follows. The vertex shader is nothing special; the only thing we do differently is to pass the vertex position along in the output structure. The fragment shader is where the core of the logic happens:
For each of the three cubes we've set up in the scene, we get the vector from the cube's local position to the pixel position (input.localpos). We then multiply that by _ObjScale to counter the effect of any scale on our plane.
From the vector we calculated above, we calculate Dot Product with the forward vector of the respective cubes.
To color the valid region of the dot product, we compare the result with the angle passed from the C# script. If the dot product of the vectors is greater than the cosine of the provided angle, we color the region (the test is stated precisely after this list).
To have the colors interact with each other, we add to the existing color of the fragment, instead of replacing it.
Finally, to control the reach of the drawing area, we use the following instruction: smoothstep(_Distance, 0.0, length(diff))
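Stated precisely, with p a pixel's local position, c a cube's local position and f̂ that cube's forward direction, the fragment falls inside that cube's cone when
\[ \frac{\mathbf{p} - \mathbf{c}}{\lVert \mathbf{p} - \mathbf{c} \rVert} \cdot \hat{\mathbf{f}} > \cos\theta \]
where \(\theta\) is the _Angle value passed in from C#, and the smoothstep on length(diff) then fades the color out towards _Distance.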
With this, you should be able to tweak values in inspector and have results something similar to this:
You can go a little further and smooth the edges of the cone too with the following code, and have the result look like the one in the first picture of the post: finalCol += _Colors[i] * smoothstep(_Distance, 0.0, length(diff)) * smoothstep(cosVal, 1.0f, dp) * 3;
That is all for this post on dot product visualization. Please drop a comment if you have any feedback. Hope you've learnt something, and do visit again!
Let’s start with something simple. We want to rotate a vector that lies along the X-axis by a certain angle. This is quite trivial to compute; you just need rudimentary trigonometry to work out the resulting coordinates of the vector. Let's see if we can wrap our heads around the intuition behind vector rotation.
Let’s kick it up a notch and try to rotate the Y-axis in the positive direction to get a new vector. This too can be computed with a few simple calculations. But since we consider counter-clockwise rotation to be positive (left-handed system), X’ should receive a negative value in our example. How can we reason about this?
Here’s what’s happening: in our left-handed system, positive rotation around the Z-axis goes +X -> +Y -> -X -> -Y -> +X. Since we are rotating the Y-axis, there’s an additional offset of PI/2 that needs to be added to the rotation. This gives:
X’ = cos(90 + A) = cos(90)cos(A) - sin(90)sin(A) = -sin(A)
Y’ = sin(90 + A) = sin(90)cos(A) + cos(90)sin(A) = cos(A)
Now we see a negative sign showing up in the computation of X’. So we ended up adding the offset that takes us from the X-axis to the Y-axis, and then the actual angle we want the Y-axis to rotate by.
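For reference, carrying the same bookkeeping through for both axes gives the standard 2D rotation matrix; its second column is exactly the (-sin A, cos A) pair derived above:
\[ R(A) = \begin{bmatrix} \cos A & -\sin A \\ \sin A & \cos A \end{bmatrix} \]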
Viewing Through Y-Axis
This is great, but here's another way of thinking about it: let's take a peek at how the rotation looks from the Y-axis's perspective. We are still in left-handed space and our Z-axis hasn't changed. In Y's view, it is the positive axis that starts the rotation, and on rotating counter-clockwise by 90 degrees it reaches the -ve X-axis. Upon rotating some angle further, it arrives at (YL, XL). This is what the Y-axis sees, and these coordinates are what we get in the local space of the Y-axis. Now that we have local coordinates, we just need a transformation matrix to compute the actual position in world space. Refer to this for a refresher on local-to-world computation.
This matrix is quite easy to write as we just swapped X-Axis with Y-Axis and Y-Axis with -ve X-Axis.
| 0  -1 |
| 1   0 |
Transformation (Column-Major)
As an example, let's rotate a vector along the Y-axis by 45 degrees.
(YL, XL) is (1/sqrt(2), 1/sqrt(2)).
On applying the transformation matrix to the vector above, we get (-1/sqrt(2), 1/sqrt(2)). These are indeed the world-space coordinates that we desire.
In this post, let's try to understand the geometric intuition behind the Local To World Matrix and make sense of matrix multiplication a bit better through a simple illustration. Why exactly do we multiply row ‘i’ of matrix A with column ‘j’ of matrix B to get an element ‘Rij‘?
This post is going to talk a lot. So, please bear with it.
Matrix Multiplication: Geometric Intuition
For a refresher on matrix multiplication, I recommend going through this article. In summary, matrix multiplication can be interpreted as the dot product of row ‘i’ of matrix A with column ‘j’ of matrix B to get the element ‘Rij‘. If you've ever wondered why multiplication is done this way and what the geometric interpretation behind it is, this article is for you!
Here are the assumptions I've defined:
All the matrices and vectors in this article are represented using column-major notation.
Our basis vectors for the “World-Space” coordinate system are each represented by a column in a square matrix. Let this matrix be represented by ‘W‘ (Default Basis).
Let's assume a new set of 2D basis vectors (not necessarily orthonormal). If a point is represented relative to this new basis, we are representing the point in local space (New Basis).
A 2D point P(2, 3) as an example.
If we look at a generic multiplication, we can see that the result is a “linear combination” of the columns of matrix A, with the weights coming from matrix B.
Now, if we replace matrix A with our Default Basis and perform the multiplication, we end up 2 units to the right and 3 units up. Understanding what's happening here is vital to making sense of the geometric interpretation of matrix multiplication. On expanding the multiplication, we get the following equations:
(1)(2) + (0)(3) = 2 = P’.X
(0)(2) + (1)(3) = 3 = P’.Y
Everything is beautiful in our Default Basis. Our Y-axis vector doesn't have an X component to add weight to P’.X! Similarly, the X-axis doesn't have a Y component to add any weight to P’.Y! As a result, we cleanly end up at the location we expect.
Now, what if the basis vectors are not so clean (like our ‘New Basis’)? Another intuitive point to note is that this “New Basis” is represented in terms of our Default Basis. If we apply the same matrix multiplication to this new basis and the same point, we get this:
(1)(2) + (-1)(3) = -1 = P’.X
(1)(2) + (1)(3) = 5 = P’.Y
What exactly is happening here? What does this multiplication imply? When doing the same calculation with the Default Basis, we started and ended with the same point. Now, with our New Basis, the point (2, 3) represents a location in the New Basis’ coordinate system. When the matrix transformation is applied, we are transforming the point from the New Basis’ space to the Default Basis’ coordinate space.
Local To World Space Intuition:
To get to the point (2, 3) in any coordinate space, we move 2 units right and 3 units up.
We have represented the New Basis in terms of the Default Basis. In terms of the Default Basis, 2 units right in the New Basis is represented by: (1, 1) * 2 = (2, 2). In terms of the Default Basis, 3 units up is represented by: (-1, 1) * 3 = (-3, 3). The sum of both these vectors gives our world-space location: (-1, 5)! We simplified the issue from confusing matrices to simple vector addition. In the end, matrices are just a glorified representation of linear combinations of basis vectors!
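The same statement in matrix form, with the New Basis vectors as columns:
\[ \begin{bmatrix} 1 & -1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} 2 \\ 3 \end{bmatrix} = 2\begin{bmatrix} 1 \\ 1 \end{bmatrix} + 3\begin{bmatrix} -1 \\ 1 \end{bmatrix} = \begin{bmatrix} -1 \\ 5 \end{bmatrix} \]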
There’s one last thing to think about. Why is the matrix multiplication a dot product of rows and columns? If we look at mapping points in our default basis, P.X and P.Y can be viewed as projections on X(1,0) and Y(0,1) axes respectively.
Now, what vector should we project onto to get the world-space coordinates of a local position in the New Basis?
Looking at the matrix multiplication, we have P’.X = (X’.X * P.X) + (Y’.X * P.Y). The X components of both X’ and Y’ contribute to the final result P’.X in world space.
A new vector (X’.X, Y’.X) is the one we should project onto to get P’.X.
This can be thought of as the vector obtained through the addition of (X’.X, 0) and (0, Y’.X).
The final vector obtained implies that the X component of X’ is added only along the world's X axis, and the X component of Y’ is added only along the world's Y axis.
It's the same story for P’.Y.
A matrix transformation is very simple to learn, but the intuition behind it is quite magical. I apologize if what you read seemed redundant. Hope this makes you a bit wiser than yesterday!
I’ve been in the mood to do some physics simulations. I decided to start small and perform a simple buoyancy simulation in Unity. Do note that this post focuses only on simulating buoyancy; it is not the intention of this post to build a complete simulation. So, let's see how we can do some 2D buoyancy simulation in Unity. This is what I've ended up with:
Let’s start by creating two scripts: “Buoyancy” and “Water”. The Buoyancy script goes on the objects on which the buoyant force should act, and the Water script is attached to a liquid body. To keep the simulation simple, I've given the circle unit area. I've added the liquid object to the layer “Water” and the objects that can be collided with to the layer “Floor”. As the liquid is static, we grab its min and max points in Start(). These are the variables declared in each of the scripts:
public class Buoyancy : MonoBehaviour
{
public float density = 750; // Kg/m^3
public float gravitationForce = 9.8f; // m/s^2;
[HideInInspector]
public float mass; // in KG
// 1 Unit = 1 Meter in Unity
// Forces that will act on body:
// Gravity : F = Mass * Gravity
// Buoyancy : B = density * gravity * (Volume to displaced fluid)
[HideInInspector]
public Vector3 currentVelocity = Vector3.zero;
[HideInInspector]
public Vector3 oldVelocity = Vector3.zero;
Vector3 externalForces = Vector3.zero;
// Start is called before the first frame update
void Start()
{
mass = 1 * density; // Considering unit volume
}
}
public class Water : MonoBehaviour
{
public float density = 997.0f; // KG/M^3
public float stickiness = 0.15f;
Vector3 minPoint = Vector3.zero;
Vector3 maxPoint = Vector3.zero;
// Start is called before the first frame update
void Start()
{
BoxCollider2D collider = GetComponent<BoxCollider2D>();
minPoint = collider.bounds.min;
maxPoint = collider.bounds.max;
}
}
Setup
In our simulation, there are going to be two forces acting on the body. One is gravity (applied as a constant acceleration, independent of mass) and the buoyant force is the other. We gather all the external forces that acted on the body in the previous frame (via “RegisterForce“) and add their contribution to the current acceleration of the body (A += F/M). It's trivial to compute velocity once we have acceleration. We then use the computed velocity to update the position of the object in the “Update” loop (Disp = Vel * DeltaT).
void FixedUpdate()
{
SpriteRenderer renderer = GetComponent<SpriteRenderer>();
Vector3 acceleration = Vector3.zero;
acceleration.y = -gravitationForce;
acceleration += externalForces / mass;
currentVelocity = Vector3.zero;
currentVelocity = oldVelocity + (acceleration * Time.fixedDeltaTime);
currentVelocity.y = Mathf.Clamp(currentVelocity.y, -gravitationForce, gravitationForce);
Vector3 pos = gameObject.transform.position + currentVelocity * Time.fixedDeltaTime;
CircleCollider2D collider = GetComponent<CircleCollider2D>();
Collider2D otherCollider = Physics2D.OverlapCircle(pos, collider.radius, LayerMask.GetMask("Floor"));
if(otherCollider != null)
{
currentVelocity = Vector3.zero;
acceleration = -acceleration;
}
float gravity = mass * gravitationForce;
oldVelocity = currentVelocity;
//This should be reverted to 0 to gather all the forces again
externalForces = Vector3.zero;
}
public void RegisterForce(Vector3 externalForce)
{
externalForces += externalForce;
}
// Update is called once per frame
void Update()
{
gameObject.transform.position += currentVelocity * Time.deltaTime;
}
The buoyant force acts on an object when it comes into contact with the liquid surface. We use trigger functions to perform these calculations whenever an object with a “Buoyancy” component comes into contact with our trigger.
public class Water : MonoBehaviour
{
...
void OnTriggerEnter2D(Collider2D collider)
{
RegisterForceOn(collider);
}
void OnTriggerStay2D(Collider2D collider)
{
RegisterForceOn(collider);
}
void RegisterForceOn(Collider2D collider)
{
//Will be populated in next section
}
}
For Unity to fire trigger events, one of the objects taking part in the collision must have a Rigidbody component. But we don't want the rigidbody to move the object or run its own simulation. To fit our needs, we can set the Body Type on the Rigidbody2D to “Kinematic”.
Buoyancy Calculation
Now we can finally fill in the final and most important function, the one that actually calculates the buoyancy (a sketch of it follows the list below). Whenever a body comes into contact with our liquid surface, we are going to apply the following forces to it:
Buoyant Force: acts in the direction opposite to the gravitational force [(density of liquid) * (acc. due to gravity) * (volume of displaced liquid)]
Drag: The resistance to motion of the object; it acts in the direction opposite to the relative motion of the object. It's defined by (0.5 * liquidDensity * (relative speed of object)^2 * (dragCoefficient) * (cross-sectional area)). For more information, please refer to this wiki page.
Arrest Force: This is not conventional, but I had to apply an additional force to stop the objects from bouncing forever. It also acts in the direction opposite to the current velocity, but the calculated force attempts to halt the object. Increasing the “Stickiness” variable reduces the resistance of the liquid. It seemed like a neat little effect, so I left it in. Please do post a comment if there's a better way to prevent the eternal bouncing.
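The post doesn't list the body of RegisterForceOn, so here is a minimal sketch of it. The drag coefficient, the submerged-fraction approximation and the exact arrest-force formula are my own assumptions and should be tuned; only the overall shape (buoyancy + drag + arrest handed to RegisterForce) follows the description above.
//Water.cs (sketch of RegisterForceOn)
void RegisterForceOn(Collider2D collider)
{
    Buoyancy body = collider.GetComponent<Buoyancy>();
    if (body == null)
        return;

    // Rough submerged fraction: how far the body has sunk below the water surface.
    float surfaceY = maxPoint.y;
    float bottomY = collider.bounds.min.y;
    float submergedFraction = Mathf.Clamp01((surfaceY - bottomY) / collider.bounds.size.y);

    // Buoyant force: liquid density * g * displaced volume (the body has unit volume).
    Vector3 buoyantForce = Vector3.up * density * body.gravitationForce * submergedFraction;

    // Drag: 0.5 * density * v^2 * Cd * A, opposing the current velocity.
    const float dragCoefficient = 0.47f; // assumption: sphere-like drag coefficient
    float crossSection = collider.bounds.size.x;
    float speed = body.currentVelocity.magnitude;
    Vector3 drag = Vector3.zero;
    if (speed > 0.0001f)
        drag = -body.currentVelocity.normalized * 0.5f * density * speed * speed * dragCoefficient * crossSection;

    // Arrest force: cancels a fraction of the body's momentum each physics step so it
    // settles instead of bouncing forever. Higher stickiness means less resistance.
    Vector3 arrest = -body.currentVelocity * body.mass * (1.0f - stickiness) / Time.fixedDeltaTime * submergedFraction;

    body.RegisterForce(buoyantForce + drag + arrest);
}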
Well, that’s all for this post. Please do post a comment if you figure out any improvements or better ways to approach this. Thanks for reading till the end!!
The motive of this article is to lay a foundation for quickly setting up dynamic platform generation for an infinite 2D side scroller. Since it's a prototype, I'm solely going to be working with rectangles, and I'll be using the Unity engine as it's really well suited for anything 2D. So, let's dive in and take a look at some dynamic platform generation. Check out this YouTube video to see it in action.
Getting Ready
We first need a platform with well-defined dimensions. I created an extremely simple black rectangle prefab, 10 units wide. For obstacles, I created two prefabs: a small red rectangle that the player should avoid by jumping, and an orange blocking platform that the player must slide/duck under. For the player itself, I'm using a very primitive green rectangle.
Everything is elementary, but I've employed a small trick when creating the slide hazard prefab: the collider and sprite are at a slight offset so that the player can pass under it. Now that our game objects are ready, you can store them inside a class like GameManager so they're accessible when generating platforms. My camera's projection is set to orthographic and its size to 15. Platforms start spawning from (0, 0, 0), so I have the player sitting a few units above the origin.
Coding IT!
Let’s first define a few constants and data structures for storing platforms. I’m going to call my script LevelGenerator.cs.
private readonly int TILE_DEACTIVATION_DISTANCE = 25; // Deactivate a tile once it falls this far behind the player
private float abyssLocation; // World X location of last spawned platform
private readonly int BLOCK_PADDING = 150; // Spawn a tile chain if (abyssLocation - playerPos) is less than this
private readonly int BLOCK_CHAIN = 10; // Spawns up to this many tiles at once
public static LevelGenerator Instance;
[SerializeField]
private int numberOfLevels = 3; // This represents "floors". Limits how high platforms can be generated
public List<Tile> platforms; // List of all platforms "prefabs" used for spawning. Must be filled in editor
public List<Tile> hazards; // List of all available hazard "prefabs" used for spawning. Must be filled in editor
//object pool of type tiles
private List<Tile> tiles; // A pool of all spawned tiles (platforms and hazards), active or not
private List<Tile> activeTiles; // Cache of active tiles
private PlayerMovement player;
private int currentLevel = 0;
private int previousLevel;
void Start () {
player = FindObjectOfType<PlayerMovement>();
//platforms = new List<Tile>();
tiles = new List<Tile>();
activeTiles = new List<Tile>();
previousLevel = currentLevel;
//Spawn an initial chain of blocks at the start of the game
InitialBlockSpawn();
}
void InitialBlockSpawn()
{
int BLOCK_CHAIN = 7;
for (int i = 0; i < BLOCK_CHAIN; i++)
{
Tile t = GetTile(0, true);
t.transform.position = new Vector3(abyssLocation, currentLevel * 7, 0);
t.transform.parent = this.transform;
abyssLocation += t.GetLength();
activeTiles.Insert(0, t);
t.Activate();
}
}
If everything is set up correctly, the code should spawn an initial chain of blocks as soon as the game starts. The idea behind the initial platform generation is as follows:
“GetTile” gets a tile from the object pool. If it can’t find any suitable objects, it creates a new object and returns it.
“abyssLocation” is the position in the game beyond which no platforms exist. So, that’s where the player is headed and where the platform should be spawned next. Once we have the platform ready, we increment the “abyssLocation” with our platform’s length so that the next platform can go in that location.
“currentLevel * 7” lets us spawn platforms at varying heights (Y). If currentLevel is 0, all platforms take the position (X, 0); if currentLevel is 1, they take up (X, 7), and so on. It's simple, hacky and definitely not production-quality code, but it gets the job done for this simple tutorial.
Once we have our X and Y, we have a location where we can spawn our platform.
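One more piece: the generator leans on a Tile component that the post never lists. Here is a minimal sketch consistent with how it is used above (Id, Platform, Hazard, GetLength, Activate); the exact fields are my own assumptions.
//Tile.cs (sketch)
using UnityEngine;

public class Tile : MonoBehaviour
{
    public int Id;        // Index into the platforms/hazards prefab lists
    public bool Platform; // Spawned as a platform?
    public bool Hazard;   // Spawned as a hazard?

    [SerializeField]
    private float length = 10f; // Width in world units (the platform prefab is 10 units wide)

    // Used by the generator to advance abyssLocation past this tile.
    public float GetLength()
    {
        return length;
    }

    public void Activate()
    {
        gameObject.SetActive(true);
    }
}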
We still haven't looked at how to spawn tiles and fetch them from the object pool. Let's do that now.
Tile GetTile(int id, bool platform)
{
//Debug.Log("Inside the get tile");
Tile t = null;
t = tiles.Find(x => x.Id == id && x.Platform == platform && !x.gameObject.activeSelf);
if(t == null)
{
GameObject g = Instantiate(platforms[id].gameObject);
t = g.GetComponent<Tile>();
t.Platform = platform;
t.Id = id;
tiles.Add(t);
//Debug.Log("Tile is.......: " + t);
}
return t;
}
Tile GetHazard(bool hazard)
{
Tile t = null;
int id = Random.Range(0, hazards.Count);
t = tiles.Find(x => x.Id == id && x.Hazard == hazard && !x.gameObject.activeSelf);
if(t == null)
{
GameObject g = Instantiate(hazards[id].gameObject);
t = g.GetComponent<Tile>();
t.Hazard = hazard;
t.Id = id;
tiles.Add(t);
//Debug.Log("Tile is.......: " + t);
}
return t;
}
void DeactivateTiles()
{
for(int i = 0; i < activeTiles.Count; i++)
{
if (activeTiles[i].transform.position.x - player.transform.position.x < -TILE_DEACTIVATION_DISTANCE)
{
activeTiles[i].gameObject.SetActive(false);
activeTiles.RemoveAt(i);
}
}
}
“GetTile” and “GetHazard” are nearly identical in their functionality. This is the basic idea:
If we have multiple platforms and hazard prefabs, select one randomly and see if a game object exists in the “tiles” pool with similar parameters.
If an object matches our requirements and it’s not currently active, return that.
If no such object exists, or all matching objects are currently active, create a new game object and return that.
We now have all the tools at our disposal; the only thing left is to code the platform generation.
void Update () {
if(Mathf.Abs(player.transform.position.x - abyssLocation) < BLOCK_PADDING)
SpawnBlock();
DeactivateTiles();
}
void SpawnBlock()
{
//Get the block instantiation chain
int chain = Random.Range(1, BLOCK_CHAIN);
for(int i = 0; i < chain; i++)
{
Tile t = GetTile(0, true);
t.transform.position = new Vector3(abyssLocation, currentLevel * 7, 0);
t.transform.parent = this.transform;
abyssLocation += t.GetLength();
activeTiles.Insert(0,t);
t.Activate();
//instantiate random hazard
if (i != 0 && i != chain-1 && abyssLocation > 75)
{
if (Random.Range(0f, 1f) > 0.8f)
{
Tile h = GetHazard(true);
h.transform.position = new Vector3(abyssLocation, currentLevel * 7, 0);
h.transform.parent = this.transform;
activeTiles.Insert(0, h);
h.Activate();
}
}
}
//Generate a gap after a series of blocks
float randomGapVariable = Random.Range(0f, 1f);
if(randomGapVariable < 0.3f)
{
//Generate a gap of a random length
int gapLength = Random.Range(5, 12);
abyssLocation += gapLength;
}
//Change the path level with a certain random value
if(Random.Range(0f, 1f) < 0.15f)
{
if(currentLevel == 0)
{
currentLevel++;
}
else if(currentLevel == numberOfLevels - 1)
{
currentLevel--;
}
else
{
currentLevel = Random.Range(0f, 1f) > 0.5f ? currentLevel+1 : currentLevel-1;
}
}
}
We check whether the player is getting close to “abyssLocation” in the update loop. If the distance drops below a certain threshold, SpawnBlock is called.
A chain of platforms of the previously specified length is spawned at once.
When a platform is spawned, there's a 20% chance to spawn a hazard on it. I hardcoded this as “(Random.Range(0f, 1f) > 0.8f)”, but it can be exposed to the editor to tweak.
To not make things too monotonous, there are going to be breaks after a platform chain. There's a 30% chance for a gap to appear.
Finally, there's a ~15% chance to change the level (Y position) of platform spawns.
This is going to be a relatively short write-up on adding and modifying custom handle caps in a Unity editor. If you prefer to go through this tutorial as a video, you can check it out here on my channel. Let's first create an empty scene with a plane, a sphere, a camera and a directional light. We are going to add a script to the sphere to make it spawn our handle caps. Handle caps are basically those gizmos that you use to move, rotate and scale an object. Unity provides us with quite a few shapes to meet our requirements. So, let's set up the initial scene and get started exploring Unity editor handle caps.
I created a “CustomHandle.cs” script that will be added to the sphere, and its corresponding editor script in “Editor/CustomHandleEditor.cs”. We will be working with these two files. CustomHandle.cs is pretty simple and self-explanatory. We expose handleParams so that we can modify a few parameters of the handle caps directly from the editor.
///CustomHandle.cs///
public enum HandleTypes
{
Arrow, Circle, Cone, Cube, Dot, Rectangle, Sphere
}
[System.Serializable]
public class HandleParams
{
public float size = 1.0f;
public float offset = 0.0f;
public HandleTypes Type = HandleTypes.Arrow;
}
public class CustomHandle : MonoBehaviour
{
[SerializeField]
public HandleParams handleParams;
}
I’m breaking this down into 3 steps:
Drawing handle caps.
Highlighting them if the mouse is closest to one of the handle caps we created.
Making them respond to mouse events.
Drawing Handle Caps
The crux of our logic will live inside the OnSceneGUI() function in the editor script. To render the handle caps, we just need to call the appropriate function in Unity's Handles class and pass the right parameters. I have a helper function that lets me pass additional parameters; a sketch of it follows. To draw anything, we do it when the editor fires a “Repaint” event. I assigned 11, 12 and 13 as the IDs for the forward, right and up handle caps respectively. They are going to be useful in the next steps.
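The full snippet is embedded on the blog; here is a minimal sketch of the drawing step with a helper along the lines described (IDs 11, 12 and 13; the offset and color handling are my own guesses, and only a few of the HandleTypes cases are shown).
///CustomHandleEditor.cs (sketch)///
using UnityEditor;
using UnityEngine;

[CustomEditor(typeof(CustomHandle))]
public class CustomHandleEditor : Editor
{
    void OnSceneGUI()
    {
        CustomHandle handle = (CustomHandle)target;
        Transform t = handle.transform;

        // Draw only when the editor repaints; IDs 11/12/13 identify the three caps.
        if (Event.current.type == EventType.Repaint)
        {
            DrawHandleCap(11, t.position, Quaternion.LookRotation(t.forward), handle.handleParams, Color.blue);
            DrawHandleCap(12, t.position, Quaternion.LookRotation(t.right), handle.handleParams, Color.red);
            DrawHandleCap(13, t.position, Quaternion.LookRotation(t.up, t.forward), handle.handleParams, Color.green);
        }
    }

    // Helper: offsets the cap along its axis and forwards the current event type to the
    // Handles API (it draws on Repaint and measures distance to the mouse on Layout).
    void DrawHandleCap(int id, Vector3 position, Quaternion rotation, HandleParams p, Color color)
    {
        Handles.color = color;
        Vector3 pos = position + rotation * Vector3.forward * p.offset;
        switch (p.Type)
        {
            case HandleTypes.Cube: Handles.CubeHandleCap(id, pos, rotation, p.size, Event.current.type); break;
            case HandleTypes.Sphere: Handles.SphereHandleCap(id, pos, rotation, p.size, Event.current.type); break;
            default: Handles.ArrowHandleCap(id, pos, rotation, p.size, Event.current.type); break;
        }
    }
}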
Compiling and running the code above should draw handle caps in editor with customizable size and offset params as follows:
Highlighting Them
Let's now try to make them change color when the mouse hovers over them. To do that, we need to submit the handle caps during the “Layout” event and then re-draw them during the Repaint event. This is because we will be using “HandleUtility.nearestControl” to retrieve the handle nearest to our mouse, and to compare our handle ID with the nearest handle ID and draw the handle with the proper color, we need this information prior to the “Repaint” event. This is how the code looks after making the necessary changes (a sketch follows); we are basically creating the handle caps twice. I weirdly could not find any utility function to retrieve a handle from its handle ID. If there were such a function, we could retrieve the handle from the ID and just repaint.
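Continuing the sketch above (again, an approximation of the post's embedded code): submitting the caps during the Layout event is what lets HandleUtility.nearestControl know about them, so by the Repaint event we can swap in a highlight color for the nearest one.
// Revised OnSceneGUI (sketch): handle both Layout and Repaint events.
void OnSceneGUI()
{
    CustomHandle handle = (CustomHandle)target;
    Transform t = handle.transform;
    EventType e = Event.current.type;

    if (e == EventType.Layout || e == EventType.Repaint)
    {
        // During Layout the cap functions only register their distance to the mouse;
        // during Repaint they draw, and nearestControl is already up to date.
        Color fwdCol = HandleUtility.nearestControl == 11 ? Color.yellow : Color.blue;
        Color rightCol = HandleUtility.nearestControl == 12 ? Color.yellow : Color.red;
        Color upCol = HandleUtility.nearestControl == 13 ? Color.yellow : Color.green;

        DrawHandleCap(11, t.position, Quaternion.LookRotation(t.forward), handle.handleParams, fwdCol);
        DrawHandleCap(12, t.position, Quaternion.LookRotation(t.right), handle.handleParams, rightCol);
        DrawHandleCap(13, t.position, Quaternion.LookRotation(t.up, t.forward), handle.handleParams, upCol);
    }
}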
Finally, to make them interactable, I added two members to the editor class: a “Vector3 prevMousePosition” and an “int nearestHandle”. Here we handle mouse down, up and drag events, each setting those variables appropriately. Append this code to the “OnSceneGUI” function that we've been modifying and you should have handle caps that you can click and drag around.
That is all, folks! I didn't elaborate much in this post because most of the concepts are pretty straightforward and the math is also quite simple. Please do post a comment if anything requires a deeper analysis.
While navigating the turbulent waters of game development, you might have had to implement a system that showcases an object (like an unlockable, or a status screen with an interactable character). There are multiple ways to achieve this, like having a predefined camera position in a level, or, if you have multiple screens showcasing different types of objects, hardcoding camera positions in a file that's read on demand. All of these are viable solutions, but I'm going to demo a way that's quite flexible: moving the camera automatically to frame the object to a dynamically defined area of the viewport!
There’s also a video available here if that’s your preferred way to learn. Now let’s get into it!
Prelims
Let’s first setup a scene with an actor to frame. I have a very simple scene with a floor, podium, a sphere that should be framed to a portion of the screen and a camera that we are going to code to move around to meet our framing requirements. This is how it looks…
Next, we are going to set up a widget with a dedicated area that the sphere should be framed to. To keep the demo simple, this widget also has two buttons: one for gathering the widget settings and sending them to C++, and another to kick off the “Frame” C++ code. Here's my widget setup. I will be reading the dimensions of “FrameOverlay” and sending them over to C++ to move the camera accordingly. The image inside the “FrameOverlay” panel is only there as visual confirmation; it serves no purpose in the actual logic. We then spawn this widget inside the level blueprint.
Setting Up!
Now that we have our scene and widget set up, we need a way to send information from the widget to C++ and modify our active camera's parameters from there. A GameInstanceSubsystem fits perfectly for all our needs, so I create a new C++ class inheriting from UGameInstanceSubsystem with the following contents.
To do anything, we first need to register a camera and the object to frame. They should be valid entities, and this can be set up inside your level blueprint (using “SetCameraActor” and “SetFrameActor”) immediately after adding the widget to the viewport. After registering the camera and actor, we need to register the area of the screen to frame our actor to. This is done inside our widget by pressing the “Gather Settings” button. I've noticed that it's not ideal to gather these settings during the widget construction/initialization events, as the geometry might not have been constructed yet and might give you invalid results. Therefore, gathering the settings on a button press is ideal for this demo!
Now let’s start coding the core logic to make this work. This can be broken down into multiple steps:
We first get the actor bounds and retrieve the screen-space coordinates of two opposite corners of the bounds, represented in code as “TopRightSP” and “BottomLeftSP”.
These coordinates form a 2D rectangle. We then use the dimensions of this rectangle to find the ratios of its width and height with respect to our “FrameSize” (the dimensions of the overlay panel). The MAX of these ratios is used as our FOV multiplier (see the formula after this list). All our calculations assume that the camera and actor don't move closer together or further apart; this lets us scale the FOV and move the camera in the plane perpendicular to the camera's forward vector to frame the actor. If the camera moved closer to or away from the actor, the bounding box would shrink or expand.
We use the multiplier to modify the FOV of camera and force-update it.
Next, we need to get a World-Space direction vector of the center of preview area.
Finally, we start from the center of the bounding box and move backwards along that vector by the camera's original distance from the actor. This gives the final position of our camera.
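To make the ratio step concrete (how the multiplier is applied to the FOV is my own simplification; the post only says the multiplier modifies the FOV):
\[ m = \max\!\left(\frac{w_{box}}{w_{frame}},\ \frac{h_{box}}{h_{frame}}\right), \qquad \mathrm{FOV}_{new} \approx m \cdot \mathrm{FOV}_{old} \]
Here \(w_{box}, h_{box}\) are the width and height of the screen-space rectangle around the actor, \(w_{frame}, h_{frame}\) come from “FrameSize”, and the linear FOV scaling is a small-angle approximation.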
With that, you should have a pretty good system that can frame actors to a dynamically defined area of the screen. It's not perfect though! If the camera starts way out of view of the actor, the results might be skewed. A way to fix that would be to run this code in a loop two or three times, or to add checks that verify the area of the resulting bounding box in the preview area and redo the logic.
If you learnt something or have suggestions/improvements, please do drop a comment!