BRANE

PROJECT “BRANE”



BRANE GRADEX Trailer  

BRANE Gameplay

EXPERIENCE BRANE HERE!!!!


In-Game Music Credits:

Two of the four songs used in the game were made by my friend Haker. Please check out more of his music at the link below:

https://haker.bandcamp.com/

1] Haker - AJ://01_explore

2] Haker - AJ://02_cubes

3] Oscuro - Chasing Time

4] Veil44 - Pulse


Artist Statement

“Brane” is a project that seeks to push the boundaries of the gaming experience by creating a unique, immersive virtual world that combines photogrammetry and 3D art. As the sole creator of this project, I wanted to build an explorable space where users can discover a world that is not constrained by physical boundaries or cultural barriers, one that encourages collaboration and community among its users.

The inspiration for “Brane” comes from the concept of hypothetical higher-dimensional objects in string theory. By creating an alternate dimension that is coupled with our reality, the project allows users to preserve objects indefinitely in a virtual world. This idea draws on the Stick and Rope theory that Hideo Kojima incorporated into his game ‘Death Stranding’, itself based on the works of Japanese writer Kobo Abe. “Brane” takes this concept a step further by allowing users to submit their own 3D assets, which are randomly placed within the world for other players to discover.

The use of photogrammetry technology is a key aspect of “Brane,” as it allows for the creation of realistic and detailed 3D models of objects and environments. By combining these models with 3D art, the project creates a visually stunning and immersive gaming experience. However, the focus of the project is not just on aesthetics. By incorporating objects from different cultures and places, “Brane” aims to create a collaborative space that celebrates diversity and promotes the preservation of cultural heritage.

One of the key features of “Brane” is its community-driven approach. Allowing users to submit their own 3D assets creates a shared experience of exploring and discovering objects in the virtual world, which fosters a sense of belonging and connectedness among users. This collaborative approach also allows for the creation of a grand city that combines objects from all cultures and places, making them accessible to people worldwide.

“Brane” combines technology, art, and collaboration to create a space that transcends physical boundaries and cultural barriers. With its focus on preserving cultural heritage and its community-driven approach, the project aims to inspire a new wave of artistic, inclusive, and collaborative video games. Through “Brane,” I hope to create a space that brings people together to appreciate the beauty of different cultures and places.


Project Statement

The Brane project exists because I wanted to build a unique and innovative virtual world that offers players a truly immersive experience. I knew that combining photogrammetry and 3D art could create a one-of-a-kind virtual world unlike anything else out there. My main objective for Brane was to preserve objects that hold cultural significance in a virtual world and make them accessible to people worldwide. By creating a collaborative space that combines objects from different cultures and places, Brane aims to promote cultural diversity and encourage community-driven exploration and discovery.

To bring this project to life, I used Unreal Engine 5.1 and photogrammetry to create an immersive and visually stunning virtual world. During development I learned a great deal about Unreal Engine 5.1: how to create and texture landscapes, build foliage and collision, add water bodies, prepare photogrammetry scans, and use the Blueprint system. I was able to use this knowledge to create a beautiful and immersive world for players to explore.

One of the unique aspects of Brane is that it allows users to submit their photogrammetry assets, which are randomly placed within the world for other players to discover. This collaborative aspect of the project was important to me because it allows users to contribute to the world and create a space that is diverse and inclusive. By involving users in the creation of the world, I was able to create a space that encourages the exploration and discovery of objects from different cultures and places.

During the development of Brane, I also learned the significance of preserving cultural heritage and the potential of technology to make cultural objects accessible to people worldwide. By using photogrammetry technology, I’m able to preserve objects indefinitely in a virtual world, ensuring that they’re accessible to future generations. This is important because it allows us to preserve cultural objects that might otherwise be lost or destroyed.

Finally, I learned that artistic and creative video games have the potential to promote cultural diversity and encourage collaboration and exploration. By combining technology and art, we can create immersive experiences that bring people together and promote cultural understanding. I believe that Brane has the potential to do just that.

Furthermore, I am currently working on a new system that will allow users to upload photogrammetry assets during runtime. This new system will make the collaborative aspect of Brane even more accessible and dynamic. Players will be able to contribute to the world in real time, adding new objects and helping to shape the virtual world as they explore it. I’m excited to see how this new system will change the experience of playing Brane and how it will allow us to continue to promote cultural diversity and collaboration.


Literature Review

The creation of “Brane” was influenced by a range of artistic and digital movements, including the digital art movement and the photogrammetry movement. The digital art movement is characterized by the use of digital technologies to create interactive and immersive works of art. The movement has been influenced by the evolution of digital technology and its use in art, as well as the exploration of new forms of expression in the digital space (Dieter, 2009).[1] The photogrammetry movement, on the other hand, is based on the use of photographs to create highly detailed and realistic 3D models of objects and environments. This technology has been used in a variety of applications, including video game development and digital art installations.

One of the main sources of inspiration for “Brane” is the Stick and Rope theory, which was introduced in the video game ‘Death Stranding’ developed by Hideo Kojima.[2] The theory is based on the works of Japanese writer Kobo Abe, who explored the idea of a “rope” that connects people and a “stick” that keeps them apart.[3] In ‘Death Stranding’, the theory is used to create a connection between players, who are tasked with delivering packages and connecting isolated communities in a post-apocalyptic world.[4] In the context of “Brane,” the Stick and Rope theory provides a framework for exploring the relationship between technology and society, as well as the role of art in bridging cultural barriers. The use of photogrammetry and 3D art allows for the creation of a shared virtual space where users can collaborate and interact with objects from different cultures and places. This expands upon the concept of connection and interdependence by emphasizing the potential for mutual learning and appreciation.

The use of photogrammetry technology in “Brane” was made possible by the work of several researchers and developers, like Paul Debevec, a computer graphics researcher at the University of Southern California who is known for his work on photogrammetry and image-based lighting. He has been involved in the development of techniques for capturing and modelling realistic human faces and environments, including the creation of a digital model of the Notre Dame Cathedral in Paris using photogrammetry.[5] The technology has been used in a range of applications, including architectural design and engineering, and has been a source of inspiration for many artists and designers. For example, the digital artist Geoffrey Lillemon has used photogrammetry to create highly detailed and realistic 3D models of human faces, which he has incorporated into his digital art installations (Lillemon, 2020).[6]

The project also expands upon the digital art movement by creating a collaborative asynchronous space that combines objects from different cultures and places, encouraging community and collaboration among users. An asynchronous collaborative 3D digital space is a virtual environment where multiple users can contribute to a project at different times and locations. This type of space allows for asynchronous communication and collaboration, where users can work on the project at their own pace and contribute their ideas and assets without the need for real-time interaction. An example of an art project that utilizes an asynchronous collaborative 3D digital space is the “Museum of Symmetry” by artist and filmmaker Paloma Dawkins. The Museum of Symmetry is a digital art installation that allows users to explore a whimsical world filled with surreal landscapes and characters. Users can contribute to the project by submitting their own artwork, which is then incorporated into the installation and can be discovered by other users.[7]

“Brane” draws inspiration from a range of artistic and digital movements and builds upon the works of many artists and designers. The Stick and Rope theory, which served as the seed for the project, inspired the concept of creating a virtual world that transcends physical boundaries and cultural barriers, allowing for the exploration and discovery of objects from different cultures and places. The use of photogrammetry technology and the collaborative approach in the project were influenced by the evolution of digital technology and its use in art, as well as the exploration of new forms of expression in the digital space.

References:

[1] Dieter, M. (2009). Digital art. Thames & Hudson.

[2] Kojima, H. (2019). Death Stranding [Video game]. Kojima Productions.

[3] Abe, K. (2013). The rope. Vintage.

[4] Boudreau, C. (2020). Death Stranding's “Stick and Rope” theory: An analysis of social connection in a fractured world. Game Studies, 20(1). Retrieved from http://gamestudies.org/2001/articles/boudreau

[5] Debevec, P. (2000). Paul Debevec and the art of photogrammetry. VFXPro.com. Retrieved from http://pauldebevec.com/Items/VFXPro-20001120/vfxpro-debevec-photogrammetry-20001120a.pdf

[6] Lillemon, G. (2020). 3D scanning & photogrammetry. Retrieved from https://www.geoffreylillemon.com

[7] Dawkins, P., Casa Rara Studio, & National Film Board of Canada. (2018). Museum of Symmetry [VR experience]. Retrieved from https://store.steampowered.com/app/870890/Museum_of_Symmetry/



DEV LOG 3: THIRD PERSON AND PHOTOGRAMMETRY



Third-person Camera Modes

With a vast landscape like the land of ‘Brane’, I wanted to experiment with a different camera mode and give players the freedom to switch between modes. I call this mode the ‘far-angle follow camera’. It looks at the player from a greater distance than the normal camera mode, which allows for an increased field of view along with a cinematic effect that sets the player in contrast with the broad landscape surrounding them. As the camera zooms away, the player smoothly becomes part of the environment.
To achieve this, the camera has to move smoothly away from its original position at the press of a button, and smoothly return when the button is pressed again. I tried two different techniques for this. The second ended up working much better than the first, but each has its own features and drawbacks. First, I need to map a key that switches the cameras when pressed. To do that, I create an Input Action for switching cameras and bind a key to it using Unreal Engine's new Enhanced Input system, which works a little differently from the action mappings of previous versions of the engine. Instead of mapping actions to keys in the Project Settings, you now create an Input Mapping Context asset where keys are mapped to actions. Unreal Engine's Third Person template content includes an Enhanced Input mapping context with keys mapped to the third-person move and jump actions. Here, I create a new mapping for switching the camera and bind the ‘C’ key to it.
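For anyone doing the same setup in C++ rather than Blueprints, here is a minimal sketch of the Enhanced Input wiring. The class and asset names (ABraneCharacter, SwitchCameraAction, DefaultMappingContext) are placeholders I made up for illustration, not names from the actual project:

#include "EnhancedInputComponent.h"
#include "EnhancedInputSubsystems.h"

void ABraneCharacter::BeginPlay()
{
    Super::BeginPlay();
    // Register the Input Mapping Context (the asset that maps the 'C' key to the action)
    if (APlayerController* PC = Cast<APlayerController>(GetController()))
    {
        if (UEnhancedInputLocalPlayerSubsystem* Subsystem =
                ULocalPlayer::GetSubsystem<UEnhancedInputLocalPlayerSubsystem>(PC->GetLocalPlayer()))
        {
            Subsystem->AddMappingContext(DefaultMappingContext, /*Priority=*/0);
        }
    }
}

void ABraneCharacter::SetupPlayerInputComponent(UInputComponent* PlayerInputComponent)
{
    Super::SetupPlayerInputComponent(PlayerInputComponent);
    // Bind the camera-switch Input Action to a handler function
    if (UEnhancedInputComponent* Input = Cast<UEnhancedInputComponent>(PlayerInputComponent))
    {
        Input->BindAction(SwitchCameraAction, ETriggerEvent::Triggered, this, &ABraneCharacter::OnSwitchCamera);
    }
}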


1st Technique:
The first technique is based on a YouTube tutorial by Gorka Games for switching between third-person and first-person cameras.[1] I wanted to implement it for two third-person camera modes instead. I start by opening the third-person controller blueprint and creating a new camera; let's call the two cameras Camera1 and Camera2. I add the node for the InputAction for switching the camera that I previously created and attach a FlipFlop node to it. This node alternates between output A and output B each time it receives input. To output A, I attach a SetActive node with Camera2 as the target and set it as active, followed by another SetActive node with Camera1 as the target, not set as active. I repeat the process for output B of the FlipFlop node, but this time I set Camera1 as active instead of Camera2. While this lets me switch easily between the two camera modes in gameplay, the switch is not smooth, which breaks immersion, ruins the ‘ASMR-esque’ nature of the game, and would clash with the soundtrack. This is why I went with the second technique, which ended up working well. Instead of deleting these nodes, I will reuse them later to develop a first-person camera.
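For reference, the FlipFlop + SetActive chain maps onto C++ roughly like this. It is only a sketch, with bUsingFarCamera, FarCamera, and FollowCamera as my placeholder names for a bool and the two UCameraComponents:

void ABraneCharacter::OnSwitchCamera()
{
    // FlipFlop equivalent: alternate between the two cameras on each key press
    bUsingFarCamera = !bUsingFarCamera;
    FarCamera->SetActive(bUsingFarCamera);
    FollowCamera->SetActive(!bUsingFarCamera);
    // Note: this is an instant cut, which is exactly the immersion problem described above
}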

2nd Technique:
This technique is based on another tutorial by Gorka Games, on locomotion.[2] I take the existing InputAction node + FlipFlop node and attach a Timeline node: output A is connected to Play, and output B to Reverse from End. In the timeline itself, I create a new track called Cam Length with two keyframes: time 0 / value 0 and time 1 / value 1. This creates a smooth ramp from 0 to 1 over one second. Back in the event graph, I add a Lerp node with value A set to 400 (the default third-person target arm length) and value B set to 600 (the new, farther camera distance), and feed the Cam Length track output into the Lerp's alpha. Its result drives a Set Target Arm Length node connected to the timeline's Update output. This lets the camera travel smoothly between a distance of 400 and 600, following the keyframes of the timeline track.
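The same logic in C++ would use an FTimeline driven by a float curve, replacing the instant-cut handler from the first technique. A sketch under the same placeholder assumptions as the earlier snippets; CameraBoom is the character's USpringArmComponent and CamLengthCurve is a UCurveFloat asset holding the two keyframes (0,0) and (1,1):

#include "Components/TimelineComponent.h"

void ABraneCharacter::BeginPlay()
{
    Super::BeginPlay();
    if (CamLengthCurve)
    {
        // Bind the Cam Length curve so the timeline calls our update function as it plays
        FOnTimelineFloat UpdateDelegate;
        UpdateDelegate.BindUFunction(this, FName("OnCamLengthUpdate")); // must be a UFUNCTION()
        CamTimeline.AddInterpFloat(CamLengthCurve, UpdateDelegate);
    }
}

void ABraneCharacter::OnCamLengthUpdate(float Alpha)
{
    // Same as the Lerp node: A = 400 (default arm length), B = 600 (far-angle follow camera)
    CameraBoom->TargetArmLength = FMath::Lerp(400.f, 600.f, Alpha);
}

void ABraneCharacter::OnSwitchCamera()
{
    // Play on one press, Reverse from the end on the next, like the FlipFlop outputs
    bUsingFarCamera = !bUsingFarCamera;
    bUsingFarCamera ? CamTimeline.Play() : CamTimeline.Reverse();
}

void ABraneCharacter::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);
    CamTimeline.TickTimeline(DeltaSeconds); // FTimeline must be ticked manually
}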


Replace mannequin with MetaHuman

It’s time to put a main character in there! I would love to build a fully realized character customization system for this game, but before that, I’ll add a character of my own to replace the default mannequin that comes with the Third Person template in Unreal Engine 5.1.

I want to use Epic Games’ MetaHuman technology to create my first character, following a tutorial by the YouTuber Jobutsu.[3] MetaHumans are high-fidelity digital humans created with Unreal Engine’s MetaHuman Creator tool. They are designed to be realistic and expressive, with advanced facial animation and motion-capture support. To create your own MetaHuman, you use the MetaHuman Creator tool[4], which lets you customize various aspects of the character, such as facial features, hair, clothing, and body shape. The tool offers a wide range of presets and sliders to choose from, and you can also import your own custom assets.

Once I created my MetaHuman, I added it to my project in Unreal Engine 5.1 through Quixel Bridge, where it automatically appears after creation. I open my MetaHuman blueprint and, inside Class Settings, change the parent class from Actor to the Third Person Blueprint. When I compile this, I get a bunch of errors.

I go to the first error, which tells me ‘Target must have a component’. Clicking on it takes me to the Set Update Animation graph, where the Get Children node is missing a target. I add the mesh as the target and compile. The second and third errors tell me that ‘usage of Get Skeletal Mesh is deprecated’. Clicking on them takes me to the Enable Master Pose and Construction Script graphs; in both cases, I replace the Get Skeletal Mesh node with the Get Skeletal Mesh Asset node. Then I open the Third Person Blueprint, go to the event graph, and select everything except the Event BeginPlay node. I copy the selection and paste it into my MetaHuman blueprint’s event graph, then connect the output of the Hair LODSetup node to the Cast to PlayerController node. Finally, I go to the MetaHuman Blueprint viewport and set the location and rotation to 0.

After that, I click on my mesh. In the Details panel, I set Visible to off and Visibility Based Anim Tick Option to ‘Always Tick Pose and Refresh Bones’. Then, under the Variables section, I find Live Retarget, select UseLiveRetargetMode, and switch it on under Default Value in the Details panel.

Now the MetaHuman is successfully retargeted, but something uncanny is going on with the arms. To fix that, I open the LiveRetargetSetup function in my MetaHuman Blueprint. There is a node connecting the Get and Branch nodes; inside it, I click Browse. This opens a folder in the asset browser containing the MetaHuman Animation Blueprint. In this Blueprint, under Functions, I click AnimGraph, which takes me to the AnimGraph. Here I click the Retarget Pose From Mesh node and, in the Details panel under IKRetargeterAsset, click Browse. In the asset browser, I select the RTG_Metahuman asset, duplicate it, and rename the duplicate. I set this duplicate as the IKRetargeterAsset and hit Compile. Then I open the duplicated RTG asset and see that the arms of the two meshes do not align with each other, which is causing the issue. Under Details, I set the SourceIKRigAsset to IK_Mannequin, the Source Preview Mesh to SKM_Quinn, the TargetIKRigAsset to IK_Metahuman, and the Target Preview Mesh to the same body as my MetaHuman (in this case, m_med_nrw_body_preview). Next, under Chain Mapping, I click Left Arm and, under Details, set Blend To Source from 1 to 0. I repeat this for the Right Arm, hit Save, and close the IK Retargeter. Then I compile the AnimGraph and close it, and now my MetaHuman is completely retargeted.

Now, to replace my third-person player, I go to the ThirdPerson folder in the asset browser and, inside its Blueprints folder, open the Third Person Game Mode. Here I change the DefaultPawnClass from the Third Person Blueprint to my MetaHuman Blueprint and hit Compile. It’s finally done! Now when I hit Play, I see my MetaHuman character as the main player instead of the default mannequin.
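If the game mode lived in C++ instead, the same pawn swap would be a one-liner in its constructor. A sketch; the content path below is hypothetical and would need to point at the real MetaHuman Blueprint:

#include "UObject/ConstructorHelpers.h"

ABraneGameMode::ABraneGameMode()
{
    // Hypothetical asset path; FClassFinder resolves the Blueprint's generated class
    static ConstructorHelpers::FClassFinder<APawn> MetaHumanPawn(TEXT("/Game/MetaHumans/BP_MetaHuman"));
    if (MetaHumanPawn.Succeeded())
    {
        DefaultPawnClass = MetaHumanPawn.Class;
    }
}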


Turning on Collision for 3D Assets


Now that the landscape looks fairly set up, it’s time to give things a physical presence by adding collision. Playing the game, I noticed that the player character has no physical interaction with the 3D assets: the player just passes through an object as if it does not exist. Here is how to turn on collision for a particular foliage object or 3D object.

I begin by searching for and opening its static mesh from the asset browser. Then I click on Collision in the top-left area of the viewport and choose the Auto Convex Collision option, which opens a convex decomposition widget in the bottom right of the screen. You can adjust the Hull Count, Max Hull Vertices, and Hull Precision; these three values control how closely the convex collision hull captures the shape of your 3D asset. Once I set these values, I click Apply.

I close this window and then select all my foliage. Inside the Details panel, I can see the root component and all its branches. I select all the branches, search for Collision Presets in the Details panel, and change it from NoCollision to BlockAll. Just like that, collision is turned on for that particular 3D asset. Now the player character cannot pass through the object, and can even stand on it if it is level enough.
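The editor workflow above is what I actually used; for completeness, the same preset change can also be made from code on any primitive component. A sketch, where Mesh stands in for whichever UStaticMeshComponent needs fixing:

// Switch the component from the NoCollision profile to BlockAll
Mesh->SetCollisionProfileName(TEXT("BlockAll"));
// Ensure collision is evaluated for both queries (walking, traces) and physics
Mesh->SetCollisionEnabled(ECollisionEnabled::QueryAndPhysics);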


Photogrammetry
Photogrammetry is a technique that uses photographs to create 3D models of objects, environments, and structures. The process involves taking multiple images of an object or scene from different angles and using specialized software to extract 3D geometry from the images. Photogrammetry offers many benefits: it captures accurate and detailed models, works with large and complex environments, and makes it easy to update and modify models over time. With advancements in technology, photogrammetry has become more accessible, allowing more industries and individuals to take advantage of its capabilities.



In ‘Brane’, photogrammetry is used to collect 3D scans of objects from around the world, which then populate the virtual world of the game. Users can submit their photogrammetry assets, which are added to the world, creating a diverse collection of objects from different cultures and places. Photogrammetry thus allows for the preservation and sharing of objects that hold cultural significance. By creating a virtual world that includes objects from all over the world, ‘Brane’ aims to promote cross-cultural collaboration and appreciation. Users can find objects in the world and add their own to this landscape, a collaborative alternate dimension for all of these 3D assets. Photogrammetry plays a critical role in making that possible.

Scanning 3D Objects using Scaniverse
Scaniverse is a powerful app for 3D scanning that allows you to easily scan and export 3D objects as FBX files. Here are the steps to use the app to scan a 3D object and export it as an FBX.


First, open the Scaniverse app on your device. Next, select the option to “Scan an Object” from the main menu.
Position your object on a flat surface and ensure that it is well-lit. Then, use your device’s camera to capture images of the object from different angles. The app will guide you through the scanning process and show you which areas still need to be scanned.



Once you have captured enough images, the app will begin processing the data and create a 3D model of the object. After the processing is complete, you can refine the model by using the editing tools in the app. 



Once you are satisfied with the model, select the “Export Model” option from the main menu. Choose the FBX file format and set the desired resolution and quality settings. Finally, save the file to your device or export it to a cloud service. With these steps, you can easily use Scaniverse to scan a 3D object and export it as an FBX file.


Setting up 3D photogrammetry scans for Unreal Engine 5.1
Apps like Scaniverse and Polycam usually do a decent job of processing and stitching photographs together into a 3D mesh. The mesh is fine for viewing, but when you import it into a game engine or 3D software, you start to notice cracks: it is not optimized for lighting effects, or the mesh is simply not stitched correctly. This can cause a lot of collision, lighting, and texture issues when the asset is imported into the game. The reason these 3D models are not processed correctly is that they are processed within the app itself, which is not meant to be a sophisticated 3D modelling and processing tool. I have a very easy fix for this.



I use Maya as a ‘middleman’ between the photogrammetry app and Unreal Engine. Maya is a robust 3D modelling and animation application, and it is good at processing and exporting 3D objects. We will use it to rebuild our 3D objects. This is very simple to do: just import your 3D scan from your photogrammetry app in FBX format.

After importing into Maya, you can crop out unwanted parts by selecting the unwanted faces and pressing Delete. After making any required changes, use Export All as FBX and make sure to select the ‘Embed Texture’ option before exporting. This is the FBX file you import into Unreal Engine 5.1. The rebuilt 3D model is properly constructed, does not glitch, and even handles collision. This is how I will be refining the 3D models before putting them in the game.


Send me your 3D Photogrammetry Files (FBX Format)!!!!

I am finally set up to accept photogrammetry files from users. While I currently do not have a way for users to upload 3D files directly into the project, I am working on one, as it is a natural upgrade to this project.
Until then, users can submit their 3D scans to me by dropping their FBX files into the Google Drive folder below. Simply drag and drop your files into the drive, and you will find them in the game.
DROP YOUR 3D FBX PHOTOGRAMMETRY FILES HERE AND SHARE THIS LINK WITH YOUR FRIENDS!

https://drive.google.com/drive/folders/10YXfbfQRLprdQfMcgh0sQ0s1U-rlR8ov?usp=share_link


REFERENCES

[1] Gorka Games. (2022, September 10). How to switch between third person and first person in Unreal Engine 5 - in 3 mins! [Video]. YouTube. Retrieved April 1, 2023, from https://youtu.be/lMuinhr0SXU

[2] Gorka Games. (2023, February 21). Unreal Engine 5 RPG tutorial series - #2: Locomotion - blendspace, crouching and procedural leaning! [Video]. YouTube. Retrieved April 1, 2023, from https://youtu.be/WcDj4uZygyE

[3] Jobutsu. (2022, December 5). [NEW] MetaHuman as ThirdPerson - Unreal Engine 5.1 [Video]. YouTube. Retrieved April 1, 2023, from https://youtu.be/vuhquaUE5HI

[4] MetaHuman. Unreal Engine. (n.d.). Retrieved April 1, 2023, from https://www.unrealengine.com/en-US/metahuman


DEV LOG 2: LAYERING AND FOLIAGE



After creating the landscape mesh based on the height map and creating maps for multiple layers, it was time to add textures to the landscape. 

According to the mood board, the kind of landscape I have in mind is an amalgamation of Icelandic rocky and tundra landscapes.


Colour Scheme

The colour of your surroundings in a video game says a lot about the genre and the type of experience you’re going to get out of the world. GTA 5 has a very realistic city in Los Santos, a parody of Los Angeles, yet the city’s colour scheme is vivid, with extremely sunny skies and popping colours that feel like watching a Hollywood movie. This colour scheme enhances the experience by exaggerating the superficiality of Los Santos, a hint to players that this is just like a ‘movie’ and they can do anything they want.

For my game, I want the experience to be mysterious, relaxing, calm, and realistic but rare. I used ‘Coolors’, an online colour palette generator, to create my colour palettes.


Even though my target is to create a dry, icy landscape, I also want the colours to be cool and dark; it will make the place look surreal. My project has two distinctly different terrains: the icy mountains and the mossy ground. The icy mountains will have icy, rocky foliage on their surface, while the mossy ground, which I want to be dark green, should have subtle foliage full of light grass, yellow and red berries, flowers, and alpine vegetation. The foliage should not overwhelm the player; it should complement the world when viewed up close.

Colour Schemes for my Landscape Top, Ground and Foliage including flowers and berries, etc.

Assets

First, I start by gathering assets. Unreal Engine 5.1 gives you access to a brilliant resource known as Quixel Bridge, a huge library of very high-quality 3D assets, surface textures, foliage, decals, architecture, and more. The landscape-painting assets I’m looking for in my project can be divided into three sets: Surfaces, 3D Assets, and 3D Foliage.


Surfaces:

These are 2D texture maps, including a colour map, a normal map, a roughness map, and a displacement map. The maps are usually JPEG or PNG image files and are used to texture a 3D mesh. The colour and specular maps dictate an object’s colour and the way it interacts with light, normal maps give surfaces ‘artificial’ depth in their textures through smart colour manipulation, and the displacement (height) map literally deforms the mesh according to its values.

Surface Texture Maps.
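To make the role of each map concrete: in Unreal, these maps are usually plugged into texture parameters of a material, which can also be driven at runtime through a dynamic material instance. A sketch with made-up parameter names (ColourMap, NormalMap, Roughness); the real names depend on how the material was authored:

#include "Materials/MaterialInstanceDynamic.h"

// BaseMaterial, ColourTexture, NormalTexture and MeshComponent are assumed to be loaded already
UMaterialInstanceDynamic* MID = UMaterialInstanceDynamic::Create(BaseMaterial, this);
MID->SetTextureParameterValue(TEXT("ColourMap"), ColourTexture); // base colour of the surface
MID->SetTextureParameterValue(TEXT("NormalMap"), NormalTexture); // fake depth via normals
MID->SetScalarParameterValue(TEXT("Roughness"), 0.8f);           // or bind a roughness map instead
MeshComponent->SetMaterial(0, MID);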

3D Assets:

These are 3D objects in Unreal Engine, including all kinds of static meshes and actors with their respective textures and accompanying files. These 3D objects can be imported into Unreal Engine or downloaded through the Quixel Bridge platform.


Contents from my 3D Assets Folder 

3D Foliage:

These are 3D meshes and texture files corresponding to grass, shrubs, trees, plants, and so on. These objects are either procedurally placed or painted across the landscape, sometimes in large quantities. They can be imported into Unreal Engine or downloaded through the Quixel Bridge platform.

Contents from my 3D Foliage Folder 


OpenLand

Now that I have all my assets in their respective folders in the content browser, I am ready to start texturing my landscape. Instead of building a sophisticated, optimised auto material for my landscape from scratch, I decided to use OpenLand by Arunoda Sursirupala. OpenLand is a fully customizable auto material created and optimised for Unreal Engine, available for purchase on the Unreal Engine Marketplace.[1]

OpenLand Node Editor Graph

OpenLand Layer Blending nodes


If you look at the node editor graph of OpenLand’s default material, you will notice that it is very similar to the auto material I created in my first documentation. There are two different types of auto material, ‘forest’ and ‘beach’: the forest graph is optimised for areas with no coast, while the beach graph is optimised for island-type landscapes. For my project, I am using the forest auto material. It comes with the four primary layers we are familiar with: Ground, Mid, Slope, and Top.


 OpenLand Megascan texture selector widget


They also have an Unreal widget that makes it very easy to apply Megascans textures: the OpenLand Megascan texture selector.

The texture selector has a lot of options for applying textures. Tile Near and Tile Far let you resize tiles as a function of distance. Other options include colour correction, displacement multiplication, and normal and roughness map intensity, while UV rotation, tint variation, and the influence of different parameters on texture variation let you reduce repetition and tiling.


Issues with Virtual Heightfield Mesh

After texturing my landscape, it was time to add some displacement to the textures to give them real depth. In older versions of Unreal Engine, displacing the mesh based on a texture height map was done through a technique called tessellation. In Unreal Engine 5 this technique has been completely removed and replaced by Virtual Heightfield Mesh (VHM). In its current state, VHM is not very stable, and unfortunately for this project it is not working as intended: when a heightfield mesh is applied, the terrain gets distorted and starts glitching. Hopefully, VHM gets patched in the next Unreal update.

Virtual Heightfield Mesh not working properly


Infinite Ocean

Before I start adding anything, I wanted to make sure that the surroundings of my landscape are not completely empty. Since my landscape does not have a defined coast, I decided to sculpt one using Unreal’s Landscape mode and its sculpting tools.

Sculpting a coast using the Unreal Landscape mode


Once I sculpted my coast, I decided to add an infinite ocean around my landscape.

To add an infinite ocean, I first had to enable the Water plugin in Unreal Engine. Then I select my landscape, go to Details, and enable Edit Layers. Once that is done, I can add a WaterBodyOcean volume to my project, keeping the highest point on my landscape as the target location. An ocean then spawns in the workspace, and I adjusted the height of the water body as necessary until my landscape juts out like an island.

Adding an infinite ocean and the sea around my landscape. 


Foliage

Once your landscape has colour and texture, it’s time to put stuff in it. I have all my 3D meshes uploaded and ready to go. Since the foliage in my game, such as grass, trees and small rocks, is not that dynamic, instead of placing each piece as an actor in my landscape I build a Static Mesh Foliage asset from the 3D mesh I want to use as foliage. Actor Foliage has the same rendering cost as adding regular Actors to a scene, but Static Meshes placed using Foliage Edit Mode are automatically grouped into batches that use hardware instancing, where numerous instances can be rendered with only a single draw call.[2]
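That batching is the same mechanism Unreal exposes directly through UInstancedStaticMeshComponent: one component, one mesh, many per-instance transforms, drawn in batches. A sketch, with GrassMesh standing in for a loaded UStaticMesh*:

#include "Components/InstancedStaticMeshComponent.h"

// One instanced component: 1,000 grass clumps sharing a single mesh and draw-call batch
UInstancedStaticMeshComponent* Grass = NewObject<UInstancedStaticMeshComponent>(this);
Grass->SetStaticMesh(GrassMesh);
Grass->SetupAttachment(GetRootComponent());
Grass->RegisterComponent();

for (int32 i = 0; i < 1000; ++i)
{
    const FVector Location(FMath::FRandRange(-5000.f, 5000.f),
                           FMath::FRandRange(-5000.f, 5000.f), 0.f);
    Grass->AddInstance(FTransform(Location)); // only the transform is stored per instance
}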

Once I created my Static Mesh Foliage, I created a Procedural Foliage Spawner by right-clicking in my Content Browser. Then I selected my foliage and applied it to my project.


Using Procedural Foliage Spawner to add Foliage


After procedurally spawning some foliage, there were, of course, parts of the world I wanted to give a personal touch: adding variety according to height, filling crevices and corners, paying more attention to detail, and giving different areas different colour schemes through the colours of flowers, bushes, and berries. To do this I used another mode of the Unreal workspace called Foliage Mode. This mode has the tools necessary to paint and remove foliage on your landscape.


Using Foliage Mode Painting tools to add Foliage


RVT blending

After the foliage was introduced, I noticed that a lot of it did not blend with my ground layer due to differences in the textures used. There is a very cool fix for this known as Runtime Virtual Texture blending, or RVT blending for short. A Runtime Virtual Texture (RVT) is a virtual texture that, like a standard texture map, generates its texel data on demand, using the GPU at runtime. Landscape shading, which employs decal-like materials and splines designed to follow the contours of the terrain, is a suitable fit for RVT since it caches shading data over wide areas.[3] You can use RVT to blend mesh textures with the landscape.

RVT Blending Options for a Material Instance and Use_Openland_RVT_Tools Node in its Master Material 

To make this easy, OpenLand provides a widget that lets you enable RVT support on your landscape. Once you have done that, open the node graph of the master material of any mesh and insert the ‘Use_Openland_RVT_Tools’ node into the graph before the output.

Now you have RVT blending enabled for your Foliage Mesh and your Foliage can blend seamlessly with your landscape.

Before RVT Blending


After RVT Blending


Screenshots

Next Update

Once I am done with texturing and adding foliage to my landscape, the next steps will be to customize the third-person controller and camera angles, refine my skybox, and potentially add a weather system.


References

[1] Sursirupala, A. (n.d.). OpenLand documentation. Retrieved March 11, 2023, from https://gamedev4k.notion.site/OpenLand-Documentation-2268081d3b8e4a49a0d824a7ab0b7b44#931a91219e4944c5a2672b193ec6699c

[1] OpenLand - customizable landscape auto material in environments - UE Marketplace. Unreal Engine. (n.d.). Retrieved March 11, 2023, from https://www.unrealengine.com/marketplace/en-US/product/openland-customizable-landscape-auto-material?sessionInvalidated=true

[2] Static Meshes. Unreal Engine documentation. (n.d.). Retrieved March 11, 2023, from https://docs.unrealengine.com/4.26/en-US/WorkingWithContent/Types/StaticMeshes/

[3] Runtime virtual texturing. Unreal Engine 4.27 documentation. (n.d.). Retrieved March 11, 2023, from https://docs.unrealengine.com/4.27/en-US/RenderingAndGraphics/VirtualTexturing/Runtime/
