Unity vs Godot – Choosing the Right Game Engine for You

Most modern video games are developed using a game engine – which allows developers to focus on building their game rather than the tedious backend systems that run it. This makes finding a game engine that works for your project essential, since you’ll be spending quite a bit of time working with it.

With a ton of game engines available, though, how do you pick?

In this article, we’ll be exploring Unity and Godot – two powerful and popular game engines used for 2D & 3D games.

When looking for a game engine, it’s essential to assess its versatility, power, and popularity within the industry. We’ll be taking a look at several factors – such as their versatility and industry presence, and also get you learning resources so you can dive into the engine of your choice.

If you’re ready to pick your game engine, let’s get started!

What is a game engine?

Before we get started, for those new to game development, we first want to talk a bit about what a game engine is. This way, it’ll be clearer how a game engine can help you (and you can temper any ambitions of building your own engine).

A game engine, sometimes referred to as game architecture or game framework, is a software development environment complete with settings and configurations that improve and optimize the development of video games, integrating with various programming languages.

Game engines can include 2D and/or 3D graphics rendering engines that are compatible with different import formats. They will also often include a physics engine that simulates real-life properties, AI that is designed to respond to the player’s actions, and a sound engine that controls the sound effects within the game.

As stated previously, game engines are primarily designed to make your life easier. Without them, not only would you have to program your game mechanics, but also write instructions telling your computer how to access and play sounds, how to display your graphics, and so on. This quickly becomes a huge amount of tedious work – which is why a big deal is made whenever a AAA company builds a new in-house engine; an engine really does shape everything about how a game runs.

To summarize, game engines are simply a powerful tool for your game development arsenal. They make sure you aren’t stuck programming every single tiny detail (unless you want to), so you can have fun with the stuff most players actually care about.


Versatility

There are a variety of different types of games that you can choose to develop, from 2D to virtual reality. A good game engine will support coders in creating a wide range of games, and both Unity and Godot do this. Here are the different types of games you can choose to develop and how Unity and Godot can support your development journey:

  • 2D. Both engines are more than capable of developing 2D games, with Unity giving its users a broad tool set. However, new updates to Godot 4 have significantly improved its ability to create 2D games, including 2D lighting, 2D materials, 2D light & shadow support, and 2D masking and clipping. It’s also worth noting that Godot offers an actual dedicated 2D engine, while Unity still technically uses its 3D engine to render 2D games. This has some performance implications for more complicated projects.
  • 3D. While Godot is capable of making 3D games, it isn’t as powerful and doesn’t have as many features as Unity. In terms of graphic fidelity, Unity is therefore the superior choice. That said, Unity is consequently the heavier-duty engine and may not work as well on older computers as Godot.
  • Augmented Reality (AR). There are currently no AR capabilities for Godot, whereas Unity has an established AR interface and has been contributing to AR output for years.
  • Virtual Reality (VR). Unity is an excellent game engine in terms of VR, as the plugins used are versatile and are able to integrate into the XR infrastructure. While VR capabilities have improved with Godot 4, export doesn’t yet work for Meta Quest headsets due to licensing issues. For now, Unity is still the superior choice.
  • Mobile. Both Godot and Unity have mobile capabilities. That said, Unity perhaps offers a few more tools when it comes to the actual development process, such as the Device Simulator.
  • Multiplayer. Both platforms have multiplayer features to offer. The Godot 4 update in particular has massively improved the ability to make complex multiplayer games. The update includes improvements in scene replication, RSET, and state updates. As for Unity, with the recent release of the Unity Multiplayer Networking features, it’s easier than ever to develop multiplayer projects. In this area, both offer a relatively good basis to work from.


Coding

The coding language you are most comfortable with will be a determining factor in which game engine you decide to use.

Unity uses C# for its scripting logic, which is generally considered a fairly well-balanced language to learn. It offers enhanced readability compared to C++, while still providing plenty of advantages that other high-level languages can’t offer.
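To give a rough feel for what Unity’s C# scripting looks like, here’s a minimal, hypothetical behaviour script – the class name, field, and input axis below are illustrative assumptions rather than code from any particular project:

```csharp
using UnityEngine;

// A minimal sketch of a Unity C# script: move the object forward
// whenever the player holds the "Vertical" input axis (W/S or up/down arrows).
public class SimpleMover : MonoBehaviour
{
    [SerializeField] private float speed = 5f; // units per second, tweakable in the Inspector

    private void Update()
    {
        float input = Input.GetAxis("Vertical");
        // Translate along the object's local forward axis, scaled by frame time.
        transform.Translate(Vector3.forward * input * speed * Time.deltaTime);
    }
}
```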

If coding plainly isn’t your thing, Unity also offers a visual scripting option in its newest versions. This drag-and-drop approach means you don’t have to learn C# syntax, yet you still get all the game logic you would with regular coding.

In comparison, while Godot is compatible with a few languages, its main focuses are GDScript and C#. We’ve already spoken about C# for Unity, but GDScript is perhaps Godot’s “main” language. This Python-like language is designed to be easy to read and to work with the Godot engine specifically (it was developed by the Godot team). While it doesn’t have the versatility of C#, it does come with a variety of conveniences that make building games that much easier.

Industry Presence & Popularity

Which game engines professional developers are using is a good way to judge the versatility and usability of the software. Unity and Godot are both popular game engines used to create high-powered games that succeed on the market. However, each has different uses.

Unity is popular with AAA and indie developers alike because of its abundant resources. These resources include things like instant assets, online community assistance, Unity-provided tutorials, and intuitive tools for a variety of applications. It offers a lot of developer support along the way, and makes the coding process easier compared to other similar game engines. Plus, Unity offers tons of companion services (such as monetization for mobile games), making it a one-stop shop for many users.

There’s also the benefit that the Unity game engine is as powerful as it is popular. Thus, it’s been able to spread to a ton of other industries such as film, architecture, and so much more.

Popular games created using Unity include Hearthstone, Cities: Skylines, Rust, Ori and the Blind Forest, and the majority of mobile games.

In comparison, Godot is a lot younger than Unity and doesn’t have the same presence. However, Godot is quickly rising to become a major competitor. Godot also has an advantage that Unity does not in terms of development: it’s open source. As such, developers get ultimate control over the engine itself and, if push comes to shove, can make the engine do what they want.

Despite its youth, Godot has been used to create many successful games including Kingdoms of the Dump, Cruelty Squad, Rogue State Revolution, and Dungeondraft.


Community

A strong and supportive community is very important when choosing a game engine, as you’ll be able to seek support from subreddits, YouTube channels, Discord chats, and whatever else there is to offer (plus, asset stores count here too). Luckily, both Unity and Godot have thriving communities offering help to new and seasoned developers.

  • Unity has a yearly game developer convention called Unite. The event mostly focuses on how to use Unity, with some YouTubers teaching engaging classes.
  • Unity also has a subreddit providing expert advice and knowledge, and a YouTube channel with tutorials from expert developers.
  • Godot also hosts many in-person and online events, such as Godot @ GDC 2023, where developers will showcase their new games made using Godot.
  • Godot helps its community with a subreddit, and has its own YouTube channel as well.
  • Godot is active in a ton of other channels such as Discord, Twitter, and so forth – all of which are viewable on their promoted Community page.

Both Unity and Godot also have an asset store – a marketplace for 3D models, textures, systems, etc. that can be used on the engine (with free and paid options). These assets are beneficial for developers who need extra assistance in design or coding, and are largely community supported.

This said, if we had to pick, we would note that Unity’s community is larger simply because of its longer-established reign as a popular game engine.

Cost

Last but certainly not least, let’s talk about money. What would using these game engines cost you?

Unity has a free plan – but there is a catch. In general, the rule of thumb is that once you’re earning $100K annually, you need to purchase a paid plan. That said, the majority of users will be fine with the free plan (so unless you become a AAA studio overnight, don’t worry too much about it).

This said, the free plan does come with fewer features, though the differences center mostly around developer support. For the most part, the free version still includes things like the platform itself, core features like visual scripting, and even Unity Plastic SCM for version control (3 users and 5GB of storage).

The paid plans are as follows, though, if you’re interested:

  • Plus – $399 per year per seat
  • Pro – $2,040 per year per seat
  • Enterprise – Custom quotes depending on need

By comparison, since it’s open source, Godot is entirely free, with absolutely no strings attached. Of course, this does mean it doesn’t offer the same sort of premium services Unity does, but it can be less stressful knowing there can’t be any shenanigans.


Tutorials & Courses

At this point, you’re probably leaning one way or another on whether to pick Unity or Godot. However, the best way to find out your preference is simply to try them out. So, to get you started (and demonstrate the quality of learning materials available), we’ve listed out some of our favorite resources.

Unity

  • Unity Game Development Mini-Degree, Zenva. With this curriculum, you’ll explore a variety of tools and features Unity has to offer. In addition, you’ll get the chance to build a ton of projects suitable for a professional portfolio. You’ll not only learn the fundamentals of game development, but make real games including RPGs, idle games, and FPS games.
  • Unity 101 – Game Development Foundations, Zenva. This free course teaches you the very basics of how Unity works and allows you to start playing with your first game objects. You’ll also learn skills needed to build your own games in any genre you choose.
  • How to Program in C#, Brackeys. This free YouTube course teaches you how to read, write, and understand C# coding from scratch, and lays the foundation for learning Unity.
  • C# Tutorial, Derek Banas. In this tutorial, you’ll learn how to install Visual Studio and Xamarin. You’ll then cover key programming knowledge including input, output, loops, data types, and more.
  • C# Basic Series, Allan Carlos Claudino Villa. In this free course, you’ll cover everything there is to know about C# to give you the knowledge needed to create games with Unity.

Godot

  • Godot 101 – Game Engine Foundations, Zenva. With Godot 101, you’ll learn the fundamentals of working with the Godot 4 editor, including understanding the core difference between Nodes and scenes. Plus, you’ll get experience working with both 2D and 3D objects!
  • Godot 4 Game Development Mini-Degree, Zenva. This comprehensive collection of courses gives you the knowledge needed to build cross-platform games using Godot. You’ll be given the tools needed to create 2D and 3D games including RTS games, platformers, and survival games.
  • Make Your First 2D Game with Godot: Player and Enemy, GDQuest. Learn to create your own games using Godot in this beginner tutorial series, hosted on YouTube. This course gives you an entire run-through on using Godot to program different game types, perfect for complete novices.
  • Make your first 2D platformer game In Just 10 Minutes, Eli Cuaycong. In this short tutorial, you’ll learn the basics to help you with your game development journey, including tile maps, world scene, and spritesheets.

Conclusion: Unity vs Godot – which is better?

Now that we’ve covered the differences between Unity and Godot, let’s get back to the ultimate question – which game engine is better?

This entirely depends on what type of game you want to make, the game’s style and needs, and what kind of knowledge you’re bringing to the table.

For instance, for 3D, AR, or VR games, Unity is definitely the superior choice, as it offers all the tools needed and the power to make those games work. However, on the opposite end of the spectrum, Godot is definitely the winner when it comes to 2D, given its dedicated and more performant 2D rendering engine.

Even then, there are exceptions to the above! For example, a game like Paper Mario would probably work better with Unity, whereas a 3D game might work better with Godot in cases where you need to work with the engine’s code itself.

Regardless, both are truly great options, and you can’t go wrong with either. Plus, with plenty of courses available for both, they’re both easy to learn.

Regardless of your choice, we wish you the best of luck with your future games!

Best Unity AI Tutorials – Unity Game Development Guide

In many single-player games – and some multiplayer games – a form of artificial intelligence (AI) is at work.

Some of these AIs are very rudimentary, such as the “AI” that controls the opposing paddle in Pong, whose only goal is to bounce the “ball” back to you. Others are a lot more complex, such as enemies you might find in games like Fire Emblem, who need complex strategy algorithms. Regardless of the complexity, though, there is no questioning that without AI, we would not have the beautiful fabric of games that are available to us today.

Unsurprisingly, learning how to add AI to your games is a challenging but rewarding task. And we find there’s no better place to learn AI than with the popular Unity engine, which comes with several helpful tools for setting up our AIs. In this article, we’re going to showcase some of the best Unity AI tutorials available so you can start adding this great feature to your own games.

Let’s dive in!

What is Artificial Intelligence?

Before we get to the tutorials, we did want to take a moment to define the scope of artificial intelligence. This way, new game developers can get on the same page.

Artificial intelligence can be defined as a system of instructions that allow computers to replicate tasks that would normally require a human. This isn’t dissimilar to coding itself, which is just sets of instructions and rules for the computer to follow. With AI, however, we’re focused more so on mimicking human intelligence. This comes in many forms such as speech recognition, visual identification, and, most importantly for games, decision-making.

In the realm of games, we most often associate AI with programmed enemy characters. These characters have to be able to make decisions, whether that’s navigating terrain, deciding how to attack you, and so forth. However, other things like speech recognition have become more common as well as our technology has improved. Regardless, AI programming is all about instructing computers “how” to decide how tasks should be performed, and then performing those tasks to suit a pre-defined situation.

Within AI, though, there is one more concept we want to address: machine learning. Machine learning is a subfield focused on creating machines that can take in data and “learn” to perform the tasks we want them to do better. In fact, things we mentioned already, like visual identification, are generally built up through machine learning. This works through sheer pattern recognition, something machines can be remarkably good at – often better than humans.

Though machine learning is a lot newer when it comes to games, it still has shown a lot of potential for allowing us to create even smarter opponents, among other things. As such, we will be including machine learning elements on this list!

FULL 3D ENEMY AI in 6 MINUTES!

Duration: 6 Minutes

This Unity tutorial by Dave / GameDevelopment is a fantastic – and quick – jumpstart into the topic of AI enemies. Without getting too complex, the tutorial will show you how to set up three states for an enemy AI: patrolling, chasing, and attacking. All three states operate on the simple parameter of how close the player is to the enemy character. Based on proximity, the AI is designed to choose the appropriate state, which ultimately controls its action.

In this case, an enemy character will move around randomly while patrolling. When the player is close enough to be “seen”, the enemy will chase the player. If the player is close enough to be attacked, the enemy attacks. While a very simple decision tree, it is a fantastic start for beginners who want a taste of how AI decision-making works for games.

In addition, this tutorial also covers a bit of NavMesh: Unity’s component for pathfinding. As such, without going too deep, you’ll learn a bit about how enemies decide just where they can actually go.
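If you’d like a rough idea of the pattern before watching, here’s a minimal sketch of proximity-based state switching – the class, field names, and distance thresholds are illustrative assumptions, not the tutorial’s actual code:

```csharp
using UnityEngine;
using UnityEngine.AI;

// A minimal sketch: pick patrol/chase/attack based purely on distance to the player.
public class EnemyAI : MonoBehaviour
{
    public Transform player;
    public float sightRange = 10f;   // assumed detection distance
    public float attackRange = 2f;   // assumed attack distance

    private NavMeshAgent agent;

    private void Awake() => agent = GetComponent<NavMeshAgent>();

    private void Update()
    {
        float distance = Vector3.Distance(transform.position, player.position);

        if (distance <= attackRange)
            Attack();
        else if (distance <= sightRange)
            agent.SetDestination(player.position);   // chase the player
        else
            Patrol();                                 // wander between points
    }

    private void Attack() { /* play attack animation, deal damage, etc. */ }
    private void Patrol() { /* pick a random reachable point and walk to it */ }
}
```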


Create a simple AI with behaviour trees

Duration: 23 minutes

Created by Mina Pêcheux, this Unity tutorial takes a more in-depth look at how AI behavior can be structured. Before even touching Unity or any C# coding, the tutorial talks extensively about behavior trees. These behavior trees, which essentially map out the behavior we want objects to take and when, form the design basis for implementing our AI in code. Additionally, the tutorial lets you explore the limitations of finite state machines, and how to plan around those limitations when creating your enemies.

After the theory, though, the tutorial jumps into Unity and shows you how to adapt the behavior tree you design into actual C# code. The tutorial uses the examples of a patrolling state, a player-targeting state, and an attacking state to demonstrate this. On top of this, the tutorial explains fairly comprehensively how the actions associated with the states – i.e. the behaviors – are implemented.

This is also a fantastic tutorial if you’re at all interested in working with animations, as these too can be controlled via the behaviors.
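To make the behavior tree idea concrete, here’s a tiny, hypothetical skeleton in C# – the node types and names are generic illustrations, not the structure used in the video:

```csharp
// A minimal behavior tree sketch: nodes report success/failure,
// and composite nodes decide which children to run.
public enum NodeState { Success, Failure, Running }

public abstract class BTNode
{
    public abstract NodeState Evaluate();
}

// Selector: succeeds as soon as one child succeeds (an "or").
public class Selector : BTNode
{
    private readonly BTNode[] children;
    public Selector(params BTNode[] children) => this.children = children;

    public override NodeState Evaluate()
    {
        foreach (var child in children)
        {
            var state = child.Evaluate();
            if (state != NodeState.Failure) return state;
        }
        return NodeState.Failure;
    }
}

// Sequence: fails as soon as one child fails (an "and").
public class Sequence : BTNode
{
    private readonly BTNode[] children;
    public Sequence(params BTNode[] children) => this.children = children;

    public override NodeState Evaluate()
    {
        foreach (var child in children)
        {
            var state = child.Evaluate();
            if (state != NodeState.Success) return state;
        }
        return NodeState.Success;
    }
}
```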


The Ultimate beginner’s guide to AI with Unity & C#

Duration: 20 minutes

In this tutorial by Blackthornprod, you’ll once again get the chance to explore some basics of AI development. This includes things such as how to create state machines, how to add behaviors to your states, and similar. However, this tutorial differs from some of the other beginner-friendly tutorials in two important ways.

First, this tutorial is focused more on 2D games. 2D games have their own little quirks when it comes to implementing behaviors – particularly in the realm of movement. As such, this tutorial will teach you how to recognize those considerations. This is particularly important when it comes to pathfinding, which can take a bit of a different form compared to 3D.

The second difference is that this tutorial is less focused on state machine principles, and more on getting you up and running with some of the most common and useful behaviors. This includes being able to follow players, flee from players, attack players, and walk around randomly. Additionally, you will get to explore 2D pathfinding, as well as line of sight – both very important and common concepts when it comes to triggering behaviors.
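As a rough illustration of the line-of-sight idea (not the tutorial’s own code – the layer mask, field names, and range here are assumptions), a 2D check often boils down to a single raycast toward the player:

```csharp
using UnityEngine;

// A minimal sketch of a 2D line-of-sight check using a raycast.
public class LineOfSight2D : MonoBehaviour
{
    public Transform player;
    public LayerMask obstacleMask;   // assumed layer mask for walls/obstacles
    public float sightRange = 8f;

    public bool CanSeePlayer()
    {
        Vector2 toPlayer = player.position - transform.position;
        if (toPlayer.magnitude > sightRange) return false;

        // If the ray hits an obstacle before reaching the player, sight is blocked.
        RaycastHit2D hit = Physics2D.Raycast(
            transform.position, toPlayer.normalized, toPlayer.magnitude, obstacleMask);
        return hit.collider == null;
    }
}
```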


A.I State Machine Made EASY

Duration: 8 minutes

If you’re a bit confused by the concept of state machines, this tutorial by Sebastian Graves is for you. Whereas a lot of tutorials on this list focus on programming the behaviors themselves along with the state machine, this tutorial focuses on the state machine aspect almost exclusively.

State machines, for those who don’t know, are systems that organize an AI’s behaviors into discrete states and control when to switch between them. When you want an object to exhibit a certain behavior, you switch it to the state that executes that behavior – in effect, each state stores one behavior. With this tutorial, you’ll explore the proper C# scripting methods in full for setting up a state machine. You’ll discover not only how these state machines are structured, but how to smoothly switch between states.

Plus, you’ll also learn how to build a state machine fairly independently of the behavior. In so doing, you’ll discover how to quickly adapt your state machine to incorporate the behaviors you personally need for your game project without getting confused by what those behaviors actually do.
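If you’d like to see the general shape of such a setup before diving in, here’s a minimal, hypothetical sketch of a behavior-agnostic state machine – the class names are illustrative and are not taken from the tutorial:

```csharp
using UnityEngine;

// A minimal sketch of a state machine that is independent of concrete behaviors.
public abstract class State
{
    // Each state runs its logic, then returns the state to use next frame
    // (itself to stay in the state, or another state to switch).
    public abstract State Tick(StateMachine machine);
}

public class StateMachine : MonoBehaviour
{
    public State currentState;   // assign an initial state from another script

    private void Update()
    {
        if (currentState != null)
            currentState = currentState.Tick(this);
    }
}
```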


NPC AI and Combat for Survival Games

Duration: 1.8 hours

One of the few premium resources on this list, this course by Zenva covers not just AI mechanics, but how to incorporate your AI with other sorts of game mechanics.

You’ll start off by exploring the creation of state machines to add wandering, fleeing, and attacking states for NPCs. However, unlike previous tutorials on this list, you’ll do so in a way that is more modular – so you can assign different NPCs to use different combinations of those states. Along with this, you’ll also learn to work with the NavMesh component for dynamic pathfinding.

As mentioned though, this particular resource is great if you want to then take the next step in further integrating AI with the rest of your game. You’ll discover how to implement things such as combat and loot along with your AI controlled enemies. To add to this, you’ll also combine your programmed state machine with the Unity Animator (which is, in itself, a state machine) to get your NPCs moving in tandem with their behaviors. All in all, this course is a total package if you’re looking for a bit more oomph to learning AI programming.


Unity NavMesh Tutorial – Basics

Duration: 12 minutes

Although this has been featured a bit above, this tutorial by Brackeys should be your go-to if you’re mainly interested in doing AI pathfinding the easy way using Unity’s NavMesh.

The Unity NavMesh component is a tool that allows you to take your terrain and turn it into a static “navigation mesh”. This navigation mesh marks out the walkable areas of your level. From there, you can create a NavMesh Agent that works with this navigation mesh to determine how to reach a set point. In other words, you don’t have to code anything complicated for the NavMesh to figure out how to get an object somewhere – Unity handles the heavy lifting for you.

This tutorial will cover all those basics comprehensively, including how to set up your navigation mesh quickly and easily, as well as how to adjust settings that come as part of the component. You’ll also learn how to create the NavMesh agent, resulting in a quick project that lets you click on a spot and have an object travel to it.
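For reference, the end result described above can be sketched in just a few lines of C# – again, a hedged illustration rather than the tutorial’s exact script, with the camera setup and input handling assumed:

```csharp
using UnityEngine;
using UnityEngine.AI;

// A minimal click-to-move sketch: raycast from the mouse into the scene,
// then send the NavMeshAgent to the clicked point.
[RequireComponent(typeof(NavMeshAgent))]
public class ClickToMove : MonoBehaviour
{
    private NavMeshAgent agent;

    private void Awake() => agent = GetComponent<NavMeshAgent>();

    private void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            if (Physics.Raycast(ray, out RaycastHit hit))
                agent.SetDestination(hit.point);   // the NavMesh computes the path
        }
    }
}
```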


Sync Animator and NavMeshAgent States

Duration: 10 minutes

For fans of automated approaches to handling AI states in Unity, this tutorial by Llam Academy is a great one to check out to see just how much control you can have.

In this case, the tutorial focuses on two separate systems within Unity: the Unity NavMesh Agent which deals with pathfinding and the Unity Animator which deals with when and how to play animations. Though both aspects have been covered in previous tutorials, this one takes a more thorough and intermediate approach for those who aren’t afraid of things like events or non-primitive models.

The tutorial first shows you how to set up an Animator state machine, with a big focus on creating parameters for each state. Then, using C#, you’ll explore how to not only set up your NavMesh Agent, but also sync it more comprehensively with the Animator states automatically. You’ll even get to see it in action with animations like jumping, giving you insight into how you can keep your characters dynamic even when AI is in control.
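A common, simplified version of this sync (not the tutorial’s exact approach – the “Speed” parameter name and class name are assumptions) is to feed the agent’s velocity into an Animator parameter each frame:

```csharp
using UnityEngine;
using UnityEngine.AI;

// A minimal sketch: drive an Animator "Speed" parameter from the NavMeshAgent's velocity,
// so walk/run animations follow the pathfinding automatically.
[RequireComponent(typeof(NavMeshAgent), typeof(Animator))]
public class AgentAnimatorSync : MonoBehaviour
{
    private NavMeshAgent agent;
    private Animator animator;

    private void Awake()
    {
        agent = GetComponent<NavMeshAgent>();
        animator = GetComponent<Animator>();
    }

    private void Update()
    {
        // Assumes the Animator has a float parameter named "Speed"
        // used by a blend tree to pick idle/walk/run animations.
        animator.SetFloat("Speed", agent.velocity.magnitude);
    }
}
```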


How to use Machine Learning AI in Unity! (ML-Agents)

Duration: 45 minutes

Wouldn’t it be great if our AIs could just figure things out for themselves? Well, this tutorial by Code Monkey will show you just how possible that is.

As mentioned at the start, machine learning is a subset of AI focused on machines being able to take in data and improve how they process a task to reach a desired goal. Though it is newer, within Unity we have the ability to access this with ML-Agents. The tutorial featured here is an in-depth study of those ML-Agents, how they work, and how to use them.

Besides setup, which is a very involved process, you’ll learn a few key characteristics about working with ML-Agents and using reinforcement learning. This style of learning involves programming your ML-Agents with actions, giving them the ability to make observations, allowing them to make decisions around the programmed actions, and rewarding them when specific results are obtained. In addition, you’ll also cover how these steps allow you to “train” your agent and, based on results, improve your models.
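To show how those pieces map onto code, here’s a heavily simplified agent sketch. The Agent base class and its CollectObservations/OnActionReceived overrides come from Unity’s ML-Agents package (the exact API can vary by release), while the “reach a target” task, class name, speeds, and thresholds are purely illustrative assumptions:

```csharp
using Unity.MLAgents;
using Unity.MLAgents.Actuators;
using Unity.MLAgents.Sensors;
using UnityEngine;

// A minimal reinforcement-learning sketch: the agent observes positions,
// receives two continuous actions (x/z movement), and is rewarded for reaching a target.
public class MoveToTargetAgent : Agent
{
    public Transform target;
    public float moveSpeed = 3f;

    public override void OnEpisodeBegin()
    {
        // Reset positions at the start of each training episode (details omitted).
        transform.localPosition = Vector3.zero;
    }

    public override void CollectObservations(VectorSensor sensor)
    {
        sensor.AddObservation(transform.localPosition);
        sensor.AddObservation(target.localPosition);
    }

    public override void OnActionReceived(ActionBuffers actions)
    {
        float moveX = actions.ContinuousActions[0];
        float moveZ = actions.ContinuousActions[1];
        transform.localPosition += new Vector3(moveX, 0f, moveZ) * moveSpeed * Time.deltaTime;

        if (Vector3.Distance(transform.localPosition, target.localPosition) < 1f)
        {
            AddReward(1f);    // reward reaching the target
            EndEpisode();
        }
    }
}
```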


An Introduction to Unity’s ML-Agents


Duration: 12 minutes

Given how complicated machine learning is, we also wanted to include this tutorial by Tim Bonzon.

Like the previous tutorial, this one focuses on using ML-Agents in Unity with a reinforcement learning style approach. You’ll of course learn the important aspects of creating ML-Agents with actions, decision-making abilities, rewards, and so forth. Likewise, the tutorial also covers how to train your agents so they can get better at the assigned task.

Where this tutorial shines is its practical approach. If you’re not as interested in the theory side of machine learning and just want a project up and running, this is a good tutorial for that. The tutorial focuses almost solely on building the project – teaching the computer to balance a ball on a moving platform. Thus, like the machine, you’ll really learn by doing the task itself, picking up hands-on how reinforcement-style learning is implemented with ML-Agents.


Bolt Tutorial – Game AI for Creatures in Unity

Duration: 13 minutes

So far, all the chosen tutorials have focused on C# scripting – with a few Unity tools and components here and there. This tutorial by Home Mech shows you that visual scripting with the popular Bolt package is also an option when it comes to AI in Unity.

Using not a single piece of manually written code, the tutorial will show you how to build a state machine using Bolt’s drag-and-drop nodes, featuring a resting state and an evading state for a little frog. You’ll also learn how to “program” transitions between these states, just as you would with regular C# scripts.

Besides this, the tutorial also covers how to set up behaviors along with those states. This includes the complicated matter of making the frog move during its evasion state, touching on ideas such as how movement in 3D space works, how to make the frog hop and flip, and how to use SLERP to keep its movements smooth.
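While the tutorial builds this with Bolt nodes, the underlying smoothing idea is the same as Unity’s Quaternion.Slerp in C# – here’s a hedged, illustrative equivalent (the class and field names are assumptions):

```csharp
using UnityEngine;

// A minimal sketch of SLERP-based smoothing: rotate gradually toward a target direction
// instead of snapping, which is what keeps turns looking smooth.
public class SmoothTurn : MonoBehaviour
{
    public Transform target;
    public float turnSpeed = 5f;

    private void Update()
    {
        Vector3 toTarget = target.position - transform.position;
        if (toTarget.sqrMagnitude < 0.001f) return;

        Quaternion desired = Quaternion.LookRotation(toTarget);
        // Spherically interpolate from the current rotation toward the desired one.
        transform.rotation = Quaternion.Slerp(transform.rotation, desired, turnSpeed * Time.deltaTime);
    }
}
```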


A* Pathfinding Tutorial (Unity)

Duration: 3 hours, 11 minutes

This tutorial series by Sebastian Lague is probably the most advanced on the list. However, we want to include it if you’re looking for a bit of a challenge in programming pathfinding AI.

A* is an algorithm in computer science that was originally designed for traversing graphs. As time has passed, though, it’s become a rather common method for pathfinding in games. Without getting too in-depth, it lets the computer find the shortest path to a specific point by scoring each step with the cost travelled so far plus an estimate of the remaining distance. This works even for levels with obstacles, as they can be accounted for within the A* setup.

The series focuses very in-depth on this topic. You’ll first learn how the algorithm works, how to divide your levels into “grids”, and how to implement the algorithm with C#. You’ll then get the chance to explore more intermediate topics, such as making movement smooth or giving different terrain movement penalties to consider. For super advanced users, you’ll also learn how to use multi-threading to improve performance, which is imperative with this style of pathfinding.
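For a sense of the scoring the series builds on, here’s a tiny sketch of the core A* cost bookkeeping on a grid – only the cost calculation, not the full open/closed set loop the videos implement, and the names are illustrative:

```csharp
using UnityEngine;

// A minimal sketch of A*'s node scoring: fCost = gCost (distance travelled so far)
// + hCost (heuristic estimate of the distance remaining to the target).
public class PathNode
{
    public Vector2Int gridPos;
    public int gCost;                 // cost from the start node to this node
    public int hCost;                 // heuristic cost from this node to the target
    public int FCost => gCost + hCost;
    public PathNode parent;           // used to retrace the path once the target is reached
}

public static class AStarCosts
{
    // A common grid heuristic: 14 per diagonal step, 10 per straight step.
    public static int GetDistance(Vector2Int a, Vector2Int b)
    {
        int dx = Mathf.Abs(a.x - b.x);
        int dy = Mathf.Abs(a.y - b.y);
        return 14 * Mathf.Min(dx, dy) + 10 * Mathf.Abs(dx - dy);
    }
}
```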


Language Recognition AI with Unity and Azure

Duration: 1 hour, 30 minutes

Last but not least, we have another premium resource from Zenva. In this course, you’ll take a bit of a different approach to machine learning by not programming that part at all.

To elaborate, this course focuses on using Microsoft’s Cognitive Services. These services provide pretrained models capable of certain tasks, such as image recognition or, in this course’s case, speech recognition. Though these solutions are customizable to an extent, they let you add AI to your projects much more easily, since much of the tedious groundwork is already done.

Through this course, you’ll work with the speech recognition aspect of these services to build a voice commanded rover for exploring planets in Unity. This will involve discovering how to set up which commands should be looked for in the game, but also how to tie those commands to an action. You’ll also learn this in ways that can be applied to many other projects where you want to save a bit of time on your AI programming.
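As a rough sketch of the general idea – not the course’s actual project code – the flow might look something like the following. The SpeechConfig/SpeechRecognizer calls shown come from Microsoft’s C# Speech SDK as commonly documented (and may differ by version), while the command words, class name, and placeholder credentials are assumptions:

```csharp
using Microsoft.CognitiveServices.Speech;
using System.Threading.Tasks;
using UnityEngine;

// A minimal sketch: recognize one spoken phrase with the Azure Speech SDK,
// then map the recognized text to a rover command.
public class VoiceCommands : MonoBehaviour
{
    // Placeholder credentials – you would supply your own Azure key and region.
    private const string SubscriptionKey = "YOUR_KEY";
    private const string Region = "YOUR_REGION";

    public async Task ListenOnceAsync()
    {
        var config = SpeechConfig.FromSubscription(SubscriptionKey, Region);
        using var recognizer = new SpeechRecognizer(config);

        var result = await recognizer.RecognizeOnceAsync();
        if (result.Reason == ResultReason.RecognizedSpeech)
            HandleCommand(result.Text.ToLower());
    }

    private void HandleCommand(string text)
    {
        if (text.Contains("forward")) Debug.Log("Drive the rover forward");
        else if (text.Contains("stop")) Debug.Log("Stop the rover");
    }
}
```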


Parting Words

While there are surely more tutorials on AI out there, this collection of best Unity AI tutorials should get you started. We’ve tried to include a little bit of everything – from NavMesh to state machines to even speech recognition. However, there is always more to discover in the realms of AI, and each game will require some different decision-making processes. Nevertheless, AI is a powerful tool that can add a lot of replay value and challenge to your games.

So, regardless of what you’re looking to build, we wish you the best of luck with adding AI to your projects!

Best Programming Language for Games – Making a Video Game

So, you’re ready to start creating your very own video games. However, there comes an important question to answer when you start: which programming language should you learn to code in?

While arguably most programming languages can be used to create games, including high-level languages like Python, some choices do have more benefits than others. Additionally, choosing which programming language to learn may ultimately lock you into certain engines or frameworks, which further affects the development process of your game. To make a long story short, choosing the right game programming language can be a stressful endeavor.

However, in this guide, we intend to cover some of the popular programming languages available to you to learn for game development and provide the necessary information that may help you decide. If you’re ready to learn how to code and jumpstart your game development career or hobby, let’s dive into the best programming languages for games!


JavaScript

About

JavaScript is commonly known as one of the core pillars of web development. It first appeared in 1995 and was later standardized through the ECMAScript specification, which aimed to keep the web and web browsers consistent. While HTML informs web layouts and CSS informs web aesthetics, JavaScript is the true programming language that breathes life into websites, adding most of the interactivity you see on a day-to-day basis.

However, with the emergence of HTML5, JavaScript has also become the core pillar of HTML5 game development in terms of game programming languages. Since it was originally designed with both object-oriented and event-driven systems for web user interaction, it was a natural choice to extend to games. Additionally, with Flash now obsolete, HTML5 games have risen to become the mainstay of browser-based game development.

Babylon JS solar system scene

Pros

  • As HTML5 games are based on the web, JavaScript makes it easy to make browser-based games and mobile games.
  • Given JavaScript is a core part of the web, it’s easy to integrate such games with JavaScript-based frameworks and libraries, like Node.js and Express, for multiplayer video game creation.
  • HTML5 games are generally the easiest to share since they can be hosted directly on a website for anyone to visit.
  • JavaScript is generally less resource-intensive for game development, meaning it’s great if you don’t have a powerful computer to develop games on.
  • Since JavaScript must remain stable and backwards-compatible for the web, HTML5 games are easier to maintain and don’t require the same sort of updating that games made with engines do.

Cons

  • Options for 3D graphics are limited to specific frameworks, generally forcing most people to rely on 2D graphics for their video games.
  • It is a rather high-level programming language, so it isn’t as efficient as other game programming languages on this list in terms of how fast it performs tasks.
  • Due to not being as efficient, HTML5 games have more limits in terms of scope and size of the games you can make.
  • While JavaScript itself receives lots of support for web development, HTML5 game communities are a bit smaller compared to other popular programming languages and engines for video game development.
  • You don’t really see JavaScript being used as much for console games.

Turn-based RPG map screen made with Phaser

Relevant Engines & Frameworks

Popular Games Made with JavaScript

Where to Learn JavaScript

C#

About

C# is a general-purpose programming language created in 2000 by Microsoft with the specific intent of working with their .NET framework.  Given the popularity of C++ and Java, it was designed to take the best of both programming languages and combine it into a new, easy-to-read, object-oriented programming language that had great cross-platform capabilities.  However, it also strove to keep businesses in mind so that it could be easily used for software development.

As for video games, C# also found a home in the industry due to its relative efficiency and scalability.  In particular, it became the default programming language for the popular Unity engine, with all modern Unity libraries being built around the language.  Given Unity is used for a large percentage of the video game industry, this has given it a tight hold in this regard.

City building game made with Unity and C#

Pros

  • Comparatively, C# is a very beginner-friendly language with fairly easy to read code.
  • Automatic memory management means you don’t have to do a deep dive into those aspects and can focus more on just developing your game.
  • As a language developed by Microsoft, it is a top choice for games on Windows PCs.  However, it is capable of working on most modern systems.
  • C# is a type-safe language, meaning your games will have more security and won’t exhibit tons of unexpected behaviors.
  • It is relatively efficient and scalable, meaning it’s well-suited when used to create game projects.

Cons

  • With some exceptions, outside of game engines, C# isn’t widely used for games.  Thus, an engine is almost required in this case for community support.
  • While more efficient than JavaScript, it isn’t as efficient as C++ or Java, meaning game performance can suffer if the video game is sufficiently complex.
  • As the language was designed to work specifically with Microsoft’s .NET framework, it isn’t as flexible as other programming languages on the list.
  • In the business world, while C# is in high demand for general business applications, it isn’t in as much demand for game development roles as C++ is.

2D RPG made with Unity

Relevant Engines & Frameworks

Popular Games Made with C#

Where to Learn C#

C++

About

The C++ programming language was originally called “C with Classes.” It was created to take modern principles, like object-oriented programming, and combine them with the low-level features of languages such as C. In so doing, it allows users to write more readable programs while not losing advanced features such as manual memory management.

Given its general-purpose nature, C++ has become one of the most widely used programming languages all around, with applications in software and – as is the topic of this article – games. In fact, many modern engines, such as Unreal Engine, are built on the language, so learning to code in C++ is considered key by many professional developers. Of the programming languages on this list, it is arguably the most commonly used in general.

Creating an Arcade-Style Game in the Unreal Engine

Pros

  • Being so close to C, C++ is amazingly efficient and is one of the fastest programming languages to choose if you have lots of complex tasks to run in your games.
  • C++ has perhaps the largest community and tutorial support given its universal usage almost everywhere.
  • Its ability to do things like memory management is very handy if you want tighter control on game performance.
  • It has a large amount of scalability and can be used for both small and large video game projects.
  • It is platform-independent, meaning you can port projects around very easily regardless of OS.

Cons

  • While there are plenty of game engines to use, finding lighter-weight frameworks for C++ game development can be a challenge.  You also can’t easily develop games with JUST C++.
  • Of the languages on this list, C++ is probably the most difficult to learn and is the least beginner-friendly.
  • Though C++ gives you more control over memory management and the like, this comes at the cost of lacking automatic garbage collection – which means more work on the developer’s end.
  • As an older language, C++ lacks some modern features seen in other languages, or doesn’t have them standardized.
  • Since C++ allows developers to do more, it also provides fewer safety guarantees – meaning you could get tons of unexpected behavior in your games without intending it.

How to Create a First-Person Shooter in the Unreal Engine

Relevant Engines & Frameworks

Popular Games Made with C++

Where to Learn C++

Java

About

Created in 1995, Java is an object-oriented programming language created for general computer programming.  The design principle behind the language was to have it require as few dependencies as possible – especially compared to other programming languages at the time and even now.  In so doing, this meant that programs created with Java could easily run on different systems as they weren’t as dependent on the underlying computer architecture.

Given this cross-platform nature, Java is used fairly extensively for application development.  However, in the realm of games, it also finds a place.  Though not as extensively used as other programming languages on this list, quite a number of desktop games are still made with Java.  In addition, as the top choice programming language for Android devices, Java is commonly used by a number of developers for mobile games and apps.

Pet database app made with Java

Pros

  • As Java is the foundation for Android devices, it is well-suited to making mobile games.
  • Despite its age, Java is capable of utilizing modern technologies like multi-threading for better game performance.
  • As long as the platform supports JVM, Java games can be run almost anywhere.  This includes systems like Linux.
  • It is well-suited to server development, so multiplayer games can be made fairly easily with Java without the need for extra libraries and so forth.

Cons

  • Even though successful games have been made with Java, it is not the standard choice for game development in the eyes of most developers.  Thus, community support for it in this field is limited.
  • Though it does have automatic memory management, Java is known to have some latency issues in games because of garbage collection pauses.
  • Few engines or libraries specific for game development exist for Java compared to other languages.
  • Most modern consoles do not support JVM, so despite its ability, Java games are often platform limited in this regard.

Color selection app made with Java

Relevant Engines & Frameworks

Popular Games Made with Java

Where to Learn Java

Conclusion


As we hoped to establish here, there is no right or wrong programming language to learn when it comes to games. All of them have different features, different target platforms, and different sorts of developers who prefer them. However, the collection here is, no doubt, some of the best programming languages you can opt to learn when it comes to game development. That being said, don’t be afraid to explore elsewhere as well – you may find other languages easy to learn and a good way to expand your skills. Many other excellent programming languages still find usage in games, and there are tons of frameworks and engines available that make use of them.

Regardless of your choice, each is set to help you develop your game project. So whether you pick C# so you can use Unity, want to dive into the challenge of developing with Java, or choose something else entirely, learning to code is a profitable skill sure to help you in your long-term game hobby or career.

So get out there, learn to code, make games, and develop skills to last you a lifetime!

Unity Havok Physics Tutorials – Complete Guide

Havok is a popular physics engine found in many AAA games, and now it’s coming to Unity. So what does this mean? Well, for many years people have been asking for an alternative to Unity’s built-in physics system, which can be limiting for developers who want better performance from larger and more accurate physics simulations.

Announcing Unity and Havok Physics for DOTS – Unity Technologies Blog

Havok Features

Havok is designed to handle the performance demands of complex games that require many physics interactions.

  • Stable stacking of physics bodies
  • Better accuracy of fast-moving bodies
  • Better simulation performance (twice as fast as the existing system)
  • Higher simulation quality
  • Deep profiling and debugging

Although Havok and the existing Unity physics may seem quite different, switching between the two can easily be done at runtime. This is because, in the back-end of Unity, both physics systems run off the same data.

How To Add Havok to Your Project

Right now, Havok physics is in preview on the Package Manager with a minimum version of Unity 2019.3 required.

Links

Other New Unity Features

Unity vs. Unreal – Choosing a Game Engine

When learning game development, people often wonder what the best game engine is – in fact, we’ve done a whole article on the matter. In terms of versatility, power, popularity, and use in the industry, there are two that most people talk about: the Unity game engine and the Unreal Engine.

Answering which one is better is a difficult matter. Some will argue Unreal is better simply for the fact it is a top choice for AAA studios. Others, however, will cite the fact that Unity is more well-rounded and, for indie developers, is often a better entry into the industry. Objectively, though, is one better than the other?

In this article, we’ll be going over the pros and cons for each engine and have a true battle of Unity vs. Unreal. We will also help you get started with learning both so no matter your ultimate decision, you can jump into creating your own games right away. Let’s get started, and hopefully by the end, you will be able to make an informed choice about which game engine is the engine of your dreams.


Versatility

As a game developer, you might want to experiment with different types of games – 3D, 2D, multiplayer, VR, AR, etc. Having an engine that caters to a wide range of games is important and luckily, both Unity and Unreal do just that. Let’s have a look at a range of different game types and which engine would be best suited for them:

  • 3D – Both engines have great 3D capabilities, although Unreal is best in terms of graphical fidelity.
  • 2D – Both engines can do 2D, although Unity has a much larger focus and tool-set.
  • Virtual Reality – Unity excels in VR as the plugins are very versatile and integrate into the overall XR infrastructure.
  • Augmented Reality – Both engines can do AR, although Unity has been doing it for longer and has much more defined systems.
  • Multiplayer – Both engines can do multiplayer, although Unreal is the only one with integrated support. Unity’s integrated multiplayer is still in-development although there are many 3rd-party frameworks.
  • Mobile – Unity is considered the best engine for mobile.

Creating a 2D game in the Unity engine.

Creating a 3D game in the Unreal Engine.

Coding

When starting out with a game engine, what language you code in can be a determining factor. In Unity, you write code using the C# language, while in Unreal you use C++. Generally, C++ is considered a more difficult language to learn, although Unreal has its own integrated visual scripter called Blueprints. Visual scripting is a great alternative to coding as it allows you to do the same things – yet with no coding required. Just create nodes and connect them together in order to develop logic for your game.

While both engines have visual scripting, Unreal Engine’s Blueprints Visual Scripting system has existed longer, and is a more established way of “coding” for the engine. Though recent versions of Unity have added visual scripting as an option, the standard way of scripting behaviors is still considered to be C# programming.

Blueprints visual scripting in the Unreal Engine.

All in all, if you are looking to code, Unity may be the easier option with C#. Although, if you don’t want to code, you can use Unreal’s Blueprints.

Industry Presence

You may choose a game engine based on what the professionals are using. Both Unity and Unreal are used to create games on the market, but in different ways.

First, Unity is the most popular engine for indie developers and mobile games. There are a number of larger games made with Unity such as: Hearthstone, Cities: Skylines, Rust, Ori and the Blind Forest, and most mobile games.

In terms of the AAA-industry, Unreal is used more than Unity. Games such as: Fortnite, Bioshock, Sea of Thieves, Star Wars: Jedi Fallen Order, and a large number of others use the engine.

Something to also keep in mind is how the engine developers themselves use it. Unity doesn’t create its own games apart from small educational projects. Epic Games (developers of the Unreal Engine), on the other hand, have developed many games such as Fortnite and Gears of War using the Unreal Engine.

Worldwide global forecast for the games market

Community

An important aspect of a game engine is the community. Both engines have a pretty large online presence, with their own respective forums, Sub-Reddits, YouTube channels and more.

  • Unity – has a yearly game developer convention called Unite. Most game development YouTubers focus on using and teaching Unity.
  • Unreal – Epic Games has more of a presence online with live tutorials.

Both engines also have their respective asset stores. An asset store is a marketplace for 3D models, textures, systems, etc. for an engine, available for free or for a price. These can be great for developers who may not be the best artists or lack knowledge in a certain area.


Tutorials

Both Unity and Unreal have great learning resources. Documentation, tutorials, online courses, etc. Below we’ve listed a number of different courses on Unity and Unreal.

Unity

Unreal


Conclusion

We’ve let our battle of Unity vs. Unreal play out, but let’s turn back to our original question: which engine should you use?

Ultimately, this will depend on you and your needs. However, if we might be so bold, we can at least say the following:

  • If you’re a beginner looking to learn how to code and create a wide range of games – go with Unity.
  • If you’re not interested in coding and want better graphical performance – go with Unreal.

Overall, these are still quite surface-level statements, so we recommend you try both before making any decision. However, as you try these engines out, you can keep the information here in mind, as there are things to consider you can’t learn by doing! Even so, remember there’s no best game engine – there’s only the game engine you feel most comfortable using. Whether you pick Unity or whether you pick Unreal Engine, the world is at your fingertips. So get out there, and create some amazing games and apps!

How to Make a Game – A Guide to Making Video Games

How does one make a game?

Perhaps this is a sentiment you’ve thought of before as you daydreamed about your video game project that could be amazing if only you could make it a reality. Of course, you could hire a small studio to make it for you, but most people don’t have a spare $1,000+ lying around to afford even a few days of programmer and artist labor. Instead, many opt for the route of building games themselves, since that only costs your own time. That still begs the same question, though: how do you even get started making games?


This question comes with a bunch of other questions as well.  What game engine should you use?  Where is the best place to publish your game?  How much programming do you need to learn before you start making your games?  Do you even need to know how to code to make a game?  How do you design a game?

Unity playground platformer demo

In this guide, we will aim to provide a baseline understanding of as many questions as possible, both in terms of how to plan out your video game, what engines you can use, and so forth.  While this guide won’t specifically cover creating your first game from scratch, it will lead you in the right direction so that you will be able to do that via the resources provided.

So, if you’re ready to learn how to make a game, let’s dive in.

What is the Cycle of Game Development?

In game development, there is a general cycle that many game projects follow, whether we’re talking about a huge 200-person studio or a solo indie project.  We will delve into each section in-depth, but as a brief overview, the cycle is as follows when it comes to making a game:

  1. Thinking of an idea: Developing an idea in your head of what you want the game to be.
  2. Designing the game: Developing that idea further, creating documents, and formulating each of the systems, levels, art style, etc.
  3. Making the game: This is where you begin to create the game. Many people like to develop a very simple version of their game with basic graphics to quickly get a feel for how it will play before polishing everything.
  4. Testing the game: Showing the game to other people. As the developer, you already know everything about the game, so in order to know if the game works, is fun to play, easy to understand, etc., you need people testing it out. This process should also be done regularly as new changes to the game might change how people play it.
  5. Finalizing the game: In a sense, no game is ever finished. You either run out of time or money. Eventually, you need to, or feel you need to, finish up on the game and get it out there.
  6. Publishing the game: This is when you publish your game for everyone in the world to see.

The cycle of game development

Thinking of a Game Idea

Everyone has an idea of what their dream video game would be, but not many people can actually make that a reality. If you want to learn to make video games, it may seem tempting to just jump in and create your dream game with all the amazing technology that’s available. But I don’t recommend you do that. When creating a game, you need to think about scope. Ask yourself: how long will this take to make? Do I have all the skills required to make this game? Do I have an understanding of the game and how I might make it?

Understanding your game is the most vital part. You may have the story in your head, the setting, or some of the mechanics – but to understand your game, you need to know every aspect: how each of the systems interact, what the player can and can’t do, the goal, etc. This may seem like a lot to keep track of, but remember that large games are created by large companies.

As a solo game developer, I’ve found the best way of creating a manageable game with an appropriate scope is this method:

  1. Think of a core mechanic.  Mario’s jumping or the grappling hook in Just Cause are but a few examples of core mechanics.
  2. Develop the game around that core mechanic. Every feature of the game should encourage players to use the core mechanic.

Puzzle platformer example

Let’s take Mario for example. Mario’s core mechanic is jumping. Pretty much every aspect of the game required the player to jump.

  • Jumping on enemies
  • Jumping up to punch blocks
  • Jumping over gaps
  • Jumping on the flag at the end of the level

This is part of the reason why the Mario games (especially the earlier ones) were so successful. The developers focused on building the game around one core mechanic to make it as fun, polished and versatile as possible. Here’s a list of resources to help you develop a game idea and figure out a core mechanic:

Designing your Game

So you’ve got an idea and need to develop it further. If you’ve got a small game with one or two mechanics, then you could probably just keep that in your head, but if it’s any larger – or especially if you’re working in a team – you need to document it. A game design document (GDD) is what you use to lay out the idea of the game, how it works, the goal, the player, interactions, art style, theme, etc. You should be able to give a GDD to two people and have them both develop a fairly similar game. If you’re working in a team, this is necessary to communicate how you want to make your game. Here are some helpful resources to do with GDDs:

Now in terms of actually designing the game – that’s up to you. Game design is one of those fields where there’s no 100% way to do something. There’s no formula for creating a unique and fun game. This doesn’t mean there are no good practices or guidelines you should follow. Knowing game design can help you develop a game that’s engaging and easy for the player to understand. Here are some online resources which can help you in game design:

Game Designer planning out game

What Type of Game Do You Want to Create?

When thinking of a game to make, you probably also know what type it’s going to be. Here’s a list of different types of games and platforms you can develop for.

  • 2D is what most game engines provide and is generally the best step for beginner game developers.
  • 3D is what many of the most popular game engines provide and is also a great first step for beginners.
  • Mobile can open you up to an entirely new market and user interface with touch controls.
  • Virtual Reality is a rapidly growing sector of the gaming industry and allows for immersive experiences.
  • Augmented Reality is a technology that has uses both in and out of the games industry – so there are lots of applications for it.

Making the Game – What is a Game Engine?

With an idea in your head and a plan down on paper (hopefully), it’s time to get to the “how” in our question of how to make a game.  However, there is a crucial step that will determine the entire process: which engine do you use and which coding language should you learn?

These are all questions you should ask yourself, but there is no one answer. What to learn will depend on the types of games you want to create, your current skills, and whether or not you even want to learn programming.

So what is a game engine? A game engine is a piece of software or a framework that allows you as a developer to create games. It provides a platform to structure your game, build levels, assign logic to objects and build it to your specified platform. There are a large number of game engines out there, with each of them providing different features and specialties.

Below is a list of some popular game engines, the type of games you can create with them, and the skills you'll need to learn. We also have a detailed blog post about the various game engines you can read here, in case you need a bit more time to decide. You may also want to delve into what the best coding languages are for game development too.


Unity

Unity logo

Unity is the most popular engine out there on the market right now, with many online learning resources to get you started. Unity prides itself on being very accessible, allowing almost any type of game to be created.

What types of games can I create? Unity is one of the most versatile engines, allowing you to create: 3D, 2D, VR, AR, and multiplayer games on a large number of platforms.

Do I need to learn a programming language? Unity uses the C# language, although there are many visual scripting plugins available to purchase, and newer versions of the engine ship with an integrated visual scripting solution.

Links
Tutorials

Shader Graph demo example from Unity Engine


Unreal Engine

Unreal Engine logo

Unreal Engine is developed by Epic Games and features powerful 3D graphics. Alongside Unity as one of the most popular game engines, Unreal is also used by many AAA game studios.

What types of games can I create? Unreal is primarily a 3D engine although it does support 2D. You can also develop VR, AR and multiplayer games.

Do I need to learn a programming language? Unreal Engine features a powerful integrated visual scripter, which is ideal for beginners. The engine can also be used with C++.

Links
Tutorials

Game demo as seen in Unreal Engine 4 Editor


Godot

Godot logo

Godot is an open-source engine which can be used to create 2D and 3D games. Since the engine is open source, there are constant fixes and new features being added, along with customized versions made by developers.

What types of games can I create? Godot can be used to create 2D and 3D games, with many new upcoming features to their 3D engine.

Do I need to learn a programming language? Godot primarily uses its GDScript language (similar to Python), but also has support for visual scripting, C#, and C++.

Links
Tutorials

Godot game example in Godot Editor


Phaser

Phaser logo

Phaser is an open-source 2D framework for making HTML5 games. Unlike the previously mentioned engines, Phaser does not have a user interface. Instead, it provides you with a game programming library you can use while programming.

What types of games can I create? With Phaser, you can create 2D games for desktop and mobile.

Do I need to learn a programming language? Phaser uses JavaScript.

Links
Tutorials

Phaser concept example as seen in mobile phone

Testing Your Game

Testing your game is an important part of development. How do you know if something is going to be obvious to the player? Will they know where to go? What to do? For you it may seem obvious, but for someone who has never seen the game before – things might be very different. This is why it's important to test your game all throughout development. Here are some resources for learning more about testing your game:

Finalizing Your Game

Some game developers will say that the first 90% of your game will take 10% of the time, and the last 10% will take 90% of the time. This is a bit of an overstatement, but the idea is still the same. This is where you’re ironing out the bugs, adding in the final art style, polishing everything, and doing some final testing. Here are some resources to help you get through the final step of finishing your game:

Two programmers showcasing exhaustion and happiness

Publishing Your Game

With your game now complete, you probably want to show some people. Luckily, we live in a time where putting your game out there is easier than ever before. There are many online platforms to publish to. Some are free and some are paid. Here’s a list of those platforms, the requirements and how you can get started:

Desktop

  • Itch.io is a popular platform for indie developers. It’s free to publish your game here.
  • Game Jolt is another popular platform for indie developers, allowing you to publish your game there for free.
  • Steam is the largest distributor of PC and VR games. Publishing costs a $100 fee per game through Steam Direct.
  • Epic Games Store is a relatively new and growing PC game distributor, similar to Steam. Complete a form for Epic to consider your game.

Mobile

Console

Virtual Reality

Here’s a list of resources which can help you deploy, publish, and market your game:

Conclusion

Game creation is hard work and takes some time.  Learning these skills also won’t come to you overnight.  Theory is one thing, but understanding what it takes to make a game is another thing entirely (let alone working with programming languages). Even if you are an expert programmer or artist, certain phases and skills in the cycle of game development can’t be skipped over no matter what you do.

But the best way to improve and learn how you make games is by making games.

So, start making games the first day you begin your learning journey, as I can guarantee it will accelerate your learning tremendously. There's a lot of technology out there for you to use, so don't hesitate to try different options in order to find what serves you best. However, the skills and resources provided here will give you a great stepping stone, and part of making a game is how you plan to make that game.

Good luck out there, and all the best with your game maker journey!

]]>
A Guide to Handling Huge Open Worlds in Unity – Part 1 https://gamedevacademy.org/how-to-handle-huge-worlds-in-unity-part-1-deactivating-distant-regions-to-improve-performance/ Sun, 18 Dec 2022 17:49:55 +0000 https://gamedevacademy.org/?p=6014 Read more]]> Have you ever wondered how open worlds in video games work performance-wise?

In this tutorial series, we’re going to explore just how video games deal with huge worlds and make the best use of the processing power and memory available to the computer.

For this first part of our huge world tutorial, we’re specifically going to focus on how to process terrains, split them, and write scripts that will hide them when the player is too far away. In Part 2, we’ll circle back to how to deal with other objects, like trees, and also cover fog so you can easily hide your environment manipulations.

If you’re ready to learn the tools necessary for building open-world environments, let’s get started!

Source code files & Requirements

You can download the tutorial source code files (the Standard Assets folder was not included to reduce the file size)  here.

Additionally, please note that this tutorial does assume you already know the basics of Unity and C#. If this is your first time venturing into Unity, we recommend pausing and taking some beginner-friendly courses first to explore the fundamentals you’ll need to jump in.

If you’re an educator, you can also consider trying out Zenva Schools. Zenva Schools is an online learning platform aimed at K12 institutions. It comes with a variety of features for classroom management, and also offers many beginner-friendly courses on Unity as well.


Creating the world

Before starting the tutorial, create a new Unity project.

There are different ways to create the world map of your game. Since the focus of this tutorial is on how to handle such a world map, and not on how to create it, we are going to use an automated tool to generate the map.

The tool we are going to use is called L3DT. This tool allows you to generate huge world maps which you can later import into Unity. Start by downloading and installing the program. Then, open it and select File -> New project.

L3DT with File menu open

We are going to create a Designable map. Select the map size you prefer.
In this tutorial, I'm going to generate a map with a size of 512×512. You can leave the other parameters at their default values.

L3DT Wizard with Heightfield size settings open

In the calculation queue window, select all maps to be created. We are only going to use the height map and the texture map, but generating the other maps will add extra detail to the texture map. For each map, you can leave the default parameters.

L3DT Wizard with Calculation queue settings open

In the end, L3DT will generate a map like this one.

World generation provided by L3DT

What we need to do now is export the height map and the texture map, so that we can import them into Unity later. In order to export the height map, select the heightfield tab, then right-click on the map and select Export. Unity only imports heightmaps in the RAW format, so we need to export it in this format. I'm going to create a folder called L3DT in the Assets folder of my Unity project, so that I can export the L3DT files there.

L3DT Wizard with Export Map window open

The texture map can be exported in a similar way. Select the texture map tab, right-click on the map and select Export. We are going to export it in the PNG format.

L3DT Wizard Export map options to export texture

Importing the map in Unity

Now, let’s import this map in Unity. First, open the Unity project you have created. In this project add a new Terrain object (right-click on the hierarchy and select 3D Object -> Terrain). This will create a new object with the Terrain component.

Terrain object in the Unity Inspector with Terrain Component open

Now, we are going to import the height map and the texture map into this terrain. In order to import the height map, select the terrain settings, go to the Heightmap section and click on Import Raw. Then, select the height map you have exported from L3DT. It is important to set the Terrain Size to be the same as the L3DT map size (512×512). The size in the Y axis defines the height of the mountains, so you can try different values until you find the one that looks best in your game. I'm going to use a Y size of 100.

Import Heightmap window in Unity with 100 set for the Y

In order to import the texture map, you need to select the Paint texture option in the Terrain component. Then, click Edit textures to add a new texture.
Select the exported L3DT texture and set its size to be the L3DT map size (512×512). However, notice that, due to different coordinate systems between L3DT and Unity, the Y size must be negative (-512).

Add Terrain Texture window in Unity with world selected

After doing so, you should have your map imported in Unity. Now, our next step will be to slice this map into tiles.

World deformation in Unity game scene

Slicing the map into tiles

As I mentioned, we are going to activate the map regions that are close to the player, while deactivating those far away from it. In order to do so, we need to slice our world map into tiles. Instead of writing our own code to slice the map, we are going to use a solution available online.

Here you can download a Unity package to split terrains. After downloading it, just open it and it will import itself into your project. This will add a tab called Dvornik to your Unity project. By selecting it you can split the created terrain into four tiles.

However, once you do this, you will see that the terrain texture is being repeated for each tile.

Unity map object with tiles sliced

You can fix that by clicking on edit texture and adding the following offsets for the tiles:

  • Terrain 0: offset x = 0, offset y = 0
  • Terrain 1: offset x = 0, offset y = 256
  • Terrain 2: offset x = 256, offset y = 0
  • Terrain 3: offset x = 256, offset y = 256

We can repeat the process for each created tile, further dividing the terrain into sixteen tiles. Again, after splitting each tile, we need to adjust the offsets accordingly. However, instead of adding an offset of 256, now we need to add an offset of 128. For example, the offset of the new tiles created from Terrain 1 are:

  • Terrain 1 0: offset x = 0, offset y = 256
  • Terrain 1 1: offset x = 0, offset y = 384
  • Terrain 1 2: offset x = 128, offset y = 256
  • Terrain 1 3: offset x = 128, offset y = 384

In the end you should have a map with 16 tiles. Now that we have our world tiles, let’s start coding the script which will activate and deactivate terrain tiles according to the player proximity.

Deactivating distant tiles

We want our game to keep track of the player position and, when the player is far away from a given tile, to deactivate this tile from the game in order to improve performance. This way, this tile won’t be rendered, and our game won’t be wasting CPU and memory on such distant tiles.

The first thing we need to do is create a HideTiles script. This script will be added to the Player object, and it will keep track of distant terrains in order to hide them. In order to do so, we need the following attributes in this script:

  • tileTag: the tag of the tile objects, so that we can identify them.
  • tileSize: the size of each tile (128×128 in our case).
  • maxDistance: the maximum distance the player can be from the tile before it is deactivated.
  • tiles: the array of tiles in the game.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class HideTiles : MonoBehaviour {

	[SerializeField]
	private string tileTag;

	[SerializeField]
	private Vector3 tileSize;

	[SerializeField]
	private int maxDistance;

	private GameObject[] tiles;

	// Use this for initialization
	void Start () {
		this.tiles = GameObject.FindGameObjectsWithTag (tileTag);
		DeactivateDistantTiles ();
	}

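	// Enables tiles near the player and disables tiles beyond maxDistance,
	// measured from the player to the center of each tile.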
	void DeactivateDistantTiles() {
		Vector3 playerPosition = this.gameObject.transform.position;

		foreach (GameObject tile in tiles) {
			Vector3 tilePosition = tile.gameObject.transform.position + (tileSize / 2f);

			float xDistance = Mathf.Abs(tilePosition.x - playerPosition.x);
			float zDistance = Mathf.Abs(tilePosition.z - playerPosition.z);

			if (xDistance + zDistance > maxDistance) {
				tile.SetActive (false);
			} else {
				tile.SetActive (true);
			}
		}
	}

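	// Re-check the tiles every frame, since the player keeps moving.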
	void Update () {
		DeactivateDistantTiles ();
	}

}

Then, in the Start method we use the FindGameObjectsWithTag method to retrieve the tiles in our game using the tileTag. Those tiles are saved in the tiles array, so that we can call the DeactivateDistantTiles method.

The DeactivateDistantTiles method, in turn, will check the distance from the player to each tile in the game. Notice that half of the tile size is added to the tile position. That's because we want to measure the distance from the player to the center of the tile, and not to its bottom-left corner. If the sum of the distances along the X and Z axes is greater than the maximum distance, we deactivate the tile. Otherwise, we activate it. Finally, this method must also be called in the Update method, so that we keep updating the tiles' status as the player moves.
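Calling DeactivateDistantTiles every frame is fine for the 16 tiles we have here, but if you slice the world much more finely you may prefer to run the check on a timer instead. Here is a minimal sketch of that variation (the checkInterval and lastCheckTime fields and the 0.5-second value are illustrative additions, not part of the original script):

	private float checkInterval = 0.5f; // seconds between checks (arbitrary value)
	private float lastCheckTime;

	void Update () {
		// Only re-check the tiles a couple of times per second instead of every frame.
		if (Time.time - lastCheckTime >= checkInterval) {
			lastCheckTime = Time.time;
			DeactivateDistantTiles ();
		}
	}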

Now, in order to test our Script we need to add a Tile tag and a Player object. In order to add the Tile tag, select an object, click on the Tag menu and select Add Tag. Then, create a new Tag called Tile. Finally, you can assign this Tag to all the tile objects.

Terrain 0 0 object in the Unity Inspector

In order to create the Player object, we are going to import a Unity package. So, in the Assets menu select “Import Package -> Characters”. You can import all the Assets from the package.

Import Unity Package window with Standard Assets selected

This will create a Standard Assets folder inside the Assets folder of your project. Drag and drop the FPSController (Standard Assets/Characters/Prefabs/FPSController) prefab into your game. This will be the player in our game, so let's rename it to Player and add the HideTiles script to it. Remember to properly set the script attributes. In this tutorial I'm going to set 256 as the maxDistance attribute.

Player object in the Unity Inspector and Hide Tiles script

Now, you can try playing the game to see if the distant tiles are being properly deactivated. You can put the Scene view in the left side of the Unity Editor (alongside the Object Hierarchy) so that you can see its content while moving the player in the game.

Unity world map in a cross shape

And that concludes this tutorial. In the next one, we are going to add more stuff to our world, as well as adding a fog to hide the distant tiles that are being deactivated.

In the meantime, you can further expand your skills by learning about sound design for 3D worlds as well – another must-have feature to create the right atmosphere! You can also expand your Unity skills in general with online courses for both individuals or classroom settings, since one needs a game project to make use of a game world.


]]>
How to Optimize Games in Unity – Mobile Game Dev Tutorial https://gamedevacademy.org/optimize-unity-tutorial/ Sun, 06 Nov 2022 01:00:38 +0000 https://gamedevacademy.org/?p=13005 Read more]]>

You can access the full course here: Publishing and Optimizing Mobile Games

Light Baking

In this lesson, we’re going to go over the concept of actually baking your lighting inside of Unity. When it comes to performance impact, lighting is one of the major factors that contribute.

Types of Lighting

In Unity, there are three different types of lighting: Baked, Real-time, and Mixed.

  • Baked: Unity pre-calculates the illumination from baked lights before runtime and does not include them in any runtime lighting calculations.
  • Realtime: Unity calculates and updates the lighting of Realtime Lights every frame at runtime. Unity does not precompute any calculations.
  • Mixed: Unity performs some calculations for Mixed Lights in advance and some calculations at runtime.

When baking lights, Unity will create a special type of texture called a lightmap, which is basically like a blanket that is put over everything in the scene with all the lighting data and information already pre-applied. This can greatly reduce the rendering cost of shadows, since real-time lighting has to update shadows and lighting every frame.

Setting up Baked Lighting

To start setting up baked lighting, we need to define which objects we want to include in the baking process. Let’s select all gameObjects that will remain static for the rest of the game:

Objects selected in Unity Hierarchy

Unity objects selected in scene

We can then tick the 'Static' checkbox for these; in other words, once the game is playing, we cannot move these objects around.

Static option selected in Unity Inspector

Once that’s done, we can select the light source (i.e. Directional Light):

Unity Hierarchy with Directional Light selected

… and change the Mode from ‘Realtime’ to ‘Baked’.

Unity Inspector with Light Mode options displayed

We can then click on the Auto-Generate Lighting button in the bottom right corner of the screen.

Auto-Generate light pointed to in Unity Inspector

This is going to open up the Lighting window. And in here is where we can set up our baked lighting:

Unity Mixed Lighting and Lightmapping Settings in Unity Inspector

First of all, make sure that you have a Lighting Settings asset assigned; if one doesn't exist yet, you can create it by clicking on New Lighting Settings.

Lighting Settings with New Lighting Settings selected

Then you can scroll down to set up the Lightmapping Settings, where we can select which processor to bake light with.

If you have a good graphics card, we recommend selecting Progressive GPU as this is going to greatly decrease the time it takes to bake the lighting.

Lightmapping Settings with Progressive GPU selected in Unity

When we are generating lightmaps, we need to take samples around the scene to determine how the light is going to behave, by looking at the samples’ light intensity, shadows, etc.

Increasing these values will make it take longer to render, but will result in a finer detail of lighting.

Unity Inspector various Lightmapping Settings

For more information about Lightmapping, refer to the official documentation: https://docs.unity3d.com/Manual/Lightmapping.html

Light Generation

Once you’ve finished configuring the lighting settings, you can click on Generate Lighting:

Unity Inspector Lighting Component with Generate Lighting selected

And you’ll now see a progress bar down at the bottom right going through each of the various different processes in order to bake the lighting.

Progress bar showing light baking progress in Unity

In exchange for greater performance, you may encounter some glitches, such as this banding artifact. Feel free to fine-tune your lightmap settings and re-generate the lighting until you're satisfied with the result.

Unity lighting glitches after baking

Optimizing the UIs

In this lesson, we’re going to go over a few different ways you can optimize your user interfaces inside of Unity.

Removing Raycaster

Having multiple UIs on your screen at a time can slow down your performance, and one of the most performance-impacting components is the Graphic Raycaster.

When a Graphic Raycaster is present and enabled inside of a Canvas, it will be used by the Event System to see if you are clicking on any of the UI elements. In other words, it will translate your input (touch or mouse click) into UI Events.

Unity Inspector with Graphic Raycaster circled on Canvas

For optimized performance, you should minimize the number of Graphic Raycasters by removing them from non-interactive UI Canvases. You can remove the raycaster by Right-click > Remove Component.

Remove Component option for Graphic Raycaster in Unity

Splitting Canvases

It's recommended that you split your UI across multiple canvases, grouping related elements, rather than having a single canvas with every UI element as a child object. This is because each time a canvas is active, it goes through every single child object, checking to see if it needs to be rendered on screen, whether it is active and enabled, etc.

Unity Canvas with many different menu screens

So instead of having various elements in one canvas, you should split them up into different canvases.

Unity Hierarchy with multiple Canvases

Disabling Raycast Target

Inside of an image or a button, there is an Image component. You’ll see that inside the Image component is a Raycast Target checkbox. This will determine whether or not the image is going to be detecting Raycasts.

Raycast Target checked for Image in Unity

Having Raycast Target enabled allows the image/button to detect touches or mouse clicks. It may also be enabled on non-interactable objects (e.g. Text).

Unity Extra Settings with Raycast Target unchecked

For every element that doesn’t need to be interacted with the finger or the mouse click, you can basically disable Raycast Target.
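If you have a lot of text and image elements, doing this by hand can get tedious. As a rough sketch (this helper class is hypothetical, not part of the course project), you could disable Raycast Target automatically for every Graphic that isn't part of a Selectable such as a Button or Toggle:

using UnityEngine;
using UnityEngine.UI;

public class DisableRaycastTargets : MonoBehaviour
{
    void Awake()
    {
        // Check every Graphic (Image, Text, etc.) under this object, including inactive ones.
        foreach (Graphic graphic in GetComponentsInChildren<Graphic>(true))
        {
            // Keep raycasts enabled for anything that actually needs clicks or touches.
            if (graphic.GetComponentInParent<Selectable>() == null)
                graphic.raycastTarget = false;
        }
    }
}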

Optimizing Scripts

In this lesson, we’re going to be going over scripting optimizations to help out with performance.

As you script your games and increase the number and complexity of your scripts, it will potentially get to a point where the scripts are impacting the game's performance, and this can be due to a number of different reasons.

First of all, let’s go over the concept of caching objects.

Caching Objects

Let’s say, you have a game where you want to change the ball’s color to a certain color every single frame.

void Update()
{
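   // Searches the whole scene for "Ball" and looks up the component every single frame - very slow.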
   GameObject.Find("Ball").GetComponent<MeshRenderer>().material.color = Color.blue;
}

In this example, we are finding the Ball object, then we’re getting the MeshRenderer component, and then changing its color to blue.

In fact, this line of code is very inefficient because:

  • GameObject.Find is an expensive function to call, as it searches through the entire scene for an object called “Ball”.
  • .GetComponent is also an expensive function to call, as it will return a specific component attached to the GameObject after looking up all components.
  • Both calls allocate memory and have a high lookup cost, so they should be avoided inside a performance-critical context like Update, or you risk per-frame allocations and poor performance.

We can fix it by caching the ball’s MeshRenderer component. Since we’re calling it every single frame, it is helpful to cache this to a variable.

private MeshRenderer ballMeshRenderer;

void Awake()
{
   //  get the ball's mesh renderer
   ballMeshRenderer = GameObject.Find("Ball").GetComponent<MeshRenderer>();
}

void Update()
{
   ballMeshRenderer.material.color = Color.blue;
}

As you see here, we’ve created a private MeshRenderer variable called “ballMeshRenderer“.  And we’re only calling GameObject.Find and .GetComponent “once” in the entire game (inside of the Awake function).

The following functions are expensive and should not be called inside of the Update function.

  • GameObject.Find
  • GameObject.FindObjectOfType
  • GameObject.FindGameObjectWithTag
  • GameObject.GetComponent
  • Camera.main (internally calls GameObject.FindGameObjectWithTag)
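Camera.main in particular shows up in a lot of tutorial code; it can be cached exactly the same way (a small illustrative snippet, not from the course project):

private Camera mainCamera;

void Awake()
{
    // Look the camera up once and reuse the cached reference afterwards.
    mainCamera = Camera.main;
}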

Reducing Function Calls

Similarly, you can reduce the number of function calls inside of the Update function. Let’s take a look at this example:

void Update()
{
    ExpensiveFunction();
}

void ExpensiveFunction()
{
    //This will be called approx. 60 times a second - inefficient!
}

There will be certain functions that can be called every 0.1 or 0.2 seconds instead of every single frame. For example, you could be constantly checking to see if the player has reached a certain score. If a function isn't directly tied to player inputs, such as looking, shooting, jumping, etc., you can give it a delay as follows:

private float lastCallTime;

void Update()
{
    // call the function every 0.2 seconds.
    if(Time.time - lastCallTime >= 0.2f)
    {
        // record when we last ran it - otherwise it would fire every frame after 0.2 seconds
        lastCallTime = Time.time;
        ExpensiveFunction();
    }
}

void ExpensiveFunction()
{
    //This will be called approx. 5 times a second - great!
}
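If you'd rather not track timestamps yourself, another option (a standard Unity API, though not used in the course material) is MonoBehaviour.InvokeRepeating, which calls a method by name at a fixed interval:

void Start()
{
    // Calls ExpensiveFunction every 0.2 seconds, starting 0.2 seconds after Start.
    InvokeRepeating(nameof(ExpensiveFunction), 0.2f, 0.2f);
}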

Other Optimizations

  • Reduce Debug.Log calls
  • Reduce frequency of Raycasts
  • Do not use Loops in the Update function
  • If you’re instantiating lots of the same object, use Object Pooling

When a Debug.Log() is called, it can affect performance and also clog up your console for other important messages.

If you’re looping through many elements in the Update function, keep in mind how many things you are looping through every frame as it can greatly slow down performance.

If you’re instantiating particle effects or bullets, try using Object Pooling.

Object Pooling

Object pooling is where you pre-instantiate all the objects you’ll need at any specific moment before gameplay — for instance, during a loading screen.

Instead of creating new objects and destroying old ones during gameplay, your game reuses objects from a “pool” by enabling/disabling them.

For more information, refer to the official guide: https://learn.unity.com/tutorial/introduction-to-object-pooling
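To make the idea concrete, here is a minimal pooling sketch. The class and field names are illustrative, and recent Unity versions also include a built-in ObjectPool<T> in the UnityEngine.Pool namespace you could use instead:

using System.Collections.Generic;
using UnityEngine;

public class SimplePool : MonoBehaviour
{
    [SerializeField] private GameObject prefab;   // e.g. a bullet or particle prefab
    [SerializeField] private int poolSize = 20;

    private readonly Queue<GameObject> pool = new Queue<GameObject>();

    void Awake()
    {
        // Pre-instantiate everything up front, e.g. while a loading screen is shown.
        for (int i = 0; i < poolSize; i++)
        {
            GameObject obj = Instantiate(prefab);
            obj.SetActive(false);
            pool.Enqueue(obj);
        }
    }

    // Reuses the oldest object in the pool instead of instantiating a new one.
    public GameObject Get(Vector3 position)
    {
        GameObject obj = pool.Dequeue();
        obj.transform.position = position;
        obj.SetActive(true);
        pool.Enqueue(obj);
        return obj;
    }
}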

 

Transcript

Light Baking

Hey, everyone. In this lesson, we are gonna go over the concept of actually baking your lighting inside of Unity.

Now, when it comes to performance impacts inside of Unity, especially in 3D games, lighting is one of the major factors that contribute. And when it comes down to lighting, you have two choices. You have real-time lighting and you have baked lighting.

Real-time lighting is what is enabled by default, and this basically means that the lighting is calculated every single frame. So, if you have an object moving around it will have a shadow that is tracking it around and every other object is gonna have its shadows, its lighting, the intensity, and all of that calculated every single frame. Now that’s really great if you wanna have waving trees, if you wanna have players that can run around and have their shadows being displayed.

But when it comes to having more complex lighting, such as having lighting bouncing off various different objects and having a better performance, that is where baked lighting comes in. And baked lighting is lighting that is calculated by us, the developers, before people play the game. And it bakes it into the textures known as a lightmap, which is basically a blanket that is put over everything in the scene with all the lighting data and information already pre-applied. So realistically, it’s just rendering a texture when we play the game rather than calculating and displaying the lighting every frame.

And baking your real-time lighting can actually have the biggest performance increase if you are using a 3D scene, and the more complex your 3D scene is, the more performance you should see from baking your lighting.

Now baked lighting isn’t good in all circumstances. Again, if you do want your shadows to be updated as you’re running around, real-time lighting may be better for that. So in that case, you can use something such as a mixed light, for example, so you can bake your buildings, your terrain, you can bake your trees and all that. All the objects that aren’t gonna move in your scene, you can have baked lighting for that and have real-time lighting for stuff such as the players, the enemies and everything that is gonna be moving around and dynamic.

So how do we go about actually setting up baked lighting? Well, if we select our directional light here and this is just a basic scene that I created just to demonstrate this. On our directional light you’ll see that where it says mode it is currently set to real-time. If we select this, we can choose between real-time, which is updating the shadows and lighting every frame, mixed which is a mix of both real-time and baked, and baked being it’s going to generate the lightmap and then apply it to our scene when we play the game.

So the lighting is pre-calculated, okay? So right now in real-time lighting, now, let’s just say we want to go ahead and start baking this lighting. Well, the first thing we need to do is not switch it over to baked. Instead we need to define which objects we want to include in the baking process.

So here I have a dropdown of all the various game objects just stored under a empty game object right here. And what I’m gonna do is I’m gonna select all of the game objects I want to include in the baking, so the ground and all these other cubes here. And inside of the inspector, you’ll see that we have a checkbox here for whether or not this object is going to be static. So let’s enable that right there.

And what static means is basically once this game is playing we cannot move these objects around. They are static, they are built into the scene and pretty much we can’t rotate them, move them, or scale them, okay? Okay, so now that we’ve made these objects static we can actually begin the process of baking our lighting.

So I’m gonna select our directional light right here and I’m gonna change the mode from real time to baked. Nothing has really happened, we don’t really have anything happening right now. So what we need to do is go to the lighting settings window. And to do that we can go down to the bottom right corner of our screen and click on this auto-generate lighting button down here. This is gonna open up the lighting window and in here is where we can set up our baked lighting.

So, first of all, if you don’t have a lighting settings here set, you can click on it, new lighting settings that’s gonna generate a new lighting settings asset for you, and then you can start editing it here. So the first thing we want to do is scroll down to where we have the light mapping settings as this is where we are going to define all the various different settings and options when it comes to light baking.

First of all we have the light mapper, and this is basically how we are going to generate our light maps. Now, by default, this is on progressive CPU, meaning that the light baking is gonna take place over on your processor. And then we also have progressive GPU. Now, if you do have a pretty good graphics card, I recommend selecting progressive GPU as this is gonna greatly decrease the time it takes to bake, as the larger your scene is, the longer it's gonna take to actually bake the lighting. And it can take some time. So I'm gonna personally select progressive GPU.

Now, if you are on a fairly low end computer then you probably should select a progressive CPU. Although if you do know that you have a pretty good GPU, if you’re on a gaming computer for example, then select progressive GPU.

Okay? Now down here, we have direct samples, indirect samples and environment samples. What these are are basically when we are generating our light map, we need to take samples around the scene to figure out the intensity of the lighting, the shadows, pretty much everything in order to determine how the light is going to behave. Because with light it comes out from the light source, such as our directional light here, and it can bounce off objects.

So if we have a light bouncing off the green floor, for example, it can then reflect up onto one of the pillars right here and leave a little bit of green lighting on it when it bounces off. This is what we can do with baked lighting. And with real-time lighting, light bouncing isn’t really possible.

So, direct samples, indirect samples, and environment samples. The higher these numbers are, then the better your actual light map will look. Although when you do increase these values it does take longer to render as it does need to do more samples. So I recommend just keep it on the default for now. If you do create a much larger scene and you do wanna get more in-depth into light baking, then you can of course do much more research on what each of these individual aspects are and tweak them to your liking. As when it comes down to lighting, there is no one solution for all projects as each scene has its own specific lighting settings which looks best for it, okay?

Apart from these, we can go down to where it says bounces. This basically determines how many light bounces each ray is going to do. I'm just gonna keep it on two for now. Light map resolution, this is gonna be how many texels per unit. So every Unity unit, what is the resolution going to be? Okay, think of it as pixels on an image, think of this as how many sort of pixels of lighting we're gonna have. So how high resolution do you want the lighting to be, the shadows to be.

We’re gonna keep it on 40 for now, which is the default. Max light map size, it’s gonna be 1,024. Again, if you do want to increase the resolution and the complexity of your light map, you can increase this number, although we’re gonna keep this at the defaults. And ambient occlusion, which is basically adding in tiny little micro shadows, where two objects intersect. In real life, if you do look at objects that are sitting on top of each other, you can see a slight little shadow where they intersect or where they touch. So we can select that if we do want to have ambient occlusion included.

And apart from that, we can just go ahead and click the generate lighting button down here. I’m gonna dock my lighting window over in the inspector here. And down at the bottom right you can see we have a bar that is gonna begin going through each of the various processes in order to bake the lighting. And in our scene view here, you can also see it is going ahead, it’s doing all the calculations.

And depending on the size and complexity of your scene, it can take some time. Although since this is quite a basic scene, we are pretty much complete right now.

And here we go, this is our baked scene, and it looks very similar to the previous one where we had real-time lighting. Although if we look around, you’ll see that you do have some sort of artifacts and sort of low resolution looks to this. And this is just due to the fact that we have a lower lightmap resolution that can be increased. We can increase the direct samples, the indirect samples to get rid of this.

Although when it does come down to generating light maps you will always have these sort of artifacts compared to real-time lighting. Although on the pro side of things, you do now have much greater performance. And it also allows for light bounces and more complex lighting setups.

So that was a basic overview of Unity’s light mapping in baking systems. Of course, you can go into much more depth into each of these various different options, tweaking them. So I recommend if this is something that interests you and it is something that you are thinking of using for your future projects, do more research into this, as there are many different avenues you can go down when it comes to lighting inside of Unity. So thank you for watching.

Optimizing the UIs

Hey everyone, in this lesson we’re gonna be going over a few different ways you can optimize your user interfaces inside of Unity. And this will help for both mobile, and for pretty much every other single game you create.

Now here in my game that I have just for this course, there isn’t a lot of UI. We have our canvas over here, and inside the canvas, we have a button and an end screen, which just has a button and text on it.

So when it comes to optimization, this isn’t necessary for this type of game, but let’s just say you have a game where you have maybe hundreds of buttons, hundreds of text elements, and it’s maybe a UI-focused game. Well, in that case, having the UI have that many elements, it can slow down performance for a number of different reasons.

First of all, one of the most performance impacting factors when it comes to UI is the Raycaster. So if I select the canvas right here and go over to the Inspector, you’ll see that we have a number of different components on our canvas.

We have our Rect Transform, which is basically a transform component for UI elements. We then have Canvas, which is in charge of rendering the elements to the canvas and just the overall management of the canvas. The Canvas Scaler is in charge of changing the aspect ratio and resolution of the canvas, and the Graphics Raycaster here, which is in charge of detecting inputs from our finger or our mouse when we are clicking on various different UI elements.

And this Game UI script down here is just part of the game, so we can ignore that. Now inside of Unity, let’s just say you have a canvas which has text only. Maybe you have a canvas that is just there to display the player scores, or just to display the players’ HUD elements, and there are no interactions on that canvas. Well, in that case, a Graphics Raycaster would be not necessary. You don’t need to click on anything.

And in that case, you should actually remove this component. And the reason why is because even if you don’t have any buttons on the screen, this Graphics Raycaster is still going to be using up part of your CPU, checking to see if you are clicking on any of the UI elements. So what we can do is just right click and go Remove Component. I’m not going to do that since I do need the Graphics Raycaster for my buttons on the end screen and the next play button right here. But if you do have a canvas that is lacking in buttons, or is just there to display information, then you should remove the Graphics Raycaster.

Now, when it comes to canvases, generally when creating your first game, you only really have one canvas. Although as you do later on expand your games and increase the UI in those games, it’s recommended that you actually have multiple canvases because, think about it, each time a canvas is active or enabled, in this case, it is going to be going through every single child object, checking to see if it needs to be rendered on screen, checking to see if it is active, disabled, and various other things based on how it should show that object.

Now let’s just say you have a game which has the players’ HUD, the menu, the scoreboard, various different elements in a single canvas. Well, every single frame it’s gonna be going through and checking to see each of those if they need to be rendered, it will then render them. Now the problem with that is if you do have various things, such as the scoreboard or pause menu not always active, then it’s quite unnecessary to have that in an activated canvas.

So instead of having like a pause menu and a score board or a setting screen or a menu screen inside a single canvas along with every other thing in your game, you should split them up into different canvases. So have a canvas for the players' HUD that's always gonna be active, have a canvas for the pause menu, have a canvas for the settings menu, try and segment all your various different UI sections into different canvases, so you can disable and enable the canvases rather than the specific UI elements. And that will increase performance in the long run depending on how large your project actually is.

So we’ve gone over canvases. Now, one more thing to do with the Graphics Raycaster is if we select the button right here, for example, and we go down to the image component, you’ll see that there is a Raycast Target Boolean right here which we can enable or disable. And this is going to determine whether or not this object right here, this component is going to be detecting raycasts.

Now, since this is a button, we do need to have this enabled in order to actually detect clicks. But if I open up my end screen, for example, here and we go to one of the text elements here, I’m using TextMeshPro, you’ll see that if we click on the extra settings here, Raycast Target is also enabled.

Now, when working with text elements, that is not necessary. So I can just disable that on all of my text elements right here, disable Raycast Target, as well as on any, pretty much every single UI element that doesn’t need to be interacted with the finger or the mouse click, you can basically disable Raycast target. So it would take that out of the pool of possibilities that the Graphic Raycaster has to go through in order to determine if it’s being clicked.

So these are just a few small things that can help you out with optimizing a UI as when it comes down to mobile game development, any sort of performance increase is really great. And eventually, you may have a game which is very UI heavy, has lots of UI elements, and in that case, optimizing a UI may be necessary in order to get the performance that you wish. So thank you for watching.

Optimizing Scripts

Hey everyone. in this lesson we are gonna go over a few scripting optimizations to help out with performance. As you create your games, and increase the number of scripts and complexity of them, it will potentially get to a point where the scripts are impacting the performance, and this can be due to a number of different reasons.

First of all, let’s go over the concept of caching objects. So, let’s just say, for example, you have a game where you want to change the ball’s color to a certain color every single frame. Now, in this example, inside of the Update function, we are finding the Ball object, then we are getting the MeshRenderer component and changing its color to blue. Now, this isn’t really a proper game. This is just an example to show off how we can improve this line of code.

So, we've got this and in fact this line of code is very inefficient for a number of reasons. First of all, GameObject.Find is a very expensive function to call. This basically searches through the entire scene for an object called Ball, and then .GetComponent is also an expensive function to call as well. And doing this every single frame will definitely impact performance if we do this quite a lot.

So, how do we fix it? Well, we can fix it by caching the ballMeshRenderer component; since we are calling this every single frame, it is helpful to cache this to a variable. So as you can see, I've created a private MeshRenderer variable called ballMeshRenderer, and inside of the Awake function, we are setting this to be equal to GameObject.Find("Ball").GetComponent<MeshRenderer>(). We are only calling GameObject.Find and GetComponent once in the entire game.

Compared to before, we'd be calling it potentially 60 times a second. Okay, so that's gonna greatly increase the performance of this bit of code, and down in the Update function, we're just taking our cached variable and changing its color to blue, which is gonna greatly improve our performance as well doing it that way.

Now, inside of Unity, there are a number of helpful functions such as GameObject.Find, which finds an object in the scene. FindObjectOfType, which finds an object with a specified component. FindGameObjectWithTag, which finds an object with a certain tag, GetComponent, and Camera.main. These function calls are expensive, and they should not really ever be done inside of the Update function. They should really only be called once, if you can help it, to find a single object; then cache that object and just use that variable, refer to that variable whenever you need to access it.

Camera.main especially: behind the scenes, that's basically just calling GameObject.FindGameObjectWithTag. And generally when looking at tutorials or how-to guides you would always see Camera.main being used, and if you're calling that every single frame – especially if you want the camera to track the player – it is very good if you cache that Camera.main into its own variable.

Another thing that can help greatly with our performance is reducing function calls, okay. 'Cause generally inside of the Update function, you'll be wanting to run certain things, such as movement, turning, and all of that, which should be done every frame, but there are often things as well that you don't need to call every single frame. Maybe you want to be constantly checking to see if the player has reached a certain score, or if the player has gone somewhere or reached some sort of objective.

In that case, you don't want to be calling it every single frame. Instead you want to be calling it every 0.2 or 0.1 seconds. If it's not something that is directly tied to the player inputs, such as moving, looking, shooting, jumping, or anything like that, then you can actually give it a delay. So instead of calling it 60 times a second, you can call it five times a second. And again, if it's not an input, this difference is not really going to be noticeable.

Some other optimizations are reducing Debug.Log calls. You know, when you wanna test out certain things – test out functions, if statements – you'll be chucking in Debug.Log calls constantly, and the problem with that is Debug.Log is actually quite expensive as well. So I recommend that once you have finished testing it out, you just remove that from the code because, as well as impacting performance, it also clogs up your console for other important messages.

Reducing the frequency of Raycasts is also an important factor as, you could imagine, Raycasts are quite expensive, as well as using loops in the Update function. You're going to want to be very careful with this one. If you're looping through maybe four or five elements, that's fine. But if you're going up to maybe hundreds of things that you're looping through every single frame, that can greatly impact your performance. So do keep in mind how many things you want to loop through every frame. And if you can help it, try and refer back to reducing these function calls here, by only calling it a certain amount of times a second compared to every single frame, okay.

And finally, if you're instantiating lots of the same object such as bullets or particle effects, use object pooling. Now, object pooling is basically the idea of reusing game objects that are all the same. So again, for bullets or for particle effects or for something where you have lots and lots of the exact same object that you want to instantiate and then destroy, you can use object pooling. Because traditionally instantiating and destroying can be quite expensive if you're doing it consistently at a very high frequency.

So object pooling, basically, at the start of the game, you instantiate a large number of these objects. So you have a big performance impact for the first frame or so, but apart from that, it should be smooth. And then pretty much instead of instantiating and destroying these game objects, you enable or disable them when you need them and when you don't. Now, there are many methods and systems online and on the asset store that handle object pooling, or you can go ahead and create your own system. So I recommend that you do your research and see what the best method for you will be.

So yep, those are just a few different ways that you can help increase performance with your scripting and optimize various different aspects. So I hope you learned something new and I'll see you all in the next lesson.

Interested in continuing? Check out the full Publishing and Optimizing Mobile Games course, which is part of our Mobile Game Development Mini-Degree.

]]>
A Guide to the Unity Device Simulator for Mobile Game Dev https://gamedevacademy.org/unity-device-simulator-webclass/ Sun, 30 Oct 2022 01:00:40 +0000 https://gamedevacademy.org/?p=12987 Read more]]>

You can access the full course here: Publishing and Optimizing Mobile Games

Unity Device Simulator Tutorial

In this lesson, we’re going to be looking at the Device Simulator inside of Unity.

Device Simulator is a package you can install as an alternative to the default Game view. It allows us to get a preview of how an app will look at various different aspect ratios and resolutions.

Installing Device Simulator

To install Device Simulator, we can go to Window > Package Manager:

Unity Window Menu with Package Manager selected

And inside of the Package Manager, we can click on Packages: In Project and select ‘Unity Registry’.

Unity Registry selected from Packages dropdown

Then select Device Simulator and click on Install.

Device Simulator package in Unity Package Manager

If the Device Simulator doesn't appear in the list, it may be hidden as a Preview Package.

To enable viewing preview packages, click on the Gear icon at the top right, and select Advanced Project Settings.

Advanced Project Settings selected from gear icon

Click on Enable Preview Packages, and click on ‘I understand’.

Package Manager agreement popup for enabling preview packages

Configuring Device Simulator

After installing, you will be able to click on the dropdown that says Game, and then select Simulator.

Unity Game dropdown with Game and Simulator options showing

The screen will then switch over to the Simulator View, where we can change the device that we want to look through.

Simulator View for Unity Device Simulator

When you click on the default device – Apple iPad (5th Gen) – you will see there's a large list of various different mobile devices.

Mobile device options for Unity Simulator view

This is a great way of checking how our UIs will look on a mobile device, without having to actually build and test it on the device. Note that some UI elements may be cut off by bevels or notches.

Simulated device in Unity with phone notch circled

You can click the Safe Area to toggle the area that is recommended for your UI placement.

Safe Area option circled in Unity Device Simulator view
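At runtime you can also read the same information through Unity's Screen.safeArea API. As a rough sketch (the component name is illustrative, and it assumes a full-screen panel whose RectTransform stretches with zero offsets), a script like this can move the panel's anchors to match the safe area so its contents stay clear of notches and bevels:

using UnityEngine;

public class SafeAreaFitter : MonoBehaviour
{
    void Awake()
    {
        RectTransform rect = GetComponent<RectTransform>();
        Rect safeArea = Screen.safeArea;

        // Convert the safe area from pixel coordinates into normalized anchors.
        Vector2 anchorMin = safeArea.position;
        Vector2 anchorMax = safeArea.position + safeArea.size;
        anchorMin.x /= Screen.width;
        anchorMin.y /= Screen.height;
        anchorMax.x /= Screen.width;
        anchorMax.y /= Screen.height;

        rect.anchorMin = anchorMin;
        rect.anchorMax = anchorMax;
    }
}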

To test the app, just press the Play button and play it in the view, just like our normal game view. You can also zoom in/out using the Scale bar.

Scale option in Unity Device Simulator view

You can also rotate the screen by using these buttons.

Rotate option in Unity Device Simulator

Fit to Screen resets the scale so the simulated phone is displayed as large as it can be on the screen.

Fit to Screen button in Unity Device Simulator

We can also change the Allowed Orientations, Resolution, and Auto-rotation.

Screen Settings for Device Simulator in Unity

 

Transcript

When developing a new game in Unity, generally a lot of the time you want to see how it looks on a specific device. If you have an iPhone, an Android, or any sort of mobile device that may have bezels, curved edges, or a weird aspect ratio, you definitely wanna test it out to see if the UI fits.

To see if your game works, or just to see generally how it is. Now inside Unity, by default in the game window, we actually do have the ability to modify both the aspect ratio and the resolution. Right now I'm rendering this game here that I'm working on at 1920 by 1080, so 1080p resolution. And we can of course change it to a five by four aspect ratio, 16 by nine, and various other things. You can click on the little plus icon down here to add in your own resolution or aspect ratio.

But when it comes down to mobile devices they are very different from computer screens. With mobile devices, they have their very own bezel, they have notches for the cameras and really it can be hard to develop around that. So what we can do inside of Unity here is use something known as the Device Simulator. And the Device Simulator allows us to basically simulate our game view if it was to be played on a specific mobile device.

So, here is how we can get the device simulator. First, we need to go up to the Window menu here and go Package Manager. Now, the device simulator is a package inside Unity which we can download. And as of the time of recording this, it is a preview package, which means that we need to enable the ability to see and download preview packages.

So in the Package Manager, click at the top here and make sure you are at the Unity Registry, which basically just displays a list of all the various different packages from Unity which you can install, and click on this gear icon here at the top right, go Advanced Project Settings, and in here make sure that you have Enable Preview Packages ticked. Depending on the time that you are watching this course, the device simulator could be out of preview and be a full official build, but right now it is still in preview.

So we’re gonna scroll down until we find the device simulator right here and we just wanna click on the install and this is going to begin installing this package into our Unity project. Okay, and when that is complete, we can close out of the package manager right here.

And if we go over to our game window, you'll see that something has changed. At the top left, we now have a new drop down, which allows us to select Game or Simulator. Game is just the default game view that we have had in Unity so far and Simulator is gonna switch us out to the device simulator. So let's click that and see what happens, and straightaway you'll see there is a lot of information on the left-hand side and the center panel changed to display our game on what seems to be an iPad.

So, pretty much here on the left-hand side we have the specifications of the device that we are testing on. We can see the resolution, we can see some of the hardware details, and we can see the specific resolution here that we can also then modify, as well as enabling and disabling Auto Rotation and the Allowed Orientations. So you can really fine-tune how you want your game to be presented on these specific devices. Now on the right-hand side, we can change the scale by zooming in and out if you do want to have a bigger view, and you can use the scroll bars on the sides to actually navigate around.

We can click Fit to Screen to basically fit the device to our current window here, we can rotate left and right to get a different orientation, and enabling Safe Area shows us the bounds where it is safe to place UI.

Now, since this iPad here has a pretty much plain rectangular aspect ratio with no bezel and no notches, the safe area is pretty much the entire contents of the screen. But we can change the device. To do that, we can go up to the top left here, click on where it says Apple iPad, and we’ll see there is a large list of various different mobile devices.

Now let’s pick a device that does have some bezel and notches, and for that we are going to pick the iPhone XS right here. And as you can see, it has the notch at the top here set up for us, as well as the bezeled edges. Now, if we click Safe Area, you’ll see that the safe area has now changed. The top of the area is now below the notch, and at the bottom it’s basically making it so that this rectangle is not clipping into any notches or corner bezel, okay. And when developing your UI, try and fit it inside of these bounds right here.

So, what we can then do is rotate it sideways like so, and just like inside Unity, we can press play and test out our game. As you can see here, I’ve got a little physics puzzle game set up that I’ve created. So we can play the game just as we were before in the normal Game window, but this time we are playing it on the specific device: we can see the correct resolution and aspect ratio, while keeping in mind the bezel, notches, and any other occluding elements that a specific mobile device may have.

So yeah, that’s pretty much the Device Simulator. It’s very handy if you’re wanting to test on various different mobile devices, as we are here, and of course we can then change it to something such as, let’s just say, the HTC 10, there we go. We’re testing on this new device now; we can change it on the fly. Let’s go to the Galaxy S6, there we go. We can then change it to something such as the Sony Xperia Z, and there you go. And of course, as new devices are released, they will be added here, so do make sure that if you are using the Device Simulator, every now and then you check to see if there’s an update available and install that.

So that is the device simulator inside of Unity. Thank you for watching.

Interested in continuing? Check out the full Publishing and Optimizing Mobile Games course, which is part of our Mobile Game Development Mini-Degree.

]]>
Create a Loading Screen in Phaser 3 – Web Games Tutorial https://gamedevacademy.org/creating-a-preloading-screen-in-phaser-3/ Sun, 17 Apr 2022 10:13:19 +0000 https://gamedevacademy.org/?p=6656 Read more]]> Have you ever played a game where you were met with a blank, black screen while things loaded?

Sometimes, there’s no getting around loading screens for your game, especially as your game gets larger and larger. Loading screens exist to provide players with important feedback – not only to indicate what the game is doing, but also so players don’t think the game has crashed.

In this tutorial, we’re going to cover how to make a loading screen for Phaser 3 games – an important skill considering that how quickly these games load is at the mercy of your player’s internet connection.

If you’re ready to provide better UI-based user experiences, let’s start learning.

Intro and Project Files

One of the things that almost all games have in common is a loading screen, which can be used to inform the player how long they must wait to play the game. Even though no one likes waiting to play a game, a loading screen is a valuable tool. Instead of having players stare at a blank screen, the loading screen can be used to inform the player how much longer they have to wait, or at minimum let the player know the game is doing something.

In Phaser, before you can use any assets, you must first load them in the preload function of the scene. If you load a large number of assets, it can take some time for all of them to be loaded into the game, and this is where a preloader really makes a difference.

The goal of this tutorial is to teach you the basics of a preloading screen by building a progress bar that dynamically updates as the game loads its assets. You can see what we will be creating below:

You can download all of the files associated with the source code here:


Project Setup

In order to run your Phaser game locally, you will need a web server. If you don’t already have this set up, you can read how to do that here: Getting Started With Phaser. You will also need an IDE or text editor for writing your code. If you don’t already have one, I would recommend the Brackets editor since it is easy to use, and it has a feature called Live Preview that will allow you to run your Phaser game without installing a web server.

Once you have these set up, we will set up the basic code for our game. Open your IDE and create a new file called index.html. We are going to create a basic HTML page, add a reference to Phaser, and create our Phaser game object. In index.html, add the following code:

<!DOCTYPE html>
<html>

<head>
    <meta charset="utf-8">
</head>

<body>
    <script src="//cdn.jsdelivr.net/npm/phaser@3.0.0/dist/phaser.min.js"></script>
    <script type="text/javascript">
        // The game config that is used by Phaser
        var config = {
            type: Phaser.AUTO,
            parent: 'phaser-example',
            width: 800,
            height: 600,
            scene: {
                preload: preload,
                create: create
            }
        };

        // Create a new Phaser Game object
        var game = new Phaser.Game(config);

        function preload() {
        }

        function create() {
        }

    </script>
</body>

</html>

Let’s review the code we just added:

  • We created the configuration that will be used for our Phaser game.
  • In the config object, in the type field, we set the renderer type for our game. The two main types are Canvas and WebGL. WebGL is a faster renderer and has better performance, but not all browsers support it. By choosing AUTO for the type, Phaser will use WebGL if it is available; otherwise, it will use Canvas (see the short sketch after this list if you ever want to force a specific renderer).
  • In the config object, the parent field tells Phaser which existing DOM element (referenced by id) the game canvas should be added to. If no element with that id exists, Phaser will simply append the <canvas> element it creates to the document body.
  • In the config object, we specify the width and height of the viewable area of our game.
  • In the config object, we embedded a scene object which will use the preload  and create functions we defined.
  • Lastly, we passed our config object to Phaser when we created the new game instance.
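
For instance, here is a minimal sketch (not part of the tutorial’s code) showing how the renderer could be forced explicitly instead of letting AUTO decide; everything else matches the config above:

// A minimal sketch: force a specific renderer instead of using Phaser.AUTO.
// Phaser.CANVAS always uses the Canvas renderer, while Phaser.WEBGL always uses WebGL
// (which will fail on browsers without WebGL support).
var canvasOnlyConfig = {
    type: Phaser.CANVAS,
    parent: 'phaser-example',
    width: 800,
    height: 600,
    scene: {
        preload: preload,
        create: create
    }
};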

If you try running your game, you should see a black screen, and if you open the console in the developer tools, you should see a log with the version of Phaser your game is running.
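
If you prefer to confirm the version from your own code rather than relying on the automatic console banner, an entirely optional one-liner (for example inside the create function) is:

// Optional: log the version string exposed by the Phaser library.
console.log('Running Phaser ' + Phaser.VERSION);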

Loading Our Assets

Now that our project is set up, we can get started. Before we can create our preloader, we will need to load some assets into our game. To keep things simple, we are going to use one image and load it many times under different keys to simulate loading a large number of assets. The asset for the game can be downloaded here.

You will need to place the image in the same folder as index.html.

To load our image and display it in our game, you will need to update the preload  and create  functions in index.html:

function preload() {
    // Load the logo once under the key 'logo' (this is the key used in create below).
    this.load.image('logo', 'zenvalogo.png');
    // Load the same image 500 more times under unique keys to simulate a large number of assets.
    for (var i = 0; i < 500; i++) {
        this.load.image('logo'+i, 'zenvalogo.png');
    }
}

function create() {
    // Display the logo in the center of the 800x600 game area.
    var logo = this.add.image(400, 300, 'logo');
}


If you reload your game in the browser, you should see the logo appear in your game.

Creating the Preloader

With our assets loaded, it is time to create our preloader. In the preload  function, add the following code:

this.load.on('progress', function (value) {
    console.log(value);
});
            
this.load.on('fileprogress', function (file) {
    console.log(file.src);
});

this.load.on('complete', function () {
    console.log('complete');
});


This code creates a few event listeners that will listen for the progress, fileprogress, and complete events that are emitted from Phaser’s LoaderPlugin. The progress  and fileprogress events will be emitted every time a file has been loaded, and the complete event will only be emitted once all the files are done loading.

When the progress event is emitted, you will also receive a value between 0 and 1, which can be used to track the overall progress of the loading process. When the fileprogress event is emitted, you will also receive an object containing information on the file that was just loaded. Both of these can be used to create a custom preloader with the information that is provided.
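
As a quick illustrative sketch (not part of the tutorial code that follows), the two events could even be combined into a single console message; the lastFileKey variable below is just a placeholder name:

// Illustrative only: report the overall percentage together with the most recent file key.
var lastFileKey = '';

this.load.on('fileprogress', function (file) {
    lastFileKey = file.key;
});

this.load.on('progress', function (value) {
    console.log('Loaded ' + Math.round(value * 100) + '% (last file: ' + lastFileKey + ')');
});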

Here is an example of the data that is sent:

For the preloader, we will use Phaser’s Graphics game object (Phaser.GameObjects.Graphics) to create the progress bar. In the preload function, add the following code at the top of the function, above the code you already added:

var progressBar = this.add.graphics();
var progressBox = this.add.graphics();
progressBox.fillStyle(0x222222, 0.8);
progressBox.fillRect(240, 270, 320, 50);

Then, update the progress  event listener in the preload  function with the following code:

this.load.on('progress', function (value) {
    console.log(value);
    progressBar.clear();
    progressBar.fillStyle(0xffffff, 1);
    progressBar.fillRect(250, 280, 300 * value, 30);
});


In the code above, we are creating two separate rectangles, progressBar and progressBox. The progressBox rectangle is going to be used as a border/container around the progressBar, and the progressBar will be used to track the overall percentage of the assets being loaded. We are doing this by calculating the width of the rectangle to be based on the progress value we are receiving. So, every time we receive the progress event, we should see the rectangle grow.

If you reload the game, you should see a nice progress bar that fills up as the assets are being loaded. However, there is one problem with it. When all of the assets are loaded, the preloader stays on the screen, and the logo image is drawn over the top of it. To fix this, we can update the complete event listener to destroy our preloader once all assets are loaded.

In the complete event listener, add the following code below the console.log():

progressBar.destroy();
progressBox.destroy();


Now, if you reload your game, the progress bar should disappear before the logo image is displayed on the screen.

Adding Some Text

We have the main part of our preloader done, but we can easily enhance the preloader by adding some additional text to it. First, we will add a simple ‘Loading…’ message to the preloader. In the preload function, add the following code below the progressBox lines:

var width = this.cameras.main.width;
var height = this.cameras.main.height;
var loadingText = this.make.text({
    x: width / 2,
    y: height / 2 - 50,
    text: 'Loading...',
    style: {
        font: '20px monospace',
        fill: '#ffffff'
    }
});
loadingText.setOrigin(0.5, 0.5);


Then, in the complete event listener, add the following code:

loadingText.destroy();


Let’s review what we just added:

  • We created two new variables, width and height. These variables are getting the width and height of the current viewable area of our game.
  • We created a new Phaser Text GameObject called loadingText. This game object is using the width and height variables we just created, and we set the style and default text of the game object.
  • We set the origin of the game object to be (0.5, 0.5), which will help center our game object.
  • Lastly, we updated the complete event listener to destroy our loading text once all the game’s assets were loaded.

If you reload your game, your screen should look like this:

Next, we are going to add some additional text that will display the percent of the loading bar. To do this, we just need to create another text game object, and update the text to use the value that is being sent to the progress event listener. In the preload function add the following code below the loadingText code:

var percentText = this.make.text({
    x: width / 2,
    y: height / 2 - 5,
    text: '0%',
    style: {
        font: '18px monospace',
        fill: '#ffffff'
    }
});
percentText.setOrigin(0.5, 0.5);


Now, in the progress event listener, add the following code above the progressBar code:

percentText.setText(parseInt(value * 100) + '%');


Lastly, in the complete event listener, add the following code:

percentText.destroy();


Here is a quick summary of what we just did:

  • Created a new Phaser Text GameObject called percentText.
  • We set the origin to (0.5, 0.5) to help center the object.
  • In the progress event listener, we are updating the text of the object, every time a file is loaded. We are multiplying the value by 100 since the value that is emitted is between 0 and 1.
  • Lastly, we updated the complete event listener to destroy the object.

If you reload your game, you should see the progress bar percentage update as the progress bar fills up.

With the progress bar now showing the percentage, we will add some text to display which asset has just been loaded. Once again, we will create another text game object, and we will update its text with the file data that is sent to the fileprogress event listener. In the preload function, add the following code below the percentText code:

var assetText = this.make.text({
    x: width / 2,
    y: height / 2 + 50,
    text: '',
    style: {
        font: '18px monospace',
        fill: '#ffffff'
    }
});
assetText.setOrigin(0.5, 0.5);


Then, in the fileprogress event listener, add the following code:

assetText.setText('Loading asset: ' + file.key);


Lastly, in the complete event listener, add the following code:

assetText.destroy();


Now, if you reload your game, you should see the asset text being updated as each asset is loaded.

For this example, we ended up outputting the asset key instead of the file name since we are only loading the one image. If you want to output the file name, you can update the following line:

assetText.setText('Loading asset: ' + file.key);


to be:

assetText.setText('Loading asset: ' + file.src);


Here is the completed index.html file:

<!DOCTYPE html>
<html>

<head>
    <meta charset="utf-8">
</head>

<body>
    <script src="//cdn.jsdelivr.net/npm/phaser@3.0.0/dist/phaser.min.js"></script>
    <script type="text/javascript">
        var config = {
            type: Phaser.AUTO,
            parent: 'phaser-example',
            width: 800,
            height: 600,
            scene: {
                preload: preload,
                create: create
            }
        };

        var game = new Phaser.Game(config);

        function preload() {
            var progressBar = this.add.graphics();
            var progressBox = this.add.graphics();
            progressBox.fillStyle(0x222222, 0.8);
            progressBox.fillRect(240, 270, 320, 50);
            
            var width = this.cameras.main.width;
            var height = this.cameras.main.height;
            var loadingText = this.make.text({
                x: width / 2,
                y: height / 2 - 50,
                text: 'Loading...',
                style: {
                    font: '20px monospace',
                    fill: '#ffffff'
                }
            });
            loadingText.setOrigin(0.5, 0.5);
            
            var percentText = this.make.text({
                x: width / 2,
                y: height / 2 - 5,
                text: '0%',
                style: {
                    font: '18px monospace',
                    fill: '#ffffff'
                }
            });
            percentText.setOrigin(0.5, 0.5);
            
            var assetText = this.make.text({
                x: width / 2,
                y: height / 2 + 50,
                text: '',
                style: {
                    font: '18px monospace',
                    fill: '#ffffff'
                }
            });

            assetText.setOrigin(0.5, 0.5);
            
            this.load.on('progress', function (value) {
                percentText.setText(parseInt(value * 100) + '%');
                progressBar.clear();
                progressBar.fillStyle(0xffffff, 1);
                progressBar.fillRect(250, 280, 300 * value, 30);
            });
            
            this.load.on('fileprogress', function (file) {
                assetText.setText('Loading asset: ' + file.key);
            });

            this.load.on('complete', function () {
                progressBar.destroy();
                progressBox.destroy();
                loadingText.destroy();
                percentText.destroy();
                assetText.destroy();
            });
            
            this.load.image('logo', 'zenvalogo.png');
            for (var i = 0; i < 500; i++) {
                this.load.image('logo'+i, 'zenvalogo.png');
            }
        }

        function create() {
            var logo = this.add.image(400, 300, 'logo');
        }

    </script>
</body>

</html>


You can download the completed example here.

Conclusion

With the asset text now being displayed, this brings the tutorial to a close. As you can see, adding a preloader to your game is a great solution when you will be loading a large number of assets and you want to keep players informed of the game’s current state. With Phaser, it is really easy to add a simple preloader, and you can easily extend these examples to create a more complex one.
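
For example, one common way to extend this (a minimal sketch, assuming a class-based scene setup; the scene keys 'Preloader' and 'Game' are placeholder names, and the main game scene itself is not shown) is to move all of the loading code into a dedicated preloader scene that starts the main scene once everything has finished loading:

// A minimal sketch of a dedicated preloader scene (scene keys are placeholders).
class PreloaderScene extends Phaser.Scene {
    constructor() {
        super('Preloader');
    }

    preload() {
        var progressBar = this.add.graphics();

        // Grow the bar as the overall progress value (0 to 1) increases.
        this.load.on('progress', function (value) {
            progressBar.clear();
            progressBar.fillStyle(0xffffff, 1);
            progressBar.fillRect(250, 280, 300 * value, 30);
        });

        // Load the real game assets here.
        this.load.image('logo', 'zenvalogo.png');
    }

    create() {
        // create runs only after preload has finished, so loading is complete here.
        this.scene.start('Game');
    }
}

The game config would then list both scenes, for example scene: [PreloaderScene, GameScene], and Phaser starts the first scene in the array automatically (GameScene here stands in for your own main scene class).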

I hope you enjoyed this tutorial and found it helpful. If you have any questions, or suggestions on what we should cover next, let us know in the comments below.

]]>