
Phaser 4 Development Process

Published on 3rd September 2019

A number of people have been asking me on Discord about Phaser 4. So, to avoid any further confusion, let's set some facts straight. As of the date of this post:

1. Development of Phaser 4 has not yet started. I'm still in R&D stages.

2. Development of Phaser 3 will continue for at least the rest of 2019. It has not been stopped.

3. There is no git repo you can track yet, because, see point 1.

4. When ready, the git repo will be made public. However, only Phaser Patrons will get the examples and demos to begin with.

5. Yes, Phaser 4 will be written in TypeScript.

6. No, you don't have to use TypeScript to code your games.

7. You will need to use TypeScript if you wish to contribute Phaser 4 PRs though.

8. No decisions have been made about how the API will change, because, see point 1. However, keep reading, as that is what this whole post is about.

I noticed that Microsoft released the big new 3.6 version of TypeScript this week. There are a lot of cool new features in this version. The blog post is a great read if you've never encountered TypeScript before, and want to see why I'm moving to using it.

I guess changes to the underlying language are something I'm going to have to get used to during development. At least, by starting with this new version, I'll get off on the right foot. It will be important to track TypeScript changes as I go, in order to stay on top of whatever they introduce further down the line.

The Problem with new Versions

Whenever there is a new Phaser release, what's the first thing you check for in the Change Log? If you're like most people, you look for the New Features before anything else. In fact, since the initial release of Phaser, I have always given the new features priority, ahead of everything else. It's human nature to be interested in the new and shiny and the tech world amplifies this ten-fold. Everything around us reinforces the message that more is better. Faster CPUs running more operations, new GPUs that can push millions more polys, frameworks with hundreds of new features, software with 'cool new tools'. The ever constant cycle of bigger and better is unrelenting - and, honestly, it feels like it's expected by consumers and end-users alike.

I mean, when was the last time you saw a new version of something that proudly proclaimed lots of features had been removed? It just doesn't happen. It's not seen as a benefit. And as a result, software gets bigger and more complex. UIs have to evolve to help users explore an ever-increasing list of features that most will never touch.

History is littered with apps that started out lean-and-mean, then grew and grew and eventually died a horrible bloated death, killed off when some snappy upstart came along that everyone flocked to because "It's so fast! It only does what I need!". I'm sure you can think of many pieces of popular software that have gone this way. I, personally, lament the death of minimalist apps like ICQ, MSN Messenger and Winamp. Each of them grew in insane ways and then died.

Compare early versions of Photoshop or Word to their modern counterparts, for example, and what they could do back then is just a fraction of what's possible today. Yet how many people are actually using these new features? You'd hope companies like Adobe and MS are able to test these sorts of things. Then again, you rarely see features deprecated, so the thinking must typically be "keep it - someone might be using it". After all, more features look good on the 'feature comparison' product pages.

And open-source frameworks like Phaser are really bad offenders. This is because I've zero metrics to go on. I've simply no way of knowing just what features in Phaser are being used. I honestly have absolutely no idea if, for example, the only person who has ever used Phaser.Display.Align.To.TopCenter is me!

With no insight into how your product is used, you have to proceed assuming that, yes, someone, somewhere, does actually need that feature, so you leave it in. Plus, you've spent a good amount of time coding it, making examples and documenting it. Why on earth would you throw that away and remove it? It has to be useful to someone, surely?

The net result, though, is similar to what happens in the wider software world: ever-increasing size and scale. The classic feature creep. Except Phaser can't hide behind a UI refresh. As more moving parts are added, you inherit technical debt and exponentially increase the complexity of everything as you go. Modern tooling affords practices such as tree-shaking - the removal of parts of code that are not actually being used - yet sadly, this doesn't solve the underlying cause. All it does is hide it from your production builds.

As an API's size increases, so does the cognitive workload required of the developers using it. The upshot is that it becomes harder to learn how to do something, especially when the answer is "well, you could do it like this, or like this, or maybe like this..." As new features get added, the snowball effect continues and answers become less succinct.

In short, the bigger the API surface area, the longer it takes to find out what to do. Stumped by all the choice, you have to rely on external resources, such as tutorials, to figure things out - or even just to get started. More experienced devs will jump into the Phaser source code and follow it through, and I've done my best to keep it as accessible as possible in that regard. Still, it doesn't alter the fact that you shouldn't need to be in there in the first place. The very fact you're looking at the code to figure out what it does screams to me that it has failed at some other point higher up the chain. Perhaps in the documentation, but more than likely in the root understanding of the approach.

It was a Dark time

As I begin to plan Phaser 4, I've been thinking about this issue a lot, relating it to my own experiences, both with Phaser and before. Back in 2001 I joined a company called The Game Creators. They had released just one program, DarkBASIC, on the PC, and I had fallen in love with it.

DarkBASIC essentially took DirectX, an API that was itself going through massive changes at the time, and wrapped it in an extremely easy-to-use subset of the BASIC language. It was designed first and foremost for making games and had lots of functions dedicated specifically to that. Here's a short snippet of typical DarkBASIC code:
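    rem Spin a cube forever
    sync on
    make object cube 1, 100

    do
        turn object left 1, 0.5
        sync
    loop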

I'm sure that, without having ever used it, you have a pretty good idea of what the above does. And the IDE you had to use ran at a glorious 640x480, too :)

The creators of DarkBASIC had good pedigree, being formed from the same team that helped launch AMOS on the Amiga. They rightly saw a huge gap in the market and swiftly filled it. It's important to understand the technical landscape at the time. GPUs, as we know them today, were in their absolute infancy back then. NVIDIA were just one of a number of GPU manufacturers, rather than the dominant juggernaut they are today, and had recently brought their initial GeForce 256 card to market. It was a great GPU and I fondly remember the excitement of installing it and firing up those early NVIDIA tech demos.

Lots of PCs didn't even have dedicated GPUs, so to get one that offered cutting-edge features like hardware lighting and shading was a real breakthrough! Since its introduction with Windows 95, Microsoft DirectX had been expanding to take advantage of the new features these GPUs offered, and DirectX 7 had just launched. DarkBASIC always claimed to be an easy way to use DirectX without having to resort to C++, and it made good on those claims.

You can draw similar parallels in the evolution of web gaming. When the 'HTML5' hype train began all we really had was the <canvas> tag. GPU acceleration in the browser was confined to a few CSS transforms, and WebGL was hidden under config flags. Due to the technical landscape at the time, what you could actually do with the technology was extremely limited, the same as with early DirectX.

As with DarkBASIC, early HTML5 game frameworks such as Impact.js were, by their very nature, constrained. They had little choice in the matter, simply working with what was available, and as a result offered vastly reduced sets of functions compared to what's on offer today. The flip-side of this is that they were really easy to learn.

It didn't take long for DarkBASIC to gain competitors, the biggest being Blitz Basic. It too originated from the Amiga, it too wrapped DirectX, and also offered a similar set of BASIC functions with a strong lean towards game development. Both gained a loyal following. Developers, like myself, would push the languages as hard as they could, creating effects and games and having a blast while doing it. Yet the landscape was changing rapidly. GPUs were becoming more common and more powerful. DirectX itself also evolved, dropping in lots of new features at a constant rate.

Being commercial software it made sense that both DarkBASIC and Blitz Basic would look to release new versions. Of course it meant new revenue for the respective companies, but more importantly, it was a way to take advantage of all the cool new features that DirectX was adding on a regular basis, as well as new tricks the developers had learned since the first versions. New versions of each were produced and released, with DarkBASIC evolving into DarkBASIC Professional and Blitz into Blitz 3D. A few years later, Blitz would rebrand again into Blitz Max.

I talked to Lee Bamber, the developer of DarkBASIC, about this time in the product's life, and he had this to say:

"The first compiler was just a hobby project and my first taste of converting a script into some kind of byte code which could be run through an interpreter to make stuff happen, and I learned an awful lot. Very newbie and very indie, but I enjoyed it thoroughly, almost making it up as I went along. Of course that also meant it was not industry-proven, slower than it could have been and had a touch of Brit eccentricity about it."

"DBPro was an attempt to do better, and in the early days, it went through a few versions. I remember at one point recruiting my uncle to write the compiler, which was the starting point for me to make a compiler that could actually produce machine code from the compiler process. Malc, my uncle, and I both grew up with assembler being the primary method of coding on 16-bit and above, so we knew it was right to have the finished app run pure machine code."

"It was a long time ago, but by then we already had DB Classic out there, and an expectation that DBPro should be 'better' so I guess that was the end of the age of innocence and the start of the age of 'doing what people expected'. Before that point, everything I coded was done because I wanted to do it for my own pleasure and some small entrepreneurial gains. From DBPro onwards, it became a much bigger creature and required listening to many more people about what it should be. I don't think I ever had nightmares about the journey, in fact, I think it taught me a lot about coding and the business of making and selling software, which naturally contained as many mistakes as wins :)"

I sympathise 100% with what Lee says here, and can draw parallels with it. Phaser v1 was just a fun hobby project that exploded beyond all preconceptions I had, and every version since then has had lofty expectations attached to it.

I remember the release well, as I was working for them throughout the development and publication of DarkBASIC Professional (DB Pro), managing their community and fielding customer support. I was in the firing line of all the complaints and problems. And, sadly, it felt like there were quite a lot.

Much like Phaser 3, DB Pro had a difficult gestation period. It was the vision of just a couple of developers, who were themselves trying to get to grips with the latest releases of DirectX and GPU advancements, while still trying to keep things as closely tied to the previous version as possible. After all, what had worked once would surely work again, right? While the software did indeed offer a lot of powerful new features, it did so somewhat at the expense of what had made the earlier versions so appealing. It wasn't the only one. Blitz Max suffered in an equal manner as it transitioned from Blitz 3D. Developers had to learn a whole new syntax for even the most common functions.

Suddenly, the surface area of the APIs was dramatically larger, or just totally different. The original, well-honed and limited sets of commands blossomed into mighty multi-tentacled beasts. Due to the small teams working on each, combined with the closed nature of the software, lots of bugs slipped into production. This was in the days when software arrived in a box containing a CD. Patches could be, and were, released onto the web, but Internet access was more limited than today, with ADSL only just starting to become available in major cities. You couldn't rely on people either knowing how to upgrade, or doing so.

Both Blitz and DB Pro had inherited decisions made years prior to their release, in order to maintain a semblance of continuity. In part, this was the right thing to do. I'm not suggesting they should have dumped their entire API and started from scratch, as lots of the API came from a solid base to begin with. But both of them focused too much on the 'shiny new' and got carried away by feature creep. They didn't ask if they should, just if they could. And I don't blame them. As developers it's exciting to break new ground, especially if you get to share it with a user-base, and the desire to add new features is so ingrained in our developer psyche, too, that it's impossible to ignore.

However, the effects of this could be felt in the respective communities. In lots of cases, developers found the transition difficult, or just didn't make it. The simplicity of the early days had been lost somewhat, and things just took more figuring out now. Not all of the new features worked properly, or they were GPU-specific, or relied on 3rd party tools, which led to further confusion. Some things that had been possible in earlier builds were no longer present. The communities became fragmented as a result and not everyone made the jump. I see similar parallels between Phaser 2 and Phaser 3.

Neither DarkBASIC Pro nor Blitz Max would ever see new releases again from the original developers.

One thing DarkBASIC Pro did was to tie itself to the PC too tightly. In a world that was becoming cross-platform, along with the rise of powerful mobile devices, a new approach was required. The Game Creators built AGK - the App Game Kit - which was truly cross-platform, with multiple publishing targets. This was the real successor to DarkBASIC, certainly in spirit, if not in name, but was also a complete re-write. They literally started again from scratch and treated it as a blank page. It was a successful move for them and AGK still goes strong today, even in the face of behemoths like Unity. If you look at the AGK site you'll notice they have just released AGK Studio and renamed the previous version AGK Classic. The constant cycle of reinvention continues :)

Blitz Max didn't fare so well. Attempts were made at a successor language, Monkey, which then evolved into Monkey2, but ultimately nothing much came of this. The compilers were released as open-source and there's a small but fervent community centered around them. Nevertheless, it pales into insignificance compared to its heyday.

setReminiscing(false)

Circle back to 2019 and, yet again, a crossroads stands before me. One fork leads to a Phaser 4 release based entirely on the existing API. I could, quite safely, plow through every class and function, converting them all one by one to TypeScript, slap a new version number on there and call it a day.

There are many good reasons for taking this approach. The first is that lots of developers have spent a large amount of time learning the V3 API. They've got projects that rely on it, and I know of several companies that have products, or indeed whole businesses, centered around apps built in V3. For them, a significant API change equates to real, tangible costs. Also, I've worked really hard on V3, trying to improve it at every step. With literally thousands of updates this year alone, it's pretty much my life in code form.

Yet ... I can't help feeling that in order to move forwards, we need to take a step back.

A lot has changed since work began on Phaser 3 way back in 2015. You may have only been using it recently; I, however, have been thinking about and building V3 for coming on 5 years now. That's a long time in the web world. And a lot longer than the time I spent on Phaser 2.

I'm very pleased with what I'm able to create in Phaser 3. It does get a lot of things right. Yet, while it was a complete rewrite at the time, it still carried a bunch of structural baggage with it. It tried too hard to hold on to what made Phaser 2 so popular, while offering more flexibility in its approach. The end result landed slap-bang in the middle, somewhere in no-man's land.

During all this time, JavaScript itself has changed massively too. The ES6 way of coding is now 'normal'. You no longer require SystemJS to import modules. Most ES6 features run directly, without transpiling, in today's browsers. And TypeScript has utterly exploded in popularity.

The platforms on which people are playing web games have also changed. Game portals were the dominant space when Phaser first started, but this has shifted dramatically. While they're still incredibly popular, you cannot ignore the new spaces that have opened up. Facebook games, Instant Messenger games, WeChat Mini Games, Twitch stream overlays and a massive variety of interactive and playable ads. New super low-powered platforms such as KaiOS are creating brand new markets, too.

All these things combine to create a vast sea-change in both how we build and where we can deploy. When the very browser has evolved dramatically, when the language you code in is almost unrecognizable from before, and when the publishing platforms have morphed and expanded beyond recognition, you have to ask whether sticking to the 'old' ways is the right move.

I genuinely believe there is a beauty in simplicity and it's this line of thinking that leads down the other path at the Phaser 4 crossroads. The one that isn't just a language upgrade, but that uses the opportunity to pare it all back to the absolute core.

Not all trees can be shaken

If you take a look at the global Phaser systems, only a couple are actually required by every game - and even those could easily have lots of their features stripped out and moved to optional modules. I would also argue that every single Game Object is optional, as well. There are games out there that have no need for textures or Sprites, but that do require geometry and Graphics. The opposite is equally true. And the less said about 3 physics systems, the better :)

The moment you make an assumption, you lock people into carrying that baggage with them into their games. Minimalist games may well not need to load any external files. Yet they'd still include the weight of the Loader system with them. A KaiOS game very likely wouldn't need to support WebGL or any kind of canvas scaling, yet disabling the WebGL Renderer and ScaleManager requires a custom build.

You may be thinking that by using TypeScript with webpack or Rollup, we can benefit from dead-code elimination, so does this even matter? The answer is that, because of the way the Phaser API is structured, DCE simply won't work fully. For example, the Phaser.Game instance has its own InputManager. It's imported and assigned to the input property within the Game class. The Input Manager then creates handlers for Touch, Mouse and Keyboard input.

It doesn't matter that you didn't use any keyboard functions in your game. Dead-code elimination only looks to see if something used it - and, lo and behold, the Game class did. As a result, it would not be purged in a tree-shaking sweep, because it's not intelligent enough to know that nothing outside of Phaser used it. The very fact it was instantiated and assigned is enough to save it, even with deep-scope analysis.
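To illustrate, here's a heavily simplified sketch of the pattern - stand-in code, not the actual Phaser source:

    // Game.ts - a simplified stand-in for the real Game class
    import { InputManager } from './input/InputManager';

    export class Game
    {
        input: InputManager;

        constructor ()
        {
            // This assignment alone is enough to keep InputManager, and
            // everything it imports (Touch, Mouse and Keyboard handlers),
            // in the bundle - even if your game never touches the keyboard.
            this.input = new InputManager(this);
        }
    }

The bundler would have to prove that `new InputManager(this)` has no side effects before it could remove it, and in practice it can't, so the entire sub-tree of imports is retained.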

The problem with factories

The same issue raises its head in lots of other places in the API. Phaser makes heavy use of factories. Those factories pull in all manner of different Game Objects, which in turn pull in Game Object components, and those components reference required modules elsewhere, such as geometry and math. All of this happens so you're able to use this.add.sprite() in your code.
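In simplified, stand-in form, the factory chain looks something like this:

    // GameObjectFactory.ts - a simplified stand-in, not the real source
    import { Sprite } from './gameobjects/Sprite';
    import { Text } from './gameobjects/Text';
    import { Graphics } from './gameobjects/Graphics';
    // ...and every other Game Object type

    export class GameObjectFactory
    {
        // Each method drags in its Game Object, that object's components,
        // and the geometry and math modules they depend on - used or not.
        sprite (x: number, y: number, key: string)
        {
            return new Sprite(x, y, key);
        }

        text (x: number, y: number, content: string)
        {
            return new Text(x, y, content);
        }
    }

Import the factory and you import everything it could possibly create.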

No scope analyzer in the world would be able to properly untangle this. It therefore forces the work onto you. You have to be the one to spend time remembering whether your code used a specific Game Object or not, and then creating your own Game Object Factory configs in order to remove those you don't use. It's not a huge amount of effort, I agree, but it's still yet another thing that needs to be done. What's worse, the process is unique for every game you create; you can't make it part of your template or project bootstrap.

There are other modern web trends I'd like to take advantage of. For example, by designating different entry points into Phaser it would enable tools like Rollup to go through the module graph, coloring each module, and ultimately allowing it to properly code-split into an optimal number of chunks. Right now, that's literally impossible to do with the way Phaser is structured.
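As a rough sketch of what that enables: given multiple entry points, a Rollup config like this (file names purely illustrative) will automatically split any modules shared between the entries into common chunks:

    // rollup.config.js
    export default {
        input: {
            preloader: 'src/preloader.js',
            game: 'src/game.js'
        },
        output: {
            dir: 'dist',
            format: 'es'
        }
    };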

Code-splitting is definitely more useful in the web app world, where browser responsiveness and time to first paint are really important. You could argue that it's of much less value when it comes to games. After all, for the majority of games, the size of the assets nearly always eclipses the size of the library by a significant margin. Players are used to the concept of a loading bar in a game. As long as you don't take too long, it's not really seen as an issue, whereas web sites need to respond virtually instantly. Even so, when you move Phaser onto platforms outside of traditional portals, the speed at which it can get going, and the overall payload, become much more crucial factors. Instant Games and interactive ads are two areas where optimal bundle sizes are vital, and code splitting can definitely help to reduce the start-up time, which can only be a good thing.

While it's certainly possible to turn off huge chunks of the Phaser API by creating custom builds, it requires learning, testing time and understanding to achieve. What's more, it goes against what are now established standards for this very thing in the wider web dev world. Your tools should be the ones doing this work for you. It should be automatic, not something you have to invest time in. You'll need to spend enough time getting to grips with webpack or Rollup as it is. That in itself should be plenty. The framework shouldn't then require even more manual setup steps from you.

On the other hand, while berating factories, I, personally, have always loved the ease with which I can get something on-screen in Phaser. A typical ES6 example goes something like this:
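    import Phaser from 'phaser';

    class Demo extends Phaser.Scene
    {
        preload ()
        {
            this.load.image('logo', 'assets/logo.png');
        }

        create ()
        {
            this.add.sprite(400, 300, 'logo');
        }
    }

    new Phaser.Game({ width: 800, height: 600, scene: Demo });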

There's a whole lot of power bundled into that tiny script. And I honestly, firmly believe that it's this ease-of-use that has led to Phaser's popularity. A non-ES6 version is even smaller at just 5 lines of code.

However, it is not lost on me that this approach goes against what you will see in virtually all other aspects of your ES6 or TypeScript life. A more common way of representing the above might look something like this (the module paths and class names here are purely hypothetical):
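    // None of these import paths exist yet - they're hypothetical
    import { Game, Scene } from 'phaser/core';
    import { ImageFile } from 'phaser/loader';
    import { Sprite } from 'phaser/gameobjects';

    class Demo extends Scene
    {
        preload ()
        {
            this.load.add(new ImageFile('logo', 'assets/logo.png'));
        }

        create ()
        {
            this.children.add(new Sprite(400, 300, 'logo'));
        }
    }

    new Game({ width: 800, height: 600, scene: Demo });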

Aside from a few imports, is this really that much harder? It has definitely lost some of the API elegance from before, but it's far more consistent with how the rest of the web world works, and I'd argue that developer skills learned in similar disciplines would transfer more easily to the above approach. The opposite is, of course, true as well. Plus, it'd tree-shake properly. The cherry on the cake is decent context awareness in any modern IDE. And we've not even touched on TypeScript, with the compile-time checks and type-safety that would provide.

There are other, less technical, benefits, too. When something is easy to grasp off the bat, you can create things quickly, and that's important for early adoption. It gives developers a sense of power and reward. The API should never make them feel 'stupid'. The whole aspect of "not struggling at the start" is important: developers are far more likely to become invested in what they've learned. Thus, when they do eventually hit a problem, it's more likely they will dedicate time to resolving it, rather than just giving up and moving on.

I personally know a number of developers who miss the ease with which they used to be able to create games. Libraries like ImpactJS had a tiny, well-honed feature set that did just a handful of things and did them (almost!) flawlessly. Yes, it was extremely opinionated as a result. It was their way or the highway, but there's a real charm in minimalism that shouldn't be underestimated.

What does this actually mean for Phaser 4?

Right now, I'm not 100% sure. There are many pros and cons to each approach. And, if I do take the route of embracing the modern coding style, rather than directly porting V3, there are still many different ways I could go within that, too.

A more formal class-based, tiny-module approach, with fewer factories and looser coupling, will allow for a number of benefits, and those have to be explored before a decision is made. What I'm positive about is that there is a hell of a lot of good, solid code in Phaser 3. And I have no intention of losing all of that. I'll use every last piece of it that I sensibly can. I've said it before and I'll reiterate it now: Phaser 4 should be an evolution, not a revolution. But it should evolve into something that offers a truly modern and standard way of coding, building and configuration.

That said, I can't help but feel that it would be entirely sensible to vastly reduce what is considered core. There are an awful lot of functions that, while useful, don't need to be present within some of the classes they are attached to, and should be moved out. Somewhat annoyingly, this is exactly the approach I originally took when working on the Phaser 3 predecessor, Lazer. If you look at the original early code I wrote, it literally split everything up into tiny modules and used ES6 to the core. It still frustrates me that I didn't continue with that approach, actually, but again, it was a case of trying to adopt too many changes, too early, given what the tools and browsers could comfortably handle at the time.

I don't want to lose the power of the factories, yet they should be optional. There's no reason why I couldn't provide a few pre-configured Phaser entry points that expose a healthy set of objects and factories for you, allowing you to code in the way you've been used to all these years. This will also make migration easier. As long as this approach is optional, it will give developers the flexibility I believe they should have. There were a lot of design decisions made in the V3 API that I stand by 100%, and I will port over as many as possible. How you access those methods and functions may need to be different and, equally, some may need to go entirely, but most should still be present somewhere.
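Something like this, perhaps (entirely illustrative - none of these entry points exist yet):

    // A 'classic' entry point: factories and globals wired up, so you
    // can keep coding the way you always have
    import Phaser from 'phaser/classic';

    // Or the minimal core, where you import only the pieces you use
    import { Game } from 'phaser/core';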

As you can tell, there are a lot of decisions yet to be made, and I'll keep you all posted as to what I find along the way. I'll be building V4 in public, either in a branch or a new git repo, so everyone can follow it. However, the demos and examples I write will be for Phaser Patrons only, so patrons can play with it first, track progress and sink their teeth into it. I will, of course, release these publicly as well, but that will be later on. Patrons get priority, as it's down to them I'm able to even work on this at all.

As of today, I'm splitting my time between V4 R&D and continuing work on Phaser 3. Make no mistake, Phaser 3 is still a priority and that will not change right now. If it means taking longer for V4 to arrive, that is absolutely fine. I will be easing up on new features in V3, though, and will focus on issues instead.

If you've any comments you're welcome to leave them below. You can also find me in the new Phaser 4 channel in the Phaser Discord if you'd like to discuss matters more directly.