
Phaser World Issue 172

Published on 22nd March 2024

Here is what the team has been up to this week:

Arian: Greetings from south Florida, 90 miles from Cuba.

I dedicated this week almost entirely to writing documentation. I wrote about the general architecture of Phaser Editor 2D and the specifications of the communication protocol between the server and the client. These are internal documents that will guide the rest of the team in moving the editor to the Phaser cloud.

What kind of tools do you use to develop a game in Phaser? TypeScript, ESBuild, Git, Texture Packer? At the end of the week we began studying how to provide these tools in the browser. We found WebContainers from StackBlitz, a technology that allows you to run NodeJS in a browser-based sandbox. It's a promising technology. We also evaluated other, more traditional environments based on a client/server architecture and Docker.

Next week will be very similar to this one. I will continue to work with the team to implement our cloud editor. I also hope to work a little on some of the issues that you have reported regarding the desktop version of Phaser Editor 2D.

And since we're talking about issues, on the Phaser Discord server we have the #phaser-editor channel where you can contact us if you have any questions, found a bug, or have any suggestions regarding the editor.


Robert: ¡Saludos desde Nueva York!

With the codemods essentially done, the next task was getting the builds to work properly. Esbuild, Vite, Rollup and Webpack all presented unique issues. But the nice thing is that importing specific, isolated features like Geom is possible! This capability significantly contributes to our efforts to balance performance, build size, and developer experience. ES6 here we come! I broke a lot this week, but hopefully next week will be better!


Francisco: Hello everyone, back again for another week.

This week, we've successfully published my new Angular Project Template and continued our work on the CLI-based template installer. It's been a week dedicated to code cleanup and creating a system that allows for the generic cleaning of all templates, in addition to making corrections in the code that enable the addition of more demo templates. Richard and I have also been brainstorming suitable names that I think you're going to like.



Can: Hey everyone!

Excited to announce some progress on a new project – a Discord Activity template built with Phaser! The template uses Discord's new Embedded App SDK as the foundation for your project. In case you haven't heard, Discord is now accepting apps and games for Activities, with prizes! This will allow you to create interactive experiences directly within Discord. Great time to make web games!

You can read more in the announcement.

I'm really excited about the potential of this Phaser template to foster more interactive communities within Discord.



Rich: In this week's newsletter, I decided to ask Ben to diary his progress on the Phaser WebGL Renderer for the week. This is a fascinating technical deep-dive and you'll see some of the processes we're going through in order to improve Phaser performance going forward. Due to the length of the diary I've kept it until last and will skip my Dev Log this week :) So, here we go ...

A Week in the Pixel Mines

Monday

I'm in the middle of finalising texture unit handling. This covers quite a few areas, from initializing the texture unit array with dummy textures, to creating textures, to binding textures for rendering, and then doing it again for context restoration. There are some enhancements in there already:

▪ We assign temporary textures to every texture unit, apparently for MacOS compatibility. From previous discussions, I realised I could just assign one texture to every unit, saving as many as 15 pixels. (They're not big textures.)

▪ When assigning multiple textures to units at once, the unit wrapper counts down, so that the final unit is usually 0. We could do this in any order, but ending in 0 means that all the operations which bind to 0 (basically everything except Multi and Light) don't have to change the active texture. Also a small thing, but it's one less WebGL command, and at the very least that means a cleaner debug log.

▪ Initializing these wrappers is starting to take over renderer initialization and restore in a very standard way.
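The counting-down binding described above can be sketched like this; the function name and structure are purely illustrative, not the actual wrapper code:

```javascript
// Bind a batch of textures to units in descending order, so the loop
// ends with unit 0 active. Single-texture pipelines can then bind to
// unit 0 without issuing another activeTexture command.
function bindTextureBatch(gl, textures) {
  for (let unit = textures.length - 1; unit >= 0; unit--) {
    gl.activeTexture(gl.TEXTURE0 + unit);
    gl.bindTexture(gl.TEXTURE_2D, textures[unit]);
  }
}
```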

Tuesday

I must be making progress, because I'm breaking things. It's just some framebuffer pipeline stuff with texture bindings, on the agenda to fix up later.

I've added WebGLRenderer.clearFramebuffer, which replaces all the calls to gl.clearColor and gl.clear across the pipelines - the git change is -103 +50.

A very basic scene with 1 item formerly took 6 WebGL commands to draw a frame, but now it takes 5, because it recognises that the clearColor command isn't necessary. I think it'll get down to 4 once scissor goes through the state management system. Again, I'm sure this isn't a significant performance optimisation, but it does show that the system is very good at doing what's necessary, and that bodes well for the future.
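As a sketch of how a consolidated clear might fold in state checking to skip the redundant clearColor command; the state shape and exact logic here are illustrative, not the actual Phaser implementation:

```javascript
// Clear the current framebuffer, only issuing gl.clearColor when the
// requested colour differs from the cached state.
function clearFramebuffer(gl, state, r, g, b, a) {
  const [cr, cg, cb, ca] = state.clearColor;
  if (cr !== r || cg !== g || cb !== b || ca !== a) {
    gl.clearColor(r, g, b, a);
    state.clearColor = [r, g, b, a];
  }
  gl.clear(gl.COLOR_BUFFER_BIT);
}
```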

This is slowly beginning to pull WebGL commands out of the broader codebase, and centralize them in the Renderer and Wrappers. We're building a new abstraction, rediscovering what we're already doing and turning it into something more maintainable. For example, clearFramebuffer already effectively existed, as gl.clearColor and gl.clear were already paired in 17 locations across the codebase. But now that it exists officially, we can optimize it in one place.

I've rolled out TextureUnit management to the rest of the codebase. We now have precisely one place where gl.bindTexture is called, and everything goes through there.

I'm already seeing places where we can consolidate state management, and other places where texture unit management was implicit, possibly even making some oversights. Being more explicit about what our state needs to be is the whole point of this exercise. For example, I'm seeing the benefits of a different approach to shader textures: why bother binding them until it's time to render? If we can store explicit links between shader uniforms and texture wrappers, we can do all that management in one place, very efficiently.

Supporting that, I tested the Example "Game Objects/Shaders/Shader using an updating dynamic texture". And at first I thought something had gone wrong, because the Spector WebGL debug process didn't respond for several seconds. It turns out it was making 2286 WebGL commands per frame. Some of that is due to the way it does what it does - dozens of repeated draw operations on a dynamic texture, and we can probably think of better ways to handle this use case. (The MultiPipeline would do it in one draw, if it had the chance.) But most of it is due to repeatedly setting shader uniforms, and I'm pretty sure those calls would just go away once state management can take over uniforms and realises that they're not changing.

That example still runs perfectly smoothly, by the way, although my desktop is more powerful than most phones, even though it's nearly a decade old. Phaser's got plenty of juice for me to turbocharge.

I've been considering some subtleties of texture unit management overnight. I think I'm happy with the approach I took, but I should explain why.

Right now, WebGLTextureUnitsWrapper.bind(texture, unit) will check to see whether the unit already contains the texture. If it does, no action is taken. If it doesn't, it calls WebGLGlobalWrapper.update({ bindings: { activeTexture: unit } }) to set the active texture, then binds the texture to that unit.
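A minimal sketch of that caching logic, with illustrative names standing in for the real wrappers:

```javascript
// Caches texture bindings per unit and the active unit, so redundant
// binds cost nothing and activeTexture is only changed when needed.
class TextureUnitsWrapper {
  constructor(gl, unitCount) {
    this.gl = gl;
    this.bound = new Array(unitCount).fill(null);
    this.activeUnit = 0;
  }
  bind(texture, unit) {
    if (this.bound[unit] === texture) return; // already bound: no GL calls
    if (this.activeUnit !== unit) {
      this.gl.activeTexture(this.gl.TEXTURE0 + unit);
      this.activeUnit = unit;
    }
    this.gl.bindTexture(this.gl.TEXTURE_2D, texture);
    this.bound[unit] = texture;
  }
}
```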

This means that the activeTexture unit may change when bind is called. But it also might not. Is this unpredictability a problem?

I believe it's not a problem, because activeTexture only modifies a handful of things:

▪ The target of texturing operations, used only in WebGLTextureWrapper.

▪ The unit to which textures are bound, relevant only to texturing operations and shader program execution (a uniform may be set to a specific unit).

And in both these circumstances, we should have perfect information about what we want to happen, which the state management system can actualize efficiently at the moment it's needed. So a hanging activeTexture should be irrelevant.

There are already several places where this could create more efficiency: it's common to bind null to TEXTURE0 after various drawing operations, but this shouldn't be necessary any more. In fact, there are a couple of places where this wasn't even the correct action. In the UtilityPipeline, there are several functions which bind a texture, do a draw, then bind null. But copyToGame doesn't bind null, and blendFrames binds null to TEXTURE0 but not to TEXTURE1, which it uses for an additional texture. If we take care to ensure that shaders check their uniforms just before they draw, and bind texture units if necessary, then we can just skip this bind step. One more simplification.

Wednesday

I'm considering exactly how to handle shader programs and their associated resources: uniforms and attribs.

Currently, uniforms and attribs are tracked and set by pipelines or the Shader game object, in slightly different ways. The Shader is able to use GLSLFile resources, and is in fact the only way to use them. This format comes with some metadata about uniforms. The format also assigns a set of uniforms based on ShaderToy standards, although not the full set in use today.

It is pretty simple to programmatically determine precisely what uniforms and attribs are declared in a shader program, once it's been compiled. It makes sense that we could generate the complete list of uniform and attrib locations and types as the shader program is created; this is the authoritative and unchanging list. (We do not recompile programs.) It also makes sense that we could store this list as part of the WebGLProgramWrapper.
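That introspection can be sketched with the standard WebGL calls; the shape of the stored record is an assumption on my part, and attribs work analogously via getActiveAttrib and getAttribLocation:

```javascript
// Build an authoritative list of a program's uniforms after compilation,
// recording name, location, type, and size for each one.
function buildUniformList(gl, program) {
  const uniforms = new Map();
  const count = gl.getProgramParameter(program, gl.ACTIVE_UNIFORMS);
  for (let i = 0; i < count; i++) {
    const info = gl.getActiveUniform(program, i); // WebGLActiveInfo: { name, type, size }
    uniforms.set(info.name, {
      location: gl.getUniformLocation(program, info.name),
      type: info.type,
      size: info.size,
    });
  }
  return uniforms;
}
```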

Now, uniforms and attribs behave slightly differently.

Uniforms must be set while their program is bound.

Attribs can be set without the program, but must establish a binding to a specific vertex buffer. That buffer only needs to be bound while modifying the attrib binding.

However, both values are only ever used during a draw call.

So here's what I propose:

▪ ProgramWrapper generates an authoritative location list for uniforms and attribs on creation.

▪ Uniforms can be set, and attribs configured with a target buffer and array parameters, at any time. But these go into a "request" layer and no further action is taken.

▪ When we want to draw something, we can specify which texture units, buffer and shader to use, and only at that moment do we transfer requests over to the authoritative location list.

This ensures that uniforms and attribs only get uploaded when they're about to be used. Fewer unnecessary calls.

But it also means we can set uniforms and attribs whenever we want, without worrying about current bindings, which will simplify some of the current processes that need to set various uniforms at different points in a pipeline's lifecycle.

Of course, state management means that we only update these values if they are different. I anticipate this will greatly reduce the number of WebGL commands issued in many cases.

There is a risk that we might set values which don't exist in the authoritative location list, and not find out until later. But we can define access methods to prevent this. Also, the extra work involved in setting up the authoritative location list only needs to happen once, at shader compile time. And shader compile time is pretty hefty, so this won't make a noticeable difference. (Implementing parallel shader compilation is beyond the scope of the current objective, but it is possible.)

Thursday

I think that WebGLAttribLocationWrapper and WebGLUniformLocationWrapper will be retired as part of this work.

▪ Attribs and uniforms are part of a single shader program.

▪ Shader programs don't change after compilation. (They can, but we don't do it.)

▪ If we lose the WebGL context, we have to recreate programs, attribs, and uniforms.

So we can track the raw WebGL program and locations inside WebGLProgramWrapper, where they will be regenerated together on context restore. Just like the WebGLGlobalWrapper, the Program Wrapper will contain multiple state parameters.

I'm seriously considering breaking up attributes into several program-managed buffers, one for each attribute. I'm thinking this through as I type.

It's possible to link a program's attributes to different buffers.

A buffer with just one attribute is very easy to auto-generate and hook up. We do not need to compute stride and offset values; we can omit them altogether and WebGL will figure everything out.

On the other hand, multiple buffers require multiple calls to update, which is the antithesis of what we're trying to do. Furthermore, I've seen reference to 4-byte alignment within a single buffer, which apparently makes it faster to decode attributes (WebGL treats buffers as 32-bit tokens, and would need to continually cycle the way it extracts data out of those chunks if it's not aligned). So it really sounds like a single well-aligned buffer is best for performance.
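As a sketch, deriving stride and byte offsets for one interleaved buffer is straightforward; this assumes 4-byte float components (which also keeps everything on the 4-byte alignment mentioned above), and the names are illustrative:

```javascript
// Compute the stride and per-attribute byte offsets for a single
// interleaved vertex buffer, given each attribute's component count.
function layoutAttributes(attrs) {
  const BYTES_PER_FLOAT = 4;
  let offset = 0;
  const layout = attrs.map(attr => {
    const entry = { name: attr.name, size: attr.size, offset };
    offset += attr.size * BYTES_PER_FLOAT;
    return entry;
  });
  return { stride: offset, layout };
}
```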

This is a significant decision, because I'm defining the data structures for attributes within a shader program. Should each attribute have its own buffer property, allowing us to optionally use multiple buffers? Or should we enforce using a single buffer at a time?

Single buffer has its own advantages. It means the program wrapper itself can dictate buffer composition, which we can infer perfectly. And that composition can be fully optimised. We can potentially direct a series of arrays into a "mixer" and pop out a correctly formed buffer.

This is starting to sound a lot like a pipeline. I don't want to turn ProgramWrapper into a complete pipeline, however; the pipelines themselves are more specialised units, which assemble precise kinds of data to meet the needs of a specific shader program, and are themselves composable into larger rendering flows. It's also probably better to keep a single buffer outside the ProgramWrapper, because pipelines might want to share data.

I think, right now, I'll choose to enforce single buffers.

I've concentrated on setting uniforms, succinctly and as automatically as possible.

Fortunately, the WebGLActiveInfo that we get from getActiveUniform contains most of what we need. It tells us the name, type, and size of a uniform. The size parameter tells us whether a uniform is a "single" value or a "vector" of several values. The type parameter tells us, indirectly, whether the uniform is integer or float, and whether it's 1, 2, 3, or 4 numbers per value.

We can find the correct method for setting the uniform by checking against a data structure I've defined on the WebGLRenderer.

The keys correspond to uniform.type. If uniform.size is 1, we use the set method; otherwise we use setV, and do some extra conversion into Float32Array or Int32Array depending on whether int is true. If matrix is true, no conversion is necessary.

When using set, we can check the corresponding size value (yes, two sizes, confusing), and use a switch to select the appropriate call signature.

This should ensure that all uniforms are updated exactly as WebGL expects. And do it all in about 60 LOC kept in the ProgramWrapper.
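A plausible sketch of that lookup table, keyed by uniform.type; the real table may differ in shape, but the numeric keys are standard WebGL type constants:

```javascript
// Maps WebGL uniform type constants to setter names and metadata:
// set for single values, setV for vectors, plus int/matrix flags.
const UNIFORM_SETTERS = {
  0x1406: { set: 'uniform1f', setV: 'uniform1fv', size: 1, int: false },    // FLOAT
  0x8B50: { set: 'uniform2f', setV: 'uniform2fv', size: 2, int: false },    // FLOAT_VEC2
  0x1404: { set: 'uniform1i', setV: 'uniform1iv', size: 1, int: true },     // INT
  0x8B5E: { set: 'uniform1i', setV: 'uniform1iv', size: 1, int: true },     // SAMPLER_2D
  0x8B5C: { setV: 'uniformMatrix4fv', size: 16, int: false, matrix: true }, // FLOAT_MAT4
};
```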

From the outside, this looks like calling programWrapper.setUniform(name, value). The name/value pair is stored in a Map (so subsequent calls to the name will overwrite the value). When it comes time to bind the program for rendering, these "requests" are automatically checked, transcribed, and updated if necessary. This should ensure that the minimum necessary amount of work is done.
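In sketch form, with the actual gl.uniform* upload stubbed out and the names purely illustrative:

```javascript
// Request/state split: setUniform only records a request, and bind()
// flushes values that actually changed since the last flush.
class ProgramWrapper {
  constructor(upload) {
    this.requests = new Map();
    this.state = new Map();
    this.upload = upload; // (name, value) -> would issue the real gl.uniform* call
  }
  setUniform(name, value) {
    this.requests.set(name, value); // later calls overwrite earlier requests
  }
  bind() {
    for (const [name, value] of this.requests) {
      if (this.state.get(name) !== value) {
        this.upload(name, value);
        this.state.set(name, value);
      }
    }
    this.requests.clear();
  }
}
```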

Or at least that's how it works in theory. I haven't actually hooked the system up quite yet. That's a job for the morning. I'll have to go through the various pipelines, replacing their shader management. Hopefully we'll see a moderate decrease in complexity.

And I didn't even mention attributes. That's also a job for the morning. Wrestling with uniforms for so long should give me a solid foundation for getting attributes working smoothly.

The downside to this approach is that it's slightly less apparent what you're setting, from the outside. There's no more exposed typing of uniforms. However, as we have access to the shaders, it's trivial to see what they expect, and annotate the code if it pleases us.

Small progress: I replaced gl.useProgram(program) across the codebase with program.bind(), the new method for loading in the program and its uniforms/attributes. It's not actually using those uniforms/attributes right now, just the program bind. Because of the way this works, I'm going to have to turn everything inside-out: right now, Phaser binds the program, then sets uniforms on it, whereas in the new system we have to set uniforms first and bind only when we're ready to draw. I can already see that we're going to excise a lot of calls.

I've also fixed a handful of issues, including the context restore no-no from yesterday, and all the references to the wrong data structures that I left in the ProgramWrapper. I went through a lot of permutations of data structures before settling on what I describe above! What I've made is quite simple, but simplicity is always difficult to find in the sea of possibilities.

Friday

Shader makes a lot of calls which we will outright eliminate, but we'll probably eliminate the way it works, too. I anticipate most of the work is going to go towards pipelines. This is the part which will involve a lot of rewriting.

Well, mostly. It's not updating some values consisting of ArrayBuffers. I think this is because I'm checking whether a uniform has changed like this:

▪ Is it an Array?

▪ -> If so, check each value against each other value.

▪ Otherwise, check equality.

Of course, an ArrayBuffer will fail the difference check, because it's not an Array, and it's the same object (reused from the Shader), so of course it's equal to itself.

There are several possible solutions.

▪ Add checks for ArrayBuffer types, and compare values just like Arrays.

▪ Require all uniforms be set with arrays of numbers, as WebGL will convert them where necessary.

▪ Set uniforms by copying values piecemeal, and detect difference that way.

▪ Don't bother checking difference at all.

We can disregard "don't bother checking", because we want to eliminate WebGL calls whenever possible.

We can also disregard "require number arrays", as there are plenty of systems which use ArrayBuffers elsewhere. (Big chunks of the matrix mathematics, for example.) I don't feel the need to rewrite those.

So we at least need to accept ArrayBuffers. In fact, perhaps we can require ArrayBuffers for all uniforms except BOOL, INT, FLOAT, SAMPLER_2D, and SAMPLER_CUBE. (We don't cube map yet, but consider it futureproofing - it would be the only uniform type left out.) But again, I think that's restrictive. Because both Array and the typed array views of ArrayBuffer (the relevant ones being Float32Array and Int32Array) have length, and can be accessed with an [index], they are easy enough to check against each other. And because we're going to copy requests number by number, we can maintain a correctly-sized ArrayBuffer within the ProgramWrapper, and just drop values into it to be converted for assignment.

Is it a good idea to copy the uniform requests number by number? I think so. If we're going to do difference checks, they need to be their own objects, otherwise we'd risk assigning the same object to request and state, and it would never be different to itself.
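A sketch of such a check, relying only on length and indexed access so plain Arrays and typed array views compare interchangeably; this is an assumed helper, not the actual implementation:

```javascript
// Report whether a uniform's new value differs from the stored one.
// Works for numbers, booleans, Arrays, and typed arrays alike.
function uniformChanged(oldValue, newValue) {
  if (oldValue === undefined) return true;        // never set before
  if (typeof newValue !== 'object') return oldValue !== newValue;
  if (oldValue.length !== newValue.length) return true;
  for (let i = 0; i < newValue.length; i++) {
    if (oldValue[i] !== newValue[i]) return true; // element-wise compare
  }
  return false;
}
```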

This area also raises the question: Should uniform values be affected by context restore? I think so. It means a few more checks when running WebGLProgramWrapper.createResource, which ordinarily creates a new list of uniforms as it compiles the shader program; we just need to check whether that list already exists, and copy it into the request map so it can be set next time the program is used.

This means that uniforms will be persistent data across context loss/restore, although it requires knowledge of both the request map and the state map.
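In sketch form, assuming the request and state maps described earlier (the function name is illustrative):

```javascript
// On context restore, re-queue every previously set uniform so it is
// uploaded the next time the program binds. Newer requests take priority.
function restoreUniformState(requests, state) {
  for (const [name, value] of state) {
    if (!requests.has(name)) requests.set(name, value);
  }
  state.clear(); // the fresh context has no uniforms set yet
}
```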

Yep, that fixed it. It's also taken one test game from 43 commands to 31 commands (and the 43 was using this morning's codebase, so it was already optimising blend modes etc). That's removing unnecessary uniform calls, but still doing the necessary ones, which I can tell because the test has two shaders side-by-side, and without updating uniforms, one of them renders over the top of the other.

I'm pretty close to getting the attributes working. You tell the shader program which buffer to use, and how you've ordered the attributes by name, and whether you want them normalized. That's all the data the attributes need to compose themselves into a full description of the buffer, because we generated the rest of the info when the attribute was created.

They've thrown me for a small loop, though: the data formats aren't quite what I expected.

So, when you bind a buffer to an attribute with vertexAttribPointer, you give it a parameter for the data type. That's BYTE, UNSIGNED_BYTE, SHORT, UNSIGNED_SHORT, or FLOAT. That tells it whether each component is 8, 16, or 32 bits. You also give it a parameter for size: how many such components are added in a row.

I was sort of expecting the attribute info itself to also stick to that format, so I set up the attribute handling code to check those WebGL constants. But it immediately threw me a curveball: the first shader I checked was a vec2 in the GLSL shader code, which wasn't 2 FLOAT values as I naively expected. It was 1 FLOAT_VEC2 value. Which isn't even on the list!

I know what's happening; that's just a data value which is used for uniforms. Turns out attributes can use them too. All the properties of these types are well-defined; I just need to add a bit more metadata so we can correlate everything we need.
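A sketch of that extra metadata, mapping the composite types reported by getActiveAttrib to the component type and count that vertexAttribPointer expects; the constants are standard WebGL enum values, while the table shape is an assumption:

```javascript
// Expand composite attribute types into vertexAttribPointer parameters.
const GL_FLOAT = 0x1406;
const ATTRIB_COMPONENTS = {
  0x1406: { type: GL_FLOAT, size: 1 }, // FLOAT
  0x8B50: { type: GL_FLOAT, size: 2 }, // FLOAT_VEC2
  0x8B51: { type: GL_FLOAT, size: 3 }, // FLOAT_VEC3
  0x8B52: { type: GL_FLOAT, size: 4 }, // FLOAT_VEC4
};
```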

This will take a little while, and the sun has set beneath its dark rainbow. (There was an optical halo above my city today. The Sun hung within a circle of darkness. I'm sure everything's fine.) So I'll dig into attribute metadata at the dawn of the new week!