Goodbye HTML. Hello Canvas!
Part 5: The Dynamic Architecture & The Initialization
You can read the previous article here.

Reviewing
Well, in the demo of the last article we reacted to the mouse down event on the widgets by changing their colors and by changing the order of the virtual layers (which means changing the CSS z-index of each canvas; the canvas is the heart, the substance, of the panel).
CSS z-index? Aren't we saying goodbye to HTML/CSS? Besides the html and body tags, the library uses, under the hood, only two kinds of HTML elements:
- the div: just once, for the stage
- the canvas: one for each panel
And, for those two kinds of HTML elements, the library uses only basic CSS properties like left, top, and z-index. You just never deal with them directly.
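As an illustration, positioning a panel's canvas boils down to setting those three properties. This is a minimal sketch, assuming names like placePanel and stage that are hypothetical (the library hides all of this from you):

```javascript
// hypothetical sketch: absolutely position a panel's canvas on the stage
// using only the basic CSS properties left, top and z-index
function placePanel(canvas, left, top, zIndex) {
    canvas.style.position = "absolute"
    canvas.style.left = left + "px"
    canvas.style.top = top + "px"
    canvas.style.zIndex = zIndex
}

// in the browser, a panel would be created roughly like this:
// const canvas = document.createElement("canvas")
// stage.appendChild(canvas) // "stage" is the single div
// placePanel(canvas, 100, 50, 2)
```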
Only the canvas (inside a panel) receives mouse events provided by the browser. The widgets receive mouse events created by the library.
myWidget.onmousedown = function (e) {
console.log("mouse down")
}
myWidget is not an HTML element, but you treat it as if it were. Piece of cake ;)
The Dynamic Architecture
When we consider the names Static Architecture and Dynamic Architecture, we should not forget that we are talking about an engine that runs inside a browser, which has a dynamic nature. A real Static Architecture would not be able to produce those two simple animations of the last demo.
The Direct Response
We produced those animations through direct response to a mouse event. This is simple and good. But it will not be enough, and it may become a problem as the application grows.
Direct response is not enough when we need timed animations, like a blinking cursor.
Direct response is a problem when… well, the best way to explain is using a real case (but simplifying internal details).

One of the features of BobSprite is always giving feedback about the pixel that is under the mouse (center of the cursor): position (X, Y), color sample, and RGBA values (“opaque” means the alpha value is 255). This information appears in the lower-left corner of the application.
Using direct response, we would do something like this:
// simplified code
picture.onmousemove = updatePixelInfo

function updatePixelInfo(e) {
    const x = e.offsetX
    const y = e.offsetY
    const rgba = getPicturePixel(x, y)
    printPixelInfo(x, y, rgba)
}
Good, but we also need to repaint the app because the cursor (black and white frame) moves with the mouse:
// simplified code
picture.onmousemove = mouseMoveHandler

function mouseMoveHandler(e) {
    const x = e.offsetX
    const y = e.offsetY
    //
    repaintApp()
    updatePixelInfo(x, y)
}

function updatePixelInfo(x, y) {
    const rgba = getPicturePixel(x, y)
    printPixelInfo(x, y, rgba)
}
OK. But now we have a problem: repaintApp is slow (actually, repaintApp is the slowest function in BobSprite), and the user may move the mouse very fast, overloading repaintApp and making the application less responsive, even causing small freezes.
And we are not even painting yet (which involves a lot more processing, including memorizing), just moving the mouse.
Besides that, there are a lot of keyboard commands that change the picture (like “R” for rotation). Each one would have to call repaintApp and updatePixelInfo. But updatePixelInfo expects to receive a mouse event, and we have a keyboard event.
The biggest problem with a direct response is that it starts a chain of function calls.
It is no big deal when the chain of functions is linear (A > B > C > D) and doesn’t clash with another chain of functions.
For a drawing tool, the clashes exist, and the chains of functions become labyrinthine. Introducing a new feature in the application implies breaking and recreating the old function chains (big refactoring). There is also the very important problem of redundancy: we could easily, for example, call repaintApp more times than necessary.
Therefore, depending on the kind of application, using direct response to mouse and keyboard events implies, in the best scenario, creating workarounds and turning our code into a barely maintainable mess.
The Indirect Response
The solution is adopting the indirect response pattern, which also handles animations like the blinking cursor.
// simplified code
var mouseX = -1
var mouseY = -1

var shallRepaint = false
var shallUpdatePixelInfo = false

picture.onmousemove = mouseMoveHandler

function mouseMoveHandler(e) {
    mouseX = e.offsetX
    mouseY = e.offsetY
    //
    shallRepaint = true
    shallUpdatePixelInfo = true
}

function updatePixelInfo() { // no parameters!
    //
    const rgba = getPicturePixel(mouseX, mouseY)
    printPixelInfo(mouseX, mouseY, rgba)
}

function mainLoop() {
    //
    manageBlinkingWidgets()
    //
    if (shallRepaint) {
        repaintApp()
        shallRepaint = false
        shallUpdatePixelInfo = true
    }
    //
    if (shallUpdatePixelInfo) {
        updatePixelInfo() // no arguments!
        shallUpdatePixelInfo = false
    }
    // the browser provides this timer
    requestAnimationFrame(mainLoop)
}
Some remarks on the new code style:
- only one function calls the expensive repaintApp
- redundancy is not a problem anymore; any function anywhere may FLAG that repaintApp should be called, without any concern, because it is just setting a boolean (shallRepaint = true), the most inexpensive procedure in the world
- also, no redundancy concerns for updatePixelInfo; like repaintApp, it is FLAG based
- a keyboard event still doesn’t know the position of the mouse; and it need not know, because it will not call the function (updatePixelInfo), it will just set the flag (shallUpdatePixelInfo = true)
- mouseMoveHandler ends without calling any function; we handle the mouse event WITHOUT STARTING A CHAIN OF FUNCTION CALLS!
- although requestAnimationFrame makes the loop run, in general, 60 times per second, the application is economical because the flags skip the procedures that are not required at the moment
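To make the timed-animation side concrete, here is a minimal sketch of what a function like manageBlinkingWidgets could do. The frame counter and the 30-frame period are assumptions for illustration, not the library's actual implementation. At roughly 60 frames per second, toggling every 30 frames blinks about twice per second:

```javascript
// hypothetical sketch: a frame counter drives the blinking,
// and the function only sets the repaint flag, never paints
var blinkCounter = 0
var cursorVisible = true
var shallRepaint = false

function manageBlinkingWidgets() {
    blinkCounter += 1
    if (blinkCounter < 30) { return } // nothing to do this frame
    //
    blinkCounter = 0
    cursorVisible = ! cursorVisible
    shallRepaint = true // just a flag; mainLoop decides when to repaint
}
```

Note that, like everything else in the indirect scheme, the blinking logic never calls repaintApp itself; it only raises the flag.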
This is a general concept. The library uses requestAnimationFrame and provides handles to connect your callbacks. More on this in another article.
The Initialization
When we create an HTML/CSS page, we don’t need to care much about loading resources. We just declare “place this photograph here”, “use font ABC”, “style the buttons with this CSS sheet”…
The browser takes care of everything for us. While the ABC font is not loaded, it uses some placeholder font. After the ABC font is loaded, it replaces the placeholder font with the ABC font.
This strategy is sometimes a bit weird for the user. After a few seconds, while someone is reading the page, the fonts (and even the layout) change. I am not complaining. It is just a fact. Actually, I think this is the right strategy. The bad strategy is not showing the page until all resources are loaded.
The initialization of our canvas-based application is different. We need to load the font sheets first in order to print text. The same goes for the icons. It is an all-or-nothing strategy:
- we load everything before displaying anything; we don’t care about filling the gap (showing content immediately with temporary resources)
- we can change the background color of the stage, as proof to the user that something is happening
- we are very careful about the number of files to load and the size of each file; we pack images in a single sheet and pack all JavaScript code in a single file
Remember, we are creating a special application. The newcomer is not supposed to visit this page first. He is supposed to land on the home page of the website. He can wait 2 seconds.
After the images are loaded:
- the fonts and icons are unpacked
- the interface is mounted
- the mouse/keyboard event listeners are activated
- the main loop begins to run
var numberOfResourcesToLoad = 0

function main() {
    //
    loadImages() // manages numberOfResourcesToLoad
    //
    recoverDataFromLocalStorage()
    //
    main2()
}

function main2() {
    //
    if (numberOfResourcesToLoad == 0) { afterLoadResources(); return }
    //
    setTimeout(main2, 30)
}

function afterLoadResources() {
    //
    initFonts()
    initIcons()
    initInterface()
    //
    initMouseListening()
    initKeyboardListening()
    mainLoop()
}
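For completeness, here is a sketch of how loadImages could manage numberOfResourcesToLoad. The names requestImage, imageLoaded, and loadedImages are hypothetical; only the counting pattern matters: increment the counter before each request, and decrement it inside each onload callback, so that main2 proceeds when the counter reaches zero.

```javascript
// hypothetical sketch: the counting pattern polled by main2
var numberOfResourcesToLoad = 0
var loadedImages = {} // name -> image

function requestImage(name, src) {
    numberOfResourcesToLoad += 1
    var img = new Image() // browser API
    img.onload = function () { imageLoaded(name, img) }
    img.src = src
}

function imageLoaded(name, img) {
    loadedImages[name] = img
    numberOfResourcesToLoad -= 1 // when it reaches 0, main2 proceeds
}
```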
What is next
Today we studied more fundamental concepts of a canvas-based page. We haven’t talked (enough) about keyboard event handling. Also, there was no demo this time.
Before talking about keyboard event handling and running a demo, we need widgets for text input, because keyboard event handling is all about focus (which widget is the target of the keystrokes). A widget for text input and its operation (including the blinking cursor) is the hardest, most complex part of the engine. A simple demo would not be enough.
Therefore, I’ve decided that the next demo/article:
- will demonstrate focus and keyboard event handling
- will be a complete application, with a few useful features
- will release the first version of the library, ready for use
This is the link to the next article of the series.
More content at PlainEnglish.io. Sign up for our free weekly newsletter. Follow us on Twitter and LinkedIn. Join our community Discord.