It’s been a long time since I’ve written a blog post, and I miss it a lot!
Today, I’ll be talking about an experiment I did 2 months ago when I was trying to optimize Blazor.Diagrams: JS Interop Batching.
I’ve been working on my Diagramming Library for Blazor for quite some time now. It basically takes a model (a Diagram) that contains multiple nodes, ports, and links (which are also models), renders it, and makes it editable on the fly.
In order for the library to render things correctly, it needs to know the size of these nodes and ports. Since the library lets you use any HTML/CSS you want to style your nodes and ports, the size cannot always be known beforehand, so it needs to be calculated. That’s why the library uses JS Interop to get the bounds of the elements that interest it.
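The exact JS the library uses isn’t shown here, but measuring an element typically boils down to something like this (the function name is illustrative, not the library’s actual API):

```javascript
// Illustrative only: measures an element and returns a plain object
// that serializes cleanly back to .NET over JS Interop.
function getElementBounds(element) {
  if (!element || typeof element.getBoundingClientRect !== 'function') return null;
  const rect = element.getBoundingClientRect();
  return { left: rect.left, top: rect.top, width: rect.width, height: rect.height };
}
```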
Now, everything works fine until you have thousands of elements to get the bounds for. Let’s take the following example:
- A diagram with 200 nodes
- Each node contains 8 ports
This means that we have 1800 elements whose bounds need to be fetched using JS. The following problems arise:
- Since the JS calls are made separately by each renderer, we have 1800 calls to make. Each call is a roundtrip across the JS/.NET boundary, which is just extra cost.
- Since Blazor WebAssembly only has 1 thread, the UI is blocked until we get all the data.
One of the things that I tried in order to optimize this process was to batch the JS calls together in order to save some time. In this post, I’ll share how I did it and all my findings.
While thinking of a proper solution, I thought it would be a good idea for the solution to be a generic one, usable as a separate library in the future in case other users are interested.
It would also be better if we could just drop the batching functionality in without changing a lot of code. As an example, the below usage should still work by only changing the runtime instance behind it:
```csharp
await JSRuntime.InvokeAsync<Something>("method1");
await JSRuntime.InvokeVoidAsync("method2");
```
As you can see, the solution should handle generic calls as well as void calls. Actually, void calls are just generic calls whose return value is ignored.
Since the calls need to be awaited for results, we will need to use TaskCompletionSource (TCS). Whenever our Web App tries to call JS, we will register a new TCS and return its Task so that it can be awaited. Once the JS calls are made (in a batch), we can set the results on all the TCSs.
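To make that concrete, here’s a minimal sketch of the enqueue side. The class and member names (BatchJsRuntime, JsCall) are my own for illustration, not necessarily what the real code uses:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Illustrative sketch: each pending call is stored until the next batch flush.
public class JsCall
{
    public string Identifier { get; set; }
    public object[] Args { get; set; }
    public Type ReturnType { get; set; }
    public object Tcs { get; set; } // boxed TaskCompletionSource<T>
}

public partial class BatchJsRuntime
{
    private readonly ConcurrentQueue<JsCall> _queue = new ConcurrentQueue<JsCall>();

    public Task<T> InvokeAsync<T>(string identifier, params object[] args)
    {
        var tcs = new TaskCompletionSource<T>();
        _queue.Enqueue(new JsCall
        {
            Identifier = identifier,
            Args = args,
            ReturnType = typeof(T),
            Tcs = tcs
        });
        EnsureTimerStarted(); // the first call starts the flush timer
        return tcs.Task;      // the caller awaits this as usual
    }
}
```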
The JsCall class is used internally to know what calls need to be made, while the JsResult class is used to deserialize the results of a batch. Whenever InvokeAsync is called, we create a JsCall and enqueue it.
- If the task can be canceled, we register a callback on the CancellationToken to cancel the TCS; otherwise, the call would hang.
- If this is the first call, we create a new instance of the timer with an interval of 50 ms (can be lower or higher).
- Whenever the timer ticks, we try to dequeue all the calls and execute them as a single batch.
- For every call, we either set the result or set an exception based on the output.
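The steps above could be sketched like this. The JS function name ("blazorBatch.execute") and the JsResult shape are assumptions on my part:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Illustrative sketch of the flush logic, not the actual implementation.
public partial class BatchJsRuntime
{
    private Timer _timer;

    private void EnsureTimerStarted()
    {
        if (_timer != null) return;

        _timer = new Timer(async _ =>
        {
            var calls = new List<JsCall>();
            while (_queue.TryDequeue(out var call))
                calls.Add(call);
            if (calls.Count == 0) return;

            // One roundtrip for the whole batch instead of one per call.
            var results = await _jsRuntime.InvokeAsync<JsResult[]>(
                "blazorBatch.execute", calls);

            for (var i = 0; i < calls.Count; i++)
            {
                if (results[i].Succeeded)
                    SetResult(calls[i].Tcs, calls[i].ReturnType, results[i].ReturnValue);
                else
                    SetException(calls[i].Tcs, calls[i].ReturnType,
                        new Exception(results[i].Error));
            }
        }, null, 50, 50); // tick every 50 ms
    }
}
```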
Since we declared JsResult.ReturnValue as an object (we don’t know its type), System.Text.Json will fill it with a JsonElement. Unfortunately, as of now there isn’t a direct way to turn it into a specific type. The ToObject method does it by writing the element out first, then deserializing it to the correct type, which is an added overhead.
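A commonly used workaround looks like the following; the intermediate serialize/deserialize pass is exactly the overhead mentioned above:

```csharp
using System.Buffers;
using System.Text.Json;

public static class JsonElementExtensions
{
    public static T ToObject<T>(this JsonElement element, JsonSerializerOptions options = null)
    {
        // Write the JsonElement back out into a buffer...
        var buffer = new ArrayBufferWriter<byte>();
        using (var writer = new Utf8JsonWriter(buffer))
        {
            element.WriteTo(writer);
        }
        // ...then deserialize that buffer into the requested type.
        return JsonSerializer.Deserialize<T>(buffer.WrittenSpan, options);
    }
}
```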
As seen in the above implementation, there are two interesting methods: SetResult and SetException. Since invoking a call can be done using any type (it’s generic) and we create a generic TCS as well, we only know the actual type at runtime, which is why we need a way to set either the result or the exception without knowing that type at compile time.
I used Expression Trees to do it, but you can also do it using a generic static class “hack”.
These methods basically create and compile an Expression Tree that takes as input a TCS of unknown type, its actual type (known from the InvokeAsync call), and the value to set. The expression casts the tcs object to the correct type and sets the result/exception accordingly.
With this approach, there is only a small overhead for the first call of every type, then it should be the same speed as if the type was known.
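A sketch of the SetResult side, under my naming assumptions (the cache keyed by type is what keeps the overhead to the first call only):

```csharp
using System;
using System.Collections.Concurrent;
using System.Linq.Expressions;
using System.Threading.Tasks;

// Illustrative: build (once per result type) a delegate that casts the
// boxed TaskCompletionSource<T> and invokes SetResult with the value.
public static class TcsHelper
{
    private static readonly ConcurrentDictionary<Type, Action<object, object>> _cache = new();

    public static void SetResult(object tcs, Type resultType, object value)
    {
        var setter = _cache.GetOrAdd(resultType, type =>
        {
            var tcsType = typeof(TaskCompletionSource<>).MakeGenericType(type);
            var tcsParam = Expression.Parameter(typeof(object), "tcs");
            var valueParam = Expression.Parameter(typeof(object), "value");
            var body = Expression.Call(
                Expression.Convert(tcsParam, tcsType),   // (TaskCompletionSource<T>)tcs
                tcsType.GetMethod("SetResult"),
                Expression.Convert(valueParam, type));   // (T)value
            return Expression.Lambda<Action<object, object>>(body, tcsParam, valueParam)
                             .Compile(); // compiled once, then cached per type
        });
        setter(tcs, value);
    }
}
```

SetException would be built the same way against TaskCompletionSource&lt;T&gt;.SetException(Exception), minus the value cast.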
The JS function is pretty straightforward. It looks up the function to call (if needed) using the identifier and executes it with the given arguments (if any).
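Sketched in JS (the function name and result shape are my assumptions, mirroring the JsResult fields on the .NET side):

```javascript
// Illustrative batch handler: resolves each identifier (e.g. "myLib.method1")
// starting from the global object, invokes it, and collects a per-call
// result or error to send back to .NET.
function executeBatch(calls) {
  return calls.map(call => {
    try {
      const path = call.identifier.split('.');
      let target = globalThis;
      for (let i = 0; i < path.length - 1; i++)
        target = target[path[i]];
      const fn = target[path[path.length - 1]];
      const returnValue = fn.apply(target, call.args || []);
      return { succeeded: true, returnValue };
    } catch (e) {
      return { succeeded: false, error: String(e) };
    }
  });
}
```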
This is basically everything you need to do batching!
Unfortunately, the results weren’t that appealing. The gains weren’t worth the work required to build and maintain the functionality.
- The fact that everything is unknown until runtime adds overhead, and STJ adds to that as well by not being able to deserialize directly from a JsonElement. If we only needed to support void calls, the performance gain would be much higher.
- Batching JS calls that return results means that the more calls we have, the more data we have to bring back from JS. This isn’t a big problem in WebAssembly, but it is in Server Side Blazor: everything needs to travel through the network, and there is a limit to how much data you can transfer in SignalR. The limit can be changed, but the Blazor team doesn’t really advise it.
This suggests that a generic JS Batching solution won’t be of great benefit. It’s a nice plus, but not something to jump on, at least with the implementation I tried.
Having said that, I will still try and look for a nice solution. In the meantime, I’m thinking of just batching JS calls with a known return type to avoid the mentioned overheads.
Honestly, it was a very fun experiment to do. I’ve always enjoyed measuring performance and trying to find better ways to do things, so this was a nice exercise. I hope this could be useful to someone out there. If anyone has an idea to make this better, I’d love to chat.