
Sharing Logic Across Multiple Platforms

Marcin Wojnarowski
Senior Flutter Developer at LeanCode


May 16th, 2024 • 12 min

When a project grows, sooner or later a need arises to share logic across multiple clients and a server. At LeanCode, we successfully completed a project that relied extensively on logic sharing. In this article, we highlight the steps and architecture decisions behind the chosen solution.

Sharing logic across multiple platforms - introduction

Say you have three platforms: a backend service, a mobile client, and a web client. While the amount of shared logic is small, you might just duplicate the logic implementation on each platform (for instance, local validation of the email/password fields for a nicer UX, even though the server still has the final say after submitting a form). The risks and consequences of the logic desyncing across platforms are small. But once the importance of the duplicated logic grows or once the amount of duplication is too large for you to sleep at night, it is time to address it.

One natural solution would be for the backend service to expose an API endpoint which, given some input, would return the computed result based on some business rules. This works great in scenarios where instant feedback isn’t crucial, and you don’t expect to call this endpoint very often. This could be further improved by moving to some real-time communication scheme (such as WebSockets), but this is still bound by the network speed. Worse, it is bound by the availability of a network connection at all! So what do you do when instant feedback is important and working without an internet connection is a requirement?

Keep reading as this article outlines the steps and architectural choices involved in the selected solution.

Example feature

Shared calculation logic

One of the features was to collect measurements entered in the various user-facing apps, after which calculations have to be performed on this input. Users collecting these measurements will be in remote places where the internet isn’t always available. They also want to see the calculation results immediately, as the results affect how they continue collecting measurements. The backend service also has to perform these calculations to generate reports.

The correctness of these calculations and consistency across platforms are critical, so duplication is out of the question. Since an internet connection isn’t guaranteed, the calculations have to be performed locally.

Sharing logic with JavaScript

JavaScript is a good common denominator to be used across multiple platforms. A browser already has a JavaScript engine, iOS has JavaScriptCore, Android can embed an engine using the NDK, and a server can easily use any of the available JavaScript engines. A similar argument can be made for Lua, which is known for being small and easily embeddable.

However, we found JavaScript to be more prevalent across all platforms. For mobile, we used Flutter, which allows us to easily take advantage of the native ecosystem available on Android and iOS. For the rest of the article, we will refer to Android and iOS separately, as each has to be considered on its own, despite our use of a cross-platform UI framework such as Flutter.

When choosing the technology to share logic across platforms, besides scripting languages like JavaScript and Lua, we considered using FFI to interop with native C or WASM code. The idea of using FFI with C code was quickly abandoned: it requires all code to be available at build time for linking, is less flexible to frequent changes, is less forgiving and less safe, and is non-obvious to hook up to a web target.

WASM, being a popular compilation target for many languages, made it an interesting contender. WASM keeps the advantages of JavaScript: it is sandboxed, can be loaded at runtime, and has easily embeddable engines. However, we felt the flexibility of a scripting language would come in handy. We envisioned creating a graphical editor for code that would put together various snippets of JavaScript to form a business logic unit. This editor would then be used by the client in an admin panel to modify the logic of some calculations.

Scripts

A script is what we call the smallest unit of JavaScript code that can be interacted with. It accepts some inputs and returns outputs, similar to a function call but across language boundaries.

These scripts can be prepared any way we like, so we can take advantage of a build step. To that end, we leverage TypeScript for a bit of type safety and use a bundler to minify and compile each script into a single JavaScript file. Code that will later be bundled into a single file can be freely tested using JavaScript testing frameworks. When bundling, we need an agreed-upon entry point for a script, for instance a main function: function main(inputs: Inputs): Outputs. It is important to note that both Inputs and Outputs will cross language boundaries, so the data being sent has to be serializable.
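
To make this concrete, here is a minimal sketch of what such a script could look like before bundling. The Inputs and Outputs shapes and the field names are purely illustrative, not the actual contracts from the project.

// calculate.ts - an illustrative script; the field names are hypothetical.
// Both shapes cross language boundaries, so they must stay JSON-serializable.
interface Inputs {
    widthMm: number;
    heightMm: number;
}

interface Outputs {
    areaMm2: number;
    perimeterMm: number;
}

// The agreed-upon entry point that every script exposes.
function main(inputs: Inputs): Outputs {
    return {
        areaMm2: inputs.widthMm * inputs.heightMm,
        perimeterMm: 2 * (inputs.widthMm + inputs.heightMm),
    };
}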

Thanks to the sandboxed nature of JavaScript, we can control access to various resources, ensuring that scripts won’t interact with the outside world (for example, by disabling network requests). Each script is self-contained, with the exception of some global implicit dependencies.

Implicit dependencies

We quickly and unanimously decided to keep the scripts self-contained. This makes them easier to test in isolation and more predictable. But sometimes, there might be a need to share some code across all scripts, and it would be inefficient to include this shared code in every script. Tree-shaking during bundle time can help a lot with reducing size, but it is not always enough.

In the case of our feature, we need to perform precise calculations so floating point arithmetic is not acceptable. We can take advantage of the vast JavaScript ecosystem by using npm and finding a package that implements lossless computations on decimals. Such a package wouldn’t be small, so we definitely do not want to include it in every script.

When running any script, we assume that this package is available implicitly, meaning that the implicit dependencies have already been loaded before any script runs. This incurs some downsides. All scripts using that dependency are now coupled: if we want to upgrade the dependency, we need to migrate all scripts to the new version. We also need to make sure that every platform executing the scripts loads exactly the same version of the implicit dependencies. We therefore have to distribute the dependencies alongside the scripts, which adds a bit of extra setup on each platform.
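
As an illustration, assume the implicit dependency is a decimal-arithmetic package such as decimal.js, exposed to scripts as a global Decimal object; the package choice and the global name are our assumption here, not a fixed part of the setup.

// Inside a script: Decimal is neither imported nor bundled here.
// It is assumed to have been loaded into the engine beforehand as an implicit dependency.
declare const Decimal: any; // illustrative; a real setup would ship proper typings

function main(inputs: { pricePerUnit: string; units: number }): { total: string } {
    // Lossless decimal arithmetic instead of floating point.
    const total = new Decimal(inputs.pricePerUnit).times(inputs.units);
    return { total: total.toFixed(2) };
}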

[Diagram: Implicit dependencies logic]

Using scripts

Once written and bundled, we end up with a set of scripts that can be invoked by any prepared platform.

There are three things that have to be figured out for each platform:

  1. How will the scripts be distributed?
  2. What JavaScript engine will be used?
  3. How will scripts be prepared to be invoked?

Distribution of scripts

The way scripts are to be distributed should depend on how often you expect to change them. You could include them in a build step of your applications, but then changing any script would require releasing a new version of your application. Instead, we decided it is reasonable to set up an API endpoint that returns all scripts. Applications can then prefetch these scripts at startup and cache them for future use.

This means applications can execute scripts while offline, as execution is done fully locally. And if at some point we notice a bug in a script, or decide to add some functionality to it, we can simply modify it on the server, and all applications will re-fetch the scripts through the API. The backend server thus keeps all scripts bundled and ready to use, and also serves them to clients through the API endpoint on request.
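
A minimal sketch of this prefetch-and-cache flow on a web client follows, assuming a hypothetical endpoint that returns all bundled scripts keyed by name; the endpoint shape and the use of Web Storage are illustrative, and the mobile apps would persist to their own storage instead.

// Illustrative client-side prefetch and cache; the endpoint and storage are hypothetical.
type ScriptBundle = Record<string, string>; // script name -> bundled JavaScript source

const SCRIPTS_URL = 'https://api.example.com/scripts'; // hypothetical endpoint

async function loadScripts(cache: Storage): Promise<ScriptBundle> {
    try {
        // Prefetch the latest scripts at startup and remember them for offline use.
        const response = await fetch(SCRIPTS_URL);
        const scripts = (await response.json()) as ScriptBundle;
        cache.setItem('scripts', JSON.stringify(scripts));
        return scripts;
    } catch {
        // Offline or the request failed: fall back to the last cached version.
        const cached = cache.getItem('scripts');
        if (cached === null) throw new Error('No scripts available offline');
        return JSON.parse(cached) as ScriptBundle;
    }
}

On the web, cache could simply be window.localStorage; mobile clients would use their platform’s own persistence mechanism.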

Picking a JavaScript engine

In a browser, it is natural to just use the browser’s own JavaScript engine. In native environments, any engine that is usable as a library can be embedded. A backend server can run a full-blown browser-grade engine: the size of the backend application is usually not a concern, so we can pick the best engines despite their large size.

All iOS applications have access to the JavaScript engine built into iOS: JavaScriptCore, the fast engine powering Safari. Finally, there is Android. Here, we have to be wary of the engine we choose: the options range from small engines such as QuickJS, which still implement a large portion of ECMAScript, to browser-grade engines such as JavaScriptCore. It is a matter of weighing application size against execution speed: QuickJS is the clear winner for size, and JavaScriptCore is the winner in performance.

When choosing engines, it is important to make sure that all of them will be able to run the same JavaScript code. Having a preprocessing step for scripts is helpful here, as we can transpile scripts to an older, better-supported subset of JavaScript. In the end, we picked V8 for the backend and JavaScriptCore for Android. 

At first, we used QuickJS on Android, but the performance was noticeably bad, and it degraded the user experience. We ended up trading size for performance.
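
Returning to the preprocessing step mentioned above: with a bundler such as esbuild, bundling, minification, and transpilation to an older ECMAScript target can all happen in a single pass. The file names and the chosen target below are illustrative.

// build.ts - illustrative bundling step using esbuild's JavaScript API.
import { build } from 'esbuild';

await build({
    entryPoints: ['scripts/calculate.ts'], // hypothetical script source
    bundle: true,                          // produce a single self-contained file
    minify: true,
    format: 'iife',                        // no module system required in the engine
    target: 'es2017',                      // transpile down to a widely supported subset
    outfile: 'dist/calculate.js',
});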

Preparing execution

The startup time of an engine isn’t zero, so it is a good idea to keep the engine in memory between executions of scripts. First, we preload all global dependencies into the engine instance environment. Afterwards, each script is loaded individually. Since every script shares the same entrypoint function name (like the previously mentioned function main(inputs: Inputs): Outputs) and possibly other identifiers, we should enclose each script in some contained scope. 

For example, in another uniquely named function. In the case of mobile, we enclosed each script in the following way (where ${} refers to string interpolation of the produced code):

function ${unique_function_name_to_refer_to_the_script}(input) {
    ${script_code}

    return main(input)
}

After loading each script, we are left with the implicit dependencies and these uniquely named functions in the top-level scope.
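
On the backend, this preparation can be illustrated with Node’s built-in vm module; the mobile engines expose analogous evaluate calls, but the exact bindings differ per platform. The wrapping mirrors the function template shown above, and the script names are assumed to be valid identifier fragments.

// Illustrative engine preparation using Node's vm module.
import * as vm from 'node:vm';

// scripts: script name -> bundled JavaScript source, as distributed through the API.
function prepareContext(implicitDepsSource: string, scripts: Record<string, string>): vm.Context {
    // One long-lived context, reused between executions to avoid engine startup cost.
    const context = vm.createContext({});

    // 1. Load the implicit dependencies once, so every script can rely on them.
    vm.runInContext(implicitDepsSource, context);

    // 2. Load each script wrapped in a uniquely named function,
    //    so their identical main entry points do not collide.
    for (const [name, source] of Object.entries(scripts)) {
        vm.runInContext(
            `function script_${name}(input) {\n${source}\nreturn main(input);\n}`,
            context
        );
    }
    return context;
}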

To execute and collect results from scripts, we need a way to de/serialize data. Since this is JavaScript, the easiest solution would be to use JSON. On JavaScript’s side, no deserialization would be needed as JSON is a strict subset of JavaScript. Of course, we would like some type safety when executing scripts. We want to make sure that what we pass to a script is what the script expects to receive and that what a script outputs is what the host platform expects to receive. 

Since we are targeting many platforms, it would be important for all host language types to be generated from the same source of truth. At LeanCode, we have a custom contracts generation solution. We write the expected inputs/outputs types in a contracts language, after which we generate types for all of our platforms. The scripts would also consume the generated TypeScript types to define their I/O. Any platform interacting with the scripts would consume types generated from the same contracts. 

This automated process for ensuring correct I/O structures was crucial and saved us a lot of headaches. Since types are generated at build time and scripts might be changing at runtime, standard backwards/forwards compatibility considerations apply.

Then calling a specific script is a matter of doing some JSON de/serialization:

JSON.stringify(${unique_function_name_to_refer_to_the_script}(${json_input_data}))

Where json_input_data is the input data serialized to a JSON string on the host platform.
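
Continuing the Node-flavored sketch from above, a single host-side call then boils down to building that expression, evaluating it, and parsing the returned JSON; the generic parameters are only for host-side convenience and are an assumption of this sketch.

// Illustrative invocation, continuing the vm-based sketch above.
import * as vm from 'node:vm';

function runScript<I, O>(context: vm.Context, scriptName: string, input: I): O {
    const jsonInput = JSON.stringify(input); // serialize on the host side
    // The interpolated JSON literal is valid JavaScript, so no parsing is needed inside the engine.
    const jsonOutput = vm.runInContext(
        `JSON.stringify(script_${scriptName}(${jsonInput}))`,
        context
    ) as string;
    return JSON.parse(jsonOutput) as O; // deserialize back on the host side
}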

[Diagram: Logic shared across multiple platforms]

Drawbacks and limitations

Unfortunately, this solution is not a silver bullet for logic sharing across platforms. One noticeable limitation is the lack of database access in scripts. This means that all data needed by the script has to be provided in inputs. Therefore, every platform trying to use these scripts has to first fetch all the data needed to process the computations. We also have to be wary of the differences in engines. 

While ECMAScript is a strict standard, it might happen that one engine implements something incorrectly or not at all. This happened to us in practice: one of the engines had an obscure bug in date string parsing, and another did not implement a modern JavaScript function. While missing JavaScript functions can be fixed by changing the transpilation configuration of the scripts, bugs in engines require ugly workarounds.

Another thing to consider is the speed of the engines. The shared logic now will have to be executed in JavaScript rather than your (probably) faster language. Pushing and serializing data around just to do some business logic has a non-zero cost and can add up. An engine such as QuickJS is very small and easily embeddable, but the execution speed difference between it and a browser-grade engine is very noticeable. 

The obvious answer would be to just always use these browser-grade engines, but this brings us to the last drawback: size. Embedding a JavaScript engine can increase your application size by a lot. The platform most affected by this is Android: it has no built-in engine, so we have to embed the whole engine into our application, which can easily bloat its size. In our case, migrating from QuickJS to JavaScriptCore added 30 MB to the APK size of the Android application.

Conclusions

Despite the journey being a rocky one, the final solution ended up working very well for us. 

1. Scripts were dynamic, allowing us to change them after the user-facing applications were deployed. Being able to share this much logic across 4 platforms was invaluable.

2. Script execution was snappy, allowing users to see calculation results immediately after they entered the measurements, whether offline or online. 

3. We also leveraged scripts to compute non-trivial chart data (point chart, line chart, spline chart) to show identical charts across the various platforms, down to the exact colors used in each chart.

If we were to do it all over again, we would strongly consider using WASM instead. The idea of a graphical user interface for code building never came to fruition, so WASM would be back on the table. WASM would allow us to use more sophisticated host languages (such as Rust), which would solve issues inherent in the dynamic nature of JavaScript. A language such as Rust, with its strong type system and general safety, could be used with more confidence to write scripts.

If you need to share logic between various platforms, consider using JavaScript with a structured pipeline. This will give you confidence that logic isn’t being duplicated and that it is consistent across platforms. Very similar arguments can be made for other languages, such as the aforementioned Lua or WASM. When choosing the language, it is important to consider its support on the host platforms as well as the implications of embedding its engine.

