When I was tasked with writing a post about the ins and outs of stress testing, reliability testing and monitoring of low code/no code platforms, it became clear after a couple of days of research that covering it all in a single post would turn into a TL;DR. That’s why we decided to split it up. Part 1 is a (very) high-level intro to the world of low code/no code applications, henceforth referred to as LC/NC to keep this piece in the spirit of LC/NC and save some bytes.
If you’re new to LC/NC, you might want to read Part 1 first. Otherwise, it’s time to read on!
So, why might performance testing be as important for LC/NC-based applications and platforms as it is for traditionally coded software, if not more so?
They are relatively new
The basic principles of using abstraction to make coding faster and easier have been around for decades, but the “Low Code Wave” has really built a lot of momentum in recent years. LC/NC platform providers are both growing fast and multiplying, not to mention the considerable number of established software and SaaS providers who are jumping on the LC/NC bandwagon.
However, abstraction in software very often has a trade-off in visibility, ranging from decreased transparency to downright black-boxedness (yes, I had to invent a new word just for that). This means that as a vendor updates or completely replaces the underlying stacks in their offerings, end customers can be taken by surprise. Maybe not by the change itself (hopefully they knew it was coming), but by the impact it can have on their existing apps. Running something like Mendix self-hosted may mean you can even browse directories and see what’s changed first hand. But that will likely not tell you what those changes mean for your application performance and – by extension – your end user experience.
New developers, new rules
LC/NC brings along with its many benefits a couple of caveats as well. Helped by the reduced complexity of building software with LC/NC, new developers are emerging who may have little or even no previous experience in building software. This means they may not be familiar with best practices for architecting, say, a high-availability and/or highly scalable application. On the other hand, experienced developers will themselves need to learn additional best practices for building on LC/NC platforms. In both cases, comprehensive testing of both performance and functionality becomes even more crucial.
Not to burst anyone’s bubble, but at MeasureWorks we have experienced first hand that building on a LC/NC platform does not at all mean you won’t end up with a few weeks or more of custom coding to extend out-of-the-box functionality to meet customer requirements. And just like with any other type of platform, you always performance test custom code. (Right?)
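To give a feel for what that looks like in practice, here is a minimal load-test harness sketch in Python. The target is a stand-in callable and the user/request counts are illustrative assumptions; in a real test you would point it at your custom endpoint (e.g. via an HTTP client) and use realistic concurrency:

```python
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import quantiles

def load_test(target, users=10, requests_per_user=5):
    """Run `target()` under concurrency; return all latencies (ms), sorted."""
    def user_session(_):
        times = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            target()  # stand-in for a request to your custom code
            times.append((time.perf_counter() - start) * 1000)
        return times
    with ThreadPoolExecutor(max_workers=users) as pool:
        sessions = pool.map(user_session, range(users))
        return sorted(t for session in sessions for t in session)

# Hypothetical target: a call that takes ~10 ms (swap in an HTTP POST
# to your own endpoint).
latencies = load_test(lambda: time.sleep(0.01), users=10, requests_per_user=5)
p95 = quantiles(latencies, n=20)[-1]  # 95th percentile latency
print(f"{len(latencies)} requests, p95 = {p95:.1f} ms")
```

Trivial as it is, even a harness like this surfaces whether your custom extensions degrade under concurrency before your users find out.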
Reports from the Field
I’d like to share some observations I’ve made over the last couple of years related to LC/NC platforms. For instance, different paradigms used to build apps can have vastly different performance profiles:
- The back-end design for a scrum board used by a few dozen users/hour may completely fail when applied to an ecommerce app with 5000+ users/hour
- The same app may perform well for users on a laptop with a fast link, but be a miserable experience for someone with higher connection latency, or someone accessing it via a proxy, gateway or distant VPN concentrator. This is not uncommon, because some LC/NC platforms tend to deliver apps via many, MANY small HTTP POST calls, which can compound connection woes. You can find an example from a large public transport provider here in The Netherlands below:
(I gave up counting at 30 of these small HTTP POST calls in the post-login landing page)
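To make the compounding concrete, here is a back-of-the-envelope model in Python. The RTT and server-time figures are assumptions for illustration, not measurements from that app, and the model assumes each call waits for the previous one (the worst case for chatty pages):

```python
def page_load_estimate_ms(num_calls, rtt_ms, server_ms=20.0):
    """Estimated wall-clock load time (ms) when each small call
    must complete before the next one starts."""
    return num_calls * (rtt_ms + server_ms)

# The ~30 POSTs counted above, on an assumed fast office link (10 ms RTT)
# vs. an assumed distant VPN concentrator (150 ms RTT):
fast = page_load_estimate_ms(30, rtt_ms=10)    # 900 ms
slow = page_load_estimate_ms(30, rtt_ms=150)   # 5100 ms
print(f"fast link: {fast:.0f} ms, high-latency link: {slow:.0f} ms")
```

The same page goes from under a second to over five, purely because per-call latency is multiplied by the call count; a design with a handful of batched calls would barely notice the slower link.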
LC/NC is an area of high innovation which itself enables building applications with very high change rates of their own. These factors can create a moving target on the back of another moving target, particularly for QA, site reliability and performance teams. Integrated, continuous testing of both functionality and performance becomes even more important in your development cycles. You can tell how serious I am about that, because I didn’t just write CI/CD.
This piece could go on for quite a while longer, but if you’ve stuck it out this long, I’m impressed. To try and wrap it up, maybe I’ll just go with a nice, clean bullet list:
- With LC/NC, both the provider platforms AND the applications built on them can have high change rates
- Testing tools need to evolve as well
- More real browser testing can help the tester put a layer of abstraction between the test scripts and the application to be tested (using a trick right out of LC/NC’s own playbook). However, this usually comes at an additional cost in the form of computing resources
- Extra tooling for scripting on protocol-based load and stress testing platforms can be extremely valuable, saving considerable time and banging of the head against the screen (see the HTTP POST call swarm example in the previous section)
- I haven’t even begun to get my head around what this means for testing LC/NC native mobile apps (TBC I suppose)
To sum it up: in my opinion, LC/NC-based applications make stability, capacity and performance testing even more important than they already were.