Natively Stateless

My first posting promised to explain in detail the native stateless web/HTTP environment that is available to the primary Mumps implementations: GT.M and Caché.  So here we go.

Time after time I’ve seen examples of people grafting mainstream web/HTTP environments and frameworks onto Mumps and Caché systems.  The architecture is always basically the same, as summarised in the picture below:

For web service enablement, it’s a similar picture: really the only thing that changes is that the client is now a web service client of some sort instead of a browser, and instead of serving up HTML and Javascript/JSON, the middle tier is typically serving up XML:

In both cases, the GT.M or Caché server is treated as a pure database, accessed through some means or other (there are lots of ways this can be done; for example, Caché provides a range of interfaces including ODBC for SQL access).

Such architectures are certainly workable, but they are usually implemented in this way because the people concerned are blissfully unaware that GT.M and Caché aren’t just straightforward databases: they include the Mumps language, which has all the same capabilities as the mainstream web middle-tiers.  It can parse incoming data in whatever format you want to throw at it (e.g. XML/SOAP), it can natively manipulate the database (instead of doing it indirectly through some gateway or connection), and it can code-generate HTML, XML or Javascript dynamically (there’s a small sketch of this after the list below).  It can also be used to implement all the other key aspects of a web application framework:

  • session management
  • state management
  • security management
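
Here’s that sketch: a minimal illustration, with invented routine and global names, of native database manipulation and dynamic HTML generation in Mumps:

listPatients ; illustrative sketch only - the routine and global names are invented
 ; native database access: just set and read globals, no SQL layer or driver in between
 set ^Patient(1,"name")="Smith, John"
 set ^Patient(2,"name")="Jones, Mary"
 ; dynamically generate HTML from that data
 new id
 write "<ul>",!
 set id=""
 for  set id=$order(^Patient(id)) quit:id=""  write "<li>",$get(^Patient(id,"name")),"</li>",!
 write "</ul>",!
 quit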

In fact, that heritage of being able to squeeze multiple concurrent users out of low-powered hardware (see A Case of Mumps) means that modern implementations of the Mumps database and language are blisteringly fast on modern hardware.  Additionally, the hierarchical structure of Mumps globals (see Mumps: the Universal NoSQL Database) turns out to be exactly what’s needed for extremely high-performance session management, a key requirement of web applications.  The Mumps database and language is tailor-made for the web: I realised this in about 1994 and have focused on nothing else since then.  If it wasn’t for the fact that Mumps pre-dates the web by 20 or so years, you’d think it had been deliberately designed and optimised for use as a web back-end technology!

So let’s look more closely at how a Mumps database works to support web applications and/or web services.  It’s all down to a particular family of web gateways that simply maintain and manage a socket connection pool, linking a web server to a Mumps server.  The figure below summarises the architecture:

The difference from the “mainstream” approach is that the gateway is otherwise completely passive: no scripting occurs within the gateway or on the web server tier.  Instead, all the action happens in the Mumps domain, using code written in the Mumps language that executes within the Mumps processes at the other end of the gateway connections.

HTTP requests sent from the browser or client to the web server are passed through the gateway, which forwards them to the first available Mumps process.  It is then the responsibility of code executed in the Mumps process to parse out the incoming request name/value pairs, figure out what the request has to do, manipulate the Mumps database appropriately (getting and/or setting data from/to it), and generate the response, which may be HTML, JSON, XML etc.  The response is simply output from the Mumps process using Write commands: the output is automatically routed back through the connection to the gateway, which forwards it to the web server and it, in turn, sends it back to the waiting browser/client.
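
As a purely illustrative sketch (the entry point name, the cgi() array and the log global are invented for this example, and each gateway has its own conventions for presenting the incoming request and for handling response headers), the back-end code might look something like this:

request(cgi) ; hypothetical entry point - cgi() is assumed to hold the parsed name/value pairs
 new username
 set username=$get(cgi("username"))
 ; manipulate the database directly, eg log the request in a hypothetical global
 set ^requestLog($increment(^requestLog))=username_" @ "_$horolog
 ; generate the response: everything written here is routed back through the gateway
 write "Content-type: text/html",!,!
 write "<html><body>",!
 write "<p>Hello, ",username,"</p>",!
 write "</body></html>",!
 quit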

I earlier used the term “family of web gateways”.  The members of that family are as follows:

  • WebLink: the “grand-daddy” of them all.  Created around 1994/5 and sold to InterSystems to provide the first ever web gateway technology for the various Mumps versions they had at that time, and also for what was to become Caché.  WebLink is still available as a “legacy” gateway for Caché.
  • CSP: the “strategic replacement” by InterSystems for WebLink, and their preferred gateway for Caché.  In fact, the core CSP gateway doesn’t do much that is different from WebLink, but it does have an important licensing-related impact.
  • m_apache: an Open Source gateway designed for use with GT.M.  It actually is a sub-component of M/Gateway Development’s MGWSI generic connectivity library.
  • ewdGateway: an Open Source Node.js module which combines the capabilities of a web server and gateway, emulating the same behaviour as the gateways above, but written entirely in Javascript.

All four gateways work in the same basic manner.  This is no coincidence: our company has been responsible for the design, development and ongoing maintenance of all four gateways.  In the case of WebLink and CSP, we do this on behalf of InterSystems.  m_apache and the Node.js-based ewdGateway module are our own Open Source products.

Whilst these web gateways provide the basic plumbing needed to support web-based/HTTP access to a Mumps database, there’s a lot more to do in order to create a working, secure web application environment: that’s where EWD comes in.  EWD is written entirely in Mumps code, runs on both Caché and GT.M systems, and looks after those three key areas:

  • session management
  • state management
  • security management

It also allows the dynamically-generated web pages that make up a web application to be described at a very high level of abstraction (essentially as nested XML tags), and includes a compiler that turns these into Mumps run-time code.  EWD also includes a native Mumps implementation of the XML DOM, with the DOM itself modelled as a graph database on top of Mumps global storage, and the XML DOM APIs implemented as Mumps functions.  Additionally, EWD includes an implementation of XPath (for querying XML DOMs) and an HTTP client (allowing web services to be consumed from within Mumps processes).
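
To give a flavour of what that means (purely illustrative: the global structure below is invented for this example and is not EWD’s actual DOM storage schema), a DOM can be modelled as a graph simply by giving each node an id and storing its properties and pointers in a global:

demoDom ; illustrative only - not EWD's real schema
 new doc
 set doc=1
 ; each node gets an id; tag names, parent/child pointers etc are stored against that id
 set ^DOM(doc,"node",1,"tagName")="html"
 set ^DOM(doc,"node",2,"tagName")="body"
 set ^DOM(doc,"node",2,"parentNode")=1
 set ^DOM(doc,"node",1,"firstChild")=2
 ; traversal is then just a matter of following the pointers
 write ^DOM(doc,"node",^DOM(doc,"node",1,"firstChild"),"tagName"),!  ; writes "body"
 quit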

The following four figures summarise how EWD interoperates with the four gateways:

You’ll see how similar the overall architecture is, regardless of the gateway you use.  You’ll also see how everything happens within the Mumps environment, so, apart from the passive gateway connected to the web server, there is no need for any other technology.  It’s the simplest and thinnest stack possible between the browser/client and the GT.M or Caché back-end, and involves the fewest possible moving parts.  And it makes use of the legendary performance and scalability of the GT.M and Caché technologies.

A key feature of these gateways is their stateless mode of operation.  This is a critically important aspect of their behaviour: it is what makes the entire architecture massively scalable and able to support the unpredictable and potentially huge traffic of web-based users in as efficient a way as possible within the Mumps system.  It also has an impact on how your applications need to work.

The way this stateless behaviour operates is best illustrated by the following sequence of diagrams:

Imagine we have a number of web browsers/web-service clients sending requests to the web server, which passes them to the gateway and hence to a GT.M or Caché server.  If user 1 sends a first request:

…then the gateway establishes a connection to the GT.M or Caché server and a process is fired up:

Processing of the request commences:

If user 2 now sends a request:

The back-end process is busy processing user 1’s request, so the gateway opens a second connection and a second process is started to handle the second incoming request:

Now suppose the request from user 2 is completed first and the generated response is sent back to user 2:

What happens is that the back-end process and its connection to the gateway remain in place, and the process goes into a wait state:

Now, if user 3 sends a request, the gateway immediately passes it to the waiting 2nd process.  Process 2, having dealt with the first request from user 2, now has to process the first request from user 3:

Process 1 now completes its processing and returns its response to user 1, then goes into a wait state:

The interesting thing is what happens if user 2 now sends a second request.  Process 1 is available, so the request is immediately sent to it:

So, process 2 handled the first request from user 2, but process 1 is now handling user 2’s second request.
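
In other words, each back-end process spends its life in a loop along the following lines.  This is a conceptual sketch only: you never write this loop yourself, each gateway implements it in its own way, and readRequest and handler stand in for gateway-specific and application code respectively:

serve ; conceptual sketch only - the real loop is provided by the gateway machinery
 new cgi
 for  do
 . kill cgi
 . ; block in a wait state until the gateway hands this process a request from *any* user
 . do readRequest(.cgi)
 . ; application code processes it and writes the response back down the connection
 . do handler(.cgi)
 . ; ...then drop straight back into the wait state, ready for the next request
 quit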

This is how a stateless system behaves.  It brings with it benefits and implications:

  • it is the secret to a massively scalable architecture: large numbers of concurrent users can be serviced by a relatively small number of physical processes.  Those processes are constantly in use on a busy system, but only the minimum number of processes needed to handle the actual concurrent traffic is activated.  This means that system resources are used optimally.  By comparison, a stateful environment leaves you with lots of wasteful processes that require and consume resources such as memory, but don’t actually do anything for most of the time;
  • handling a sequence of requests from one individual user becomes an interesting challenge, because there is no guarantee that each request will be handled by the same physical back-end process: in fact, on a busy system, they will almost certainly not be handled by the same process.  Conversely, an individual back-end process will be handling a pretty much random sequence of requests coming from a variety of different users.

Clearly some fancy logic is required to make sense of this processing model at the application level.  If we’re using such an architecture to support web applications, each user needs to appear to be maintaining a session, where each request and response has some meaningful connection and inter-relationship, and where information can be retained and re-used in subsequent transactions within an individual’s session.  Such a stateful environment has to be created on top of a stateless environment.  It’s actually an illusion that is manufactured, but a critically important illusion.
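
One obvious way to manufacture it (and the hierarchical globals described earlier are ideal for the job) is to hold each user’s session state in the database, keyed by a session token that travels with every request, so that whichever process picks up the next request can recover exactly where the previous one left off.  Here is a purely illustrative sketch; the global name and structure are invented for this example and are not EWD’s actual session storage:

getState(sessid,name) ; illustrative only - fetch one item of session state for this user
 quit $get(^Session(sessid,"data",name))
 ;
setState(sessid,name,value) ; save it so the *next* process to serve this user can see it
 set ^Session(sessid,"data",name)=value
 set ^Session(sessid,"lastAccessed")=$horolog  ; eg for session time-out housekeeping
 quit

Because the state lives in the shared database rather than in any individual process, process 1 can call $$getState(sessid,"name") and reliably see whatever process 2 stored while handling the user’s previous request.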

One of EWD’s key roles is to create and maintain that illusion of statefulness. EWD provides a framework that manages the stateless environment created by the underlying gateway architecture, to the extent that the programmer or developer doesn’t even need to be aware of how or why that architecture works.

However, as we’ll see in later postings, life gets interesting when you want to web-enable an existing legacy application such as VistA which was originally written and designed to use a fully stateful run-time environment.

There’s one last feature of the underlying gateways that needs to be mentioned: as if the scalability offered by their stateless operation wasn’t enough, they can be configured to deliver even higher levels of scalability if required.  This is possible because an instance of a gateway can establish and maintain connections to multiple GT.M or Caché servers:

…and if you need even further scalability and/or resilience, you can have multiple web servers, each with its own gateway, each connected to the same set of back-end GT.M or Caché servers:

This kind of architecture is used by the biggest customers of InterSystems to support their Caché-based, Internet-facing web applications, allowing them to run fully resilient 24 x 7 business-critical systems.

So, in conclusion, there is absolutely no need to use any additional technologies on top of GT.M or Caché in order to support web applications or web services: they can do everything much more effectively, much more scalably and much more efficiently by virtue of the combination of:

  • the family of gateways designed for use with GT.M or Caché;
  • the Mumps language which is just as capable as any other web middle-tier scripting environment;
  • EWD to provide the framework to look after session, state and security management.

You end up with fewer moving parts, less to go wrong, less to manage, and just one language to deal with.

GT.M and Caché really are tailor-made for the web!

One comment

  1. There’s one footnote to add, to pick up on something I mentioned in passing with respect to the CSP gateway but didn’t explain.

    You’ll notice that in my description of the way the stateless mechanism of the gateways works, I explained that a gateway-connected back-end process becomes immediately available for use by the next incoming request from *any* user. Actually, whilst that is true of WebLink, m_apache and the Node.js-based ewdGateway module, it *isn’t* true of the CSP gateway.

    In the case of the CSP gateway, what happens is that when a back-end Caché/CSP process has responded to the client that sent it the request, then for a period of time known somewhat ironically as the “grace period”, that process is only available to the client that sent the previous request, and *isn’t* available for use by any other incoming request.

    I shall leave it to the reader to determine what the implications of this subtle little wrinkle are, and to surmise why InterSystems may have introduced it.
