I’ve been a developer since the late 1970s and I’m still going strong. Over the years I’ve had to learn and develop in most of the well-known computer languages, and one of the least-known. I’ve had to understand a large number well enough to code-generate them. Most I learned to hate: too cumbersome, tedious and boring, they sucked the interest out of application development or simply wasted my time in a mass of redundant syntax and detail.
Actually there’s a very good reason why I clicked with both: they share a number of key attributes:
– they are both interpreted languages
– they are both typeless (more precisely, both apply similar automatic and intelligent type conversion in certain limited contexts)
– both have similar variable scoping rules: a variable is globally scoped unless specifically declared otherwise
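As a hedged sketch of those last two points (the routine and variable names here are invented for illustration, not taken from any real application): in Mumps, a string coerces to its leading numeric value in arithmetic context, and a variable is visible to the whole process until a routine re-scopes it with NEW:

```mumps
demo ; typeless coercion and scoping in a few lines
    set x="10 apples"
    write x+5,!          ; numeric context: "10 apples" coerces to 10, so this prints 15
    set count=1          ; count is visible to the whole process
    do sub
    write count,!        ; prints 1 - sub's count was NEWed, so ours survived
    quit
sub ; NEW gives count a fresh local value until this subroutine quits
    new count
    set count=99
    quit
```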
There are differences:
– the syntax of Mumps is more like that of Python, in particular with respect to the significance of white space at the beginning of each line.
– Mumps doesn’t support closures
A modern Mumps developer using a modern Mumps implementation (as found in Cache and GT.M) would write code that is just as readable and just as clean as code in most current languages. OK, the Mumps logic might not have the benefit of object orientation or closures, but in 30 years of development I’ve not come across a single situation where I wished I had such techniques available. The fact is that, for the purpose for which it was designed (manipulating the integrated Mumps database), it is plenty good enough.
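To show what I mean, here is a small sketch of that modern style (the ^Patient global and the logic are my own invented illustration); note the formal argument lists and the NEWed, locally scoped variables:

```mumps
addPatient(id,name) ; store a record in the integrated Mumps database
    ; ^Patient is a hypothetical global: a persistent sparse array
    new now
    set now=$horolog                   ; current date/time stamp
    set ^Patient(id,"name")=name
    set ^Patient(id,"updated")=now
    quit
getName(id) ; an extrinsic function with a formal argument list
    quit $get(^Patient(id,"name"))     ; "" if the record doesn't exist
```

A caller would invoke these as, say, `do addPatient^PATDEMO(123,"Smith")` and `write $$getName^PATDEMO(123)`, PATDEMO being a hypothetical routine name.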
Yet if you do even a small amount of Googling about the Mumps language, you’ll quickly come across what appears to be damning evidence that it’s a complete nightmare of a language: hopelessly out of date, a disaster waiting to happen, and the road to an unmaintainable ruin. How can this be? Against this weight of apparent evidence to the contrary, how can I be so positive about it? Am I suffering from complete delusion?
Well, let me explain how such a dichotomy of views has arisen. It’s largely an historical thing, and the sad thing is that the poison that has circulated from the “Mumps is a pile of c**p” school has managed to unfairly turn people away from what is otherwise a fascinating, hugely productive and capable application platform.
Mumps first emerged from Massachusetts General Hospital in the late 1960s/early 1970s. One of its early users was the US Department of Veterans Affairs, where it was used to develop what is now known as VistA during the 1970s/early 1980s. Many other clinical/medical systems were also developed around the world at that time and, like VistA, many have stood the test of time and are still in use.
At the time of their development, however, there were two key differences from today:
– the Mumps language standards were still developing and, in particular, proper variable scoping did not arrive until the mid-to-late 1980s. Prior to that, all variables were globally scoped and functions did not take a formal argument list. That’s all fixed now;
– in the late 1970s/early 1980s, a unique feature of Mumps as a technology was its ability to support what were otherwise considered impossible numbers of concurrent interactive users on what was, by today’s standards, incredibly underpowered hardware. Thirty-two simultaneous interactive users on a PDP-11/24 (a digital watch probably has more processing power these days!) was not considered unusual for a Mumps system, but at the time you’d have been unlikely to find any other technology capable of such a feat. To squeeze out that kind of performance, and to fit routines into the small amounts of available memory, Mumps programmers learned many tricks and shortcuts: commands were abbreviated to single letters, multiple commands were strung together on the same line, and (to other developers’ eyes) arcane techniques such as “post-conditionals” were used to create programs that were incredibly efficient to run, but seriously difficult to read and maintain. In those days, however, even the lowliest hardware was incredibly expensive and, by comparison, developer time was cheap, so the priority was performance, not maintainability.
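To give a flavour of the contrast (the ^P global and the loop are my own invented illustration, not code from VistA or any real system): the legacy style packed single-letter commands and post-conditionals onto one dense line, while the modern style spells the same traversal out:

```mumps
 ; legacy style: abbreviated commands, post-conditionals, one dense line
 S ID="" F  S ID=$O(^P(ID)) Q:ID=""  W !,^P(ID)
 ;
 ; the same traversal of ^P in a modern, readable style
 set id=""
 for  do  quit:id=""
 . set id=$order(^P(id))      ; step to the next subscript of ^P
 . if id'="" write !,^P(id)   ; display each entry
```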
It’s a somewhat inevitable fact that once a pattern of development has been established, it’s incredibly difficult to break out of it. By the time the language standard had improved, and by the time the balance was shifting to cheap hardware and expensive developer time, applications such as VistA consisted of millions of lines of code, representing tens of thousands of man-hours of development. Re-engineering from scratch would have been impossibly expensive and time-consuming. It’s also a sad fact that once a large body of code has been written that relies on globally scoped variables leaking in and out of functions, it’s a very difficult task to tidy it up so that it is properly and cleanly scoped.
And so back to those damning articles on the Mumps language: they are written by people who have had the misfortune to work with and maintain that old “legacy” code, complete with its leaky functions and arcane syntax. The examples they quote are from VistA and similar applications. Yes, even a modern Mumps developer, accustomed to the coding style that any developer of any modern language would now expect, would throw their hands up in horror at such code. The interesting thing is that the developers who have had the misfortune to maintain that old legacy code appear to be blissfully unaware that this is not a fault of Mumps per se: it’s how it was, for sure, but not how it needs to be today.
I’ve actually been in the position to explain this to someone who was initially a great detractor of Mumps as a language. My explanation came as a huge revelation to him, and, when I gave him examples of how modern Mumps could be written, he actually did a complete about-turn and began enthusing about the language in his blog!
So, the case for Mumps is actually very strong. Whilst I sympathise with the clear pain suffered by the authors of those articles about the problems they had when trying to maintain that old legacy Mumps code, it is unfair and just plain wrong to imply that this is symptomatic of a general problem with Mumps as a modern technology.
Sure, in the wrong hands, Mumps development can still go horribly wrong, but that is true of any modern, mainstream language too. In the right hands, it’s possible to write great applications and elegant code that is as readable and as maintainable as in any other language. Sure, you have to learn the syntax and a few quirks, but my experience is that this is true of any language.
I first came into contact with the Mumps database and language in the early 1980s. To this day it’s still one of my favourite languages, and the one in which I’m most productive. These days I’m doing some of the coolest and most modern web stuff with it, and I’ve yet to find any limitations to its abilities to meet the demands of the 21st century.
A case of the Mumps? I’d strongly recommend you get it!