Functional Programming Fail
(Information Technology,Programming)
I just watched a lecture by a fellow pushing functional programming. As a proponent, he bemoaned its lack of popularity. He displayed the top ten or so most popular languages, ranked from most to least popular. None of them were functional languages.
Let me first establish that I’m not a connoisseur of languages. I know a lot of guys who collect programming languages, and delight in the similarities and differences between them. They could be called “programming language hobbyists”. I’m not one of those. I’ve stated this before: I program to get a job done. A programming language is just a tool to get a job done. New tools may be nifty kewl, but when your hammer pounds nails just fine, why change it? I’ve searched for languages which do what I want in any given circumstance, and found them. They are what I use. Other, new kewl languages or paradigms don’t interest me.
Of course, those who advocate “newer” programming languages will deride me, on the basis that my “old-fashioned” point of view, if widespread, would stifle innovation in languages. And that would be true, if my viewpoint were widely held. But it obviously isn’t. This also raises the question of how much innovation is really needed in programming languages. For niche applications, a programming language which fits that niche might be a good idea. But for general programming, I question the need for newer and supposedly “better” languages.
I’ll give a sort of metaphor: the placement of a car’s steering wheel. At one time, steering wheels weren’t the preferred way to steer a car. And at one time, steering wheels were centered on the dashboard. Both of those trends went away in favor of a steering wheel on the left or right side of the car, depending on the country. In the hundred years or so since, this hasn’t changed, and likely won’t in the future. A guy named Chris Rutkowski wrote an essay in the early 80s about “architectural stabilization”: the idea that, over time, the architecture of an item stabilizes to the point where it no longer requires innovation. Take the standard computer keyboard. There are variants, but for the most part the industry has settled on a basic set of alphabetic, symbol, numeric and function keys in an arrangement common to most keyboards.
Now back to functional programming. One of the key aspects of functional programming is that functions have no “side effects”: call a function with a given set of parameters tomorrow and it will return the same answer it returns today. In other words, global variables or conditions do not factor into the result. As far as this aspect of functional programming is concerned, most languages could host functions written this way. But there are side effects of this style of programming.
First, every bit of data a function needs to do its job must be fed to it as parameters. No variables outside the scope of the function can be taken into consideration. This can make calling these functions tedious. Second, this style of function means a higher memory footprint overall: functions have to drag all of their information around with them at runtime, rather than drawing on environmental or global variables.
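To make that concrete, here’s a minimal C++ sketch (the tax-rate scenario and the function names are mine, purely for illustration). The first function leans on a global, so its answer can change when that global changes; the second is written in the “no side effects” style and takes everything it needs as parameters.

```cpp
#include <iostream>

// A global that an "ordinary" function might lean on.
static double tax_rate = 0.07;

// Not side-effect free: the result depends on a global, so the same
// argument can produce a different answer after tax_rate changes.
double price_with_tax(double price) {
    return price * (1.0 + tax_rate);
}

// The functional style: everything the function needs arrives as a
// parameter, so the same arguments always produce the same result.
double price_with_tax_pure(double price, double rate) {
    return price * (1.0 + rate);
}

int main() {
    std::cout << price_with_tax(100.0) << "\n";             // depends on hidden state
    std::cout << price_with_tax_pure(100.0, 0.07) << "\n";  // everything is explicit
    return 0;
}
```

Nothing exotic is needed to write the second version; the tedium shows up when a function needs a dozen such values and every caller has to supply all of them.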
Object Oriented Programming (OOP) handles this in its own way, by storing shared values in the class and allowing member functions to access them. Such variables can be hidden from outside code, so that these values are visible only to the member functions. This is one of the great strengths of OOP.
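A small C++ sketch of what I mean (the Account class and its names are made up for illustration): the shared value lives in the class, the member functions can read and update it, and outside code can’t touch it directly.

```cpp
#include <iostream>

class Account {
public:
    explicit Account(double opening_balance) : balance_(opening_balance) {}

    // Member functions share access to the stored value...
    void deposit(double amount) { balance_ += amount; }
    double balance() const { return balance_; }

private:
    // ...but the value itself is hidden from outside code.
    double balance_;
};

int main() {
    Account acct(100.0);
    acct.deposit(25.0);
    std::cout << acct.balance() << "\n";  // prints 125
    // acct.balance_ = 0;  // won't compile: the member is private
    return 0;
}
```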
But there’s something more important underlying the architecture of languages which aren’t considered “functional”. Consider the silicon on which your program runs. If you’ve ever dealt with machine code or assembler, or read technical articles on central processing units (CPUs), you have some idea of the features common to these chips. They have accumulators, data registers, program counters, address registers and the like. From the earliest Intel 8080 to the latest Intel Xeon CPUs, all have similar architectures, mostly tweaked to improve performance and silicon manufacturing yield.
Assembler is based on the codes which drive the CPU, and it strongly reflects the underlying architecture of the silicon. You have “load”, “store” and “add” instructions, for example, which do more or less what they say they do. And when you read up on how they make the silicon operate, they make complete sense.
At one time, assembler and machine code (1s and 0s) were the only ways to program computers. There were no higher level languages. Then higher level languages were introduced. Most of them were compiled languages, and you could see the reflection of the assembly language and the silicon in their syntax. They were built to be easy to translate into assembler or machine code. If you knew assembler, you could translate a C “for” loop into assembler more or less statement by statement, and it would work just fine.
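As a rough illustration, here’s a plain C-style loop (written as it would appear in C or C++); the comments sketch the kind of load, store, add, compare and jump instructions a compiler might turn it into. The exact instructions vary by CPU, so take this as the general shape rather than a literal translation.

```cpp
// Sum an array the old-fashioned way.
int sum_array(const int *data, int count) {
    int total = 0;                      // store 0 into a register or stack slot
    for (int i = 0; i < count; ++i) {   // compare i with count, branch out when done
        total += data[i];               // load data[i] from memory, add it to total
    }                                   // increment i, jump back to the compare
    return total;                       // leave the result in the return register
}
```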
Almost all of the languages which became mainstream, while their syntaxes varied, still reflected the underlying architecture of the CPU. You could park a bunch of data at a given memory address and then pass it to a variety of functions as is. In fact, this is often how video is implemented: a block of memory is set up to reflect the screen state, and a variety of functions operate on that memory to change it. Other functions echo the changed memory to the screen. In other words, the screen contents don’t have to be recomputed every time the physical screen is refreshed; the operating system has a function or functions which simply echo the display memory to the screen.
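Here’s a rough sketch of that pattern in C++. The buffer size, the framebuffer name and present_to_screen are all invented for illustration; the last one just stands in for whatever call the operating system or driver actually provides. The point is that several functions poke at one shared block of memory, and a separate step echoes it to the display.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

constexpr int WIDTH  = 320;
constexpr int HEIGHT = 200;

// One shared block of memory holds the screen state.
static std::uint8_t framebuffer[WIDTH * HEIGHT];

// Different functions operate on that shared memory to change it.
void clear_screen(std::uint8_t color) {
    std::memset(framebuffer, color, sizeof framebuffer);
}

void set_pixel(int x, int y, std::uint8_t color) {
    framebuffer[y * WIDTH + x] = color;
}

// Stand-in for the OS/driver call that echoes display memory to the screen.
void present_to_screen(const std::uint8_t *buffer, std::size_t size) {
    (void)buffer;
    (void)size;
    // ...hand the block of memory to the display hardware...
}

int main() {
    clear_screen(0);
    set_pixel(10, 10, 255);
    present_to_screen(framebuffer, sizeof framebuffer);
    return 0;
}
```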
And here’s where functional programming theoretically falls down. The “functions have no side effects” paradigm prohibits functions from drawing on any memory beyond the values explicitly passed to them as parameters (typically on the stack). The function is effectively cut off from the rest of the variables present, if any. In this way it deviates from the design of all the other popular and widely used languages, and it more or less sidesteps a whole list of CPU instructions which mainstream languages use routinely.
Again, some languages are simply a fit for certain limited applications. LISP, for example, is useful for artificial intelligence programming. Fortran is the preferred language for scientific calculation. APL was/is a language specifically designed to work with arrays. Functional languages are probably workable for certain niches of programming. But mainstream? Not really. Again, consider why popular languages are popular, and why they were designed the way they were (partially because of the shape of the underlying silicon).
And if you want to know why popular languages resemble each other so much, consider the folks who develop them. No language is ever developed in isolation. Whoever is doing the design already knows a language or two, and is drawing on his experience or knowledge of them to create his new language.
A lot of programming concepts, like functional languages, are dreamed up by university professors who write endless papers about this or that theoretical aspect of programs. And I suppose they are of interest to other university professors and programming language hobbyists. But for those of us who use programming as a tool to get a job done, C, JavaScript, Python, PHP and other “typical” languages will do the job just fine, without new advances in programming language theory.
Remember the old saw: “if it ain’t broke, don’t fix it”.