Many of the statements about FP that I see here right now are the same old shit I heard about Java in the mid-00s. You just need to mentally translate some buzzwords, but the essence is the same. Seems like the software industry is just running in circles. Something gets hyped, people jump on it, fail, then search for the next bandwagon. Some examples:

1. Endless yammering about low-level correctness. As if it's the biggest problem in software engineering right now. In reality, most domains don't need perfection. They just need a reasonably low defect rate, which is not that hard to achieve if you know what you're doing.

2. Spewing of buzzwords, incomprehensible tirades about design patterns. FP people don't use the term "design pattern" often, but that's what most monadic stuff really is. Much of it is rather trivial stuff once you cut through the terminology. Contrast this with talks by someone like Rich Hickey, who manages to communicate complex and broad concepts with no jargon.

3. People who talk about "maintainability" of things while clearly never having had to maintain a large body of someone else's code.

Meanwhile, the actual #1 problem goes unaddressed: the insane, ever-growing level of complexity and the resulting lack of human agency affecting both IT professionals and users.
For me (and I say this having grown and maintained some large functional code bases), the fact that your day-to-day code has enhanced low-level correctness is exactly what helps you attack what you describe as the major problem: complexity. The way I've dealt with complexity in large code bases is by being fearless about refactoring. Refactoring may not reduce complexity in terms of what the software does, but it reduces the complexity of understanding the code base tremendously, by realigning the structure of the code with the actual problems being solved.
Refactoring gets a lot less scary when you have greater confidence in the low-level correctness of the code. On your second point: yes, I have found that FP has some, shall we say, interesting jargon.
But I have trouble thinking of succinct names for a lot of FP constructs that are nonetheless useful, such as monads.
A lot of the more colloquial terms that come to mind in brainstorm sessions might even undermine understanding by providing a false equivalence. I think the same argument can be made for mathematical notation. In summary, I'd turn your last sentence around a bit. Yes, the #1 problem is complexity, but you can reduce complexity significantly by applying correctness and modularity and other programming 'buzzwords'. You can rail against complexity itself, but I think we're probably at the bottom end of a very large complexity slope over the next decades.
So we'll need better and better constructs to deal with it. Most recently I managed a refactoring design process across a multi-hundred-person org at a large tech company, which also needed to include a few other multi-hundred-person orgs.
It wasn't the first time I've done that. Most of the effort involved was people issues, to your point. At that point you're talking multiple codebases and the complexities become managing transactions, data transformations, and contracts across discrete processes.
I'm not sure how that's germane to the discussion at hand. In fact, to the opposite point, I've found that in multi-organization refactors and designs, functional programming continues to be a useful mine for concepts that simplify thinking around data transformations, immutability, and data contracts. When I read that paper, it was clear that the author's definition of modularity was very different from my own.
When I think about an algorithm like merge sort, mini-max decision trees or other low-level algorithms, the concept of modularity doesn't even enter my head. It doesn't make any sense to modularize an algorithm, because it is an implementation detail, not an abstraction and not a business concern. Modularity should be based on high-level business concerns and abstractions. The idea that one should modularize low-level algorithms shows a deep misunderstanding of what it means to write modular software in a real-life context outside of academia. It seems that FP induces confusion in the minds of its followers by blurring the boundary between implementation details and abstractions. OOP, on the other hand, makes the difference absolutely clear. In fact, the entire premise of OOP is to separate abstraction from implementation. Referential transparency is not abstraction; in fact, it goes against abstraction. If your black box is transparent in terms of how it manages its state, then it's not really a black box.
Which business concern of a machine is not an algorithm? An algorithm is a pretty general thing. If modularity works at the lowest levels, then by induction it works great at the higher levels as well. We've also learned in software engineering that the defect rate is mainly correlated with code size, i.e. shorter programs tend to have fewer defects.
With functional abstraction, the abstractions aren't "leaky" and actually allow you to reduce complexity and forget about the lower-level details entirely.

This is absolutely wrong. OOP is the one that blurs the line between modularity and implementation. Think of it this way: in order to make something as modular as possible, you must break it down into the smallest possible units of modularity. State and functions are separate concepts that can be modularized.
OOP is an explicit wall that stops users from modularizing state and functions separately, by forcing the user to unite state and functions into a single entity. Merge sort is a good example. It can't be broken down into smaller modules in either OOP or functional programming. The problem exists at a higher level. In FP, mergeSort can be composed with any other function that has the correct types. In OOP, mergeSort lives in the context of an object and theoretically relies on the instantiated state of that object to work. So to re-use mergeSort in another context, a MergeSort object must be instantiated, and that object must be passed along to another ObjectThatNeedsMergeSort in order to be reused. Remember, modules don't depend on one another; hence this isn't modularity, this is dependency injection, which is a pattern that promotes the creation of objects that are reliant on one another rather than objects that are modular.
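To make that concrete, here's a rough Haskell sketch (this mergeSort is written out by hand purely for illustration):

    -- mergeSort is a free-standing function: no object, no instantiated state.
    mergeSort :: Ord a => [a] -> [a]
    mergeSort []  = []
    mergeSort [x] = [x]
    mergeSort xs  = merge (mergeSort front) (mergeSort back)
      where
        (front, back) = splitAt (length xs `div` 2) xs
        merge [] ys = ys
        merge xs' [] = xs'
        merge (a:as) (b:bs)
          | a <= b    = a : merge as (b:bs)
          | otherwise = b : merge (a:as) bs

    -- It composes with anything whose types line up:
    sortedWords :: String -> [String]
    sortedWords = mergeSort . words

No MergeSort object, no dependency injection: the function itself is the unit of reuse.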
I know there are "design patterns" and all sorts of garbage syntax like static objects that are designed to help you get around this. However, the main theoretical idea still stands: I have a function that I want to re-use; everything is harder for me in OOP because all functions in OOP are methods on an object, and to use that method you have to drag along the entire parent object with it.
Modularity in functional programming languages penetrates to the lowest level. Functional programming encourages the composition of powerful, general functions to accomplish a task, as opposed to the accretion of imperative statements to do the same.
With currying, a function that takes four arguments is trivially also four separate functions that can be further composed. The facilities for programming in the large are also arguably more general and expressive than in OOP languages: take a look at a Standard ML-style module system, where entire modules can be composed almost as easily as functions.
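A rough Haskell illustration of the currying point (names invented for the example):

    -- One four-argument function...
    blend :: Double -> Double -> Double -> Double -> Double
    blend w1 w2 x y = w1 * x + w2 * y

    -- ...is automatically also a one-, two-, and three-argument function,
    -- each of which can be composed or passed around on its own:
    average2 :: Double -> Double -> Double
    average2 = blend 0.5 0.5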
I'm not sure I understand you here entirely, but implementation details matter. Is this collection concurrency safe? Is this function going to give me back a null? Is it dependent on state outside its scope that I don't control? Furthermore, when it's necessary to hide implementation details, it's still eminently possible. Haskell and OCaml support exporting types as opaque except for the functions that operate on them in their own module, which is at least as powerful as similar functionality in OOP languages.
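A minimal sketch of what that looks like in Haskell (module and names made up for illustration):

    module Counter (Counter, new, tick, count) where

    -- The constructor is not exported, so outside this module a Counter
    -- is opaque: it can only be used through new, tick, and count.
    newtype Counter = Counter Int

    new :: Counter
    new = Counter 0

    tick :: Counter -> Counter
    tick (Counter n) = Counter (n + 1)

    count :: Counter -> Int
    count (Counter n) = n

The export list names the type but not its constructor, which is the whole trick.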
Yeah, I've lost you here. Would you mind clarifying? Currying is just another example of poor abstraction. You have a function which returns another function, which may be passed around to a different part of the code and then called, and it returns another function. Abstraction doesn't get any leakier than this. It literally encourages spaghetti code. I despise this aspect of FP: the code ends up passed around all over the place, and keeping track of what came from where is a nightmare.
I've written plenty of very short OOP programs. They don't have to be huge to be effective. The reason why you sometimes see very large OOP software and rarely see large FP software is not because FP makes code shorter, it's because FP logic would become impossible to follow beyond a certain size.
Otherwise state ends up being mutated from a part of the code which has nothing to do with the business domain that state is about. To make proper black boxes, state needs to be encapsulated by the logic which mutates it.

Then don't use it. Not all functional programs force you to use currying and anonymous functions. I'm a functional programmer and I agree that passing functions around as first-class values can get kind of messy.
Don't do it. Data is data and functions are functions; pass data into the pipeline, not functions. But if you pass an object into a pipeline, which OOP forces you to do, it's 10x worse. Keep in mind that OOP is essentially forced currying. A method that returns an object full of other methods is identical to currying, except that the method isn't returning a single function: it's returning a group of functions that all rely on shared state.

Can you explain why you see currying as an abstraction leakage?
What implementation details does it betray? Quite contrary to my experience with large FP code bases, of which many exist, to be clear.
They are just a lot smaller than what equivalent OOP code would look like, and I challenge you to refute that with evidence. I completely disagree about black boxes and think they are actually a complete scourge on software engineering. In languages with pervasive side effects, this is not possible. I thought my example about being able to call a function to get another function and then passing it to some other part of the code and calling it there was enough to illustrate the kind of confusion and disorganization that currying can cause.
For me, the most important principles of software engineering are:

1. Black boxing, in terms of exposing a simple interface for achieving some result, whose implementation is irrelevant.

2. Separation of concerns in terms of business concerns; these are the ones that can be described in plain language to a non-technical person.

You need these two principles to design effective abstractions. You can design abstractions without following these principles, but they will not be useful abstractions. Black boxes are a huge part of our lives.
If I want to go on a holiday to a different country, I don't need to know anything about how the internet works, how houses are built or how airplanes work in order to book a unit in a foreign country on AirBnB and fly there.
The complexity and amount of detail which is abstracted is unfathomable but absolutely necessary to get the desired results. The complexity is not just abstracted from the users, but even the engineers who built all these different components knew literally nothing about each other's work.
As a user, the enormous complexity behind achieving my goal is hidden away behind very simple interfaces such as an intuitive website UI, train tickets, plane tickets, passport control, maps for location, house keys.
These interfaces are highly interoperable and can be combined in many ways to achieve an almost limitless number of goals. I couldn't explain to anyone anything about how airplanes work, but I could easily explain to them how to use a plane ticket to go to a different country. With programming, it should be the same. The interfaces should be easy to explain to any regular junior developer.

I generally think of spaghetti code as code that has unclear control flow, e.g. GOTOs everywhere, too many instance variables being used to maintain global state, etc. Currying, plainly, does not cause this. Sure, completely possible in ML-family languages and Haskell.
Refer to what I said about opaque types earlier.

"Separation of concerns in terms of business concerns; these are the ones that can be described in plain language to a non-technical person."

Again, nothing in functional languages betrays this.
I'd say that state is actually easier to handle in functional languages. You just need the right abstractions for it. There can be a lot of state contained in this response, but when you use Phoenix, you don't really notice it.
The abstraction makes everything seamless: best web framework I've used to date.

KirinDave on Dec 8:

By the same token, whenever this conversation comes up, a series of vague and poorly-formed criticisms comes up saying, "You can't do low level things", without defining what on earth that means.
Or, perhaps worse, conflating FP with Haskell, in all its massive (and, perhaps fairly, a bit bloated) confusion of ideas. People can and do write high-performance code in functional style and with functional tools. People even do it with laziness as a core abstraction.
The main gate to functional languages participating in, say, the Linux kernel is NOT that they are "too slow" or that "laziness makes them too confusing". It's that the Linux kernel is written entirely around the unique weirdness and expectations of C, and only languages based on or descended from C do well there. It's difficult to treat the core question "What are the disadvantages of functional programming?" in the same way that it's difficult to answer "What are the weaknesses of OO programming?"
Sure, but the usual functional style has intrinsic issues that prevent it from being feasible for writing kernels in general. A kernel (especially a microkernel) spends most of its time managing state. You can use a functional language as a metalanguage for an imperative DSL (as the Atom DSL for constant-space programming uses Haskell), but you won't be writing code that looks remotely functional.
Genuine question as I don't do FP. A team already redid the ext2 filesystem with it. COGENT takes pretty much precisely the approach I described below: it uses domain-specific structures in a functional language to encode verified imperative code.
It's not that they wrote the logic in a fundamentally different paradigm from C, it's that they take care to give their system the information it needs to both generate C code and most of the desired proofs simultaneously.
The functional language is fulfilling the role of metalanguage excellently, but little of that ends up in the generated code. The one functional thing that does is pattern matching, since the C equivalents (if statements and unions) are much harder to verify and use.
Thanks for the insightful reply. I agree that code does look very imperative, likely due to the goal of C synthesis, as you said.

KirinDave on Dec 8:

Not really? So does every computer program, though. The idea that functional languages can't support mutation is a strangely persistent myth, even in the face of multiple counter-examples AND 20 years of improvement via research and practical work.
My point is not that functional languages can't support mutation; I'm well aware of the whole gamut from State to F-Star, Eff, and Idris. My point is that you're going to spend almost all of your time explicitly mutating things, using whatever functional language as a "very fine imperative language". But you're still mostly going to be shoving bits into specific places based on the result of a shallow pure function applied to bits you yanked from a specific place. On top of that, you're not going to be able to abide the kind of allocations that functions in Haskell, OCaml, etc. routinely perform.
Definitely no lambdas or partial application. Where in this do you see any functional-ness, outside of the fact that you'll probably call your procedures functions?

Yes, but not to the degree that Atom's use cases require, where the program must allocate all memory ahead of time and therefore know a specific upper bound.
This necessitates deviation from the usual practices of any kind, including Haskell's.

KirinDave on Dec 8:

Maybe we can stop having ring0 buffer overflows some day, when C programmer pride is sufficiently assuaged. I doubt it, though. My experience with the Linux kernel community is that it is a limitless void of insecurity and infighting.
But yeah, Haskell has a very poor focus on the needs of the "industry" when said industry is focused around extremely tight optimizations. That said, I refuse to confuse a specific example of FP (one with a traditionally academic and research focus) with the discipline as a whole.
That's a dodge.

Honest question: what's the problem with this, if it offers additional safety and promotes the use of stateless functions? If it all compiles down to similar code, then it's fine. People act like monadic code is not functional, when it is in fact extremely functional code. That's what's funny about all this: imperative programming is expressible succinctly and easily in functional languages. It's not a problem at all; it's exactly the direction I'd like to see things go as well.
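For what it's worth, the imperative-looking surface really is ordinary functional code underneath. A tiny Haskell example (nothing kernel-specific):

    -- do-notation reads imperatively...
    greet :: IO ()
    greet = do
      name <- getLine
      putStrLn ("hello, " ++ name)

    -- ...but it desugars to plain higher-order function application:
    greet' :: IO ()
    greet' = getLine >>= \name -> putStrLn ("hello, " ++ name)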
It's just that your code is still a stranger in a strange land in these cases. Your proofs will be full of the typical hallmarks of functional programming, and for the stuff that is executed at runtime you'll definitely want some pattern matching, but there will be no closures or higher-order functions, and limited opportunities for monads, functors, etc.
Would you consider a C program transliterated into a representation of C inside a functional language and then annotated with proofs to be "functional"? My argument is that what you get if you write a true kernel (not sitting on top of a runtime written in something else) is going to look a lot like that.

AnimalMuppet on Dec 8:

Is that a result of the specific language, rather than FP? That is: one could think about building a procedural language that had those same guarantees. And from the other side: does FP require very strong types?
Or Python's? A kernel spends most of its time managing mutable state. Here's a table of processes. We want it to be an array, rather than a linked list, for efficiency reasons. When a new process is created, we don't want to copy the array, also for efficiency reasons. So we mutate the array.

Not really. Why do you think that FP doesn't have tools for this? Have you investigated it? I don't know. If you think that things like ST aren't suitable, please say why (other than the larger problems with monad transformers, of course).
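For reference, a minimal sketch of the kind of thing I mean (ST gives you real in-place mutation that the type system guarantees stays local; the function below is made up for illustration):

    import Control.Monad (forM_)
    import Control.Monad.ST (runST)
    import Data.STRef (modifySTRef', newSTRef, readSTRef)

    -- Genuinely mutates an accumulator in place, yet the function as a
    -- whole is pure: runST seals the mutation inside.
    sumStrict :: [Int] -> Int
    sumStrict xs = runST $ do
      acc <- newSTRef 0
      forM_ xs $ \x -> modifySTRef' acc (+ x)
      readSTRef acc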
AnimalMuppet on Dec 8:

I know that FP has tools for that. But if the problem is primarily managing mutable state, isn't a tool that lets you directly see what you're doing a better fit? Are the FP tools as efficient as the direct, C-style approach? For an OS, that matters. Are the FP tools as easy to reason about correctly (especially in a section you're not familiar with)? For an OS that's worked on by thousands of people, that matters.
In this context, what is "ST"? And what are the larger problems with monad transformers?

I'm sorry, I decline to continue this conversation further.

By using all the imperative, procedural and object-oriented features of Lisp.
Remember, Lisp is a multi-paradigm language. But that is the thing: none of the successful FP languages in the industry are pure FP; rather they are multi-paradigm, even if functional-first.
At least in Linux, the table of processes is implemented as a doubly linked list.

I wasn't going to say anything, but...

AnimalMuppet on Dec 9:

Here is cat in brainfuck: ,[.,]

I see the parent post mentioned "lazy evaluation." Higher levels of abstraction are harder to translate to efficient machine code.
Non-strict (what you call "lazy") evaluation can make it more difficult to predict when resources are needed. In strict languages, the resources needed to evaluate f x are needed exactly at the point where you typed "f x".
With non-strict evaluation, those resources may be needed then, later, or not at all! If those resources happen to be needed when (a) they are no longer available, or (b) a bunch of other computations need resources at the same time, you have problems. (There's a small sketch of this below.)

That makes total sense. Thanks for the explanation. I'm too much of a noob to know what I'm talking about, but so far this series of talks has convinced me that FP with strong typing brings something new to the table, in the same way that, say, C brings something over assembly, and s-expressions in Lisp bring something over non-s-expression languages.
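Picking up the resource-timing point above, a minimal Haskell sketch (names made up):

    -- With non-strict evaluation, naming a computation does no work:
    expensive :: Integer
    expensive = sum [1 .. 10 ^ 7]

    -- The work happens where (and if) the value is demanded:
    demo :: Bool -> Integer
    demo useIt =
      let x = expensive          -- no resources consumed here
      in if useIt then x else 0  -- consumed here, or never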
Well, C brings expressions: you can write a + b with no assignment anywhere. Yes, functional programming matters. It lets you add two things together in C without worrying about allocating a destination operand for the result, whose clobbering won't affect anything anywhere else. This sort of thing in turn makes it a heck of a lot easier to write OS schedulers, drivers, memory managers, codecs, ray tracers, database engines, and so on.

I absolutely believe the correct way to build software is with language layers.
Ideally using FP or declarative languages where you can. Python gets this partially right by using C libraries for all its performance-critical work.

Why not? Lisp did it before C was even born.

How many device drivers have ever been written in Lisp?
I'm not necessarily advocating C as the best way to write high-performance, non-allocating code either.

Why, all of them, in a Lisp machine. Probably zero of them, in any non-Lisp machine. Lisp is a fairly nuts-and-bolts language suitable for device drivers, depending on what you include in it. The basic Lisp evaluation model is close to machine language: Lisp values readily map to words (32 bit, 64 bit, whatever) stored in registers, and pushed onto a conventional stack during function calling.
Lisp compilers can optimize away environments: they can tell when some local variables or function parameters are not being captured by a closure and can live on the stack. Lisp can compile to re-entrant machine code. Dynamic memory allocation in contexts such as interrupt time is not off the table.
Similarly, a Lisp interrupt service routine can still cons up cells or other objects, probably in a limited way that can't trigger a full GC or block for a page fault. Parts of such a system can be written in a Lisp notation for a non-Lisp language, such as, for instance, a "Lispified" assembly language. Thus the saving of registers on entry into an interrupt can still be notated in Lisp; it's just not normal Lisp, but some S-expressions denoting architecture-specific machine instructions (register-to-memory and register-to-register moves and such).
When the system is built, an assembler written in Lisp converts that to the executable code.