I am deeply impressed by the depth and breadth of this language. Algebraic data types, logic programming, mutability -- all there from the get-go.
Another aspect that I love from their comparison table is that a single executable is the package manager, the LSP, and the compiler. As I understand it, the language server for Haskell has/had to jump through hoops, re-implementing things from GHC and juggling the particular GHC version against your cabal file. And maybe stack too, because I don't know which package manager is the blessed one these days. Not to shit on Haskell -- it is actually a very fine language.
However, the best feature is a bit buried and I wonder why.
How ergonomic is the integration with the rest of the JVM, from the likes of Java? AFAIK, types are erased by JVM compilers... With the concept of `regions` they have at least first-class support for imperative interaction.
Note: With the JVM you get billions of dollars' worth of code from a high-quality professional standard library, so that is a huge plus. That is why the JVM and .NET Core are IMHO the most sane choices for 90+% of projects. I think the only comparable language would be F#. I would love to see a document about Flix limitations in the JVM interoperability story.
__EDIT__
- There is a bit of info here. Basically all values from Flix/Java have to be boxed/unboxed. https://doc.flix.dev/interoperability.html
- Records are first-class citizens.
Oh my, I just know you're going to love Unison.
It's a very fun time.
Not in all the cases (it keeps type parameters for anonymous classes), and there are various workarounds. Also, essentially, it's not a problem at all for a compiler; you are free to render applied type constructors as regular classes with mangled names.
The parent poster is correct. We do monomorphization, hence Flix types are unboxed. For example, a `List[Int32]` is a list of primitive integers. There is no boxing and no overhead. The upshot is that sometimes we are faster than Java (which has to box). The downside is larger bytecode size -- which is less of a factor these days.
Caveat: Flix sometimes has to box values on the boundary between Flix and Java code -- e.g. when calling a Java library method that requires a java.lang.Object due to erasure in Java.
As a non-functional-programming, C-language-familiar person, the syntax looks fabulous. It seems like the first functional language I've seen that makes simple things look simple and clear.
On a language semantics note: the semantics of extending/restricting polymorphic records seem to follow Leijen's approach [0] with scoped labels. That is, if you have a record e.g. r1 = { color = "yellow" }, you can extend it with r2 = { +color = "red" | r1 }, and doing r2#color will evaluate to "red"... and if you then strip the field "color" away, r3 = { -color | r2 }, then you'll get back the original record: r3#color will evaluate to "yellow". Which IMO is the sanest approach, as opposed to earlier attempts to outlaw such behaviour, preferably statically (yes, people developed astonishingly high-kinded type systems to track records' labels, just to make sure that two fields with the same label couldn't be re-added to a record).
[0] https://www.cs.ioc.ee/tfp-icfp-gpce05/tfp-proc/21num.pdf
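A small sketch of how this behaves in practice, using exactly the record syntax from the comment above (a minimal sketch; assumes Flix's documented record operations and `println`):

def main(): Unit \ IO =
    let r1 = { color = "yellow" };
    let r2 = { +color = "red" | r1 };  // extension: a second 'color' shadows the first
    let r3 = { -color | r2 };          // restriction: strips the outermost 'color'
    println(r2#color);                 // prints "red"
    println(r3#color)                  // prints "yellow" -- the original label is visible again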
I looked at Flix a while ago and found it really interesting - so much so that I wrote an article "Flix for Java Programmers" about it. Might actually be a bit outdated by now... need to look at Flix's recent development again.
But if you're interested: https://www.reactivesystems.eu/2022/06/24/flix-for-java-prog...
The language has improved a lot in the years since the post. In particular, the effect system has been significantly extended, Java interoperability is much improved, and some syntax has been updated.
Wow what a gold mine your blog is. It’s like a more elaborate and well thought through version of thoughts that have been torturing me for years. Looking forward to reading it all.
C++, C#/Typescript, Dart, etc all have strong roots in that one small area in Denmark.
In general, I am curious what makes some of these places very special (Delft, INRIA, etc)?
They aren't your 'typical' Ivy League/Oxbridge univ+techhubs.
Is it the water? Or something else? :)
Little nitpick: C# was created by Anders Hejlsberg, who studied at DTU (Copenhagen). He also implemented Turbo Pascal. Borland was also a company founded by Danes.
In general, programming language theory is pretty strong in Denmark, with lots of other contributions.
For example, the standard graduate textbook in static program analysis (Nielson & Nielson) is also Danish. Mads Tofte made lots of contributions to Standard ML, etc.
> They aren't your 'typical' Ivy League/Oxbridge univ+techhubs.
Aarhus is an outstanding university. There are a couple of dozen universities in Europe that lack the prestige of Oxbridge but offer high quality education and perform excellent research.
Lineage? Aarhus has a strong academic tradition in areas like logic, type theory, functional programming, and object-oriented languages. Many influential researchers in these fields have come through there.
I also think there's a noticeable bias toward the US in how programming language research is perceived globally. Institutions like Aarhus often don't invest heavily in marketing or self-promotion, they just focus on doing solid work. It's not necessarily better or worse, but it does make it harder for their contributions to break through the layers of global attention.
Yes, exactly. Aarhus had Martin-Löf, Nygaard, etc. Similarly, INRIA has had many influential researchers, as well as OCaml and Rocq. Talent (and exciting projects) attracts more talent. But that doesn't mean it doesn't exist in the US. Penn, Cornell, CMU, MIT, and others have historically had very strong PL faculty. My understanding is that the nature of grants in the US doesn't give faculty the same freedom to work on what they choose as in Europe. So you get different research focuses because of that.
Can't find any mentions of typeclasses though, are they supported?
Give me typeclasses and macros comparable with Scala ones and I would be happy to port my libraries (distage, izumi-reflect, BIO) to Flix and consider moving to it from Scala :3
UPD: ah, alright, they call typeclasses traits. What about macros?
UPD2: ergh, they don't support nominal inheritance even in the most harmless form of Scala traits. Typeclasses are not a replacement for interfaces; an extremely important abstraction is missing from the language (due to the H-M typer, perhaps), so a lot of useful things are just impossible there (or would look ugly).
Flix supports type classes (called "traits") with higher-kinded types (HKTs) and with associated types and associated effects. A Flix trait can provide a default implementation of a function, but specific trait instances can override that implementation. However, Flix has no inheritance. The upshot is that traits are a compile-time construct that is fully eliminated through monomorphization. Consequently, traits incur no runtime overhead. Even better, the Flix inliner can "see through" traits, hence aggressive closure elimination is often possible. For example, typical usage of higher-order functions or pipelining is reduced to plain loops at the bytecode level without any closure allocation or indirection.
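To make that concrete, here is a minimal sketch of a trait with a default method (the trait, enum, and instance are invented for illustration; syntax per the Flix docs):

trait Speak[a] {
    pub def name(x: a): String

    // Default implementation; an instance may override it.
    pub def speak(x: a): String = "Hi, I am ${Speak.name(x)}"
}

enum Dog { case Rex }

instance Speak[Dog] {
    pub def name(_x: Dog): String = "Rex"
}

After monomorphization, a call like Speak.speak(Dog.Rex) compiles to a direct call: no dictionaries, no vtables.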
Flix does not yet have macros -- and we are afraid to add them due to their real (or perceived) (ab)use in other programming languages.
We are actively looking for library authors and if you are interested, you are more than welcome to stop by our Gitter channel.
> The upshot is that traits are a compile-time construct that is fully eliminated through monomorphization.
So, apparently, I can't re-implement distage for Flix.
I don't mind a little bit of overhead in exchange for a massive productivity boost. I don't even need full nominal inheritance, just literally one level of interface inheritance with dynamic dispatching :(
> their real (or perceived) (ab)use in other programming languages.
Without macros I can't re-implement things like logstage (effortless structured logging, extracting context from the AST) and izumi-reflect (compile-time reflection with a tiny runtime Scala typer simulator).
Sorry to hijack, but since you are involved, can you explain why tail call optimization would incur a runtime perf penalty, as the docs mention? I would expect tail call optimization to be a job for the compiler, not the runtime.
We have to emulate tail calls using trampolines. This means that in some cases we have to represent stack frames as objects on the heap. Fortunately, in the common case where a recursive function simply calls itself in tail position, we can rewrite the call to a bytecode level loop and there is no overhead.
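For instance, a direct self-recursive tail call like this needs no trampoline (a minimal sketch):

// The self call in tail position can be compiled to a bytecode-level loop.
def sum(acc: Int32, n: Int32): Int32 =
    if (n == 0) acc else sum(acc + n, n - 1)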
Thanks for explaining that term. That sounds really bad indeed. Maybe this is way too technical, but representing them as stack pointers was unfeasible?
But the good news is that the common case incurs no overhead.
TCO (tail call optimization) is often confused with TCE (tail call elimination): the latter is a runtime guarantee, whereas the former is a compiler's best-effort attempt to statically optimize tail calls.
Thanks! So you are implying that `TCO :: Maybe TCE`?
I am trying to think of a situation where a functional language compiler does not have enough information at compile time, especially when effects are witnessed by types.
I'm not a compiler dev, but I know that many functional programming languages struggle with this in the same manner if the target platform does not support TCE itself, and therefore require trampolining.
Even though it's called Effect, it has almost nothing to do with algebraic effects, which is what this language and others like OCaml 5 have; Effect TS is more like Haskell (as it came from fp-ts).
Oh, but you can just transpile it to WASM using e.g. TeaVM [0]. Just add another build step to your bundler or whatever web dev uses nowadays to build apps.
[0] https://github.com/konsoletyper/teavm
An effect system allows programmers to annotate expressions that can have certain effects, just as a type system annotates them with type information, so the compiler can enforce effect rules just like it enforces type rules.
For example, for a type system:
let a: Int // this says 'a' has the type Int
a = 5 // compiler allows, as both 'a' and 5 are of Int.
a = 5.1 // disallowed, as 'a' and 5.1 are of different types.
Similarly, for an effect system:
let a: Int
let b: Int!Div0 // 'b' is of type Int and the Div0 effect.
let c: Int
...
a = 1 / c // disallowed, as '/' causes the Div0 effect, which 'a' does not support.
b = 1 / c // allowed, as both '/' and 'b' support the Div0 effect.
The effect annotations can be applied to a function just like the type annotations. Callers of the function need to anticipate (or handle) the effect. E.g., let's say the above code is wrapped in a function 'compute1(..) Int!Div0'; a caller can do:
compute1(..) on effect(Div0) {
    // handle the Div0 effect.
}
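For comparison, in Flix itself the effect goes after a `\` in the signature rather than after a `!` (a minimal sketch using the built-in `IO` effect):

// Impure: the IO effect is part of the signature, and callers must account for it.
def greet(name: String): Unit \ IO = println("Hello, ${name}")

// Pure: no effect annotation, so the compiler guarantees no side effects.
def inc(x: Int32): Int32 = x + 1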
The book uses Scala & ZIO but intends to be more about the concepts of Effects than the actual implementation. I'd love to do a Flix version of the book at some point. But first we are working on the TypeScript Effect version.
https://youtu.be/EHtVADr-x94
It looks like "effect" as in impure functions in a functional language? I.e. a new way of dealing with effects (global/hidden state mutations) in a language that makes the pure-impure distinction. I'm not entirely sure.
I thought it was going to be something like contracts or dependent types or something.
No. It is essentially resumable exceptions. You throw an exception saying “I need a MyAlgebraicType”, and the effect handler catches the exception, generates the value, and returns execution to the place the exception was thrown from.
But entirely definable in user code, so an effect is essentially a set of possibly impure operations you can perform (like I/O or exception throwing), and a function that exhibits that effect has access to those operations. Of course the call sites then also exhibit that effect, unless they provide implementations of the effect operations.
>> Flix is a principled effect-oriented functional, imperative, and logic programming language...
>> Why Effects? Effect systems represent the next major evolution in statically typed programming languages. By explicitly modeling side effects, effect-oriented programming enforces modularity and helps program reasoning.
Since when do side effects and functional programming go together?
(I am one of the developers of Flix.) In Flix, all effects are tracked by the type and effect system. Hence programmers can know when a function is pure or impure. Moreover, pure functions can be implemented internally using mutation and imperative programming. For example, in Flix, one can express a sort function that is guaranteed to be pure when seen from the outside, but internally uses a quicksort (which sorts in place on an array). The type and effect system ensures that such mutable memory does not escape its lexical scope, hence such a sort function remains observationally pure as seen from the outside.
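A sketch of such a sort function using regions (a minimal sketch; the exact stdlib names, e.g. `Array.sort!`, are assumptions from memory and may differ across versions):

// Observationally pure: the mutable array cannot escape the region `rc`.
def sorted(l: List[Int32]): List[Int32] = region rc {
    let arr = List.toArray(rc, l);  // copy the list into a region-local array
    Array.sort!(arr);               // sort in place; the mutation is confined to `rc`
    Array.toList(arr)               // return an immutable list
}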
Haskell can do the same kind of thing (local mutation), using the ST monad.
Its usage is almost equivalent to using IORefs, except we can escape ST using runST to get back a pure value not in ST, which we cannot do for IO because there is no `IO a -> a`.
There's no requirement to contain ST to a single function - we can split mutation over several functions, provided each one involved returns some `ST a` and their usage is combined with >>=.
https://dl.acm.org/doi/pdf/10.1145/178243.178246
FP isn't really about eliminating side effects. Controlled effects are fine. That's what an effect system does.
Avoiding side effects is really just a side effect (pun intended) of older programming language technology that didn't provide any other way to control effects.
Arguably FP really is about eliminating side effects.
The research has sprung out of lambda calculus where a computation is defined in terms of functions (remember: Functional programming).
Side effects can only be realized by exposing them in the runtime / std-lib etc. How one does that is a value judgement, but if a term is not idempotent, then you arguably do not have a functional programming language anymore.
You gotta ask the question: why does FP care about eliminating side effects? There are two possible answers:
1. It's just something weird that FP people like to do; or
2. It's in service of a larger goal, the ability to reason about programs.
If you take the reasonable answer---number 2---then the conclusion is that effects are not a problem so long as you can still reason about programs containing them. Linear / affine types (like Rust's borrow checker) and effect systems are different ways to accommodate effects into a language and still retain some ability to reason about programs.
No practical program can be written without effects, so they must be in a language somewhere.
More here: https://noelwelsh.com/posts/what-and-why-fp/
> No practical program can be written without effects, so they must be in a language somewhere.
Or rather, very few. It is like programming languages that trade Turing-completeness for provability, but worse.
In theory, one could imagine a program that adds 2 matrices in a purely functional manner, and you would have to skip on outputting the result to stay side-effect-free. Yet, it is running on a computer so the program does affect its internal state, notably the RAM in which the result is visible somewhere. One could dump that memory from outside of the program/process itself to get the result of the computation. That would be quite weird, but on the other hand sometimes normal programs do something like that by communicating through shared memory.
It seems that the notion of side effects must be understood relatively to a predefined system, just like in physics. One wouldn't count heat dissipation or power consumption as a side effect of such a program, although side-channel-attackers have a word to say about this.
(from your link:)
> Both languages allow mutation but it's up to us to use it appropriately.
This is the crux of the problem. If you add a C example to your Typescript and Scala examples, people will throw stones at you for that statement - out of instinct. The intent is to prevent accidental misuse. Mutation is "considered harmful" by some because it can be accidentally misused.
> It seems that the notion of side effects must be understood relatively to a predefined system, just like in physics. One wouldn't count heat dissipation or power consumption as a side effect of such a program, although side-channel-attackers have a word to say about this.
Absolutely! When you really dig into it, the concept of an effect is quite ill-defined. It comes down to whatever some observer considers important. For example, from the point of view of substitution quick sort and bubble sort are equivalent but most people would argue that they are very much not.
The preface of https://www.proquest.com/openview/32fcc8064e57c82a696956000b... is quite interesting.
If you start with lambda calculus you don't have effects in the first place, so there's nothing to eliminate. Lambda calculus and friends are perfectly reasonable languages for computation in the sense of calculation.
A better way to think about general-purpose functional programming is that it's a way to add effects to a calculation-oriented foundation. The challenge is to keep the expressiveness, flexibility and useful properties of lambda calculus while extending it to writing interactive, optimizable real-world programs.
To retain referential transparency, we basically need to ensure that a function provided the same arguments always returns the same result.
A simple way around this is to never give the same value to a function twice - ie, using uniqueness types, which is the approach taken by Clean. A uniqueness type, by definition, can never be used more than once, so functions which take a uniqueness type as an argument are referentially transparent.
In Haskell, you never directly call a function with side effects - you only ever bind it to `main`.
Functions with (global) side effects return a value of type `IO a`, and the behavior of IO is fully encapsulated by the monadic operations.
instance Monad IO where
    return :: a -> IO a
    (>>=) :: IO a -> (a -> IO b) -> IO b -- aka "bind"
return lifts a pure value into IO, and bind sequences IO operations. Importantly, there cannot exist any function of type `IO a -> a` which escapes IO, as this would violate referential transparency. Since every effect must return IO, and the only thing we can do with the IO is bind it, the eventual result of running the program must be an IO value, hence `main` returns a value of type `IO ()`.
main :: IO ()
So bind encapsulates side effects, effectively using a strategy similar to Clean, where each `IO` is a synonym of some `State# RealWorld -> (# State# RealWorld, a #)`. Bind takes a value of IO as its first argument, consumes the input `State# RealWorld` value, and extracts a value of type `a` - feeds this value to the next function in the sequence of binds, returning a new value of type `IO b`, which has a new `State# RealWorld`. Since `bind` enforces a linear sequencing of operations, this has the effect that each `RealWorld` is basically a unique value never used more than once - even though uniqueness types themselves are absent from Haskell.
https://www.youtube.com/watch?v=RsTuy1jXQ6Y
Lisp always had side effects and mutability, and it's the canonical FP language, directly inspired by lambda calculus. To be fair, before Haskellers figured out monads, nobody even knew of any way to make a FP language that's both pure and useful.
Functional programming à la Haskell has always been about making effects controllable, explicit first-class citizens of the language. A language entirely without effects would only be useful for calculation.
The talk about "purity" and "removing side effects" has always been about shock value—sometimes as an intentional marketing technique, but most often because it's just so much easier to explain. "It's just like 'normal' programming but you can't mutate variables" is pithy and memorable; "it's a language where effects are explicitly added on top of the core and are managed separately" isn't.
enum Shape {
case Circle(Int32),
case Square(Int32),
case Rectangle(Int32, Int32)
}
def area(s: Shape): Int32 = match s {
case Circle(r) => 3 * (r * r)
case Square(w) => w * w
case Rectangle(h, w) => h * w
}
I wonder why not this syntax:
def area(s: Shape.Circle(r)) = { 3 * (r * r) }
def area(s: Shape.Square(w)) = { w * w }
def area(s: Shape.Rectangle(h, w)) = { h * w }
area(Shape.Rectangle(2, 4))
The Int32 or Int32, Int32 types are in the definition of Shape, so we can be DRY and spare ourselves the chance to mismatch the types.
We can also do without match/case, reuse the syntax of function definition, and enumerate the matches there. I think it's called structural pattern matching.
I, uh, think your math might need some checking :)
> The Int32 or Int32, Int32 types are in the definition of Shape, so we can be DRY and spare us the chances to mismatch the types
I have to admit I don't see the distinction here in terms of DRYness--they are basically equivalent--or why the latter would somehow lead to mismatching the types--presumably if Flix has a typechecker this would be a non-issue.
I use Elixir now at work and I have used Haskell and PureScript personally and professionally, which both support analogs of both the case syntax and function-level pattern matching, and in my experience the case syntax is often the better choice even given the option to pattern match at the function level. Not that I'd complain about having both options in Flix, which would still be cool, but I don't think it's as big of a benefit as it may seem, especially when type checking is involved.
enum Shape {
case Circle(Int32),
def area(s: Shape): In32 = match s {
Not only did I have to write something that the compiler already knows, but I also managed to type a compilation error. The second type definition is there only to make developers write it wrong. It does not add any information.
> The second type definition is there only to make developers write it wrong.
Int32 is the type of the return value for the function (https://doc.flix.dev/functions.html), which is distinct information not implied by the type being passed in (Shape), so I dispute this characterization--the fact that this type is the same type as the parameter given to all of Shape's terms is specific to this example. Furthermore I suspect it would immediately be caught by the typechecker regardless.
While in a language like Haskell you could define this function without a type definition and its type would be inferred (vs. in Flix, see https://doc.flix.dev/functions.html?highlight=inference#func...), regardless I will almost always declare the type of top-level functions (or even non-top-level functions) for clarity when reading the code. I think this is useful and important information in any case.
What if you make a typo and instead of `area` you type `areas` the second time? I also don't see how one is more DRY than the other. If anything, in the second example you typed `Shape` and `area` a bunch of times, so to me it's less DRY.
I think that is multi-methods.
// Computes the delivery date for each component.
let r = query p select (c, d) from ReadyDate(c; d)
Facepalm. Select should always come last, not first -- haven't we learned anything from the problems of SQL? LINQ got this right, so it should look like:
let r = query p from ReadyDate(c; d) select (c, d)
It is a fair point -- the implicit argument being that this allows `c` and `d` to be bound before they are used, and hence auto-complete can assist in the `select` clause. Nevertheless, the counterargument is that the form of a logic rule is:
Path(x, z) :- Path(x, y), Edge(y, z).
i.e. an implication from right to left. This structure matches:
query p select (x, z) from Path(x, y), Edge(y, z).
So the trilemma is:
A. Keep the logic rules and `query` construct consistent (i.e. read from right-to-left).
B. Reverse the logic rules and `query` construct -- thus breaking with decades of tradition established by Datalog and Prolog.
C. Keep the logic rules from right-to-left, but reverse the order of `query` making it from left-to-right.
We decided on (A), but maybe we should revisit at some point.
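For concreteness, a complete example in the current (A) form, with made-up facts (a sketch following the documented Datalog syntax; result printing is an assumption):

def main(): Unit \ IO =
    let p = #{
        Edge(1, 2). Edge(2, 3).
        Path(x, y) :- Edge(x, y).
        Path(x, z) :- Path(x, y), Edge(y, z).
    };
    let r = query p select (x, z) from Path(x, z);
    println(r)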
I appreciate the desire for consistency and being able to lean on old textbooks and documentation. A couple of considerations since this is a new language where history and precedent should (I think) be less important if it leads to clarity and improved productivity:
1. I think way more people coming to your language will be familiar with SQL and its problems than with logic programming and Horn clauses.
2. I think many people are now familiar with functional pipelines, where filters and transforms can be applied in stages, thanks to the rise of functional programming in things like LINQ and Java's Stream API. This sort of pipelining maps naturally to queries, as LINQ has shown, and even to logic programming, as µKanren has shown.
3. People don't type programs right-to-left but left-to-right. To the extent that right-to-left expressions interfere with assisting the human that's typing in various ways (like autocomplete), I would personally defer to helping the human as much as possible over historical precedent.
4. Keeping the logic fragment separate from the query fragment (option C) seems viable if you really, really want to maintain that historical precedent for some reason.
My two cents. Kudos on the neat language!
The JVM is a state-of-the-art virtual machine with multiple open source implementations, a large ecosystem, and a fast JIT compiler that runs on most platforms. It is hard to find another VM with the same feature set and robust tooling.
I think the problem is that it targets a VM instead of native machine architectures, not the quality of the VM. I also find the times I need to target a VM to be very limited as I'm generally writing code for a specific platform, not a cross platform application. Of course this will vary between developers.
Targeting JVM means not having to roll your own garbage collector.
And bonus, you get a huge world of third party libraries you can work with.
It's been over a decade since I worked on the JVM, and Java is not my favourite language, but I don't get some people's hate on this topic. It strikes me as immature and "vibe" based rather than founded in genuine analysis around engineering needs.
The JVM gets you a JIT and GC with almost 30 years of engineering and fine tuning behind it and millions of eyes on it for bugs or performance issues.
The JVM is a large and complex system with tons of configurable options. If you don't need it, why add all that cognitive overhead when you have perfectly good options that don't require it? And the benefits you gain are very limited if you aren't integrating with other JVM-based systems.
I strongly agree. Java and JVM bytecode may not be our "cup of tea", but it is simply unrealistic to implement any runtime environment with comparable performance, security, robustness, and tooling. The only alternative is WASM, but it is not yet there feature-wise.
The practical problems are slow startup time and high minimum memory usage. Since those are encountered early on in the developer experience, the reaction many have is predictable.
Which is amazing, you can fine tune the performance of the runtime to your heart's content. Or you can just leave them as-is, the default behaviour is quite reasonable too.
Very cool language. The standard library looks mostly sane, although it does have `def get(i: Int32, a: Array[a, r]): a \ r`, which means that it must have some kind of runtime exception system. Not my cup of tea, but an understandable tradeoff.