Theodore Norvell
Memorial University of Newfoundland
New: Updated to Haskell '98.
This short tutorial introduces monads to Haskell programmers. It applies to Haskell '98. Gofer programmers (are there any left?) can read it too; there is a section at the end detailing the differences between Haskell and Gofer around monads and another about the differences between Haskell 1.3, Haskell 1.4, and Haskell 98.
The reader is assumed to have a familiarity with the basics of Haskell programming such as data declarations, function definitions, and lambda expressions. Familiarity with classes and instance declarations will help.
Consider the following function
f1 w a = let (b, x) = g1 w a
             (c, y) = h1 w x b
             (d, z) = i1 w x y c
         in (d, z)
where a, b, c, and d all have the same type, State. In a sense this definition is very similar to an imperative program in which a represents the initial state, d represents the final state, and b and c represent intermediate states. The functions f1 w, g1 w, h1 w x, and i1 w x y can be thought of as imperative routines that transform the state and produce results. The results are like the return values in C programs and are bound to variables x, y, and z in the example. A C routine that corresponds to this would look something like this:
int f1(float w)
{
    const char x = g1(w) ;
    const double y = h1(w, x) ;
    const int z = i1(w, x, y) ;
    return z ;
}
All this explicit passing of states around can get a bit tedious, especially when programs are modified. It also makes the program harder to read than the equivalent C program.
Note that if we ignore the w, x, y, and z, then g1, h1, and i1 are being composed. Can we define a kind of "composition operator" that allows us to deal with the returned values as well as the state?
The functions that transform the state in the example above all have the type State -> (State, a) for some result type a. Let's generalize over the type of the state and create a type to represent these transforms
type StateTrans s a = s -> (s, a)
Let us define two functions. The first is a kind of composition function.
(>>=) :: StateTrans s a -> (a -> StateTrans s b) -> StateTrans s b
p >>= k  =  \s0 -> let (s1, a) = p s0
                       q = k a
                   in q s1
The second is a function that turns a value into an "imperative program" that does not change the state, but returns the result.
return :: a -> StateTrans s a
return a = \s -> (s, a)
Our original function may now be written with all the shunting of state around hidden by these operators:
f w = (g w) >>= (\x ->
        (h w x) >>= (\y ->
          (i w x y) >>= (\z ->
            return z)))
You could also format this as follows
f w = g w >>= \x ->
      h w x >>= \y ->
      i w x y >>= \z ->
      return z
You should verify that f is equal to f1, if g = g1, h = h1, and i = i1.
(Warning: if you really try to make the above definitions of >>= and return, your Haskell compiler will complain, for reasons that will be explained later. However, you could try changing the names a little, e.g. change >>= to >== and change return to rtn.)
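If you want to experiment, here is a minimal, self-contained sketch (mine, not part of the original tutorial) that uses the renamed operators >== and rtn, together with toy definitions of g, h, and i, so that it compiles under Haskell 98 without clashing with the Prelude:

module RenamedExample where

type StateTrans s a = s -> (s, a)

(>==) :: StateTrans s a -> (a -> StateTrans s b) -> StateTrans s b
p >== k  =  \s0 -> let (s1, a) = p s0
                   in  k a s1

rtn :: a -> StateTrans s a
rtn a = \s -> (s, a)

-- Toy state transformers (my own inventions), each of which
-- increments an Int state and returns a Float result.
g :: Float -> StateTrans Int Float
g w = \s -> (s + 1, w)

h :: Float -> Float -> StateTrans Int Float
h w x = \s -> (s + 1, w + x)

i :: Float -> Float -> Float -> StateTrans Int Float
i w x y = \s -> (s + 1, w + x + y)

f :: Float -> StateTrans Int Float
f w = g w >== \x ->
      h w x >== \y ->
      i w x y >== \z ->
      rtn z

-- For example, f 1.0 0 evaluates to (3, 4.0).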
Now Haskell provides a nice syntax to sugar-coat the calls to >>=. It turns out that the definition of f can also be written as
f w = do x <- g w
         y <- h w x
         z <- i w x y
         return z
The "do" is a keyword of Haskell. Any program involving a "do" can be translated to one that doesn't use "do" by the following 4 rules:
1. An expression

   do pattern <- expression
      morelines

   becomes

   expression >>= (\ pattern -> do morelines)

2. An expression

   do expression
      morelines

   becomes

   expression >> do morelines

3. An expression

   do let declarationList
      morelines

   becomes

   let declarationList
   in do morelines

4. Finally when there is only one line in the do, an expression

   do expression

   becomes

   expression
In the second rule, we saw the >> operator; this is used when the result of the first operand is not subsequently used. It can always be defined as
p >> q = p >>= (\ _ -> q)
So
do p
   q
is the same as
do _ <- p
   q
The "do" notation follows the same conventions as other multi-line constructs in Haskell. All lines must be vertically alligned. Alternatively, you may use braces and semicolons if you prefer "free format":
do {x <- g w ; y <- h w x ; z <- i w x y ; return z }
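To see the rules in action, here is a step-by-step desugaring (my own working) of this definition of f, each step applying one rule:

  do { x <- g w ; y <- h w x ; z <- i w x y ; return z }
    =   { rule 1 }
  g w >>= (\x -> do { y <- h w x ; z <- i w x y ; return z })
    =   { rule 1 }
  g w >>= (\x -> h w x >>= (\y -> do { z <- i w x y ; return z }))
    =   { rule 1 }
  g w >>= (\x -> h w x >>= (\y -> i w x y >>= (\z -> do { return z })))
    =   { rule 4 }
  g w >>= (\x -> h w x >>= (\y -> i w x y >>= (\z -> return z)))

which is exactly the un-sugared definition of f given earlier.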
So far we've seen the monad concept applied to implicitly passing state. However, monads can be used to implement several other programming features, including consuming input, producing output, exceptions and exception handling, and nondeterminism. To handle this variety, the basic operations >>= and return are overloaded functions.
(Haskell requires that overloaded functions be declared in class declarations and defined in instance declarations. This is the reason that >>= and return can not be defined exactly as described in previous sections.)
The class that >>= and return are declared in is called Monad and is itself declared in the Haskell Prelude. That declaration looks like this
class Monad m where
    (>>=)  :: m a -> (a -> m b) -> m b
    (>>)   :: m a -> m b -> m b
    return :: a -> m a
    fail   :: String -> m a

    m >> k  = m >>= \_ -> k
    fail s  = error s
To make a state transform monad that actually works with Haskell's "do" notation, we have to declare >>= and return within an instance declaration. There is one minor problem: type synonyms cannot be used in instance declarations, so we define StateTrans s a as a new type.
newtype StateTrans s a = ST( s -> (s, a) )
and we define the functions return and >>= for this type as follows:
instance Monad (StateTrans s) where
    -- (>>=) :: StateTrans s a -> (a -> StateTrans s b) -> StateTrans s b
    (ST p) >>= k  =  ST( \s0 -> let (s1, a) = p s0
                                    (ST q) = k a
                                in q s1 )

    -- return :: a -> StateTrans s a
    return a = ST( \s -> (s, a) )
We don't need to define the >> operator; it is defined automatically in terms of >>=.
Note these new definitions change the type of functions like our example function f. Indeed f is no longer equal to f1, since its type is different. To apply members of the monad, we define a new function
applyST :: StateTrans s a -> s -> (s, a)
applyST (ST p) s = p s
Now applyST f is equal to f1.
Technically each Monad (in Haskell) is not a type, but a "type constructor". In our state transformer example, we have actually declared an infinite set of monads. For each type s, there is a type constructor StateTrans s. Each such type constructor has been declared to be a Monad by the instance declaration above.
A short example shows how the StateTrans monad lets you code in a fairly imperative style.
We will implement a variation on Euclid's algorithm for finding the greatest common divisor of two positive integers.
while x != y do
    if x < y then y := y-x
             else x := x-y
return x
First we must define a type to represent the state:
type ImpState = (Int, Int)
Next we define some simple state transformers to access and change the state. We use the type () and its sole value, (), when a state transformer does not return a useful value.
getX, getY :: StateTrans ImpState Int
getX = ST(\(x,y)-> ((x,y), x))
getY = ST(\(x,y)-> ((x,y), y))

putX, putY :: Int -> StateTrans ImpState ()
putX x' = ST(\(x,y)->((x',y),()))
putY y' = ST(\(x,y)->((x,y'),()))
Now we can write the algorithm with the state squarely in the background:
gcdST :: StateTrans ImpState Int
gcdST = do x <- getX
           y <- getY
           (if x == y
               then return x
               else if x < y
                   then do putY (y-x)
                           gcdST
                   else do putX (x-y)
                           gcdST)
And finally a function to construct an initial state, run the program and discard the final state
greatestCommonDivisor x y = snd( applyST gcdST (x,y) )
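For example (a quick check of mine, not part of the original text):

  greatestCommonDivisor 12 18

evaluates to 6: the state goes from (12,18) to (12,6) to (6,6), at which point x == y and the result 6 is returned.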
This small example only hints at the utility of monads. It would be much shorter to write the algorithm in a conventional functional style. The savings from not having to explicitly pass the state around become larger as the program becomes larger.
The term "monad" comes from category theory, which is a branch of algebra (or, depending on whom you talk to, algebra is a branch of category theory). There is no need to understand the algebra at all to understand the use of monads in functional programming. This section and the next, just touch on the algebra a little bit.
Monads share some similarity with monoids*, with >>= being similar to the monoid operation and the return operator taking the place of the identity member. Specifically, we have the following identities:
return a >>= k = k a
p >>= return = p
(p >>= j) >>= k = p >>= (\x -> j x >>= k)    [provided x is not free in j or k]
These identities hold in the StateTrans monad and ought to hold for any monad we define.
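As a quick check of the first identity (my own working, using the un-newtyped definitions of >>= and return given earlier, with the bound variable of >>= renamed to x to avoid a clash with a):

  return a >>= k
    =   { definition of return }
  (\s -> (s, a)) >>= k
    =   { definition of >>= }
  \s0 -> let (s1, x) = (\s -> (s, a)) s0
             q = k x
         in q s1
    =   { simplifying the let: s1 = s0 and x = a }
  \s0 -> (k a) s0
    =   { eta reduction }
  k a

The other two identities can be checked in much the same way.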
Monoids sometimes also have a zero member and an addition operator.
In Haskell, the zero member of a monad is called mzero and the relevant identity is
mzero >>= k = mzero
In a state transformation monad, mzero might represent an exception, i.e., an indication that the task cannot be completed.
When a Monad has a zero member, it often also has an addition operation, mplus, obeying the following additional identities
p `mplus` mzero = p = mzero `mplus` p
p `mplus` (q `mplus` r) = (p `mplus` q) `mplus` r
It turns out that monads with zeros and addition are common enough that there is a library class, called MonadPlus, defined to encompass them.
If mzero represents failure to complete a computation, mplus might represent a way of combining alternative computations such that, if one fails, the other can succeed instead.
We generalize the StateTrans monad to include a zero element and an addition operation. To do this we must change the type to include the possibility of failure:
newtype StateTransEx s a = STE( s -> Maybe (s, a) )
(The Maybe type constructor is defined in the Haskell Prelude as
data Maybe t = Just t | Nothing
"Maybe" is itself a Monad, but I won't be using that fact here.)
We use Nothing to represent failure, i.e., an exception, and Just (s,a) to represent success with result a and new state s.
We must define >>= and return so that it is a monad. We define >>= so that it propagates exceptions. That is, if p throws an exception, then so does p >>= f.
instance Monad (StateTransEx s) where
    -- (>>=) :: StateTransEx s a -> (a -> StateTransEx s b) -> StateTransEx s b
    (STE p) >>= k  =  STE( \s0 -> case p s0 of
                                      Just (s1, a) -> let (STE q) = k a
                                                      in q s1
                                      Nothing      -> Nothing )

    -- return :: a -> StateTransEx s a
    return a = STE( \s -> Just (s, a) )
We define mzero and mplus to make StateTransEx s a member of class MonadPlus. mzero will mean: throw an exception. mplus will give a means to recover from an exception: mplus p q will mean execute p, but if an exception is thrown in p, then recover by executing q instead.
Since MonadPlus is declared in the Monad library rather than in the Prelude, we must import it (the import declaration must go at the top of your Haskell module).
import Monad( MonadPlus( .. ), guard )

instance MonadPlus (StateTransEx s) where
    -- mzero :: StateTransEx s a
    mzero = STE( \s -> Nothing )

    -- mplus :: StateTransEx s a -> StateTransEx s a -> StateTransEx s a
    (STE p) `mplus` (STE q)
        = STE( \s0 -> case p s0 of
                          Just (s1, a) -> Just (s1, a)
                          Nothing      -> q s0 )

applySTE (STE p) s = p s
Now, if we execute p `mplus` q, and p fails, then the computation will be resumed with q. Note that q starts with the state that p started with, not the state that p had reached when the exception occurred; p and q can be regarded as alternative computations. Although I've called this exception handling, "backtracking" is arguably a more accurate term.
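Here is a tiny illustration of my own (the names are not from the original) showing that the state change made by a failed alternative is discarded:

backtrackDemo :: StateTransEx Int Int
backtrackDemo = (incr >> mzero) `mplus` current
  where
    incr    = STE( \s -> Just (s + 1, ()) )   -- change the state, then ...
    current = STE( \s -> Just (s, s) )        -- ... read the state

-- applySTE backtrackDemo 0 evaluates to Just (0, 0), not Just (1, 1):
-- the second alternative runs from the state the first one started with.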
We can use mplus and mzero to implement backtracking algorithms. Consider the classic problem of finding a way to place 8 queens on an 8 by 8 chess board such that no queen attacks another.
First the state
type QState = ([Int], [Int], [Int])
We use three lists to keep track of three sets: the set of occupied columns, the set of occupied south-west diagonals, and the set of occupied south-east diagonals. The south-west diagonals are represented by the difference of the column and row numbers; the south-east diagonals are represented by the sum of the row and column numbers. There are functions to get and set each of these components of the state:
getCols    = STE( \(cols, swDiags, seDiags) -> Just ((cols, swDiags, seDiags), cols) )
getSWDiags = STE( \(cols, swDiags, seDiags) -> Just ((cols, swDiags, seDiags), swDiags) )
getSEDiags = STE( \(cols, swDiags, seDiags) -> Just ((cols, swDiags, seDiags), seDiags) )

putCols c     = STE( \(cols, swDiags, seDiags) -> Just ((c:cols, swDiags, seDiags), ()) )
putSWDiags sw = STE( \(cols, swDiags, seDiags) -> Just ((cols, sw:swDiags, seDiags), ()) )
putSEDiags se = STE( \(cols, swDiags, seDiags) -> Just ((cols, swDiags, se:seDiags), ()) )
We don't want to add a column or a diagonal to a set it is already in. To ensure that we don't, we use the library routine guard defined by
guard True  = return ()
guard False = mzero
The following routines fail if the item being added to a set is already in the set:
tryPutCol c = do cols <- getCols
                 guard (c `notElem` cols)
                 putCols c

tryPutSWDiag sw = do swDiags <- getSWDiags
                     guard (sw `notElem` swDiags)
                     putSWDiags sw

tryPutSEDiag se = do seDiags <- getSEDiags
                     guard (se `notElem` seDiags)
                     putSEDiags se
The next routine attempts to place a queen at a particular spot, failing if the new queen would attack one that is already on the board. (I'm assuming there is no other queen in the same row, so no check is required to see if the row is occupied.)
place r c = do tryPutCol c
               tryPutSWDiag (c-r)
               tryPutSEDiag (c+r)
The main algorithm to place queens in each of rows [0..r-1] of a board with colNum columns is:
queens r colNum =
    if r == 0
    then getCols       -- Success, return list of columns
    else tryEach [0..colNum-1]
                 (\c -> do place (r-1) c
                           queens (r-1) colNum)
The tryEach "loop" is a control structure defined by
tryEach :: MonadPlus m => [a] -> (a -> m b) -> m b
tryEach [] f    = mzero
tryEach (h:t) f = f h `mplus` tryEach t f
(We could also define tryEach in terms of the library function, msum, which takes a list of monad members and combines them with mplus: tryEach xs f = msum (map f xs) ).
To find an arrangement on an 8 by 8 chess board we write:
applySTE (queens 8 8) ([], [], [])
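A small wrapper of my own (not part of the original) extracts just the columns of a solution, one per row, if one exists:

eightQueens :: Maybe [Int]
eightQueens = case applySTE (queens 8 8) ([], [], []) of
                  Just (_, cols) -> Just cols
                  Nothing        -> Nothing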
The StateTransEx monad provides a limited form of nondeterminism. Computations either succeed or fail, but if they succeed, they succeed but once. For example in
do (p `mplus` q)
   r
if p succeeds and then r fails, q will not be given a chance to "execute".
To get nondeterminism of the sort found in languages such as Icon, SNOBOL, and Prolog, we need to allow a computation to succeed more than once. To create such a monad, we replace the Maybe type constructor with the list type constructor:
The type is
newtype StateTransMany s a = STM( s -> [(s, a)] )
I leave it as an exercise to define >>=, return, mplus, and mzero for this monad, and to change the queens example to generate all solutions to the 8-queens problem.
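If you want to check your answer against something, here is one possible solution sketch (mine, not the original author's) for the monad operations:

instance Monad (StateTransMany s) where
    (STM p) >>= k  = STM( \s0 -> concat [ applySTM (k a) s1 | (s1, a) <- p s0 ] )
    return a       = STM( \s -> [(s, a)] )

instance MonadPlus (StateTransMany s) where
    mzero                   = STM( \s -> [] )
    (STM p) `mplus` (STM q) = STM( \s -> p s ++ q s )

applySTM :: StateTransMany s a -> s -> [(s, a)]
applySTM (STM p) s = p s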
Haskell defines a monad called IO that is used to describe computations that interact with the operating system -- in particular, to perform input and output. For example, here is how you can write a function to read a file, printing an error message if the file cannot be read:
maybeReadFile :: String -> IO (Maybe String)
----
-- Read a file or print an error message and return Nothing.
maybeReadFile fileName =
    catch (do s <- readFile fileName
              return (Just s))
          (readErrHandler fileName)

readErrHandler :: String -> IOError -> IO (Maybe String)
readErrHandler fileName err =
    do putStr ("Error reading file " ++ fileName ++ " " ++ show err ++ ".\n")
       return Nothing
You can see that the IO monad also supports exception handling, though with the "catch" function, not the mplus operator. (mplus would be inappropriate because changes to the world cannot be undone!)
In Haskell the main program should be of type IO ().
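Here is a minimal main program (a sketch of mine; the file name is just an example) that uses maybeReadFile:

main :: IO ()
main = do ms <- maybeReadFile "input.txt"
          case ms of
              Just s  -> putStr s
              Nothing -> putStr "Carrying on without the file.\n"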
It is often said that pure functional languages can't be used to write interactive programs. At first glance the IO monad seems to contradict this idea. You can think of it this way: When your functional program is executed, it does not interact with the operating system; it merely computes an object of type IO (), which describes a set of possible interactive computations. An interpreter interacts with the environment to make one of these computations happen. The fact that Haskell is a lazy language is key to this, for the set of computations for many applications is infinite, even if each computation is finite. The choice of which computation is needed is governed by the input; thanks to laziness, only the computation that is actually required is computed.
The list type constructor itself is a simple monad. Its definition is
instance Monad [ ] where
    xs >>= f  = concat (map f xs)
    return x  = [x]

instance MonadPlus [ ] where
    mzero = []
    mplus = (++)
(The notation [ ] in the first line of each instance declaration refers to the list type constructor, i.e. the function that maps each type to its corresponding list type.)
The definition of the list constructor as a monad allows one to write such things as
do a <- [1,2,3]
   b <- [3,4,5]
   return (a+b)
This evaluates to
[4,5,6,5,6,7,6,7,8],
just as does the list comprehension
[ a+b | a <- [1,2,3], b <- [3,4,5] ]
The key to engineering a large software project is to make changes easy.
Monads can be used to make functional programs far more adaptable. You need to insert another processing step? Go right ahead, don't worry about plumbing the state. You need another variable in the state? No problem, just change the underlying state and the basic members of the monad. You need to output messages from your computation? No problem, just change the monad to add an output string. You need to handle exceptions or nondeterminism? No problem, just change the monad.
It might be said that Haskell with monads does not give you much that you won't find in an imperative, nondeterministic language with extensible, strong, but generic typing and a powerful applicative expression sublanguage. The problem is that there is no such language in common use. Monads make up for many of the drawbacks of Haskell relative to imperative languages, but without giving up any of its strengths.
Monads can be used for parsing. By using the remaining input as state, the StateTrans monad can be used to write deterministic recursive-descent parsers. Better yet, the StateTransEx and StateTransMany monads can (respectively) be used to create backtracking parsers and nondeterministic parsers (allowing ambiguous grammars). Monadic parsing is the topic of the following paper:
Graham Hutton and Erik Meijer, Monadic parser combinators, Technical Report NOTTCS-TR-96-4, Department of Computer Science, University of Nottingham.
Available on the Web at http://www.cs.nott.ac.uk/Department/Techreports/96-4.html
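As a tiny illustration of the idea (my own sketch; the names are not Hutton and Meijer's), a parser in this style is just a state transformer whose state is the remaining input. Building on the StateTransEx monad defined earlier:

type Parser a = StateTransEx String a

-- Consume one character, failing on empty input.
item :: Parser Char
item = STE( \s -> case s of
                      (c:cs) -> Just (cs, c)
                      []     -> Nothing )

-- Consume one character satisfying a predicate.
sat :: (Char -> Bool) -> Parser Char
sat p = do c <- item
           guard (p c)
           return c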
I like to add to the monad an output for error and warning messages, and the possibility of fatal errors that cannot be recovered from. I also use a clearer separation between parsing and lexical analysis than Hutton and Meijer. My own monadic parsing combinators are available on the web at http://www.engr.mun.ca/~theo/Misc/index.html#ParsingMonad
The IO monad and the theory behind it is reported in
Philip Wadler, How to declare an imperative, ACM Computing Surveys, 29(3):240--263, September 1997.
Available on the Web at http://www.cs.bell-labs.com/who/wadler/topics/monads.html.
An extension of the IO monad for use in systems with graphical user interfaces is the GUI monad, which lets Haskell or Gofer interact with the Tk library. The GUI monad, many widgets, and operators to compose widgets can be found in the TkGofer and TkHaskell systems. See http://www.informatik.uni-ulm.de/pm/ftp/tkgofer.html and http://www.dcs.gla.ac.uk/~nww/TkHaskell/TkHaskell.html.
One of the best introductions to monads is in
Philip Wadler, Monads for functional programming, in M. Broy, editor, Marktoberdorf Summer School on Program Design Calculi, Springer Verlag, NATO ASI Series F: Computer and systems sciences, Volume 118, August 1992. Also in J. Jeuring and E. Meijer, editors, Advanced Functional Programming, Springer Verlag, LNCS 925, 1995.
Available on the Web at http://www.cs.bell-labs.com/who/wadler/topics/monads.html.
The first paper that I know of to deal with monads as a tool in functional programming is
Philip Wadler, The essence of functional programming. Invited talk, 19th Symposium on Principles of Programming Languages, ACM Press, Albuquerque, January 1992.
Available on the Web at http://www.cs.bell-labs.com/who/wadler/topics/monads.html.
There is a whole lot that could be said about the mathematical theory of monads. It is my belief that most programmers don't need to know much of this theory to reap the benefits of programming with monads. Philip Wadler's papers give an introduction to the mathematics and pointers back to the earlier, more mathematical, literature.
Almost everything in this tutorial applies equally to Gofer. However a few notations are different. I think the following table completely summarizes the notational differences assuming you are using Gofer 2.30 and the cc.prelude prelude.
Haskell      Gofer equivalent
>>=          `bind`
>>           (no equivalent?)
return       result
newtype      data
MonadPlus    Monad0 and MonadPlus
Gofer supports both the "do" syntax (if Gofer is compiled with the right options) and a "comprehension" syntax for monads explained next. In Gofer the so-called "list" comprehension notation is an alternative to the "do" notation, no matter whether you are using the list monad or any other monad. You can write
[a+b | a<-getX, b<-getY]
which means the same as
do a <- getX
   b <- getY
   return (a+b)
The advantage of the "do" notation is that it doesn't force you to end with a "return".
In Gofer, each instance of class Monad must be an instance of class Functor. To be in class Functor a monad m must support a function called map of type (a -> b) -> (m a -> m b). The map function lifts an ordinary function to a monad function. For monads, it can always be defined by
map f p = [f a | a <- p]
In Gofer, use of the IO monad requires a special compilation option and inclusion of the file iomonad.gs.
In Haskell 1.3 and 1.4 mzero is called zero and mplus is called ++. There is a separate class MonadZero, declaring the zero. All the classes are declared in the Prelude, so you do not need to import from Monad.
Haskell 1.4 supports the comprehension syntax for Monads (see the Gofer section).
A monoid is an algebraic structure consisting of a set S and an operation * with the following properties: S is closed under * (for all a and b in S, a*b is in S); * is associative (for all a, b, and c in S, (a*b)*c = a*(b*c)); and S contains an identity element e for * (for all a in S, e*a = a = a*e).
Examples of monoids are the integers with multiplication as the operator, character strings with catenation as the operator, functions with composition as the operator, and programming language statements with sequential composition as the operator. Also, any group is a monoid, and any monoid that has inverses is a group.