## Real Confusing Haskell

I can pinpoint the exact page in Real World Haskell where I became lost. I was reading along surprisingly well until page 156, upon the introduction of `newtype`.

At that point my smug grin became a panicked grimace. The next dozen pages were an insane downward spiral into the dark labyrinth of Haskell's type system. I had just barely kept `data` and `class` and friends straight in my mind. `type` I managed to ignore completely. `newtype` was the straw that broke the camel's back.

As a general rule, Haskell syntax is incredibly impenetrable. `=>` vs. `->` vs. `<-`? I have yet to reach the chapter dealing with `>>=`. The index tells me I can look forward to such wonders as `>>?` and `==>` and `<|>`. Who in their right mind thought up the operator named `.&.`? The language looks like Japanese emoticons run amuck. If and when I reach the `\(^.^)/` operator I'm calling it a day.

Maybe Lisp has spoiled me, but the prospect of memorizing a list of punctuation is wearisome. And the way you can switch between prefix and infix notation using parens and backticks makes my eyes cross. Add in syntactic whitespace and I don't know what to tell you.

I could still grow to like Haskell, but learning a new language for me always goes through a few distinct stages:

*Curiosity -> Excitement -> Reality Sets In -> Frustration -> Rage* ...

At *Rage* I reach a fork in the road: I either proceed through *Acceptance* into *Fumbling* and finally to *Productivity*, or I go straight from *Rage* to *Undying Hatred*. Haskell could still go either way.

## 22 Comments

I...go through the same stages, I just never really thought about it long enough to label them.

Technically I'm not amused by your pain. I'm amused by how you describe it. I wrote a couple of questions on ubuntuforums in the same vein.

Well, Haskell has its ups and downs, as most programming languages do.

I recently started looking at Erlang, which is similar to Haskell, but reduces some of the pains...

Really nice description of the stages ;)

If you show up on the #haskell irc channel and talk to the people there, much of your frustration can be prevented. Lots of good stuff going on there!

(^.^)/ operator not possible, but:

"The language looks like Japanese emoticons run amuck. If and when I reach the (^.^)/ operator I'm calling it a day."

I'm a Haskell newbie, haven't even started reading the book. Ten minutes after reading the paragraph above, I'm still laughing. Thanks for this moment :)

Hi!

I haven't read through that part of RWH (I already knew Haskell quite well when it was written), but perhaps I can give you a hand with the confusion about newtype.

A newtype is somewhere in between data (which defines new data structures) and type (which defines alias names for existing types) -- it's treated as a distinct type while compiling, but has the same representation at runtime as an existing type. It has the same syntax as a data declaration, but only allows one constructor with exactly one field.

You can ignore newtypes and just think of them as data for now if you want, but you should probably be aware that there is a subtle difference in meaning regarding evaluation and pattern matching, and they're slightly more memory efficient of course.
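To make the three declaration forms concrete, here's a minimal sketch (the names `Name`, `Age`, and `Person` are made up for illustration):

```haskell
-- A type synonym: Name is interchangeable with String everywhere.
type Name = String

-- A newtype: exactly one constructor with one field; distinct from Int
-- at compile time, but identical to Int at runtime.
newtype Age = Age Int

-- A data declaration: a genuinely new structure (here, two fields).
data Person = Person Name Age

greet :: Person -> String
greet (Person n (Age a)) = n ++ " is " ++ show a
```

Passing a bare `Int` where an `Age` is expected is a compile error, even though both have the same runtime representation.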

Another advantage that newtypes have over data, at least in GHC, is that with the GeneralizedNewtypeDeriving extension turned on, essentially any typeclass which is implemented for the original type can be derived for the newtype. So you can write, say:

newtype Dollars = D Integer deriving (Eq, Ord, Num, Real, Enum, Integral)

and get a new type which is treated as distinct from Integer while compiling, but which has the same representation as Integer at runtime, as well as allowing all the standard numeric operations.

This sort of trick is handy for helping to ensure that data get used in the manner which is intended, for instance, that dollars don't get mixed up with, say, a count of customers, or some other unit of currency.
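As a hedged illustration of that mix-up protection (the `Customers` type and `addRevenue` function are invented here, following the Dollars idea above):

```haskell
{-# LANGUAGE GeneralizedNewtypeDeriving #-}

newtype Dollars   = Dollars Integer deriving (Eq, Ord, Num, Show)
newtype Customers = Customers Integer deriving (Eq, Ord, Num, Show)

addRevenue :: Dollars -> Dollars -> Dollars
addRevenue x y = x + y

-- addRevenue (Dollars 5) (Customers 3)  -- rejected at compile time
```

Both types are plain Integers at runtime, but the compiler refuses to let a customer count sneak into a revenue calculation.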

Coming from lisp, the type system might seem foreign, but I think that if you use it often, you'll find that it grows on you. Used correctly, it can catch and pinpoint a lot of potentially frustrating bugs before you even start testing your program, which cuts down a lot on time spent debugging.

Hope this helps. If you're still confused, please come and visit us in #haskell on the FreeNode IRC network. There are a lot of people there who like to help beginners. I'm usually lurking around as well. :)

As for the matter of infix operators and other symbolic parts of the syntax, I suppose it's just something that you get used to. Any string of symbol characters (which isn't otherwise reserved) can be an infix operator, and you should usually think of them just like you'd think of any other function -- they're typically defined in the libraries, and you have to look them up and/or read about them to tell what they mean.

However, the three arrows that you mentioned, =>, -> and <- are all special parts of the Haskell syntax. The first, =>, is used to set apart class contexts which restrict type variables from the remainder of a type. The second, ->, is used at the type level to denote function types, and is also used in case expressions as a bit of punctuation to set apart the pattern from the result. The third, <-, is used in generators for list comprehensions, and in do-notation, in each case just as punctuation, but the common idea between the two is that you're binding a variable according to some piece of data (a list in the comprehension case, or a monadic action in the do-notation case).

I don't know if that clears things up, but apart from the case of -> in type signatures, they're just punctuation to help separate things and keep them straight when reading code.
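Putting the three arrows side by side in one small sketch (the function names are made up):

```haskell
-- '=>' sets apart the class constraint; '->' builds the function type.
double :: Num a => a -> a
double x = x * 2

-- '<-' as a generator in a list comprehension...
doubled :: [Int]
doubled = [x * 2 | x <- [1, 2, 3]]

-- ...and '<-' binding the result of a monadic action in do-notation.
echo :: IO ()
echo = do
  line <- getLine
  putStrLn line
```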

The rest of the operators you mentioned are just library functions. You can look them up if you want to know what they mean. :)

Good luck with Haskell, and be sure to come and visit us on IRC!

Thanks for the detailed clarification. I'll definitely stop by #haskell if I can't figure things out. One reason I wanted to learn Haskell is that I've heard good things about the community.

haskell :: Curiosity -> Excitement -> RealitySetsIn -> Frustration -> Rage -> Either (Acceptance -> Fumbling -> Productivity) (Undying Hatred)

Cryptic -> Overwhelming -> Confusing -> Frustrating -> What the hell is a monad really? -> What are arrows? -> My head hurts

As von Neumann said: in Haskell you don't understand things. You just get used to them.

(^.^)/ is actually 3 operators in a row, and an unlikely combination to come up, because / is division and doesn't apply to functions (unless you decide to write a fucked up Num instance).

I have had the pleasure to use ((.) . (.)) and <^(++)^> though.

Bob

RWH is a good set of lecture notes, but it's a very poor way of teaching yourself Haskell. After the first 5 chapters or so it steps off a cliff, as you say; the code doesn't match the explanation, and there's no way to find either the correct explanation or the correct code. A pity, because it's otherwise very clearly written, by someone who has obviously taught the language for a while. It just needed someone learning Haskell to proof-read it.

Will

@Tom Davie: I think `<^(++)^>` should henceforth be known as the Batman operator.

I started trying to learn Haskell with RWH too, and similarly stopped eventually, wondering when it would start to make sense.

Then I checked out http://learnyouahaskell.com/, and a lot of things started to click. It's an amazingly informative introduction!

I second the http://learnyouahaskell.com/ recommendation. Every language should have a website just like it.

I also recommend Graham Hutton's "Programming in Haskell". It covers the basics and I feel the chapters all follow each other pretty well.

Haskell lost me with the IO Monad. Writing pure functions is really fun, anything with IO gave me a bad taste in my mouth. I haven't spent enough time in Haskell, that's really my problem.

One of my favorite things about Lisp is its lack of syntax. It just seems to make more sense, and it means there's less I have to remember.

@rzezeski Yeah that's something I love about Lisp too now that I'm used to it. I'll have to look at that site again, seems a lot of people like it.

Here's the problem with learning haskell. Every other imperative language that you learn after your first programming language is just a new syntax and new libraries. The rest is the same. Even functional programming is not a big deal for imperative programmers. It is very easy to grasp map-reduce things.

But haskell is very different. Not because of functional programming, not because of purity or laziness.

Haskell is programming with types. So learning its syntax, its libraries, functional programming techniques will not bring you closer to understanding haskell. The true path to understand haskell lies through Monoid-Functor-Applicative-Arrow-Monad. You have to learn, understand and start seeing these types in things around you. That's when you will grok haskell.

I absolutely agree with Vagif Verdi.

As an imperative programmer I had absolutely no problem switching to F# and its "more functional" approach. Chaining maps, filters and folds together is not that different from imperative programming.

One reason I don't make as many mistakes in functional languages, and especially haskell, is the wealth of loop-constructs (in a loose sense).

In an imperative language there are maybe three loops: while, for, foreach. Whereas in functional languages, you can/have to choose from map, foldl, foldr, scan, zip, filter etc.

Choosing the right loop-construct for the right task is what reduces the number of errors. Those constructs are so restricted that you either achieve the right thing, or can't achieve it at all.
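The commenter's point can be sketched quickly: each "loop" declares up front what shape of result it produces.

```haskell
-- map: exactly one output element per input element.
squares :: [Int]
squares = map (^ 2) [1 .. 5]

-- filter: a subset of the input, elements unchanged.
positives :: [Int]
positives = filter (> 0) [-2, -1, 0, 1, 2]

-- foldr: the whole list collapsed into a single value.
total :: Int
total = foldr (+) 0 [1 .. 10]
```

A `map` can never drop an element and a `filter` can never change one, which is exactly the kind of restriction that rules out whole classes of off-by-one loop bugs.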

One can easily expand this from higher order functions like map and fold to monads and beyond.

@Vagif Verdi: I think there is a bit of a jump from imperative C-like languages to functional programming. But yeah there is an equally difficult jump from what I know, to the Haskell type system.

Purity, laziness, maps and reduces and folds, these I understand. The type system, not so much. I will concentrate my efforts there.

@rzezeski: Understanding monads gave me a huge headache, too. The hardest part is undoing some of the imperative ideas you get in your head. It's actually really simple.

All monads really are is a way to do imperative style and deal with side effects while still staying true to the rules of pure functional programming.

One of the core unbreakable concepts of Haskell is that every time foo is given the same arguments, it will produce the same output, correct? If you gave yourself enough time, you would probably think up this system.

Let's use a theoretical function that's like Console.ReadLine() in C#. What you WANT is a function that takes a () and returns a [Char], aka a string. But since this function needs to return different values, we need an input that will make this function give different output based on the input.

We will call this input world. World will represent the universe surrounding our program. As world is never the same when the function is called at two different times, we have achieved exactly what we are looking for!

All the monad system does is hide the world from us while we go on our merry way. That's it! =)
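That world-passing idea can be sketched directly. This is a pedagogical model only, not how GHC actually implements IO; the `World` type and `fakeReadLine` are invented for illustration:

```haskell
-- A stand-in for the state of the universe: here, just the input waiting
-- to be read.
data World = World { pendingInput :: [String] }

-- A pure function: given the same World, it always returns the same
-- (String, World) pair. Reading a line "consumes" part of the world.
fakeReadLine :: World -> (String, World)
fakeReadLine (World (l : ls)) = (l, World ls)
fakeReadLine (World [])       = ("", World [])
```

Threading the updated `World` by hand through every call is exactly the plumbing the monad machinery hides from you.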

These videos definitely will help (in order). http://channel9.msdn.com/shows/Going+Deep/Erik-Meijer-and-Matthew-Podwysocki-Perspectives-on-Functional-Programming/ http://channel9.msdn.com/shows/Going+Deep/Brian-Beckman-The-Zen-of-Expressing-State-The-State-Monad/ http://channel9.msdn.com/shows/Going+Deep/Brian-Beckman-The-Zen-of-Stateless-State-The-State-Monad-Part-2/

Also, a warning about this video: many people hate it, some enjoyed it. Regardless, if you have time and wish to, watch it last. Although the Zen of Stateless State, if understood correctly, explains everything you need to figure the rest out for yourself =) http://channel9.msdn.com/shows/Going+Deep/Brian-Beckman-Dont-fear-the-Monads/

Good luck!

codebliss: Monads are not just about dealing with side effects.

Specifically, the IO, ST and a few other monads are indeed what you describe.

But Monads in general are just containers of computation results that can be combined together in certain ways.

I think you might be missing part of the point of Haskell. A language like Haskell has a rich variety of ways of expressing mathematical constructs. Of course, your goal as a programmer has always been to pick a construct that easily and straightforwardly represents the data structure and algorithms you are trying to model. So Haskell has you covered there. (Admittedly, you need to have some grasp of the math involved to really become fluent).

A lot of the operators you mentioned are designed for one specific purpose: funneling a "way" to express/interpret a data structure in a way that another data structure can use.

You specifically mentioned the (>>=) operator. I won't go into a monad tutorial, but the basic idea behind that operator is that when you have a "monadic action", you can "perform the action", and bind the result of that action to the (monic -- one argument!) function that follows. It is a glorified version of fmap. The important difference between (>>=) (named 'bind') and fmap is that fmap has type fmap :: (Functor f) => (a -> b) -> f a -> f b. Now, if we keep in mind that a monad is necessarily a functor, we can "generalize" bind's type (at least for pedagogical purposes). It has the type (>>=) :: (Functor m) => m a -> (a -> m b) -> m b. (The real type is (>>=) :: (Monad m) => m a -> (a -> m b) -> m b.) The ordering is different specifically because bind is an infix operator.
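A concrete comparison in the Maybe monad may help here (a hedged sketch; the function names are made up):

```haskell
import Text.Read (readMaybe)

-- fmap lifts a plain function over the container: the function itself
-- cannot fail, so the structure is preserved.
halved :: Maybe Int
halved = fmap (`div` 2) (Just 10)

-- (>>=) feeds the value to a function that itself returns a container,
-- so the second step can also fail.
parsedHalf :: String -> Maybe Int
parsedHalf s = readMaybe s >>= \n ->
               if even n then Just (n `div` 2) else Nothing
```

With `fmap` the shape of the result is fixed by the input; with `(>>=)` each step gets to decide whether the computation continues.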

So here's what I'm trying to get at: You don't need to learn all the operators. Every function is a many-to-one relation, so any construct that relates (potentially) many things to a unique thing is a function. The details of how that function is implemented certainly matter, but you don't really need to dissect other people's favorite constructs in order to put them together. The types are enough. (Once you get into type arithmetic, it pays to learn some of the type arithmetic plumbing though)

And if you are trying to write code from scratch, it pays to figure out a nice, mathematically succinct way to describe the structure you are trying to model, and then look for a Haskell construct that matches the mathematics. You will end up with a nice "normal form" for the construct, which is typically straightforward to modify. Again, you will end up using some weird operators, and again, they will just be joining types together in funny ways that define functions.

(=>) isn't an operator. It is read "implies". For example, the type (>>=) :: (Monad m) => m a -> (a -> m b) -> m b "means" that IF you have an 'm' that is a monad, the function's type is m a -> (a -> m b) -> m b. Actually, Haskell is very close to being two different languages (the language of the type system and the language of the runtime system) that interact very nicely together.

(->) is a weird one. It ought to be a type constructor and a data constructor defined by something very close to "data a -> b = a -> b", at least in principle. But for efficiency's sake, we don't really have access to (->) as a value. So when we define a type f :: a -> b, we are saying that f belongs to the data type I defined above. The tricky thing here is that (->) can also be interpreted to mean "if", because of a deep connection between programming and logic. Basically, a type like f :: a -> b means that IF you have an 'a', f will return a 'b'. The connection between logical implication, proof, function definition, and interpretation goes very deep.

(<-) is a variation on >>= that binds the value taken from a monadic action to a name, locally, in the scope of another computation. Conceptually, it is another fmap, though really it is syntactic sugar. Think "comes from" when you see it.
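The desugaring the commenter describes can be shown side by side (a minimal sketch in the Maybe monad):

```haskell
-- The do-notation version, using '<-' to bind each result...
sumTwo :: Maybe Int
sumTwo = do
  x <- Just 1
  y <- Just 2
  return (x + y)

-- ...is sugar for nested (>>=) with lambdas.
sumTwo' :: Maybe Int
sumTwo' = Just 1 >>= \x ->
          Just 2 >>= \y ->
          return (x + y)
```

Both definitions are the same program; `<-` simply names the value that `(>>=)` would pass to the lambda.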

"Coming from lisp, the type system might seem foreign, but I think that if you use it often, you'll find that it grows on you. Used correctly, it can catch and pinpoint a lot of potentially frustrating bugs before you even start testing your program, which cuts down a lot on time spent debugging."

I started out as a functional programmer in ML, SML, OCaml, and Haskell (plus, of course, C, C++, and Java). Some years ago I switched to dynamically typed programming languages and didn't look back until I had reason to return to coding in Haskell recently.

It was a disappointment: things haven't improved much. Error messages are still missing the point, performance and optimization are still erratic and lacking, the type system still imposes erratic constraints, and expressing some simple and intuitive constructs requires going into a wrestling match with the type system.

In the end, you need a lot of experience to use the Haskell type system well, just like you need a lot of experience to use a dynamic type system well. And if you lack the experience, you can circumvent the Haskell type system just as much as in a dynamic type system (and usually end up with worse code).

Overall, I concluded, after many years, that static type systems of the form found in functional programming languages are just not worth the hassle.