```haskell
f :: () -> ()
f = const ()

g :: () -> ()
g = id
```

Is there a way to write a function that tells them apart? Given an infinite loop as argument, the first will stop and the second will hang. But you can do better: pass an exception instead and check whether it was thrown:

```haskell
import Control.Exception
import Data.Typeable (Typeable)
import System.IO.Unsafe (unsafePerformIO)

data TestException = TestException deriving (Show, Typeable)

instance Exception TestException

test :: (() -> ()) -> Bool
test k = unsafePerformIO $
  catch (k (throw TestException) `seq` return True)
        (\TestException -> return False)
```

Is this a safe trick?

On the one hand, it looks rather innocent: a function is handed a box, and if it opens the box, it is caught red-handed. Such a function would hang given an infinite loop. A function that doesn't touch the box yet finishes is const (). Such a test is a nice feature to have.

On the other hand, something is wrong: f is more defined than g, yet test f is not at least as defined as test g (test f = True while test g = False, and the two are incomparable). This contradicts monotonicity. Similarly, by passing two exceptions to (+), you can check which argument is evaluated first:

```haskell
throw A + throw B
```

That means flip (+) is not the same as (+). Addition is not commutative!
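To make this concrete, here is a sketch (the exception types A and B and the helper firstThrown are my own; which exception actually escapes is the compiler's choice under GHC's imprecise-exceptions semantics):

```haskell
import Control.Exception

data A = A deriving Show
data B = B deriving Show
instance Exception A
instance Exception B

-- Evaluate throw A + throw B and report which exception escaped.
-- The imprecise-exceptions semantics permits either answer; the one
-- you observe exposes the evaluation order of (+).
firstThrown :: IO String
firstThrown = do
  r <- try (evaluate (throw A + throw B)) :: IO (Either SomeException Int)
  case r of
    Left e  -> return (show e)
    Right n -> return (show n)

main :: IO ()
main = firstThrown >>= putStrLn
```

On my understanding, an unoptimized GHC build typically reports A here, but the semantics deliberately allows either answer.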

Which of these points is correct?

**The representation of a -> b**

Internally, a value of the (->) type is a sequence of instructions: force a thunk, scrutinize a constructor, perform a primitive operation like adding integers. In other words, it's an algorithm. You could conceivably write an algebraic datatype that encompasses all those options.

```haskell
data Instruction a b = Force Thunk | Return Value | ...

type Algorithm a b = [Instruction a b]
```

Haskell has an evaluator that takes an algorithm and runs it step by step.

```haskell
($) :: Algorithm a b -> a -> b
```

Having access to the internal representation of an algorithm, you can write a "debugger" that stops when the function forces its argument. In a sense, this is what test is doing.
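As a toy sketch of that idea (all the names below are invented; GHC's real instruction set is far richer), an algorithm of type () -> () could be modelled explicitly, and the "debugger" becomes an ordinary function over the representation:

```haskell
-- A toy instruction set for algorithms of type () -> ():
-- return () without looking at the argument, force the argument
-- and then continue, or loop forever.
data Algo = ReturnUnit | ForceArg Algo | Loop
  deriving (Eq, Show)

-- A "debugger" that inspects an algorithm and reports whether it
-- would touch its argument, without supplying any argument at all.
-- Nothing means the algorithm does not halt.
touchesArgument :: Algo -> Maybe Bool
touchesArgument ReturnUnit   = Just False
touchesArgument (ForceArg _) = Just True
touchesArgument Loop         = Nothing

constUnitAlgo, idAlgo :: Algo
constUnitAlgo = ReturnUnit          -- const ()
idAlgo        = ForceArg ReturnUnit -- id: force the argument, return it
```

Here touchesArgument distinguishes constUnitAlgo from idAlgo, which is exactly what test observes from the outside by planting an exception in the argument.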

Lazy IO uses this trick: when a thunk is forced, evaluation stops momentarily and IO is performed.
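A minimal sketch of that mechanism, using unsafeInterleaveIO (the primitive behind lazy IO functions such as hGetContents); the helper demo is mine:

```haskell
import Data.IORef
import System.IO.Unsafe (unsafeInterleaveIO)

-- Returns the flag's value before and after the deferred thunk is forced.
demo :: IO (Bool, Bool)
demo = do
  ref <- newIORef False
  -- The write is deferred: it runs only when x is forced.
  x <- unsafeInterleaveIO (writeIORef ref True)
  before <- readIORef ref   -- still False: nothing has been forced yet
  x `seq` return ()         -- forcing the thunk performs the IO
  after <- readIORef ref    -- now True
  return (before, after)

main :: IO ()
main = demo >>= print   -- prints (False,True)
```

The deferred write happens at the seq, i.e. at the moment the thunk is forced, not when unsafeInterleaveIO is called.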

Still, the denotational semantics argument seems disturbing.

**The distinction**

The solution to the dilemma is: there are two different ways of interpreting values of type a -> b.

- **functions**, which assign a value of b to each value of a;
- **algorithms**, which are recipes for turning an a into a b.

In Haskell, the function view is used, but in this post I'll use both to illustrate the differences. I'll call the algorithm view "operational" and the function view "denotational".

Representing algorithms is possible using an ADT, as you have seen above. Functions are represented using algorithms:

```haskell
data Function a b = Function (Algorithm a b)
```

You can turn an algorithm into a function:

```haskell
evaluate :: Algorithm a b -> Function a b
evaluate = Function
```

but the reverse conversion is ill-defined, since one function has many representations. evaluate turns an algorithm into a "black box".

Think about rational numbers. You represent them as pairs of integers, even though a rational number is not a pair. Then you write operations like addition that don't break internal consistency. Not every operation on pairs can be lifted to an operation on rationals, because different pairs may represent the same rational. It's the same with functions stored as algorithms.
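The analogy can be made precise in code (the names Rat, sameRat and addRat are mine): an operation on pairs lifts to rationals exactly when it sends equal representations to equal representations.

```haskell
-- A rational represented as (numerator, denominator), denominator /= 0.
type Rat = (Integer, Integer)

-- Two pairs represent the same rational iff a/b == c/d, i.e. a*d == c*b.
sameRat :: Rat -> Rat -> Bool
sameRat (a, b) (c, d) = a * d == c * b

-- Addition lifts to rationals: it sends equal representations
-- to equal representations, so sameRat is preserved.
addRat :: Rat -> Rat -> Rat
addRat (a, b) (c, d) = (a * d + c * b, b * d)

-- 'fst' does NOT lift: (1,2) and (2,4) are the same rational,
-- yet fst gives 1 for one and 2 for the other.
```

fst is a perfectly good operation on pairs that is meaningless on rationals, just as inspecting evaluation order is a perfectly good operation on algorithms that is meaningless on functions.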

**Questions:**

The questions focus on the differences between algorithms and functions.

1. Can you compare values of type a -> b for equality?


2. How would you show an algorithm? A function?


3. How would you read an algorithm? A function?


4. What abstract models of computation correspond to algorithms and functions?


5. What does the semantic order on values of type a -> b look like?


6. Is the following function

```haskell
f :: (() -> ()) -> Bool
```

"f g = [return True if the evaluation of g () halts within 100 evaluation steps]"

well-defined?


7. What about this?

```haskell
lub :: (Bool -> ()) -> ()
```

"lub f = [return () if the evaluation of f False or f True halts]"


8. Since a -> b in Haskell is seen as a function type, not an algorithm type, anything that depends on "internals" is ambiguous. How does Haskell deal with this?


**Summary**

In most languages, 'functions' are really algorithms. In Haskell, the emphasis is on functions in the mathematical sense, and on referential transparency.

Since Haskell runs on a computer, operational semantics (algorithms) describes how things work at the lower level, and you can't do away with it. Things like debugging, unsafePerformIO, measuring the time/space complexity of a function, or order of evaluation belong to operational semantics and break the abstraction. Things like referential transparency or functional reactive programming are valid for functions.

I think this is what makes Haskell different from imperative languages. FP is about functions, not algorithms. This has advantages, like better support for composability, and disadvantages, like more difficult time/space reasoning.

## 1 comment:

Actually your answers raise even more questions. :) What is the difference between "assigning a value" and "giving a recipe"? What does it mean "to show an algorithm/function", "to read an algorithm/function", or for a function to be "well-defined"?

"id" is strict while "const ()" is not. We can compare lambda terms syntactically, up to equivalences (beta, eta, etc.), or in models, so we can obtain a comparison with any precision we want. This comparison may be decidable or not.
