The Importance of Realism in Startups


I’ve done a lot of video interviews. This is one of my favorites, if not my favorite outright.
It’s only 12 minutes long and if you’re a first-time entrepreneur (or second time, frankly) I encourage you to watch it, if for no other reason than to get a sense that your struggles are universal.
TechCrunch interviewed me and asked me to talk about failure. So I spoke for 12 minutes about my own failures. I made many classic first-time mistakes, which makes it easier for me to spot when others make similar bad choices.
That experience serves as my warning signal of which teams to avoid funding, especially if I perceive they will make critical mistakes, often driven by hubris or naïveté. An obvious example is when they talk about M&A deals, teams they could just “bolt on” or “doing a rollup in their industry.”
It’s why I will never fund Conference Ho’s. I know that this behavior is driven by an unhealthy ego, self-centeredness and a lack of regard for the running operations of one’s startup.
Vomit.
My errors also serve as my source for coaching other teams through more benign mistakes like over-building functionality, over-complicating the product or hiring people who are too senior.
I think failure is critical for many reasons. Mostly because it makes us better leaders.
We learn from mistakes. We learn from losses.
In part I felt it was important to let people know that we all have failure and make mistakes.
As I’ve said before, all startups need to realize that every other company still has to see itself naked in the mirror in the morning. Stop reading their press releases or listening to their founders talk about how they’re crushing it. We all know that people who truly are crushing it rarely talk about it.
Once you realize that we’re all the same, all dealing with the same pressures, fears and struggles – you’ll learn to keep more focused on what you’re doing and not whatever everybody else is doing.
In my “failure interview” with TechCrunch I talked about the biggest stress that really comes from startups – dealing with all the other people with whom you work. Startups are filled with enormously talented people – often product people & engineers. As an industry we’re hardly used to talking openly about feelings or resolving conflicts.
It’s why I believe startup coaches are so important and I wish I knew more great ones. If you have great experiences please leave names in the comments section.

The Success Bias

In the end it’s easy to look back triumphantly at our startup experiences and define every move as heroic.
We of course remember the positive outcomes, the rewards, the press celebrations at key moments or at the finish line. We of course get all of the accolades if at the tape there was a financial pay day.
And with so many acquihires these days you never really know who was financially successful and who was safely landing a plane with no engines.
It’s certainly nice to look at your past accomplishments in your bio.
Yet these only tell the stories from one side of the startup ledger. They are whitewashed falsehoods that mask the struggles. They are only one aspect of the startup experience.
Even along the journey and nowhere near the destination I see many startups with their chests pumped out touting their latest deals, showing off their swish offices funded by millions of venture funding (and not necessarily yet the commensurate business success to afford said offices or perks).
I understand the temptation. In a world with too much tech hype many teams feel the need to constantly spin.
I prefer the opposite.
I prefer realism in startups. It’s part of my stump speech to first-time founders or university students.
Avoid the stupid mistakes I made (and talk about in the video):
  • raising too much money too quickly
  • building too many features (a mile wide & an inch deep)
  • getting too much press before we were ready
  • focusing on M&A to fix our problems
  • believing our own hype
Most of the days at a startup are a grind. While you’re in the moment it feels like there are as many failures as there are successes.
Even success feels hollow. I had a friend who was on the front page of the business section of one of the top newspapers in the country while his company was 30 days from running out of cash. And in all seriousness the article prompted his relative to hit him up for money.
Every first-time entrepreneur who has raised millions in VC will know the surrealism of people calling you a millionaire while you are figuring out whether you can really afford to pay for a vacation since your credit card is already a little bit bruised.
We put on our brave faces and turn up every day hoping that in the end we won’t feel like frauds. In fact, I believe that one of the largest motivators for startups to avoid the ultimate failure is to avoid the humiliation of having every positive press mention end up making you look like a phony.
It’s not that you don’t believe in your ultimate outcome – you have to believe in order to be insane enough to continue the journey against all odds – it’s just that there is nagging self doubt.
There are of course also external factors you can’t control. You think investors will continue to finance you – they promised they would – but you never really know. Until you know.
In the end you don’t always get the answer you had hoped for.

Youth vs. Wisdom

In your youth you have the bravado to face uncertainty with the blind optimism that success is inevitable. In short, you don’t know what you don’t know. I keep coaching an investor friend of mine that this can be a good thing at times and that he shouldn’t be too quick to discount a lack of experience in a team with serious talent and ambition.
This “naive optimism” is why I believe younger entrepreneurs are more likely to produce insanely big outcomes.
Yet youth often brings a triumphalism that blinds entrepreneurs to the macro picture.
It is why younger entrepreneurs are more likely to drink their own Kool-Aid, which of course is dangerous. In bull markets many credit themselves with brilliance and industry stewardship when perhaps they are merely riding an ephemeral trend fueled by speculative capital.
Age brings wisdom. Timidity, too. And sometimes cynicism. But age brings perspective. If older entrepreneurs are more cautious it’s because life’s experiences have taught them to be so.
Older entrepreneurs tend to spend cash more wisely, for example. They are in less of a rush to keep up with the Joneses since they’ve seen a few boom-and-bust cycles and they know it’s a marathon.
I find older entrepreneurs more willing to have pragmatic debates about competition as well. They realize that there is often more to be gained by attacking the existing market structure than each other.
Older entrepreneurs tend to avoid lawsuits where possible. There is less ego. Younger people still like to fight.
And for the most part they shy away from premature press because they know the consequences of getting over one’s skis.

The Case for Realism

I try my best to blog from a realistic perspective, partly because I believe it’s important for people on the journey to have a realistic perspective and not feel ashamed of their progress or performance.
It’s why I wrote one of my most read posts – Entrepreneurshit.
Because I know many people at big & successful companies and at fast-growing startups, I know that even they have struggles, doubts, insecurities and fear of failure.
If you knew that it might help you realize that your failures are not so special.
There is always a tomorrow – even after bankruptcy. A second act. A new career, if a humbled one.
That’s why I loved the TechCrunch interview.
It gave me a chance to make sure that wherever you are in your career path you would know that we’ve all been there.
Even if our bios don’t mention it.
Failure is ok. It’s not the same as losing or being a loser. It’s a setback.
And it’s how you handle your failures that define you more than anything else.
****
p.s. If any of you were at the Foundry Group rock party in San Francisco where Ryan McIntyre and Seth Levine rocked with many other VCs (David Cremin, David Pakman, who am I forgetting?) then you’ll know how heroic this TechCrunch interview really was. It came with one hell of a hangover that I blame on unnamed LPs (ahem) and on my inability to say no to a challenge (and whiskey). Which come to think of it means I’ve really learned nothing at all in my old age. It came after only a few hours of sleep in which I had a meeting in the morning with an LP who will be smiling if he reads this because he knows that I had to stand for my whole meeting with him in order to get through the meeting.

Mark Suster is a 2x entrepreneur who has gone to the Dark Side of VC. He joined Upfront Ventures in 2007 as a General Partner after selling his company to Salesforce.com. He focuses on early-stage technology companies. Read more about Mark.

The Evolution of a Haskell Programmer



Fritz Ruehr, Willamette University

See Iavor Diatchki’s page “The Evolution of a Programmer” for the “original” (though he is not the author), and also below for the story behind this version.

   (This page has been translated into the Serbo-Croatian language by Anja Skrba from Webhostinggeeks.com. Thanks, Anja, for all your hard work!)
 

Freshman Haskell programmer

fac n = if n == 0 
           then 1
           else n * fac (n-1)
 

Sophomore Haskell programmer, at MIT

(studied Scheme as a freshman)
fac = (\(n) ->
        (if ((==) n 0)
            then 1
            else ((*) n (fac ((-) n 1)))))
 

Junior Haskell programmer

(beginning Peano player)
fac  0    =  1
fac (n+1) = (n+1) * fac n
 

Another junior Haskell programmer

(read that n+k patterns are “a disgusting part of Haskell” [1]
and joined the “Ban n+k patterns”-movement [2])
fac 0 = 1
fac n = n * fac (n-1)
 

Senior Haskell programmer

(voted for   Nixon   Buchanan   Bush — “leans right”)
fac n = foldr (*) 1 [1..n]
 

Another senior Haskell programmer

(voted for   McGovern   Biafra   Nader — “leans left”)
fac n = foldl (*) 1 [1..n]
 

Yet another senior Haskell programmer

(leaned so far right he came back left again!)
-- using foldr to simulate foldl

fac n = foldr (\x g n -> g (x*n)) id [1..n] 1
 

Memoizing Haskell programmer

(takes Ginkgo Biloba daily)
facs = scanl (*) 1 [1..]

fac n = facs !! n
 

Pointless (ahem) “Points-free” Haskell programmer

(studied at Oxford)
fac = foldr (*) 1 . enumFromTo 1
 

Iterative Haskell programmer

(former Pascal programmer)
fac n = result (for init next done)
        where init = (0,1)
              next   (i,m) = (i+1, m * (i+1))
              done   (i,_) = i==n
              result (_,m) = m

for i n d = until d n i
 

Iterative one-liner Haskell programmer

(former APL and C programmer)
fac n = snd (until ((>n) . fst) (\(i,m) -> (i+1, i*m)) (1,1))
 

Accumulating Haskell programmer

(building up to a quick climax)
facAcc a 0 = a
facAcc a n = facAcc (n*a) (n-1)

fac = facAcc 1
 

Continuation-passing Haskell programmer

(raised RABBITS in early years, then moved to New Jersey)
facCps k 0 = k 1
facCps k n = facCps (k . (n *)) (n-1)

fac = facCps id
 

Boy Scout Haskell programmer

(likes tying knots; always “reverent,” he
belongs to the Church of the Least Fixed-Point [8])
y f = f (y f)

fac = y (\f n -> if (n==0) then 1 else n * f (n-1))
 

Combinatory Haskell programmer

(eschews variables, if not obfuscation;
all this currying’s just a phase, though it seldom hinders)
s f g x = f x (g x)

k x y   = x

b f g x = f (g x)

c f g x = f x g

y f     = f (y f)

cond p f g x = if p x then f x else g x

fac  = y (b (cond ((==) 0) (k 1)) (b (s (*)) (c b pred)))
 

List-encoding Haskell programmer

(prefers to count in unary)
arb = ()    -- "undefined" is also a good RHS, as is "arb" :)

listenc n = replicate n arb
listprj f = length . f . listenc

listprod xs ys = [ i (x,y) | x<-xs, y<-ys ]
                 where i _ = arb

facl []         = listenc  1
facl n@(_:pred) = listprod n (facl pred)

fac = listprj facl
 

Interpretive Haskell programmer

(never “met a language” he didn't like)
-- a dynamically-typed term language

data Term = Occ Var
          | Use Prim
          | Lit Integer
          | App Term Term
          | Abs Var  Term
          | Rec Var  Term

type Var  = String
type Prim = String


-- a domain of values, including functions

data Value = Num  Integer
           | Bool Bool
           | Fun (Value -> Value)

instance Show Value where
  show (Num  n) = show n
  show (Bool b) = show b
  show (Fun  _) = ""

prjFun (Fun f) = f
prjFun  _      = error "bad function value"

prjNum (Num n) = n
prjNum  _      = error "bad numeric value"

prjBool (Bool b) = b
prjBool  _       = error "bad boolean value"

binOp inj f = Fun (\i -> (Fun (\j -> inj (f (prjNum i) (prjNum j)))))


-- environments mapping variables to values

type Env = [(Var, Value)]

getval x env =  case lookup x env of
                  Just v  -> v
                  Nothing -> error ("no value for " ++ x)


-- an environment-based evaluation function

eval env (Occ x) = getval x env
eval env (Use c) = getval c prims
eval env (Lit k) = Num k
eval env (App m n) = prjFun (eval env m) (eval env n)
eval env (Abs x m) = Fun  (\v -> eval ((x,v) : env) m)
eval env (Rec x m) = f where f = eval ((x,f) : env) m


-- a (fixed) "environment" of language primitives

times = binOp Num  (*)
minus = binOp Num  (-)
equal = binOp Bool (==)
cond  = Fun (\b -> Fun (\x -> Fun (\y -> if (prjBool b) then x else y)))

prims = [ ("*", times), ("-", minus), ("==", equal), ("if", cond) ]


-- a term representing factorial and a "wrapper" for evaluation

facTerm = Rec "f" (Abs "n" 
              (App (App (App (Use "if")
                   (App (App (Use "==") (Occ "n")) (Lit 0))) (Lit 1))
                   (App (App (Use "*")  (Occ "n"))
                        (App (Occ "f")  
                             (App (App (Use "-") (Occ "n")) (Lit 1))))))

fac n = prjNum (eval [] (App facTerm (Lit n)))
 

Static Haskell programmer

(he does it with class, he’s got that fundep Jones!
After Thomas Hallgren’s “Fun with Functional Dependencies” [7])
-- static Peano constructors and numerals

data Zero
data Succ n

type One   = Succ Zero
type Two   = Succ One
type Three = Succ Two
type Four  = Succ Three


-- dynamic representatives for static Peanos

zero  = undefined :: Zero
one   = undefined :: One
two   = undefined :: Two
three = undefined :: Three
four  = undefined :: Four


-- addition, a la Prolog

class Add a b c | a b -> c where
  add :: a -> b -> c
  
instance              Add  Zero    b  b
instance Add a b c => Add (Succ a) b (Succ c)


-- multiplication, a la Prolog

class Mul a b c | a b -> c where
  mul :: a -> b -> c

instance                           Mul  Zero    b Zero
instance (Mul a b c, Add b c d) => Mul (Succ a) b d


-- factorial, a la Prolog

class Fac a b | a -> b where
  fac :: a -> b

instance                                Fac  Zero    One
instance (Fac n k, Mul (Succ n) k m) => Fac (Succ n) m

-- try, for "instance" (sorry):
-- 
--     :t fac four
 

Beginning graduate Haskell programmer

(graduate education tends to liberate one from petty concerns
about, e.g., the efficiency of hardware-based integers)
-- the natural numbers, a la Peano

data Nat = Zero | Succ Nat


-- iteration and some applications

iter z s  Zero    = z
iter z s (Succ n) = s (iter z s n)

plus n = iter n     Succ
mult n = iter Zero (plus n)


-- primitive recursion

primrec z s  Zero    = z
primrec z s (Succ n) = s n (primrec z s n)


-- two versions of factorial

fac  = snd . iter (one, one) (\(a,b) -> (Succ a, mult a b))
fac' = primrec one (mult . Succ)


-- for convenience and testing (try e.g. "fac five")

int = iter 0 (1+)

instance Show Nat where
  show = show . int

(zero : one : two : three : four : five : _) = iterate Succ Zero
 

Origamist Haskell programmer

(always starts out with the “basic Bird fold”)
-- (curried, list) fold and an application

fold c n []     = n
fold c n (x:xs) = c x (fold c n xs)

prod = fold (*) 1


-- (curried, boolean-based, list) unfold and an application

unfold p f g x = 
  if p x 
     then [] 
     else f x : unfold p f g (g x)

downfrom = unfold (==0) id pred


-- hylomorphisms, as-is or "unfolded" (ouch! sorry ...)

refold  c n p f g   = fold c n . unfold p f g

refold' c n p f g x = 
  if p x 
     then n 
     else c (f x) (refold' c n p f g (g x))
                         

-- several versions of factorial, all (extensionally) equivalent

fac   = prod . downfrom
fac'  = refold  (*) 1 (==0) id pred
fac'' = refold' (*) 1 (==0) id pred
 

Cartesianally-inclined Haskell programmer

(prefers Greek food, avoids the spicy Indian stuff;
inspired by Lex Augusteijn’s “Sorting Morphisms” [3])
-- (product-based, list) catamorphisms and an application

cata (n,c) []     = n
cata (n,c) (x:xs) = c (x, cata (n,c) xs)

mult = uncurry (*)
prod = cata (1, mult)


-- (co-product-based, list) anamorphisms and an application

ana f = either (const []) (cons . pair (id, ana f)) . f

cons = uncurry (:)

downfrom = ana uncount

uncount 0 = Left  ()
uncount n = Right (n, n-1)


-- two variations on list hylomorphisms

hylo  f  g    = cata g . ana f

hylo' f (n,c) = either (const n) (c . pair (id, hylo' f (n,c))) . f

pair (f,g) (x,y) = (f x, g y)


-- several versions of factorial, all (extensionally) equivalent

fac   = prod . downfrom
fac'  = hylo  uncount (1, mult)
fac'' = hylo' uncount (1, mult)
 

Ph.D. Haskell programmer

(ate so many bananas that his eyes bugged out, now he needs new lenses!)
-- explicit type recursion based on functors

newtype Mu f = Mu (f (Mu f))  deriving Show

inn     x  = Mu x      -- "in" is a reserved word, hence the extra 'n'
out (Mu x) = x


-- cata- and ana-morphisms, now for *arbitrary* (regular) base functors

cata phi = phi . fmap (cata phi) . out
ana  psi = inn . fmap (ana  psi) . psi


-- base functor and data type for natural numbers,
-- using a curried elimination operator

data N b = Zero | Succ b  deriving Show

instance Functor N where
  fmap f = nelim Zero (Succ . f)

nelim z s  Zero    = z
nelim z s (Succ n) = s n

type Nat = Mu N


-- conversion to internal numbers, conveniences and applications

int = cata (nelim 0 (1+))

instance Show Nat where
  show = show . int

zero = inn  Zero
suck = inn . Succ       -- pardon my "French" (Prelude conflict)

plus n = cata (nelim n     suck   )
mult n = cata (nelim zero (plus n))


-- base functor and data type for lists

data L a b = Nil | Cons a b  deriving Show

instance Functor (L a) where
  fmap f = lelim Nil (\a b -> Cons a (f b))

lelim n c  Nil       = n
lelim n c (Cons a b) = c a b

type List a = Mu (L a)


-- conversion to internal lists, conveniences and applications

list = cata (lelim [] (:))

instance Show a => Show (List a) where
  show = show . list

prod = cata (lelim (suck zero) mult)

upto = ana (nelim Nil (diag (Cons . suck)) . out)

diag f x = f x x

fac = prod . upto
 

Post-doc Haskell programmer

(from Uustalu, Vene and Pardo’s “Recursion Schemes from Comonads” [4])
-- explicit type recursion with functors and catamorphisms

newtype Mu f = In (f (Mu f))

unIn (In x) = x

cata phi = phi . fmap (cata phi) . unIn


-- base functor and data type for natural numbers,
-- using locally-defined "eliminators"

data N c = Z | S c

instance Functor N where
  fmap g  Z    = Z
  fmap g (S x) = S (g x)

type Nat = Mu N

zero   = In  Z
suck n = In (S n)

add m = cata phi where
  phi  Z    = m
  phi (S f) = suck f

mult m = cata phi where
  phi  Z    = zero
  phi (S f) = add m f


-- explicit products and their functorial action

data Prod e c = Pair c e

outl (Pair x y) = x
outr (Pair x y) = y

fork f g x = Pair (f x) (g x)

instance Functor (Prod e) where
  fmap g = fork (g . outl) outr


-- comonads, the categorical "opposite" of monads

class Functor n => Comonad n where
  extr :: n a -> a
  dupl :: n a -> n (n a)

instance Comonad (Prod e) where
  extr = outl
  dupl = fork id outr


-- generalized catamorphisms, zygomorphisms and paramorphisms

gcata :: (Functor f, Comonad n) =>
           (forall a. f (n a) -> n (f a))
             -> (f (n c) -> c) -> Mu f -> c

gcata dist phi = extr . cata (fmap phi . dist . fmap dupl)

zygo chi = gcata (fork (fmap outl) (chi . fmap outr))

para :: Functor f => (f (Prod (Mu f) c) -> c) -> Mu f -> c
para = zygo In


-- factorial, the *hard* way!

fac = para phi where
  phi  Z             = suck zero
  phi (S (Pair f n)) = mult f (suck n)
  

-- for convenience and testing

int = cata phi where
  phi  Z    = 0
  phi (S f) = 1 + f

instance Show (Mu N) where
  show = show . int
 

Tenured professor

(teaching Haskell to freshmen)
fac n = product [1..n]


 

Background

On 19 June 2001, at the OGI PacSoft Tuesday Morning Seminar Series, Iavor Diatchki presented the paper “Recursion Schemes from Comonads” by Uustalu, Vene and Pardo [4]. I attended Iavor’s excellent presentation and remarked that I found the end of the paper rather anti-climactic: after much categorical effort and the definition of several generalized recursion combinators, the main examples were the factorial and Fibonacci functions. (Of course, I offered no better examples myself, so this was rather unfair carping.) Some time later, I came across Iavor’s "jokes" page, including a funny bit called “The Evolution of a Programmer” in which the traditional imperative "Hello, world" program is developed through several variations, from simple beginnings to a ridiculously complex extreme. A moment’s thought turned up the factorial function as the best functional counterpart of "Hello, world". Suddenly the Muse struck and I knew I must write out these examples, culminating (well, almost) in the heavily generalized categorical version of factorial provided by Uustalu, Vene and Pardo.
I suppose this is what you’d have to call “small-audience” humour.
PS: I’ve put all the code into a better-formatted text file for those who might like to experiment with the different variations (you could also just cut and paste a section from your browser).
PPS: As noted above, Iavor is not the original author of “The Evolution of a Programmer.” A quick web search suggests that there are thousands of copies floating around and it appears (unattributed) in humor newsgroups as far back as 1995. But I suspect some version of it goes back much further than that. Of course, if anyone does know who wrote the original, please let me know so that I may credit them here.


 

But seriously, folks, ...

On a more serious note, I think that the basic idea of the joke (successive variations on a theme, building in complexity) can serve a good pedagogical purpose as well as a humorous one. To that end, and for those who may not be familiar with all of the ideas represented above, I offer the following comments on the variations: The first version (straight recursion with conditionals) is probably familiar to programmers of all stripes; fans of LISP and Scheme will find the sophomore version especially readable, except for the funny spelling of “lambda” and the absence of “define” (or “defun”). The use of patterns may seem only a slight shift in perspective, but in addition to mirroring mathematical notation, patterns encourage the view of data types as initial algebras (or as inductively defined).
The use of more “structural” recursion combinators (such as foldr and foldl) is square in the spirit of functional programming: these higher-order functions abstract away from the common details of different instances of recursive definitions, recovering the specifics through function arguments. The “points-free” style (defining functions without explicit reference to their formal parameters) can be compelling, but it can also be over-done; here the intent is to foreshadow similar usage in some of the later, more stridently algebraic variations.
The accumulating-parameter version illustrates a traditional technique for speeding up functional code. It is the second fastest implementation here, at least as measured in terms of number of reductions reported by Hugs, with the iterative versions coming in third. Although the latter run somewhat against the spirit of functional programming, they do give the flavor of the functional simulation of state as used in denotational semantics or, for that matter, in monads. (Monads are woefully un-represented here; I would be grateful if someone could contribute a few (progressive) examples in the spirit of the development above.) The continuation-passing version recalls a denotational account of control (the references are to Steele’s RABBIT compiler for Scheme and the SML/NJ compiler).
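(By way of illustration only, here is one rough sketch of how a monadic variation might look, re-casting the accumulating-parameter version so that the accumulator lives in the State monad. It assumes the Control.Monad.State library; the names facState and facST are purely illustrative, and the code is offered as a starting point rather than as part of the evolution proper.)

import Control.Monad.State

-- the accumulator lives in the monadic state instead of an explicit argument
facState :: Integer -> State Integer ()
facState 0 = return ()
facState n = do
  modify (* n)
  facState (n - 1)

-- run with an initial accumulator of 1 and keep the final state
facST :: Integer -> Integer
facST n = execState (facState n) 1
 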
The fixed-point version demonstrates that we can isolate recursion in a general Y combinator. The combinatory version provides an extreme take on the points-free style inspired by Combinatory Logic, isolating dependence on variable names to the definitions of a few combinators. Of course we could go further, defining the Naturals and Booleans in combinatory terms, but note that the predecessor function will be a bit hard to accommodate (this is one good justification for algebraic types). Also note that we cannot define the Y combinator in terms of the others without running into typing problems (due essentially to issues of self-application). Interestingly, this is the fastest of all of the implementations, perhaps reflecting the underlying graph reduction mechanisms used in the implementation.
The list-encoded version exploits the simple observation that we can count in unary by using lists of arbitrary elements, so that the length of a list encodes a natural number. In some sense this idea foreshadows later versions based on recursive type definitions for Peano’s naturals, since lists of units are isomorphic to naturals. The only interesting thing here is that multiplication (numeric product) is seen to arise naturally out of combination (Cartesian product) by way of cardinality. Typing issues make it hard to express this correspondence as directly as we’d like: the following definition of listprod would break the definition of the facl function due to an occurs-check/infinite type:
listprod xs ys = [ (x,y) | x<-xs, y<-ys ]
Of course we could also simplify as follows, but only at the expense of obscuring the relationship between the two kinds of products:
listprod xs ys = [ arb | x<-xs, y<-ys ]
The interpretive version implements a small object language rich enough to express factorial, and then implements an interpreter for it based on a simple environment model. Exercises along these lines run all through the latter half of the Friedman, Wand and Haynes text ([6]), albeit expressed there in Scheme. We used to get flack from students at Oberlin when we made them implement twelve interpreters in a single week-long lab, successively exposing more of the implementation by moving the real work from the meta-language to the interpreter. This implementation leaves a whole lot on the shoulders of the meta-language, corresponding to about Tuesday or Wednesday in their week. Industrious readers are invited to implement a compiler for a Squiggol-like language of polytypic folds and unfolds, targeting (and simulating) a suitable categorical abstract machine (see [9]), and then to implement factorial in that setting (but don't blame me if it makes you late for lunch ...).
The statically-computed version uses type classes and functional dependencies to facilitate computation at compile time (the latter are recent extensions to the Haskell 98 standard by Mark Jones, and are available in Hugs and GHC). The same kinds of techniques can also be used to encode behaviors more often associated with dependent types and polytypic programming, and are thus a topic of much recent interest in the Haskell community. The code shown here is based on an account by Thomas Hallgren (see [7]), extended to include factorial. Prolog fans will find the definitions particularly easy to read, if a bit backwards.
The first of the “graduate” versions gets more serious about recursion, defining natural numbers as a recursive algebraic datatype and highlighting the difference between iteration and primitive recursion. The “origamist” and “cartesian” variations take a small step backwards in this regard, as they return to the use of internal integer and list types. They serve, however, to introduce anamorphic and hylomorphic notions in a more familiar context.
The “Ph.D” example employs the categorical style of BMF/Squiggol in a serious way (we could actually go a bit further, by using co-products more directly, and thus eliminate some of the overt dependence on the “internal sums” of the data type definition mechanism).
By the time we arrive at the “pièce de résistance”, the comonadic version of Uustalu, Vene and Pardo, we have covered most of the underlying ideas and can (hopefully) concentrate better on their specific contributions. The final version, using the Prelude-defined product function and ellipsis notation, is how I think the function is most clearly expressed, presuming some knowledge of the language and Prelude definitions. (This definition also dates back at least to David Turner’s KRC* language [5].)
It is comforting to know that the Prelude ultimately uses a recursion combinator (foldl', the strict version of foldl) to define product. I guess we can all hope to see the day when the Prelude will define gcatamorphic, zygomorphic and paramorphic combinators for us, so that factorial can be defined both conveniently and with greater dignity :) .
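(For the curious, a rough approximation of that Prelude definition can be written out by hand; the names product' and facP below are illustrative only, and the real Prelude code differs in details such as strictness analysis and, in later versions, Foldable generality.)

import Data.List (foldl')

-- approximately how product behaves: a strict left fold over (*)
product' :: [Integer] -> Integer
product' = foldl' (*) 1

facP :: Integer -> Integer
facP n = product' [1..n]
 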


* KRC may or may not be a trademark of Research Software, Ltd.,
    but you can bet your sweet bippy that
Miranda ™ is!


 

Revision history

  • 20 August 01: added the interpretive version, based on an environment model of a small object language (no, not in that sense of object ...). I’m thinking about re-arranging the order of the examples, so that longer ones that are not part of the main line of development don't intrude so much. I also advertised the page on the Haskell Café mailing list and requested that a link be added to the Haskell humor page. Finally, I have an interesting new example in the works that may actually have some original research value; more on this soon.
  • 14 August 01 (afternoon): added the combinatory version, now the fastest of the bunch, as measured in number of reductions reported by Hugs.
  • 14 August 01 (morning): adjusted the sophomore/Scheme version to use an explicit "lambda" (though we spell it differently in Haskell land) and added the fixed-point version.
  • 10 August 01: added the list-encoding and static computation versions (the latter uses type classes and functional dependencies to compute factorial during type-checking; it is an extended version of code from Thomas Hallgren’s “Fun with Functional Dependencies” [7]).
  • 1 August 01: added accumulating-parameter and continuation-passing versions (the latter is a revised transliteration from Friedman, Wand and Haynes’ “Essentials of Programming Languages” [6]).
  • 11 July 01: date of the original posting.


 

References

  1. Highlights from nhc - a Space-efficient Haskell Compiler, Niklas Röjemo. In the FPCA ‘95 proceedings. ACM Press, 1995 (see also CiteSeer or Chalmers ftp archive)
  2. n+k patterns, Lennart Augustsson. Message to the haskell mailing list, Mon, 17 May 93 (see the mailing list archive)
  3. Sorting Morphisms, Lex Augusteijn. In Advanced Functional Programming (LNCS 1608). Springer-Verlag, 1999 (see also CiteSeer).
  4. Recursion Schemes from Comonads, T. Uustalu, V. Vene and A. Pardo. Nordic Journal of Computing, to appear (see also Tarmo Uustalu’s papers page).
  5. Recursion Equations as a Programming Language, D. A. Turner. In Functional Programming and its Applications. Cambridge University Press, 1982.
  6. Essentials of Programming Languages, D. Friedman, M. Wand and C. Haynes. MIT Press and McGraw-Hill, 1994.
  7. Fun with Functional Dependencies, Thomas Hallgren. Joint Winter Meeting of the Departments of Science and Computer Engineering, Chalmers University of Technology and Göteborg University, Varberg, Sweden, 2001 (available at the author’s web archive).
  8. The Church of the Least Fixed-Point, author unknown. (A little bit of lambda calculus humor which circulated in the mid-1980s (at least that’s when I saw it), probably from the comp.lang.functional newsgroup or somesuch. Please write me if you know the author or any other citation information.)
  9. Categorical Combinators, Sequential Algorithms and Functional Programming, Pierre-Louis Curien. Springer Verlag (2nd edition), 1993.

Free and open-source general-purpose video player

mpv is a free and open-source general-purpose video player. 

mpv is based on the MPlayer and mplayer2 projects which it greatly improves. Learn about the differences with the former projects. 



Cricket



A cheating and misappropriation case was registered against the Himachal Pradesh Cricket Association (HPCA) on Thursday over alleged wrongdoings in allotment of land to the state's sports body, police said. The HPCA said the case was politically motivated.

"The case was registered against the HPCA under sections 406, 420 and 120-B of the IPC (Indian penal Code) for various irregularities. We will reach out to the individuals during the course of investigation," Superintendent of Police (Vigilance) Bimal Gupta told IANS.

Earlier in the day, Chief Minister Virbhadra Singh said "Investigation against the HPCA is in advance stage. Regular FIR (first information report) will be registered shortly."

The HPCA objected to the case and said: "It's aimed at garnering cheap publicity to divert the attention of the people against more pressing and important issues of the people of the state."

On July 26, two Indian Administrative Service officers - Deepak Sanan and R.S. Gupta - were charge-sheeted by the government for allegedly allowing a change in the land-use of village community land for building a residential complex for players near HPCA's cricket stadium in Dharamsala, some 250 km from here.

The HPCA built the complex with 38 rooms, 32 huts and gym just three km from the stadium.

Earlier, the two officers were served show-cause notices over the issue.

The alleged land-use change was approved during the tenure of the previous Bharatiya Janata Party (BJP) government.

The state cricket body is headed by BJP MP Anurag Thakur, who is the son of then chief minister Prem Kumar Dhumal and is the current BCCI joint secretary.

Sanan is currently additional chief secretary, animal husbandry, while Gupta is posted as commissioner, inquiries.

The Congress, which at the time of the land change was in the opposition, objected to the land allotment.

The Communist Party of India-Marxist, citing a Supreme Court judgment of 2011 which held the transfer of village community land for private and commercial use as illegal, has also demanded that the HPCA should be evicted from the land.

Source: http://bit.ly/193VvMh

Moto X

The much anticipated smartphone that also takes the title of being the worst-kept secret in recent times is finally here. Well, sort of. Motorola today unveiled the Moto X, the first smartphone made after Google announced it was acquiring the company almost exactly two years ago. To be available initially in the US for $199 with a two-year contract and in Canada and Latin America (sorry, no word on an India launch, but we’d suggest not holding your breath for it), the Moto X aspires to be the iPhone of the Android world. Rather than concentrating on specifications, Motorola claims it is looking at enhancing experiences. The core propositions are a battery that lasts all day, a camera that clicks great photos and a user experience that does not require users to touch the phone to get information.
Rather than going for the most expensive silicon, the Moto X is powered by a custom Qualcomm Snapdragon S4 dual-core processor that has two Krait 300 cores clocked at 1.7GHz and a quad-core Adreno 320 GPU. Along with this, Motorola has added two additional DSPs – one that always listens for “OK Google Now” command to fire up Google Now and another that keeps a track of the phone’s motion to power up the information display or turn on the camera. Motorola calls it the Motorola X8 Mobile processor.


For a touch-free experience, users can simply say “OK Google Now” even when the phone is in sleep mode; this wakes it up and initiates Google Now. The phone’s information display also turns on when the user picks it up or takes it out of the pocket, showing the time and notification icons. Rather than keeping the main processor turned on all the time, the Moto X uses the two low-power DSPs for these tasks, ensuring there is minimal battery drain.


Then there is the camera, which Motorola claims enables the users to click a photo from the lock screen in the shortest time when compared to rival smartphones. A flick of the wrist signals the phone that a user wants to click a photo and turns on the camera automatically. Motorola is touting a 10-megapixel Clear Pixel (RGBC) camera, which it claims takes better low-light photographs. There is a 2-megapixel front facing camera too.
Surprisingly, despite being a Google company, the Moto X still runs Android 4.2.2. Yes, Google had promised that Motorola would get access to Android at the same time as its other OEM partners, but it remains to be seen how long the charade lasts. Thankfully, Android 4.3 has been a minor update and we will get to know for sure whether the wall between Google and Motorola Mobility indeed exists or if it is a mere smokescreen for other OEMs.
Talking about hardware specs (yes, despite focusing purely on experience, the specifications are important too) we are looking at a 4.7-inch Super AMOLED 720p display, 2GB of RAM, 16GB of internal storage, a 2,200mAh battery, Bluetooth 4.0 and Wi-Fi 802.11 a/b/g/n/ac. We have already visited the processor and camera.



Motorola is also offering users over 2,000 customization options where they can choose the back panel (there’s a wooden option too), the accents, engraving and much more. This is limited to the US for the moment and it remains to be seen whether Motorola extends it to other markets as well.
The Moto X marks the beginning of a trend in which Android device vendors finally understand the importance of user experience over core hardware specifications. But, as things stand, we are not sure if users are ready to accept what’s eventually good for them. After all, most pay for the hardware they get and the market value of a smartphone is still pegged on the basis of the hardware it runs rather than its capabilities and the experience it provides. And that is where Motorola probably lost the opportunity. It could have passed on the cost savings of using last year’s hardware to potential customers rather than matching the price tags of today’s flagship smartphones.
View the original article here.

Cricket

For advocates of sports technology, a summer watching DRS doing its best to add controversy to the Ashes, rather than take it away, has been a faith-tester. But the story has called attention to some key lessons – and football needs to learn them.
On Thursday the Premier League launches its new Hawk-Eye system, installed at all 20 Premier League grounds. It's a historic moment: a game-changing shift in the way the laws are applied, and the first significant intervention of technology in our game. In my view it is a brilliant, overdue development – but how we handle it from this point forwards will be crucial.
We set off on this road back in the summer of 2006 when I met Dr Paul Hawkins, the head of Hawk-Eye, to trial a rudimentary prototype system at Fulham's training ground. Even then, his early, scaffolding-rigged system was most noticeable for being unnoticeable: the decisions it made were instant, accurate and unobtrusive, and relayed in a heartbeat to the referee.
The delay since then in rolling it out was down to a frustrating, stubborn lack of vision from the world game's governing bodies – resisting change despite the evidence, and despite the fact that in England alone we were seeing around 10 valid goals ruled out every season.
But now, seven years on, we are finally in a position to be able to show off this "new" technology. The relief will be immense for those officials who go into the opening day equipped with wrist sensors – small devices which will alert them when the whole of the ball has crossed the line. It is as simple as that – no stoppage, no waiting for a third party, no endless replays. And that is crucial.
The Ashes has taught us two things. First, that the use of technology in cricket – and in rugby and tennis – fits stop-start games in a way it could never fit with free-flowing football. Second, a black and white pre-emptive clarity, one which protects the referee's credibility, is essential.
The beauty of the goalline system is that it makes the decision almost before anyone has a chance to react, and without an appeal. In cricket, the ability of players to call for reviews leads to delays and, most crucially, undermines the umpires' authority, with everyone encouraged to doubt them. We cannot follow that path.
Technology can contribute to football in two ways: for matter-of-fact, instant goalline decisions, and for retrospective punishments where players have committed red card offences which have been missed. On every other aspect of the game, for corners, offsides, cards and so on, we need to resist calls for further intervention.
Our referees are the best trained in the world, and they are employed specifically for their well-tested ability to judge and read a game, and to interpret events in a human way. This season they will be doing that backed by one simple piece of kit which takes only one decision – the most crucial in the game – out of their hands, while leaving their authority intact.
They will be better for it, and so will the game.

Source: The Guardian

CAT 2013


Management aspirants can register for CAT 2013 from August 5, Monday

Registrations for the Common Admission Test 2013 (CAT 2013) will begin from today and continue till September 26. Conducted by Indian Institutes of Management (IIMs) to admit students to their management programmes, CAT is one of the most sought-after management exams in the country.

Besides the 13 IIMs, several other B-schools like Faculty of Management Studies, Delhi; Mudra Institute of Communications, Ahmedabad; SP Jain Institute of Management and Research, Mumbai, etc, also consider CAT scores to admit students. This year, CAT will be conducted from October 16 to November 11 across 40 cities. Last year, there were 36 locations.

CAT aspirants can buy vouchers online directly from the CAT website. The vouchers are also available through Axis Bank branches and this year 30 more branches have been added.

"IIMs give students an opportunity to pursue their career through a personally and professionally-enriching experience. During their IIM journey, if they develop the right skill-sets and are able to put their best foot forward then there will be promising career choices. CAT is their first step to achieving their goals," says Rohit Kapoor, professor IIM Indore, convenor, CAT 2013.

Similar to last year's exam pattern, CAT will have two sections. The first section will focus on quantitative ability and data interpretation, while the second will test verbal ability and logical reasoning, with 70 minutes allotted to each section.

Aspirants can register and schedule for CAT 2013 on the website, www.cat2013.iimidr.ac.in or https://iim.prometric.com. For support and assistance, candidates can call the toll-free candidate care number 1-800-419-0080, Monday to Saturday, 9am to 6pm, from August 5.

To read more stories on management education, visit www.educationtimes.com
