Validations with TotallyLazy


TotallyLazy is a library that brings functional programming tools to the JVM, and to Java in particular: Option/Maybe, Either, persistent data structures, Sequence operations, and so on. Unfortunately, while quite useful and pleasant to use, the library is also practically undocumented (and no, I don't consider raw code sufficient documentation).

Since I've been using this library for some time now, both in pet projects and in production, I'd like to share a thing or two I've learned along the way. In particular, I'd like to show you how to do validations using the primitives offered by TotallyLazy.

What does it mean to validate?

TotallyLazy models a validation as a function on some data type that can either:

  • Be successful.
  • Be a failure with a list of error messages.

More precisely, a Validator is a Predicate<T> with an additional method ValidationResult validate(T instance). A ValidationResult is a type which is either successful or failed, and in the latter case contains a list of error messages (as Strings).

In Haskell we'd have something like this:

data ValidationResult = Successful | Failed [String]
validate :: a -> ValidationResult
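In this model validation results compose naturally: failures accumulate their messages, while success acts as an identity. A minimal Haskell sketch of the idea (the merge function below is mine, for illustration, not part of TotallyLazy):

```haskell
data ValidationResult = Successful | Failed [String]
  deriving (Eq, Show)

-- Combine two results: success is the identity, failures accumulate.
merge :: ValidationResult -> ValidationResult -> ValidationResult
merge Successful r            = r
merge r Successful            = r
merge (Failed xs) (Failed ys) = Failed (xs ++ ys)
```

This makes ValidationResult a monoid with Successful as the neutral element, which is exactly what lets a whole list of results be folded into a single one.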

Ok, how do I do that?

In this case, an example can be much more informative than a lot of words. Let's take the validation of an email as an example.

An Email in this example is a classic POJO, which I express here as a Haskell record because it's much more concise, and the point is not the structure of this class. The only peculiarity of this class is that every field is optional.

data Email = Email {fromAddress :: Maybe String,
	                toAddress :: Maybe String,
					subject :: Maybe String,
					body :: Maybe String,
					host :: Maybe String}

First of all, we want to write a Validator that checks that an optional String field actually contains a value:

/**
 * Builds a {@link Validator} that checks that a given {@link Option} contains
 * a value.
 * @param name Name of the datum
 * @return {@link Validator}
 */
private Validator<Option<?>> notEmpty(final String name) {
  return new Validator<Option<?>>() {
    public ValidationResult validate(Option<?> x) {
      return matches(x)
          ? ValidationResult.constructors.pass()
          : ValidationResult.constructors.failure("Parameter " + name + " missing");
    }

    public boolean matches(Option<?> x) {
      return x.isDefined();
    }
  };
}
We've defined a creation method that builds and returns a Validator<Option<?>> that validates an optional value by checking if it actually contains a value. Note that since Validator is also a Predicate, we define the corresponding #matches() method and use that in our #validate().

Next, we need to define an ad-hoc validation for the host: it's optional (in fact, ignored) if the system is configured to use a mock mailer. Let's assume we have a method that tells us whether the mock mailer is configured, and write this validation:

/**
 * Builds a validator that checks that the host is present (unless the SMTP
 * server is a mock).
 * @return {@link Validator}
 */
private Validator<Option<String>> validHost() {
  return new Validator<Option<String>>() {
    public ValidationResult validate(Option<String> host) {
      return matches(host)
          ? ValidationResult.constructors.pass()
          : ValidationResult.constructors.failure("Host is missing");
    }

    public boolean matches(Option<String> host) {
      return host.isDefined() || mailServerIsAMock();
    }
  };
}

Finally, we can use these Validators to "declaratively" validate an entire Email value as follows:

public ValidationResult validateEmailParameters(Email email) {
  return sequence(notEmpty("from").validate(email.getFromAddress()),
                  notEmpty("to").validate(email.getToAddress()),
                  notEmpty("subject").validate(email.getSubject()),
                  notEmpty("body").validate(email.getBody()),
                  validHost().validate(email.getHost()))
      // Fold the results into one, accumulating failure messages
      // (merge here is illustrative: use whatever combining function
      // your TotallyLazy version provides).
      .reduce(merge);
}

Note that #notEmpty() is defined in very general terms and can be reused in many other circumstances.

And now?

This is just a quick overview on the validation primitives offered by TotallyLazy. A good next step would be looking at all the fine combinators for Validators.

* * *

Roy Fielding on Versioning, Hypermedia, and REST



Some highlights from this InfoQ interview with Roy Fielding:

"hypermedia as the engine of application state" is a REST constraint. Not an option. Not an ideal. Hypermedia is a constraint. As in, you either do it or you aren’t doing REST. You can’t have evolvability if clients have their controls baked into their design at deployment. Controls have to be learned on the fly. That’s what hypermedia enables.

The techniques that developers learn from managing in-house software, where they might reasonably believe they have control over deployment of both clients and servers, simply don’t apply to network-based software intended to cross organizational boundaries. This is precisely the problem that REST is trying to solve: how to evolve a system gracefully without the need to break or replace already deployed components.

don’t build an API to be RESTful — build it to have the properties you want. REST is useful because it induces certain properties that are known to benefit multi-org systems, like evolvability. Evolvability means that the system doesn’t have to be restarted or redeployed in order to adapt to change.

there is no need to anticipate such world-breaking changes with a version ID. We have the hostname for that. What you are creating is not a new version of the API, but a new system with a new brand.

Websites don’t come with version numbers attached because they never need to. Neither should a RESTful API. A RESTful API (done right) is just a website for clients with a limited vocabulary.

I had to define a system that could withstand decades of change produced by people spread all over the world. How many software systems built in 1994 still work today? I meant it literally: decades of use while the system continued to evolve, in independent and orthogonal directions, without ever needing to be shut down or redeployed. Two decades, so far.

the initial reaction to using REST for machine-to-machine interaction is almost always of the form "we don’t see a reason to bother with hypermedia — it just slows down the interactions, as opposed to the client knowing directly what to send." The rationale behind decoupling for evolvability is simply not apparent to developers who think they are working towards a modest goal, like "works next week" or "we’ll fix it in the next release".

What we learned from HTTP and HTML was the need to define how the protocol/language can be expected to change over time, and what recipients ought to do when they receive a change they do not yet understand. HTTP was able to improve over time because we required new syntax to be ignorable and semantics to be changed only when accompanied by a version that indicates such understanding.

Software developers have always struggled with temporal thinking.

* * *

Learning functional programming


(A friend asked me for advice on where to start learning functional programming; I'm republishing my slightly adapted answer here.)

FP in a nutshell

First of all, a brief aside about functional programming (FP) in general. There is no universally accepted definition of FP, but its general traits are fairly common:

  • Functions are first-class citizens, manipulable as values. That is, they can be passed as parameters and returned from other functions (see JavaScript, for example).
  • Functions are typically pure, i.e. there are no side effects. In other words, the value returned by a function depends solely on the values of its parameters, so that for the same set of input values the function will always return the same result.
  • In many cases, you work mainly with immutable data structures. This means that, for example, modifying a list means creating a new one (efficiently in terms of space and time, of course, by using persistent data structures).
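As a tiny Haskell illustration of the last two points, here is a pure function over an immutable list; it returns a new list rather than mutating the original:

```haskell
-- A pure function: its result depends only on its argument.
doubleAll :: [Int] -> [Int]
doubleAll xs = map (* 2) xs
-- doubleAll [1, 2, 3] builds the new list [2, 4, 6];
-- the original [1, 2, 3] is left untouched.
```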

These characteristics bring a whole series of major benefits, for example in terms of:

  • Modularization
  • Composition
  • Reuse
  • Flexibility
  • Expressive power
  • Capacity for abstraction
  • Maintainability
  • The ability to easily exploit concurrency and parallelism

All of this has a solid theoretical foundation in the lambda calculus, a computational model equivalent in power to the Von Neumann one, but different from it.

The Von Neumann model is built around a shared memory and procedures that alter it. This essentially creates a temporal and spatial coupling between different functional units which, even when managed through objects, remains an intrinsic characteristic of that architecture. And that makes it practically impossible to easily exploit the parallelism offered by today's multi-core CPUs.

By contrast, the computational model based on the lambda calculus rests on transformations of immutable data structures by pure functions. As I said, this has a whole series of advantages, not least the possibility, in many cases, for the compiler/interpreter to exploit parallelism automatically and transparently.

A teaching approach

Some claim that FP is "alien", "difficult", "impractical", "useless". Nonsense! It's actually rather simple, and extremely useful and powerful.

The problem, rather, is simply that it's somewhat different from what has been the mainstream approach so far. But even the mainstream is realizing that FP is an essential requirement for working sanely with concurrency, parallelism and distributed systems. Just look at the ever-growing popularity of Clojure and Scala, for example, and above all at the fact that Java itself is evolving in this direction: version 8 already added lambda expressions and streams. On the .NET side, F# has long been a first-class citizen in Visual Studio. Not to mention Facebook, which is moving towards FP-style solutions like Flux, based on React.js and immutable-js.

The main obstacle you'll face at the beginning is that everything is approached from an angle somewhat different from the one you're used to, coming from a "standard" educational path. It's actually simple: there is a lot on the table, sure, but everything is built on solid, simple foundations. So one of the first pieces of advice I can give is: suspend judgment, and approach the subject with an open mind (with Shoshin, to avoid the Einstellung effect). If you apply yourself, I can assure you that you'll have a long series of "a-ha!" moments that will enrich you and improve you professionally for the rest of your life (besides being tremendously exciting and satisfying ;)

OK, but where do you start in practice? Some would say: Scala! But being a mixed bag full of special cases, one that tries to "merge" the OO and FP paradigms, it really risks confusing you. Besides not working well. It has a more advanced type system than Java's, but its type inference is rather limited compared to Haskell's, for example. Hence, in my opinion: the more different the language is from what you already know, the easier it is to keep your habitual thought patterns from making things harder for you.


Haskell for all!

For these reasons, I'd tend to recommend starting with Haskell. It's a pure, statically typed functional language that compiles to native code (so no JVM or .NET). Keep in mind that learning this style will improve your code in any language, even if you never use Haskell or the like in your day-to-day work.

The path I recommend is the one laid out by Chris Allen. To begin with, I warmly recommend the CIS 194 course by Brent Yorgey of the University of Pennsylvania. It's a very complete and well-made course. You can find some notes (covering the first two weeks, for now) on my own blog, but I suggest looking at them only after you've worked through Brent Yorgey's material. It's not excessively demanding, but it is very useful.

One more piece of advice: do the exercises. Reading and understanding is not assimilating; to really learn you need to apply things, and only then do you realize what you know and what you don't. The only learning is active learning. Also keep in mind that you start from the basics, so don't be in a hurry to crank out web applications in no time. From the basics, though, you progress fairly quickly, so persevere and you'll see ;)

As a development environment I use Emacs (see Chris Allen's instructions for this as well), but you could also use FP Complete: a free web-based IDE (whose site, incidentally, hosts many learning resources).

Once you've mastered a bit of Haskell, there are several possibilities:

  • There is a version of Haskell for the JVM: Frege.
  • Web frameworks for Haskell (in order of "investigation"):
  • Elm, a language inspired by Haskell for building client-side web applications based on the FRP paradigm. See the learning roadmap for more details.
  • PureScript, a Haskell-like language that "transpiles" to JavaScript. See also the book PureScript by Example, which I'm told is also a good introduction to FP.

Alternative track: Clojure

Clojure is also a good choice for learning the FP approach. It has more or less all the necessary attributes, and until recently it was my favorite language. Compared to Haskell:

  • It's dynamically typed, which helps you less in understanding/maintaining a codebase but is more flexible. You can shoot yourself in the foot with more flexibility, so to speak :P
  • It's homoiconic, i.e. the syntax you write code in is identical to that of its data structures. As the famous saying goes:

Code is data, data is code.

While it's a markedly functional language, partly by choice and partly due to limitations of the underlying platform, it is on one hand simpler than Haskell and on the other assists you less at compile time through the type system. There is core.typed, sure, but it's not the same thing. This difference shows a lot in mindset and workflow: the REPL is used heavily, and the emphasis is on modifying and evolving systems by manipulating them directly at runtime (cf. Smalltalk).

Of particular interest is its conception of time, values and identity (of state, in short), which greatly facilitates concurrency. Note that STM is also present in Haskell.

Clojure runs on the JVM, but also on .NET, and transpiles to JS, interoperating naturally with the host platform.

As for resources, if you want to dig deeper, I recommend:

  • (book) Programming Clojure, second edition. An introductory, concise book that I found very clear and useful. Probably the best resource to start with Clojure.
  • (book) Clojure Programming (yes, I know...). More or less universally considered the best book for beginners (it came out after the previous one). It covers the whole language and many practical aspects such as web programming, using various kinds of databases, testing, etc. But it's a doorstop of almost 600 pages, which is why I listed it second.
  • (video) Clojure Inside Out: a course taught by giants, interesting and useful.
  • (video) LispCast's Introduction to Clojure, another video series.
  • (online) 4clojure: heaps of exercises (with Clojure, too, it's important to do them!).

Things I've examined less closely, but worth mentioning anyway:

From a development standpoint I use Emacs, but I'd also point out:

  • Light Table, a futuristic IDE (written in ClojureScript), easy to use, which took Kickstarter by storm a couple of years ago.
  • Cursive Clojure: an IDE based on IntelliJ IDEA that is rapidly gaining followers and seems fairly complete to me (it's evolving quickly).

There would be much more to say, but this will do for now. Further areas to investigate, if you're interested:

Personally, I can tell you that I'm applying quite a few of the things I've learned at work as well. In the Java world, for example, I'm using TotallyLazy a lot: unfortunately there is little documentation, but it has a ton of very interesting functional stuff. Even with just the concepts of Callable and Sequence you can do a lot, and it enables quite a few functional idioms (even if they aren't always worthwhile, given the verbosity of the language).

Also interesting to explore is lazyrecords (by the same authors), which provides functional data access in the style of .NET's LINQ.

On the JavaScript side, interesting things to explore are:

There would be a huge amount more to say/explore/discuss, but I think I've given enough material for now.

* * *

CIS 194, week 2: Algebraic Data Types



This is a post about week 2 of the CIS 194: Introduction to Haskell course by Brent Yorgey, from the University of Pennsylvania.

This week is all about Algebraic Data Types (ADT), not to be confused with Abstract Data Types (also ADT) which are another topic.

Haskell has enumeration types (like Java, but still less verbose and more intuitive). An example:

data Food = Pizza
          | Bacon
          | Salad
  deriving Show

We have just declared a new (algebraic data) type called Food, with three data constructors which are the only values of the type Food.

We can now define new functions on the new data type using pattern matching:

isTempting :: Food -> Bool
isTempting Salad = False
isTempting _ = True

But in Haskell, enumeration types are only a special case of algebraic data types. One common class of ADT is the sum type (a.k.a. tagged union). A simple example of an ADT that is not an enumeration:

data OperationResult = OK
                     | Error Integer
  deriving Show

Here we see that Error is a data constructor that takes an argument of type Integer. We can construct new OperationResult values using the Error data constructor:

success :: OperationResult
success = OK

failure :: OperationResult
failure = Error 404

OK is a value of type OperationResult (it's a data constructor with zero arguments), but Error by itself is not: we have to apply it to an Integer value to build an OperationResult.
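A data constructor with arguments is in fact an ordinary function (here, Error has type Integer -> OperationResult), so it can be passed around like any other function. A small illustration (the data declaration is repeated so the snippet stands alone):

```haskell
data OperationResult = OK | Error Integer
  deriving (Eq, Show)

-- Error is a function Integer -> OperationResult, so we can map it:
failures :: [OperationResult]
failures = map Error [400, 404, 500]
```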

This brings us to polymorphic data types. Specifically, we can have type signatures with variables, just as we can have function implementations with variables. The difference is that while in ordinary code variables are symbols bound to values, in types, variables are bound to types of values. In other words, in type signatures types play the role that values play in code: we're reasoning on a higher, more abstract level. Take a moment to contemplate this fact.
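The classic example of a polymorphic ADT is an optional value; here is a homegrown version (the names Perhaps, Nope and Have are made up to avoid clashing with the Prelude's Maybe):

```haskell
-- 'a' is a type variable, bound to a concrete type at each use site.
data Perhaps a = Nope | Have a
  deriving (Eq, Show)

anInt :: Perhaps Int
anInt = Have 3

aString :: Perhaps String
aString = Nope
```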

Formally, in Haskell an ADT is a type with one or more data constructors, each of which can take zero or more arguments.

A general example that shows how to build values is:

isSafeDiv :: Double -> Double -> OperationResult
isSafeDiv _ 0 = Error 1000
isSafeDiv _ _ = OK

We can also use pattern matching to make decisions based on the structure of the OperationResult value and bind variables to the arguments:

isSuccessful :: OperationResult -> Bool
isSuccessful OK = True
isSuccessful (Error n) = False

When an algebraic data type has a single data constructor, it's idiomatic in Haskell to give the constructor the same name as the type itself. Example:

data Person = Person String String Int
  deriving Show

This can be done since types and data constructors live in separate namespaces.

Pattern Matching

In general, pattern matching is a way to know what data constructor has been used to create a value of a certain ADT, and to take apart its arguments. Effectively, in Haskell this is the only way to make decisions.

pat ::= _
      | var
      | var @ (pat)
      | (Constructor pat1 pat2 ... patn)
In order:
  • _ is a wildcard that matches anything.
  • A variable pattern matches anything and binds the matched value to the variable.
  • We can pattern match against a pattern and still bind the entire value to a variable (an as-pattern).
  • We can pattern match against a data constructor and its arguments (even recursively).
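The as-pattern form (var @ (pat)) deserves a quick example: it destructures a value and binds the whole of it at the same time (the data declaration is repeated so the snippet stands alone):

```haskell
data OperationResult = OK | Error Integer
  deriving (Eq, Show)

describe :: OperationResult -> String
describe OK          = "fine"
describe r@(Error n) = "code " ++ show n ++ ", from " ++ show r
```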

It's worth noting that a type like Int can be viewed as an ADT:

data Int = 0 | 1 | 2 | ...

Indeed, we can pattern match on its values. But, perhaps obviously, they are not implemented like that in the compiler.

Case expressions

A way (the only one actually) to do pattern matching is by using case expressions:

case exp of
	pat1 -> exp1
	pat2 -> exp2

For example, we could reimplement the isSuccessful function from earlier using a case expression:

isSuccessful :: OperationResult -> Bool
isSuccessful op = case op of
                    OK -> True
                    (Error n) -> False

However, the first version is more elegant. Indeed, the syntax for pattern matching in a function definition is just syntactic sugar over case expressions.

Recursive algebraic data types

It's interesting to note that ADTs can be recursive. For example, let's define a list of integers:

data IntList = Empty
             | Cons Int IntList

This definition can be read as: "an IntList is either an Empty one or an Int value followed by an IntList". This kind of definition is quite clear and elegant (see Church encoding). For example:

-- [1,2,3] can be represented as an IntList:
l :: IntList
l = Cons 1 (Cons 2 (Cons 3 Empty))

A recursive ADT naturally leads to recursive functions. For example, to calculate the sum of all the values in an IntList:

calcSum :: IntList -> Int
calcSum Empty = 0
calcSum (Cons n ns) = n + calcSum ns
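The same recursive shape works for any function that consumes an IntList. For instance, its length:

```haskell
data IntList = Empty
             | Cons Int IntList

intLength :: IntList -> Int
intLength Empty       = 0
intLength (Cons _ ns) = 1 + intLength ns
```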

So, we've seen so far that type signatures can have variables, and types can be recursive. Sounds like we could have a Turing-complete type system... indeed, we have one. Someone even implemented a LISP interpreter that runs entirely in the Haskell type system!

That's all for this week. Remember: do the exercises!

* * *

Introduction to Haskell, week 1



A while ago I finally started studying Haskell, in particular by following the CIS 194: Introduction to Haskell course by Brent Yorgey from the University of Pennsylvania. This is the first resource of the curriculum I plan to follow to learn Haskell (thanks a lot to Chris Allen for laying out a path to FP enlightenment :)

I've worked through the first 4 weeks now, but I decided at this point to switch gears and go back to recap what I've learned so far.

Without further ado, let's recap CIS 194, week 1.

What is Haskell?

Haskell (named after Haskell Curry, known for his work on combinatory logic and for the Curry-Howard correspondence) is a lazy, statically typed, purely functional programming language created in the 1980s by a committee of academics. It's very much alive today, being one of the most advanced statically typed languages out there.

Haskell is:


  • Functions are first-class values. That is, they can be passed to other functions or returned by them like any other values.
  • The computation model is based around evaluating expressions, not executing instructions. In other words, it's not based on the Von Neumann architecture (instructions that operate on a shared memory), but on the lambda calculus (you can think about it in terms of composing functions to transform streams of immutable data).


Every expression is referentially transparent. This means that:

  • Everything is immutable. Every "mutation" is modeled as a transformation, a function that doesn't change the original value but creates a new one.
  • There are no side effects (well, there are: modeled with monads to retain purity, but you don't need to know what a monad is to use them!).

As a consequence of the previous points, calling the same function with the same arguments results in the same output, always.

This approach has a number of very nice benefits that, once you wrap your head around this paradigm, you won't give up too easily:

  • Equational reasoning: thinking about the code becomes much easier. Refactoring becomes a breeze.
  • Parallelism: using multiple cores is much easier when you know that functions are guaranteed not to interfere with one another. There is no shared state!

In general, with static types and pure functions, programs become much easier to maintain, refactor, debug and reason about.


In Haskell values are computed only when needed (call-by-need evaluation strategy).


  • It's easy to define new control structures just by defining a new function. Contrast this with languages like Clojure where you need macros to achieve that, or languages like Java where it's basically impossible.
  • It's easy to work with infinite data structures, since values are only computed when needed. You can achieve the same in idiomatic Clojure by using the seq abstraction and lazy-seq, in Java 8 using Streams, etc.
  • It enables a compositional style (we will see it down the road with wholemeal programming, currying and point-free style).
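A small taste of what laziness buys: we can define an infinite list and take only what we need, because elements are computed on demand:

```haskell
-- An infinite list of naturals: perfectly fine under lazy evaluation.
naturals :: [Integer]
naturals = [0 ..]

-- Only the five demanded squares are ever computed.
firstSquares :: [Integer]
firstSquares = take 5 (map (^ 2) naturals)
```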

Downside: it becomes harder to reason about the time and space characteristics of programs.


The course revolves around three key areas:

  • Types
  • Abstractions
  • Wholemeal programming


Static type systems can be annoying, and some of them really are. But that's not because type systems are inherently annoying; it's because some of them are insufficiently expressive (Java's and C++'s, for example).

A type system (especially the Haskell one):

  • Gives you clarity of thinking and helps you to design and reason about programs. Types become an organizing principle, a precise and powerful tool to think about and express abstractions. Using them, you are able to reason at a higher level, in a systematic way.
  • Is a form of documentation, always in sync with the actual code.
  • Turns a lot of runtime errors into compile-time ones. Computers can do complex, repetitive and clearly specified things efficiently: why not delegate to them some of the burden that a human has to carry while writing software?

Once you start to get the hang of it, a (good) type system becomes an invaluable ally. It feels liberating, since it can help you long before you write the first line of code: it helps you in the design of the system.


In some way, designing and maintaining software is a battle against repetition: you frequently need to take similar things and factor out their commonality (a process known as abstraction).

Haskell gives you a lot of abstraction power: parametric polymorphism, higher-order functions, type classes, etc. Its type system is also a powerful, methodical and sound tool for thinking mathematically about them.

Wholemeal programming

Quoting Ralf Hinze:

“Functional languages excel at wholemeal programming, a term coined by Geraint Jones. Wholemeal programming means to think big: work with an entire list, rather than a sequence of elements; develop a solution space, rather than an individual solution; imagine a graph, rather than a single path. The wholemeal approach often offers new insights or provides new perspectives on a given problem. It is nicely complemented by the idea of projective programming: first solve a more general problem, then extract the interesting bits and pieces by transforming the general program into more specialised ones.”

In short, it's about working with abstractions rather than concrete instances of the problem/solution space: with groups and types of things instead of single instances.
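A concrete instance of the wholemeal style in Haskell: describe the transformation of the whole list, instead of looping over single elements by hand:

```haskell
-- Wholemeal: a pipeline over the entire list, no indices in sight.
sumOfEvenSquares :: [Int] -> Int
sumOfEvenSquares = sum . map (^ 2) . filter even
```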

Next, Brent Yorgey goes on to show the basic types (scalars and lists), how to define and combine functions, etc. Lots of good stuff.

* * *

Data types as graphical shapes



A very interesting idea that I've just found in a post by Aaron Spiegel: explaining basic FP constructs like map, filter and fold/reduce graphically (nothing new, I've done that several times myself), but with an interesting twist: representing types as shapes. That's a clever trick to better explain HOFs (Higher-Order Functions).

However, visual languages rapidly become inadequate for expressing higher-level concepts. For example, you can sort-of encode algebraic data types using different colors. Then maybe you can express optional values using full or empty shapes. But, as you can probably see, this medium has limits, and we reach them pretty fast.

In some sense it's like understanding certain mathematical concepts using geometry (as a visual learner, that's how I really understood integrals), but one of the reasons mathematicians use textual notation is that it's much more concise and expressive.

* * *

Summary: *-Driven* do not change anything



This is a short summary of my takeaways of the article *-Driven* do not change anything by Michał Bartyzel.

  • Mental frameworks (like *-Driven*) need to be interpreted in the light of the appropriate context, and applying them to unknown contexts requires experience.
  • These frameworks are usually formed over many years of experience by induction, and adapted to other contexts by deduction.
  • Instead of focusing on context- and experience-dependent mental frameworks, invest in developing the fundamentals:
    • Responsibility
    • Encapsulation
    • Composition

* * *

Interesting links



Deliver Working Software Frequently

Before you can run, you need to learn how to walk. This is a good primer on agility. Focus first on delivery.

Until you can deliver, work on delivery. Work on nothing else until then. The rest will come in due time.

I reckon your message broker might be a bad idea.

Message brokers can be a bad idea if treated like "infallible gods", because they aren't. Think about three good design principles for reliable systems:

  • Fail fast
  • Process supervision
  • End-to-End principle

In the end, it isn’t so much about message brokers, but treating them as infallible gods. By using acknowledgements, back-pressure and other techniques you can move responsibility out of the brokers and into the edges. What was once a central point of failure becomes an effectively stateless server, no-longer hiding failures from your components. Your system works because it knows that it can fail.

PostgreSQL 9.4 - Looking Up (With JSONB and Logical Decoding)

PostgreSQL 9.4 is going to be exciting with JSONB support. As David Lesches says:

JSONB has been committed to Postgres 9.4, making Postgres the first RDBMS with rock-solid support for schemaless data

* * *

On "Enlightenment"



The more I think about it, the more I see it. They say you should expose yourself to different kinds of programming languages, and more specifically to different kinds of paradigms. The key to that advice is getting exposure to (and hopefully proficiency in) different ways of thinking. The more high-level thinking tools you have at your disposal, the more effective and efficient you are at identifying and solving problems.

You become able to really see the problem and build a general solution. You start to deconstruct what you know, the constraints and the problem at hand, and you become able to build general solutions by composing simple things. Systematically. When you start to think in terms of data transformations via map, filter, reduce, juxt, etc. you realize that you are operating on a higher level. You start to really see what the single responsibility principle is all about. And then, you start to see all the accidental complexity that used to slow you down and hide the essence of the solution behind reams of low-level details (syntactical or otherwise). When your mind is not distracted by accidental complexity, you can step back and think clearly about what you have and what is really needed.

Unfortunately, this kind of "enlightenment" has some downsides:

  • You become painfully aware that you could do much better and move much faster than you do when developing in less advanced languages (for example: Java). Here the frustration starts.
  • Your "yet-to-reach-enlightenment" colleagues don't see things the way you do. They don't understand (yet). This is the blub paradox at work.

Combine both factors, and you have a recipe for misery. It's depressing seeing friends (and yourself) wasting time with unnecessary complexity.

Square wheels

However, even if it requires work, practice and dedication to reach a master level, I think that it's worth it. I'm certainly not a master, and yet I'm seeing benefits of this learning journey even when I'm not writing LISP code. Eric Raymond was right:

LISP is worth learning for a different reason — the profound enlightenment experience you will have when you finally get it. That experience will make you a better programmer for the rest of your days, even if you never actually use LISP itself a lot.

So go ahead and learn some Clojure, Haskell and Factor (for example). You don't need to spend a lot of time to get a feel of what's out there. You will be a better programmer anyway, and I think this is a worthwhile goal.

* * *

On frameworks



Frameworks remove your ability to solve your specific problems from first principles. They opt you out of innovation, simplicity & elegance — Steve Purcell

* * *