Real World Risk Institute – Day 3

I just got back home after an amazing third day of Nassim Taleb’s Real World Risk Institute workshop. Today brought lots of guest speakers, making it the most interesting day so far! This is my personal report.

After a first day on black swans and a second day on antifragility, today we spent the full day on complexity. We started with a lecture by Raphael Douady, who finally took the stage after having tried to spoil Taleb’s sessions during Day 1 and Day 2. The two are old friends and their relationship is part of the show of these days. The second guest was Joe Norman, holding a session on complexity that I loved. It covered material I am very familiar with and sparked very interesting discussions among the attendees.

In the afternoon Trishank Karthik, a former RWRI attendee, held a lecture on computational complexity, with many references to Turing, Gödel and Hofstadter, allowing me to connect with my past more than I had ever expected. The day was closed by Robert J. Frey, with a session on emergent behaviour in agent-based ecosystems.

Among sooooo many powerful insights, the following two are my selection for today’s report.

Reality is sometimes undecidable, often intractable

Not all of the reality we know and use is easy to model and treat exactly. In truth, some aspects of reality cannot be modelled exactly at all. Concepts like uncountable infinite sets (the real numbers, for instance), or the fact that no sufficiently powerful formal system can prove its own consistency by using its own rules, as Kurt Gödel showed, challenge our ability to map reality, whatever our perception suggests.
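
The computational face of this limit is Turing’s halting problem, a close cousin of Gödel’s result. Here is a minimal Python sketch of the diagonal argument (not something shown at the workshop; the `halts` oracle is hypothetical, which is exactly the point):

```python
def halts(f, x):
    # Assume a perfect oracle existed that returns True iff f(x) terminates.
    # No total, always-correct implementation can exist; this stub merely
    # stands in for the assumed oracle.
    raise NotImplementedError("no such oracle can exist")

def contrarian(f):
    # Do the opposite of whatever the oracle predicts about f(f).
    if halts(f, f):
        while True:       # loop forever if f(f) is predicted to halt
            pass
    return "halted"       # halt if f(f) is predicted to loop

# Feeding contrarian to itself yields a contradiction either way:
# if halts(contrarian, contrarian) were True, it would loop;
# if False, it would halt. Hence no such oracle can be both total
# and correct: the halting problem is undecidable.
```

Whole families of real-world questions reduce to this one, which is why “just compute the answer” is not always an option.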

This is a subject I have loved for many years, and if you are interested I suggest reading Hofstadter’s Gödel, Escher, Bach: An Eternal Golden Braid, one of the best books of my life.

What has it all got to do with risk modeling? Let me take you there.

We always decide, sometimes understand

Joe Norman made an interesting point during his session. He presented reductionism as the assumption that, to understand the world (or any system), we can:

  1. Decompose it into parts.
  2. Study the properties of those parts.
  3. Put the parts back together into a big picture, this step being trivial.
  4. Understand everything through this lens.

This view of the world is often conflated with science itself, while complex systems science acknowledges emergent properties that are not found in any single part of a given system and on which reductionism can shed no light.

For example, while water is defined by its molecule (H₂O), liquid water becomes a gas or a block of ice because of a change in how those molecules interact, not a change in the single molecules.
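
Emergence can be illustrated with a toy example of my own (not from the workshop): in Conway’s Game of Life, every cell obeys the same trivial local rule, yet the “glider” pattern travels across the grid, a behaviour no individual cell possesses.

```python
from collections import Counter

def step(live):
    """Advance one generation; `live` is a set of (x, y) cells."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next turn with exactly 3 neighbours,
    # or with 2 neighbours if it is already alive.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# The classic glider: after 4 steps the same shape reappears shifted by (1, 1).
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
# The translation is emergent: no cell "knows" how to move.
print(state == {(x + 1, y + 1) for (x, y) in glider})  # True
```

The movement lives in the interactions, not in the parts, which is precisely what reductionism misses.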

Long story short

So, if

  1. reality is largely made of paradoxically undecidable systems or, in a slightly better hypothesis,
  2. reality is made of agents that express completely different dynamics depending on whether they are considered as a whole or as parts, but
  3. we still keep making apparently intractable decisions every day (evaluating, for example, the cost of a detour to the supermarket on our way back home to buy an apple),

then humans (and animals, to a given extent) must be powerful heuristics engines, capable of navigating reality in complex environments with incomplete information.

That basically means we can also treat risk in scenarios we do not fully understand at all levels, by operating at the right level of abstraction and intuition.

IYI or cravattato?

In an old article of his, Gabriele Lana made a point, writing that

“cravattato” [is] that class of suit & tie people who place themselves between those producing value [and those] who fund [its creation].

[…]

The “cravattato” role has been carefully engineered over the years. Here are a few of their characteristics:

  • They sit in a position of power but usually have no specific skills (see the Peter principle).

  • They give orders, but always find someone more responsible than themselves.

  • They are bureaucratic masters of puppets.

  • They are masters of KPIs.

(“Cravatta” means “tie” in Italian).

Joe Norman, in one of his slides today, showed the characteristics of the IYI (Intellectual Yet Idiot):

  • Sees order and assumes top-down design (with a reductionist twist).
  • Doesn’t know the difference between construction and growth.
  • Makes a career by insisting their expertise is the only thing keeping things orderly.
  • Destroys emergent order with a command-and-control plan-oriented attitude.

I had the impression that Joe Norman and Gabriele Lana are talking here of very similar animals…

To people like that I dedicate this idea of Goethe’s, quoted yesterday by Nassim Taleb:

Never attribute to malice that which is adequately explained by stupidity.

(Photo by Marek Szturc on Unsplash)

One comment on “Real World Risk Institute – Day 3”

  1. Interesting how the same IYI role can be described in terms of complexity: some people are (or consider themselves) “too good in the complicated domain” to accept the paradigm shift needed in the complex domain.
