Over the last few months, questions not only of wealth and finances, but of the underpinnings of our entire financial structure, have become paramount in many of our minds.
We (and here "we" usually means you and me, but right now means the world collectively) have largely misunderstood the world's financial structure over recent years. (Those who HAVE understood it accurately are not only more secure, but substantially richer by now.)
Most of us are current on “what went wrong.”
Most of us now understand how the financial state of affairs grew so out-of-bounds that "what went wrong" was the only foreseeable outcome. (Easy to see in hindsight.)
What many of us are seeking now, beyond the next tactical or even strategic step, is a better basis for understanding the world's financial system for the future. This means we are re-examining what we are using as a "theoretical base" for national and even global economic modeling.
This is not impractical or far-fetched. In fact, Kurt Lewin is credited with saying, "There is nothing so practical as a good theory."
As a physical chemist, I start from a place of knowing little about economics, economic systems, or world models. However, I know a great deal about modeling large-scale, complex physical systems; this was the subject of my graduate work and dissertation, and it underlies each of the patents I hold (all four of them; see the endnotes at the bottom of this posting).
So in the spirit of discourse, let’s start with one presumptive theory, examine it, pull it apart, see where it has strengths and deficiencies, and move on. And we can keep on doing this until we arrive at something that works.
(As an aside — I’m not only a scientist, but also an entrepreneur — so the “best of the best” of my thoughts will be kept for the clients funding my next company. But this blog post lets me share with you the way we would if we were having a discussion at a cocktail party, or after a seminar. It gives me a means of organizing my thoughts supporting the next round of inventions, and you something to read, discuss, and consider. And feel free to email me at alianna1 at gmail dot com or post your comments to this blog.)
Eric D. Beinhocker purports to propose a good theory in his 2006 book, The Origin of Wealth: Evolution, Complexity, and the Radical Remaking of Economics. (If you jump to the Amazon site, you'll be able not only to "search inside" but, more importantly, to read the reviews, which actually are very useful. In particular, Craig Howe does a good job of summarizing the major premises, and, more importantly, A.J. Sutter correctly identifies the book's major weaknesses.)
But let's start with a quick, one-paragraph overview. Beinhocker's main premise is that the equilibrium theory of "traditional economics" is substantively unable to serve as a useful or predictive model for macro-economic events at any level.
Specifically, Beinhocker states (pp. 42-43, hardcover version):
"By the end of the twentieth century, Traditional Economics was thoroughly dominated by the Neoclassical paradigm with its foundational notions of rational, optimizing consumers and producers making choices in a world of finite resources, and (with the exception of investments in technology) those choices being bounded by decreasing returns. This combination of self-interest and constraints then drives the economy to the Pareto optimal point of equilibrium … the Neoclassical general equilibrium theory of Arrow and Debreu ostensibly answered the great question of wealth allocation."
Beinhocker sums up his premise (p. 43) as:
"Nonetheless, despite the unquestionably significant impact of Traditional Economics, the unease expressed at the beginning of the chapter remains valid. The economist Werner Hildenbrand once compared general equilibrium theory to a gothic cathedral, of which Walras and his contemporaries were the architects, and the great economists of the twentieth century were the master builders. Unfortunately, as we will see in the next chapter, the cathedral was built on very shaky ground."
Where Beinhocker's book (and premise) breaks down is that he attempts to explain why general equilibrium theory doesn't apply well to economic modeling without the use of equations!
I credit an old friend and colleague of mine, Artie Briggs, with consistently referring to mathematics as a "compact notational language." When Sutter characterizes Beinhocker's book as being (at least in part) "sloppy and superfluous," he is referring partly to the content. (I largely agree with Sutter, and recommend his review as an even more worthwhile read than Origin; it is also much shorter!) But part of the problem is trying to explain a complex physical system using words instead of equations. (The same comment will hold when Beinhocker attempts to describe Complex Adaptive Systems, or CAS.)
In order to make much sense out of an “equilibrium-based approach” to modeling, we need to first understand equilibrium theory. A good, solid year of graduate-level statistical thermodynamics is a pretty good start for this. But then — and this truly is an essential step — a person really needs to go beyond the equations.
We start by writing down the basic Helmholtz free energy equation: A = E - TS, where A is the free energy, E the internal energy, T the temperature, and S the entropy.
We need to internalize concepts such as "internal energy" and "entropy," and even "temperature." (This is not trivial!) And then, we need to internalize what happens as we do the free energy minimization (which is what gives us equilibrium).
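To make "minimizing A gives us equilibrium" concrete, here is a small, purely illustrative Python sketch (my own toy example, with arbitrary units; the names and parameter values are inventions for illustration). For a two-level system with level energies 0 and eps, we treat the upper-level occupation probability p as the free variable, write A(p) = E - TS, and minimize numerically. The minimum should land on the Boltzmann value p* = 1/(1 + exp(eps/kT)), which is the textbook equilibrium occupation.

```python
import math

def free_energy(p, eps=1.0, T=1.0, k=1.0):
    """Helmholtz free energy A = E - T*S for a two-level system.

    p is the probability of occupying the upper level (energy eps);
    the lower level sits at energy 0.  All units are arbitrary.
    """
    E = p * eps                                              # average energy
    S = -k * (p * math.log(p) + (1 - p) * math.log(1 - p))   # mixing entropy
    return E - T * S

# Minimize A over p by a fine scan; the minimum should coincide with
# the Boltzmann occupation p* = 1 / (1 + exp(eps / (k*T))).
ps = [i / 10000 for i in range(1, 10000)]
p_star = min(ps, key=free_energy)
p_boltzmann = 1 / (1 + math.exp(1.0))   # eps = T = k = 1
print(p_star, p_boltzmann)
```

The point of the exercise is that equilibrium is not assumed anywhere; it falls out of the minimization.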
The simplest model we can use for this is the Ising model for a system of bistate particles (the starting point for spin glass theory). When we have a simple model, with no nearest-neighbor interactions, we get a system in which we can find equilibrium points directly. (The challenge, of course, is figuring out what elements in the economic world correspond to these "bistate" particles.)
When we introduce nearest-neighbor interactions, we get our first useful model of a phase transition, where a system can go from one free energy equilibrium point to another.
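Here is the standard mean-field (Bragg-Williams) version of that statement, again as an illustrative Python sketch in arbitrary units. With coupling J to z nearest neighbors, the free energy per spin, as a function of the magnetization m, has a single minimum at m = 0 above the critical temperature Tc = zJ/k, and two symmetric minima below it. That splitting of one minimum into two is the phase transition.

```python
import math

def mf_free_energy(m, J=1.0, z=4, T=1.0, k=1.0):
    """Mean-field free energy per spin for a nearest-neighbor Ising model.

    m is the magnetization (order parameter), J the coupling strength,
    z the number of nearest neighbors.  Units are arbitrary.
    """
    p_up, p_dn = (1 + m) / 2, (1 - m) / 2
    energy = -0.5 * z * J * m * m
    entropy = -k * (p_up * math.log(p_up) + p_dn * math.log(p_dn))
    return energy - T * entropy

def minima(T, J=1.0, z=4):
    """Locate the local minima of the free energy curve by a fine scan."""
    ms = [i / 1000 for i in range(-999, 1000)]
    a = [mf_free_energy(m, J=J, z=z, T=T) for m in ms]
    return [round(ms[i], 3) for i in range(1, len(ms) - 1)
            if a[i] < a[i - 1] and a[i] <= a[i + 1]]

print(minima(T=5.0))   # above Tc = z*J = 4: single minimum at m = 0
print(minima(T=3.0))   # below Tc: two symmetric minima (ordered phases)
```

The two symmetric minima below Tc are the two "phases"; which one the system actually sits in depends on its history, which is exactly the door through which metastability enters.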
Should we be able to apply spin glass theory (or any similar interpretation) to the economic system at large, we’d have access to a mathematical formalism that DOES describe what just happened: A transition from a highly non-equilibrium but “metastable” state to a true “free energy minimum” (or real equilibrium state).
In short, we have been, over the past several years, in a highly "metastable" state. We have reached the limits at which that metastability could persist. (Sometimes fluctuations are all it takes; sometimes pushing the metastability too far induces the transition.)
We have had a “cascade” in which we have gone from the previous highly metastable state of expanded housing credit to one that is more realistic.
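One way to picture such a cascade, strictly as a loose, hand-waving analogy, is a tilted double-well free energy A(m) = m^4/4 - m^2/2 - h*m, where m is the state of the system and the tilt h stands in for whatever stress is accumulating (identifying m and h with real economic quantities is exactly the open modeling question). In the Python sketch below, simple downhill relaxation keeps the system in its original, now-metastable well as h grows, until that well vanishes at the spinodal (h = 2/(3*sqrt(3)), about 0.385); then the state cascades to the other well.

```python
def dA(m, h):
    """Gradient of the tilted double-well free energy A(m) = m**4/4 - m**2/2 - h*m."""
    return m**3 - m - h

def relax(m, h, steps=20000, lr=0.01):
    """Steepest-descent relaxation to the nearest free-energy minimum."""
    for _ in range(steps):
        m -= lr * dA(m, h)
    return m

# Prepare the system in the m < 0 well, then slowly increase the tilt h.
# The state stays in the metastable well (m < 0) until that well vanishes
# at the spinodal, then cascades to the m > 0 well.
m = -1.0
history = []
for i in range(9):
    h = round(0.05 * i, 2)        # h = 0.00, 0.05, ..., 0.40
    m = relax(m, h)
    history.append((h, round(m, 3)))
for h, m_now in history:
    print(h, m_now)
```

The qualitative lesson survives the crudeness of the model: the state barely moves while the stress builds, and then the transition, when it comes, is sudden and large.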
The next questions we should ask ourselves are:
1) Is this new state of economic affairs really the "stable" free energy minimum state, or have we just reached a "temporary" minimum, with further cascades still in store? (For those watching the credit crisis, this is a realistic concern), and
2) If we can apply this kind of free-energy model (this time, backing off the assumption that we are at, or will reach, a free energy equilibrium), what exactly are we modeling? How are we modeling it? In essence, what meaning are we ascribing to the variables in the system?
In short, we are trying to solve a word problem.
As most of us who have taken (or taught) algebra will recall, the tough part is not so much working out the equation. It is figuring out what the variables are, and how they relate to each other. It is figuring out the meaning of what we are modeling. What is important, what is not?
Enough for today. When I pick up again, two of the topics I’ll address will be the remainder of Beinhocker’s book (Complex Adaptive Systems, or CAS), and — what might be more important — how we can start selecting our independent and dependent variables, and making a useful model of the economic system.
1. System and Method for Evidence Accumulation and Hypothesis Generation, A.J. Maren et al., USPTO 7,421,419; filed April 12, 2006; granted 2008.
2. System and Method for Predictive Analysis and Predictive Analysis Markup Language, A.J. Maren, USPTO 7,389,282; filed Nov. 2, 2005; granted 2008.
3. Knowledge Discovery Method with Utility Functions and Feedback Loops, A.J. Maren & S. Campbell, USPTO 7,333,997; granted Feb. 19, 2008.
4. Sensor Fusion Apparatus and Method, A.J. Maren et al., USPTO 5,850,625; filed March 13, 1997; granted Dec. 15, 1998. Assignee: Accurate Automation Corporation.
Abstract: “The invented apparatus fuses two or more sensor signals to generate a fused signal with an improved confidence of target existence and position. The invented apparatus includes gain, control and fusion units, and can also include an integration unit. The integration unit receives signals generated by two or more sensors, and generates integrated signals based on the sensor signals. The integration unit performs temporal and weighted spatial integration of the sensor signals, to generate respective sets of integrated signals supplied to the gain control and fusion units. The gain control unit uses a preprogrammed function to map the integrated signals to an output signal that is scaled to generate a gain signal supplied to the fusion unit. The fusion unit uses a preprogrammed function to map its received integrated signals and the gain signal, to a fused signal that is the output of the invented apparatus. The weighted spatial integration increases the fused signal’s sensitivity to near detections and suppresses response to detections relatively distant in space and time, from a detection of interest. The gain control and fusion functions likewise suppress the fused signal’s response to low-level signals, but enhances response to high-level signals. In addition, the gain signal is generated from signals integrated over broad limits so that, if a detection occurred near in space or time to a detection of interest, the gain signal will cause the fused signal to be more sensitive to the level of the detection of interest.”
5. Knowledge Discovery System, A.J. Maren et al., USPTO Application 20050278362 (patent pending).
Abstract: “A knowledge discovery apparatus and method that extracts both specifically desired as well as pertinent and relevant information to query from a corpus of multiple elements that can be structured, unstructured, and/or semi-structured, along with imagery, video, speech, and other forms of data representation, to generate a set of outputs with a confidence metric applied to the match of the output against the query. The invented apparatus includes a multi-level architecture, along with one or more feedback loop(s) from any level n to any lower level n−1 so that a user can control the output of this knowledge discovery method via providing inputs to the utility function.”
6. System for Hypothesis Generation, A.J. Maren, USPTO Application 20070156720; filed Aug. 31, 2006; published July 5, 2007.
Abstract: “A system for performing hypothesis generation is provided. An extraction processor extracts an entity from a data set. An association processor associates the extracted entity with a set of reference entities to obtain a potential association wherein the potential association between the extracted entity and the set of reference entities is described using a vector-based belief-value-set. A threshold processor determines whether a set of belief values of the vector-based belief-value-set exceed a predetermined threshold. If the belief values exceed a predetermined threshold the threshold processor adopts the association.”
Related Work: (collection will increase over time)
Principal Investigator or Co-PI on seven Phase 1 SBIR / STTR contracts (DoD/NSF); PI on four Phase II SBIR/STTR contracts.
— Intelligent Agents Using Situation Assessment (Report Abstract), A.J. Maren & R.M. Akita, Phase 1 SBIR for the National Science Foundation.