There can be no other truth
to take off from this—I think, therefore I exist—ie. the Cartesian *cogito*. There we have the absolute truth of
consciousness becoming aware of itself.
Every theory which takes man out of the moment in which he becomes aware
of himself is, at its very beginning, a theory which confounds the truth, for
outside the Cartesian *cogito*, all views are only probable, and a
doctrine of probability which is not bound to a truth dissolves into thin
air. In order to describe the probable,
you must have a firm hold on the true.
Therefore, before there can be any truth whatsoever, there must be an
absolute truth; and this one is easily arrived at; it is on everyone’s
doorstep; it is a matter of grasping it directly.

*—Jean-Paul Sartre*

Applying The Bernoulli Model describes the
process of putting into play an executive risk management and forecasting
system. The Bernoulli Model
re-cognizes the notion of wisdom—and argues that the world is on the cusp of a
monumental paradigm shift due to the imminent fall of the authoritarian model and
the rise of portfolio theory in the practical incarnation of The Bernoulli
Model. The
Bernoulli Form elucidates the notion of Platonic Forms and describes how a
motley crew of Forms—including the Delphi, forecasting, integration, utility,
optimization, efficiency and complementary—come together in the portfolio of
Forms of The Bernoulli Model. The Efficient Frontier examines the
notions of God, option theory, portfolio theory, faith, reason and Arab
math, finally arriving at the inescapable conclusion that all roads of sound
decisionmaking lead to the efficient frontier.
The Method of Moments delineates
dimensional deconstruction and reconstruction combined with fractal analysis as
the fundamental method of riskmodeling employed by The Bernoulli Model. Scientific
Management follows the development of relativity from Archimedes to
Einstein—and then takes a parallel line of reasoning in considering the
development of scientific management and portfolio theory.

Applying The Bernoulli Model
describes the process of putting into play an executive risk management and
forecasting system.

A successful business executive is
a forecaster first—purchasing, production, marketing, pricing and organizing
all follow.

*—Peter Bernstein*

An Essay by Christopher Bek

Jewish religion stresses the fact that Scripture can
be interpreted on many different levels.
Christ’s teachings encompassed themes that were already central to
Jewish thought—for example, love and the importance of helping the
unfortunate. But he also taught the
unorthodox thesis that Jewish law could be summarized in terms of loving God
with one’s whole heart. Christ sharply
criticized those who made a great show of their holiness but who failed to show
compassion—a theme again borrowed from the Hebrew prophets. Muhammad (570-632) was a merchant in Mecca
who became the central prophet and founder of Islam. The term Islam derives from salam and means
peace and surrender—namely, the peace that comes from surrendering to the
sovereign will of God. Before Islam the
religions of the Arabic world involved the worship of many gods—Allah being one
of them. Muhammad taught the worship of
Allah as the only God, whom he identified as the same God worshipped by the
Christians and the Jews. And Muhammad
also accepted the authenticity of both the Jewish prophets and Christ—as do his
followers. It is accordingly clear
that there is but one God.

The Bernoulli Model uses a top-down, strategic
management approach to scientific management.
It combines the processes of forecasting, integrating and
optimization. The act of forecasting
produces not only estimates of outcomes but also estimates of uncertainty
surrounding outcomes. In addition to the
basic closed-form method of integration involving the two-moment normal
distribution, The Bernoulli Model also employs the method of Monte Carlo simulation
along with the four-moment Camus Distribution in order to generate, capture
and integrate the full spectrum of heterogeneously distributed forecasts of
outcomes and the uncertainty surrounding outcomes. Optimization algorithms search risk-reward
space in order to determine the optimal set of decisions subject to Delphi
constraints. Closed-form optimization
algorithms include linear programming while open-form methods include
hill-climbing and genetic algorithms.
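The open-form integration step described above can be sketched with a small Monte Carlo loop. This is a minimal illustration, not the model itself: the two components, their distributions, parameters and weights are all invented for the example, with the skewed component standing in for the kind of non-normal forecast the closed-form normal method cannot handle.

```python
import random

random.seed(1)

def simulate_portfolio(n_trials=100_000):
    """Aggregate two heterogeneously distributed forecast components by
    open-form Monte Carlo simulation.  Both components are hypothetical:
    one symmetric, one skewed."""
    outcomes = []
    for _ in range(n_trials):
        a = random.gauss(0.05, 0.10)                 # symmetric component
        b = random.lognormvariate(0.0, 0.25) - 1.0   # skewed component
        outcomes.append(0.5 * a + 0.5 * b)           # equal-weighted total
    return outcomes

outcomes = simulate_portfolio()
mean = sum(outcomes) / len(outcomes)
```

The resulting outcome cloud carries the full shape of both components, including the skew that a two-moment closed form would discard.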

The method of prototyping involves developing a series
of micro-models which eventually become macro-models. In this case these micro-models are designed
for use by the treasurer. The Bernoulli
Model further adds the Delphi program to more specifically define
organizational values—utility theory to translate external values into internal
values—Monte Carlo simulation to integrate heterogeneous risk components—the
Camus Distribution to four-dimensionally represent risk—the Bernoulli moment
vector for tracking forecasts—and an alternative hypothesis to serve as the
loyal opposition to the null hypothesis.
The model also includes very stylish and highly advanced Excel charts,
VBA code and RoboHelp files.

A financial indicator designates a pointer of
value—eg. VaR—Value at Risk, EaR—Earnings at Risk, CFaR—Cash Flow at Risk,
UaR—Utils at Risk. It is important to
understand that each indicator includes a statistical distribution. Here The Bernoulli Model utilizes the
Bernoulli moment vector—each vector contains fourteen elements. Financial indicators are the cornerstone of
risk management. These financial
indicators emphasize what organizations hold of value. The officers and directors designate the
expected financial indicator values as well as the uncertainty or risk
surrounding the chosen values. These values are chosen using the Delphi
program. The financial indicators expand on the
definition of risk and reward.

Financial indicators are the objects by which risk and
reward are measured. Exposure (ie.
Exp—M0) measures the initial exposure to change in value. The Bernoulli Model
expands the definition of risk from the two-moment normal distribution to the
four-moment Camus Distribution (ie. Mu—M1, SD—M2, Skew—M3, Kurt—M4) and
fractal scaling (ie. Frac—M9). Utility theory provides an example of an
expanded definition of reward by changing external market values into internal
Delphi values. For example, a 100
percent external return (ie. Mu—M1) becomes a 50 percent internal return (ie.
VaL—M5) while a –20 percent external return becomes a –30 percent internal
return. While a 100 percent return is
obviously desirable, undue emphasis on trying to achieve such a result may
produce erratic outcomes and forfeit more conservative opportunities.
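The asymmetric translation in the example above can be reproduced with a simple piecewise-linear utility function. The 0.5 and 1.5 scaling factors are assumptions chosen only to match the figures quoted; the actual translation function would come out of the Delphi program.

```python
def utility_translate(external_return, gain_scale=0.5, loss_scale=1.5):
    """Translate an external market return into an internal Delphi return:
    gains are discounted and losses amplified, reflecting a conservative
    internal value system.  The scale factors are illustrative assumptions."""
    if external_return >= 0:
        return external_return * gain_scale
    return external_return * loss_scale

internal_gain = utility_translate(1.00)    # 100 percent external return
internal_loss = utility_translate(-0.20)   # -20 percent external return
```

Under these assumed scales a 100 percent external return translates to a 50 percent internal return, while a -20 percent external return translates to a -30 percent internal return.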

In 1905 Albert Einstein wrote one of the most profound
documents ever written entitled *Special Relativity Theory*. In 1952 Harry Markowitz wrote one of the most
profound documents ever written entitled *Portfolio Selection Theory*. In 1738 Daniel Bernoulli (1700-82) wrote one
of the most profound documents ever written entitled *Utility Theory*—the
full name of the theory being—*The Exposition of a New Theory on the
Measurement of Risk.* The central
theme of *Utility Theory* is that the value of an asset is the utility
it yields rather than its market price.
His paper delineates the all-pervasive relationship between empirical
measurement and gut feel. The utility
function converts external, market returns into internal, Delphi returns.

The Bernoulli moment vector tracks risk and return
forecasts via a fourteen-element vector.
The Markowitz Model uses the mean to represent the forecast or reward
and the standard deviation to represent the dispersion or risk—thus laying the
groundwork for risk-reward efficiency analysis.
The method of moments is a simple procedure for estimating the
statistical moments of a distribution.
The mean is the first moment of a distribution and is calculated as the
average value—and the standard deviation is the second moment and is calculated
as the average deviation about the mean.
The Bernoulli Model also employs an expansion on the method of moments
with the Bernoulli moment vector relating to the aggregate portfolio
distribution. The zeroth moment in the
Bernoulli moment vector represents exposure.
The Camus Distribution represents the first four moments. The fifth moment is VaL and represents a
utilitarian translation of reward and thus an expanded definition of
reward. The sixth moment is VaR and
represents the confidence level and thus an expanded definition of risk.
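The method of moments described above amounts to averaging powers of deviations about the mean. A minimal sketch, using an invented five-point sample:

```python
def sample_moments(xs):
    """Estimate the first four moments of a sample: mean (M1), standard
    deviation (M2), skewness (M3) and kurtosis (M4)."""
    n = len(xs)
    mean = sum(xs) / n
    devs = [x - mean for x in xs]
    sd = (sum(d ** 2 for d in devs) / n) ** 0.5
    skew = (sum(d ** 3 for d in devs) / n) / sd ** 3
    kurt = (sum(d ** 4 for d in devs) / n) / sd ** 4
    return mean, sd, skew, kurt

mean, sd, skew, kurt = sample_moments([1.0, 2.0, 3.0, 4.0, 5.0])
```

A symmetric sample like this one has zero skewness; a normal distribution would show a kurtosis of 3, and departures from those benchmarks are exactly what the third and fourth moments are there to flag.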

Niels Bohr is one of the founding fathers of quantum
theory who also defined the complementary principle as the coexistence of two
necessary and seemingly incompatible descriptions of the same phenomenon. One of the first realizations dates back to
1637 when Descartes revealed that algebra and geometry are the same thing—ie.
analytic geometry. The Bernoulli Model
allows for the separation of the null and alternative hypotheses. This ability to compare paradigms represents
an invaluable feature of The Bernoulli Model.
The Bernoulli moment vector represents the tabular depiction of the
Bernoulli portfolio while the Excel charts represent the graphical form of the
Bernoulli portfolio.

The Delphi program is the overriding guidance system
for The Bernoulli Model. It employs the iterative Delphi method designed
to draw out fundamental values from officers and directors. The
purpose of the Delphi process is to streamline decisionmaking for all concerned. The
Delphi program is named after
the Socratic inscription—*Know Thyself*—at the oracle at Delphi in ancient
Greece. The primary Delphi value
pertains to the confidence level and value for allowable downside risk exposure
of the portfolio distribution—using a financial indicator like VaR.
The secondary Delphi value pertains to the utility translation function and
risk-reward efficiency analysis.
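The primary Delphi value, a confidence level on allowable downside exposure, reduces to a percentile of the portfolio distribution. In this sketch the simulated normal portfolio and its parameters are stand-ins for the real distribution:

```python
import random

random.seed(7)

def value_at_risk(outcomes, confidence=0.95):
    """Loss threshold not exceeded with the given confidence: the
    (1 - confidence) percentile of the outcome distribution, reported
    as a positive loss figure."""
    ranked = sorted(outcomes)
    index = int((1.0 - confidence) * len(ranked))
    return -ranked[index]

# Hypothetical portfolio distribution: 10,000 simulated returns.
outcomes = [random.gauss(0.05, 0.10) for _ in range(10_000)]
var_95 = value_at_risk(outcomes, confidence=0.95)
```

For this normal stand-in the 95 percent VaR lands near 0.05 - 1.645 * 0.10, roughly an 11 to 12 percent loss; the Delphi constraint would be expressed as a cap on such a figure.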

The actuarial valuation worksheet shows the
progression of the financial indicators through the valuation process as well
as the end result of the technical development.
The worksheet contains the Bernoulli moment vector for both the
components and the portfolio. It also
contains advanced Excel charts that are broken down into the null configuration
and the alternative configuration.
Anything that is not made clear from the actuarial valuation worksheet
can be found in the technical analysis worksheet. This worksheet shows the development of the
actuarial valuation process.

The Bernoulli Model is a top-down strategic
management, forecasting and risk management system that is mathematically
accessible to executives. It is designed for use by the treasurer: the
actuarial valuation Excel worksheet lays out a storyboard presenting the same
six Excel charts for every organizational financial indicator used in the
actuarial valuation process. The worksheet also presents both the null and
alternative valuation parameters and the null and alternative Bernoulli moment
vectors. All of this leads to a
brand new look at scientific management for the treasurer.

Peter Bernstein once said that risk is no longer bad
news—rather it is a harbinger of opportunity to be harnessed for our
benefit. This essay describes the
process of putting into play an executive risk management, decisionmaking and
forecasting system. Sir James Jeans once
said that God is a mathematician. The
notion of God as a mathematician is in fact consistent with the idea that there
is only one God.

The Bernoulli Model re-cognizes the notion of
wisdom—and argues that the world is on the cusp of a monumental paradigm shift
due to the imminent fall of the authoritarian model and the rise of portfolio
theory in the practical incarnation of The Bernoulli Model.

The time has come, the Walrus said, to talk of many
things.

*—Lewis Carroll*

An Essay by Christopher Bek

The word philosophy comes
from ancient Greece and is defined as the love of wisdom. In a recent hockey game between the Calgary
Flames and the Edmonton Oilers—the so-called Battle of Alberta—the twenty-one
year-old millionaire and Alberta native Mike Comrie appeared to be hit with a
high stick by the twenty-three year-old millionaire and Alberta native Derek
Morris, thus giving the Oilers a two-minute powerplay. But the replay showed that Comrie had in fact
faked being hit. Upon seeing this
excellent deception, the venerable Canadian sportscaster Jim Hughson congratulated
Comrie for being wise beyond his years.
In other words, Hughson had, in no uncertain terms, explained to the
children that, in Canada, wisdom and lying are the very same thing.

The Czech Václav Havel once
said that corruption begins when people start saying one thing and thinking
another. In his essay *The Power of
the Powerless* Havel asserts that people can bring down a dictatorial
government nonviolently by simply living in truth. John Maynard Keynes was a British economist
whose ideas continue to profoundly influence government policy to this
day. Keynes amassed a personal fortune
during the 1920s by speculating on the fluctuations of currency exchange
rates. He addressed the problem of
boom-bust cycles that constantly plague capitalism by arguing that government
should increase or decrease spending in accordance with cyclical fluctuations.

Keynes is most noted for the
prophecy made in 1930 as he looked beyond the foreboding presence of the
looming Great Depression—When the accumulation of wealth is no longer of high
social importance, there will be great changes in the code of morals. We shall be able to rid ourselves of many of
the pseudo-moral principles which have hag-ridden us for two hundred years, by
which we have exalted some of the most distasteful human qualities into the
position of the highest virtues. The
love of money as a possession—as distinguished from the love of money as a
means to the enjoyments and realities of life—will be recognized for what it
is, a somewhat disgusting morbidity, one of those semi-criminal,
semi-pathological propensities which one hands over with a shudder to the
specialists in mental disease. But
beware! The time for all this is not
yet. For at least another hundred years
we must pretend to ourselves and to everyone that fair is foul and foul is
fair—for foul is useful and fair is not.
Avarice, usury and precaution must be our gods for a little longer
still. For only they can lead us out of
the tunnel of economic necessity into daylight.

Einstein once said that God
had punished him for his contempt of authority by making him an authority
himself. During the 1960s the American
psychologist Stanley Milgram performed a remarkable experiment for testing the
obedience to authority of one thousand subjects. An authoritarian figure ordered each of the
subjects to administer increasingly painful electrical shocks to a learner
every time the learner either failed to answer or answered a question
incorrectly. The learner, who could be
heard but not seen, was in fact not actually given the shocks. The punishment began at 15 volts and
increased in 15-volt increments to 450 volts.
In spite of the fact that the learner would scream in pain, beg for
mercy, and finally fall silent at 330 volts—a full two-thirds of the subjects
delivered the final punishment of 450 volts.

Wittgenstein once said that
philosophy is the battle against the bewitchment of our intelligence by the
means of our language. Just as numbers
are a useful fiction leading toward eternal verities in mathematics—so too are
words the-finger-pointing-at-the-moon when endeavoring to understand ultimate
philosophical truths. We could certainly
all agree that society would not have come this far without some form of authority
to guide it. But the question is—At what
point does the authority of government cease to guide us forward and, instead,
choose itself over the people?
Existentialism is the philosophy which asserts that morality must be
determined inwardly rather than from external authority. Consider that the Freudian cognitive model makes
the reality-based ego the decisionmaker who must choose between the internal
values of the inward self or soul and the external authority of the
superego. Gandhi used to speak
disparagingly about systems so perfect that no one would have to be good. But the goal is not to make goodness
obsolete—but to create systems which align internal and external values.

Einstein once said that there
is no more commonplace statement to make than the world in which we live is a
four-dimensional spacetime continuum.
Consider for a moment that both morality and risk management are
similarly exercises in making decisions in the face of uncertainty. From this it follows that we can only know
whether decisions are correct or not retrospectively. But the very definition of authority is that
it is able to make determinations of right and wrong at any time. And the authoritarian strategy is dead
simple—bully the ego into making the easy, low-risk, low-reward, short-term
decision of obeying authoritarian rules—rather than allowing the ego time to
reflect on the values of the soul and possibly make the higher-risk and
ontologically higher-reward decision. In
essence, the authoritarian model forces us to remain in the three-dimensional
world because it can neither feed off us nor control us in the four-dimensional
world.

Harry Markowitz developed
portfolio theory in 1952 as a way of constructing optimal portfolios that
maximize reward for given levels of risk.
Markowitz forever linked reward with risk in exactly the same way that
Einstein linked space with time in that both the expected outcome and the
attendant uncertainty of outcome are required to complete the picture. His approach employs matrix algebra to
aggregate risk, and then uses linear programming to determine optimally
efficient portfolio allocations. Just as
Galileo and Descartes laid down the fundamentals of the modern scientific
method for solving problems—so too did Markowitz lay down the fundamentals of
the modern scientific method of four-dimensionally bringing together a varied
set of uncertain elements. But rather
than seeing portfolio theory for the profoundly important roadmap that it
is—the individual components of the model have been incessantly and
pedantically attacked by PhDs and other authorities—like the legions of PhDs at
Enron before the company finally collapsed under the weight of its own PhDness.
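Markowitz's matrix-algebra aggregation can be illustrated for two assets, where portfolio variance is the quadratic form w'Σw. The weights, standard deviations and correlation below are invented for the example:

```python
def portfolio_risk(weights, cov):
    """Aggregate component risks into a portfolio standard deviation via
    the quadratic form w' * Sigma * w."""
    variance = 0.0
    for i, wi in enumerate(weights):
        for j, wj in enumerate(weights):
            variance += wi * cov[i][j] * wj
    return variance ** 0.5

# Hypothetical pair of assets: SDs of 10% and 20%, correlation 0.3.
sd_a, sd_b, rho = 0.10, 0.20, 0.3
cov = [[sd_a ** 2, rho * sd_a * sd_b],
       [rho * sd_a * sd_b, sd_b ** 2]]
sd_portfolio = portfolio_risk([0.5, 0.5], cov)
```

The result, about 12.4 percent, sits below the 15 percent weighted average of the component risks; that gap is the diversification benefit Markowitz's aggregation makes explicit.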

Newton once said that if he
had seen further than others, it was because he stood on the shoulders of
giants. In 1670 both Newton and Leibnitz
formulated versions of calculus—that is, the mathematics of motion. The Bernoulli brothers, John and James,
picked up on calculus, spread it through much of Europe, and then set the
roadmap for efficiency analysis by finding the curve down which a bead could
slide in the shortest time. John’s
son Daniel set the roadmap for utility theory based on the idea that an asset’s
value is determined by the utility it yields rather than its market price. No less than eight Bernoullis made
significant contributions to mathematics.
The Bernoulli Model is founded on the shoulders of giants like
Archimedes, Galileo, Descartes, Newton, Leibnitz, Einstein, Markowitz and the
Bernoullis. It is singularly directed
towards establishing balance and efficiency for institutions and individuals
within institutions.

The word stochastic comes
from ancient Greece and is defined as skillful aiming. The Greek Plato once said that a just society
would only be possible once philosophers became kings and kings became philosophers. The Bernoulli Model makes the president the
focal point—and provides a coherent enterprise-wide view. It encourages *a priori* definitions of values, opinions and data specifications so as to facilitate actuarial
efficiency—and interactive, nonauthoritarian communication between the board and
the executives, and the executives and the operations. The model’s *a priori* conception
offers expandability along a multitude of dimensions including forecasting,
risk-modeling, efficiency analysis and utilization—as well as offering full
accountability and comparability for all risk factors. The Bernoulli Model also employs the
state-of-the-art, four-moment Camus distribution—which in turn sets the roadmap
for the realization of the vast untapped potential of simulation-based
optimization.

FS Northrop once said that if
one makes a false or superficial beginning, no matter how rigorous the methods
that follow, the initial error will never be corrected. It is said that addiction stems from the
inability to conceive of the future.
Simply put, the addict never knows his own soul. Alberta is the richest province in the
richest country in the world—yet a remarkable portion of the provincial revenue
comes from video lottery terminals—ie. VLTs. As the wise and venerable premier of Alberta,
Ralph Klein, responded when asked the question as to when he would leave
government—Once the government is operating like a well-oiled machine.

The Bernoulli Form elucidates
the notion of Platonic Forms and describes how a motley crew of Forms—including
the Delphi, forecasting, integration, utility, optimization, efficiency and complementary—come
together in the portfolio of Forms of The Bernoulli Model.

In 1952 a young graduate
student named Harry Markowitz studying operations research demonstrated
mathematically why putting all your eggs in one basket is an unacceptable strategy
and why optimal diversification is the best one can do. His revelation touched off an intellectual
movement that has revolutionized Wall Street, corporate finance and
decisionmaking of all kinds. Its effects
are still being felt today.

*—Peter Bernstein*

An Essay by Christopher Bek

On 14 December 1900 Max
Planck (1858-1947) told his son that he had just made a discovery as important
as that of Newton. Planck revealed why
we are able to stand so close to a fire without being overwhelmed by
radiation. He realized that the fact that
energy is transferred in discrete packets or *quanta* defined by Planck’s
constant puts a size restriction on escaping energy units thus causing a
traffic jam. In 1925 Planck’s constant
formed the basis of quantum theory—which is the natural law of matter and
explains the periodic table and is the foundation of electronics, chemistry,
biology and medical science. In 1905
Albert Einstein (1879-1955) produced three papers—*The Photoelectric Effect*,
which applies Planck’s quantum concept to light—*Brownian Motion*, which
delineates the stochastic process and is the basis of all riskmodeling—and *Special
Relativity*, which is the natural law of spacetime. In 1906 Planck wrote to the unknown Einstein
and acknowledged the greatness of his discoveries. In 1915 Einstein adapted the curved geometry
of Bernhard Riemann (1826-66) as the underlying *a priori* Form for
general relativity. Special relativity
refurbished Newtonian physics in respect of uniformly moving bodies traveling
in straight lines—and general relativity upgraded special relativity to account
for bodies traveling at varying speeds along curved lines. On 29 May 1919 Sir Arthur Eddington led an
expedition to the island of Principe off the coast of Africa to photograph an
eclipse of the sun. Analysis revealed a
warping of spacetime consistent with general relativity thereby providing *a
posteriori* validation. Planck stayed
up all night awaiting the results while Einstein slept like a baby. When asked what he would have done had the
results not confirmed his theory, Einstein responded by saying—Nothing, for the
good Lord must have erred.

The Greek Plato’s (427-347
BC) theories of knowledge and Forms hold that true or *a priori*
knowledge must be certain and infallible, and it must be of real objects or *Forms*. Thales and Pythagoras laid the foundation for
Plato by founding geometry as the first mathematical discipline. Mathematics is the systematic treatment of
Forms and relationships between Forms.
It is the science of drawing conclusions and is the primordial
foundation of all other science. The
Greeks synthesized elements from the Babylonians and Egyptians in developing
the concepts of proofs, axioms and logical structure of definitions—which is
mathematics—which when combined with *a posteriori* validation allows us
to arrive at *a priori* knowledge.
While Thales introduced geometry, it was Pythagoras who first proved the
Pythagorean theorem which establishes *a priori* knowledge that the square
of the hypotenuse of a right-angle triangle is equal to the sum of the squares
of the two sides. Both Einstein’s
relativity in 1905 and my theory of one in 2001 make use of the Pythagorean
theorem as their underlying *a priori* Form. Relativity derives its *a posteriori*
validation from the 1887 Michelson and Morley experiment while the theory of
one gets its *a posteriori* validation from the 1982 Aspect experiment.

The term *a priori*
refers to a four-dimensional mathematical essence while *a posteriori*
refers to a three-dimensional commonsense existence. Essence is the true kernel of a thing while
existence simply refers to the sheer fact that a thing is. The soul is an essence while the ego merely
exists. William Barrett wrote in his
1958 book *Irrational Man* that the history of Western philosophy has been
one long conflict between existentialism and essentialism. Jean-Paul Sartre (1905-80) defined
existentialism as the philosophy for which existence precedes essence. Conversely, essentialism asserts that essence
precedes existence. The problem is that
precedence is a temporal operator and essence is outside time—meaning that the
notion of precedence here is meaningless.
It is the age-old problem of the chicken and the egg. As a fundamental attitude, The Bernoulli
Model is existential—but based on a portfolio of empty Forms—with the faith
being that the application of the model will animate the Forms thus realizing
the essence of the model.

Existentialism
is based on the self-verifying Form of the Cartesian *cogito*—I think, therefore I exist. The Bernoulli Model employs the self-verifying Form of the Delphi method which is an
iterative process intended to draw out executive values—and is named after the
Socratic inscription on the oracle at Delphi—*Know thyself*—which is of
course equivalent to the Cartesian *cogito* and also equates to the
objective function from operations research.
The basic Delphi value pertains to allowable downside risk exposure for
the portfolio distribution.

The statistical distribution
is one of the most beautiful Forms for the reason that it represents both the
forecast of outcomes as well as the expected uncertainty. Advanced forms of forecasting of The
Bernoulli Model include intertemporal riskmodeling—which is able to accurately
represent time-series data like energy prices and foreign exchange rates
characterized by contemporaneous and intertemporal dependencies. The approach deconstructs historical data
into signal, wave and noise—each of which is then forecast separately.
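The signal-wave-noise deconstruction can be sketched as fitting a linear trend, averaging the detrended series over its cycle, and calling the remainder noise. The synthetic series and the simple least-squares fit are assumptions for illustration, not the model's actual forecasting machinery:

```python
import math
import random

random.seed(3)

def decompose(series, period):
    """Split a series into signal (linear trend), wave (mean profile over
    one period) and noise (the remainder)."""
    n = len(series)
    # Signal: least-squares straight line through the data.
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    slope = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series)) / \
            sum((t - t_mean) ** 2 for t in range(n))
    signal = [y_mean + slope * (t - t_mean) for t in range(n)]
    detrended = [y - s for y, s in zip(series, signal)]
    # Wave: average detrended value at each phase of the cycle.
    profile = [sum(detrended[p::period]) / len(detrended[p::period])
               for p in range(period)]
    wave = [profile[t % period] for t in range(n)]
    noise = [d - w for d, w in zip(detrended, wave)]
    return signal, wave, noise

# Synthetic price series: trend plus sinusoid plus random noise.
series = [0.1 * t + math.sin(2 * math.pi * t / 12) + random.gauss(0, 0.1)
          for t in range(120)]
signal, wave, noise = decompose(series, period=12)
```

Forecasting each piece separately then works on simpler objects: a line for the signal, a repeating profile for the wave, and a distribution for the noise.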

Integration is the process of
aggregating or bringing together forecasts of outcomes and uncertainty. The closed-form method of integration
involves matrix algebra and applies strictly to the two-moment normal
distribution. The Bernoulli Model also
employs the open-form method of Monte Carlo simulation with the four-moment
Camus distribution in order to capture and integrate the full spectrum of
heterogeneously distributed forecasts.
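For purely normal components the two methods of integration agree, which makes the normal case a useful check on the simulation machinery. A sketch with invented parameters:

```python
import random

random.seed(11)

# Closed form: for independent normal components the portfolio standard
# deviation is the square root of the sum of the component variances.
sd_a, sd_b = 0.10, 0.15
closed_form_sd = (sd_a ** 2 + sd_b ** 2) ** 0.5

# Open form: Monte Carlo simulation of the same two components.
n = 200_000
totals = [random.gauss(0.0, sd_a) + random.gauss(0.0, sd_b) for _ in range(n)]
mc_mean = sum(totals) / n
mc_sd = (sum((x - mc_mean) ** 2 for x in totals) / n) ** 0.5
```

The two estimates agree to within sampling error; the point of the open form is that it keeps working when the components are not normal and the closed form no longer applies.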

Daniel Bernoulli (1700-82)
founded utility theory by writing a paper entitled *Exposition of a New
Theory on the Measurement of Risk*—with the theme being that the value of an
asset is determined by the utility it yields rather than its market price. His paper, one of the most profound ever
written, delineates the all-pervasive relationship between empirical
measurement and gut feel. The Bernoulli
Model employs utility theory by adjusting market returns to more accurately
represent internal values.

Optimization is part of
operations research that originated in World War II when militaries needed to
allocate and deliver scarce resources to operations. Optimization algorithms search either
cost-function or risk-reward space to determine the optimal value for the
objective function subject to Delphi constraints such as allowable downside risk exposure. Local search algorithms include linear
programming and hill-climbing algorithms while global search algorithms include
genetic algorithms.
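A minimal hill-climbing search over a single decision variable might look as follows. The objective, a linear reward less a quadratic risk penalty, and the step rule are illustrative assumptions, not the model's algorithm:

```python
def objective(w):
    """Hypothetical risk-reward objective: reward grows linearly with the
    allocation w while a quadratic risk penalty eventually dominates."""
    return 0.08 * w - 0.05 * w ** 2

def hill_climb(start=0.0, step=0.01, low=0.0, high=1.0):
    """Local search: move to the better neighbour until no step improves."""
    w = start
    while True:
        up = min(high, w + step)
        down = max(low, w - step)
        best = max((up, down), key=objective)
        if objective(best) <= objective(w):
            return w
        w = best

w_opt = hill_climb()
```

An analytic check: the derivative 0.08 - 0.10w vanishes at w = 0.8, and the search settles within one step of that point. A genetic algorithm differs in that it searches globally from a population of candidates rather than from one neighbourhood, which is what lets it escape local peaks.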

In risk-reward space the
process of optimization is carried out for every level of risk with the result
being the construction of the efficient frontier. A similar process in cost-function space is
known as data envelopment analysis. The
Bernoulli brothers, James (1654-1705) and John (1667-1748), set the roadmap for
efficiency analysis by finding the curve down which a bead could slide in the
shortest time. The efficient
frontier has come to form the bedrock of portfolio theory since its
introduction in 1952 by Harry Markowitz.
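Carrying out the optimization at every level of risk can be sketched for two assets by sweeping the allocation weight and recording each risk-reward pair; the upper edge of the resulting set is the efficient frontier. All parameters are hypothetical:

```python
def frontier_points(mu_a=0.06, mu_b=0.12, sd_a=0.10, sd_b=0.25, rho=0.2,
                    steps=100):
    """Sweep the weight on asset A from 0 to 1 and record the
    (risk, reward) pair at each step.  All parameters are invented."""
    points = []
    for i in range(steps + 1):
        w = i / steps
        reward = w * mu_a + (1 - w) * mu_b
        variance = (w * sd_a) ** 2 + ((1 - w) * sd_b) ** 2 \
            + 2 * w * (1 - w) * rho * sd_a * sd_b
        points.append((variance ** 0.5, reward))
    return points

points = frontier_points()
min_risk = min(risk for risk, _ in points)
```

Because the assets are imperfectly correlated, the minimum-risk portfolio carries less risk than either asset alone, which is the diversification result at the heart of the frontier.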

Niels Bohr (1885-1962) defined
the complementary principle as the coexistence of two necessary and seemingly
incompatible descriptions of the same phenomenon. One of its first realizations occurred in
1637 when Descartes revealed that algebra and geometry are the same thing. In 1860 Maxwell revealed that electricity and
magnetism are the same thing. In 1915
Einstein revealed that gravity and inertia are the same thing. The ability to contrast paradigms presents
the invaluable feature of the complementary perspective.

The agency problem is the
pervasive predicament whereby agents select against organizations. The first realization arose between tenant
farmers and landowners. A business manager
who invests in marginal projects so he can reap the benefits of being the
manager of a larger portfolio is selecting against the organization. The risk measuring concept of VaR originated
because traders were playing the game of heads-I-win-tails-you-lose and
exposing organizations to huge risks.
When gambles went south the traders simply moved on. The problem now is that traders are gaming
VaR. Owing to its mathematical basis, The
Bernoulli Model is virtually impenetrable to the agency problem.

The Harvard Business Review
publication began in 1922 with the intention of connecting fundamental economic
and business theory with everyday executive experience. The Bernoulli Model represents a systematic
realization of that very mandate. The
approach frees executives from political gridlock and offers the ability to
either skim the surface of decision analysis or drill-down and examine the
inner workings of decisions and asset valuations. It affords a comprehensive overview and
provides unequivocal confidence allowing executives to sleep like babies
knowing what Einstein knew—when the math is good the math is good.

The Efficient Frontier
examines the notions of God, option theory, portfolio theory, faith, reason and
Arab math, finally arriving at the inescapable conclusion that all roads of
sound decisionmaking lead to the efficient frontier.

God is a mathematician

*—Sir James Jeans*

An Essay by Christopher Bek

The French mathematician and
philosopher Blaise Pascal (1623-62) originated option theory with his famous
wager regarding the existence and ultimate nature of God. His argument came during the Renaissance in
response to those unwilling to believe in God strictly on faith and authority. Pascal argued that living a simple life which
seeks to understand God represents the option premium which then allows for the
possibility of salvation should it turn out that God does exist. Some critics have argued that God might well
reserve a special place in Hell for those who believe in Him on the basis of
Pascal’s wager. But in fact the exact
opposite is true. Those who believe in
God strictly on the basis of faith are setting themselves up for failure for
the reason that their conception of God is based on a static snapshot that is,
by definition, not subject to reason.
The Devil is the one who seeks out those who blindly follow. A true God most certainly wants to be
constantly challenged by both faith and reason.
Kevin Spacey tells us in the 1996 movie *The Usual Suspects* that
the greatest trick the Devil ever pulled was convincing the world he doesn’t
exist. And now we know the second
greatest trick the Devil ever pulled was convincing the world we can know God
by faith alone.

Option theory is the
decisionmaking methodology whereby decisions to invest or not are deferred with
the purchase of options. For example, a
drug company might enter into a relationship with a university for the purpose
of gaining access to research projects.
Analyzing the strategic value of such projects is difficult as the
result of the prolonged developmental phase of pharmaceuticals as well as the
complexity involved in predicting the future market. Relationships are structured such that the
company pays an up-front premium followed by a series of progress premiums
until the company chooses to either purchase the research at an agreed-upon
price—or discontinue the progress premiums, thereby forfeiting any future
option-to-purchase rights.
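
As a rough numerical sketch of why such a structure has value, consider the comparison below; every figure is hypothetical and chosen only for illustration. Deferral lets the company pay small premiums to see how the research turns out before committing the purchase price.

```python
# All figures hypothetical: research worth 100 if it pans out (probability p),
# worthless otherwise; agreed purchase price 60; an up-front premium of 5
# followed by two progress premiums of 2.
def option_value(p, payoff=100.0, strike=60.0, premiums=(5.0, 2.0, 2.0)):
    """Expected value of the staged option: pay the premiums, observe the
    outcome, and purchase only if the research pans out."""
    return p * (payoff - strike) - sum(premiums)

def direct_value(p, payoff=100.0, cost=60.0):
    """Expected value of committing the full purchase price up front."""
    return p * payoff - cost

# At even odds the option structure wins handily.
print(option_value(0.5), direct_value(0.5))  # 11.0 -10.0
```

The option caps the downside at the premiums paid while preserving the upside, which is the whole appeal of deferring the decision.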

Risk analysis originated in
1654 when Pascal and another mathematician named Pierre de Fermat solved the
problem of how to divide the stakes for an incomplete game of chance when one
player is ahead. The problem had
confounded mathematicians since it was posed two hundred years earlier by a
monk named Luca Pacioli who coincidentally also introduced double-entry
bookkeeping. Their discovery of the
theory of probability made it possible for the first time to make decisions and
forecasts based on mathematics. Just
like questions of the existence and nature of God, the serious study of risk
originated during the Renaissance when people broke free from authoritarian
constraints and began subjecting long-held beliefs to philosophic and
scientific enquiry.
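
The division Pascal and Fermat arrived at can be reproduced in a few lines for the fair-coin case: count the ways the leading player wins enough of the rounds that could still remain. This is a sketch of the classical solution, not of their original notation.

```python
from math import comb

def share_of_stakes(wins_needed_a, wins_needed_b):
    """Fraction of the pot owed to player A when A needs wins_needed_a more
    rounds, B needs wins_needed_b more, and each round is a fair coin toss
    (the Pascal-Fermat division of 1654)."""
    # At most a+b-1 further rounds settle the game; A collects the pot
    # exactly when A wins at least wins_needed_a of them.
    n = wins_needed_a + wins_needed_b - 1
    favourable = sum(comb(n, k) for k in range(wins_needed_a, n + 1))
    return favourable / 2 ** n

# A is one round from victory while B still needs three: A gets 7/8 of the pot.
print(share_of_stakes(1, 3))  # 0.875
```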

The Greek Thales (624-546 BC)
launched philosophy and mathematics after having amassed a fortune by first
forecasting bumper olive crops and then purchasing options on the usage of
olive presses. According to Plato
(427-347 BC) true or *a priori* knowledge must be certain and infallible
and it must be of real objects or *Forms*.
Mathematics is thus the systematic treatment of Forms and relationships
between Forms. It is the science of
drawing conclusions and is the primordial foundation of all other science. Saint Augustine (354-430) carried forward
Greek thought from the failing classical world to the emerging medieval,
Christian world—a project that came to be known as the medieval synthesis. For twelve hundred years the flame of
philosophy and science lit by Augustine burned ever so lowly under the
agonizing oppression of the Church.
Copernicus published *On the Revolution of Celestial Orbs* in 1543
mathematically proving the theory of heliocentricity. And then by improving the
telescope and training it on the heavens, Galileo (1564-1642) was able to provide the empirical validation of
heliocentricity—for which the Church sentenced him to life in prison. The French philosopher and mathematician René
Descartes (1596-1650) shared Galileo’s views and envisioned the masterful
strategy of presenting these revolutionary ideas to the Church in such a way
that the Church believed the ideas were their own. His heroic plan succeeded and the philosophic
and scientific Renaissance of the seventeenth century was born.

While the Church was jumping
up and down on everyone’s head for over a millennium, Arab mathematicians like
Muhammad al-Khwârizmî (780-850) were carrying the ball in founding *algebra*
and *algorithms.* An algorithm is
the procedural method for calculating and drawing conclusions with Arabic
numerals and the decimal notation.
Al-Khwârizmî served as librarian at the court of Caliph al-Mamun and as
astronomer at the Baghdâd observatory.
The term *algorithm* stems from al-Khwârizmî's own name, while *algebra* stems from *al-jabr* in the title of his treatise. According to Arab philosophy, mathematics is
the way God’s mind works. The Arabs
believe that by understanding mathematics they are comprehending the mind of
God. In fact the core of their religion
lies in the belief that people must submit to the will of God—meaning
mathematical arguments.

The Latin version of
al-Khwârizmî’s work is responsible for a great deal of the mathematical
knowledge that resurfaced during the Renaissance. In fact, the notion that mathematics and God
are the same thing was adopted as the foundation for the Renaissance by
thinkers like Descartes, Pascal, Fermat, Newton, Locke and Berkeley. Then, in what John Stuart Mill called the
single greatest advance in the history of science, Descartes conceived analytic
geometry by synthesizing Greek geometry with Arab algebra. The significance of this founding of modern
mathematics is best understood in light of the fact that mathematicians from
that point forward had two complementary and fundamentally different ways of
viewing the same Forms. Einstein first
introduced relativity theory in 1905 as a simple set of algebraic equations,
yet the theory was ignored until four years later when Minkowski presented a
geometric view of relativity as characterized by the four-dimensional spacetime
continuum.

In addition to founding
modern mathematics, Descartes also founded modern philosophy by tearing down the
medieval house of knowledge and building again from the ground up. By employing the method of radical doubt,
Descartes asked the question—What do I know for certain?—to which he concluded
that he certainly knew of his own existence—*cogito, ergo sum*—I think,
therefore I exist. Based on the natural
light of reason, Descartes formulated his famous Cartesian method which is—Only
accept clear and distinct ideas as true—Divide problems into as many parts as
necessary—Order thoughts from simple to complex—Check thoroughly for
oversights—And rehearse, examine and test arguments over and over until they
can be grasped with a single act of intuition or faith. Descartes rightly believed his method would
guarantee certain and infallible knowledge.
Initially, one faithfully or intuitively senses truth, which is followed
up by constructing rational arguments and then intuitively capturing completed
arguments. In other words, faith leads
us to reason and then reason leads us back to faith.

In 1952 Harry Markowitz, a twenty-five-year-old graduate student in operations
research at the University of Chicago, strung together three
algorithms—forecasting, integration and optimization—ie. the method of moments,
matrix algebra and linear programming—in developing portfolio theory as a way
of constructing optimally efficient portfolios that maximize reward for a given
level of risk, with the efficient frontier being constructed by optimizing for
all levels of risk.
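
A minimal two-asset sketch of the idea, with hypothetical figures, sweeps the allocation weight and records each mix's reward (mean) and risk (standard deviation); repeating the exercise for every target risk level traces out the efficient frontier.

```python
# Hypothetical two-asset universe for illustration only.
mu = (0.08, 0.12)      # expected returns
sigma = (0.10, 0.20)   # standard deviations
rho = 0.3              # correlation between the two assets

def portfolio(w):
    """Reward and risk of putting weight w in asset 1 and 1-w in asset 2."""
    reward = w * mu[0] + (1 - w) * mu[1]
    var = (w * sigma[0]) ** 2 + ((1 - w) * sigma[1]) ** 2 \
          + 2 * w * (1 - w) * rho * sigma[0] * sigma[1]
    return reward, var ** 0.5

# A simple sweep locates the minimum-variance portfolio; diversification
# pushes its risk below that of either asset alone.
best = min((portfolio(w / 100) for w in range(101)), key=lambda p: p[1])
print(best)
```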

In 1690 the Bernoulli
brothers set the roadmap for efficiency analysis by finding the curve down
which a bead slides in the shortest time. The Bernoulli Model upgrades the three
algorithms of The Markowitz Model—forecasting, integration and
optimization—with—intertemporal riskmodeling and decision trees, Monte Carlo
simulation and the Camus distribution, and genetic and hill-climbing
algorithms—and adds the Delphi process, utility theory and the complementary
principle. The approach essentially
provides an efficiency workshop for realizing the vast potential of The
Cartesian Method.

The Delphi process identifies
first-order values that rise above cost-benefit such as allowable downside risk
exposure. The second-order objective is
to ensure portfolio risk-reward efficiency.
The efficient frontier represents the best that one can do in terms of
maximizing expected reward for each level of expected risk. It depicts the panoramic fruition of the
highest forecasting and decisionmaking intelligence for the organizational
portfolio. And while the end result is
sufficient reason for conducting the exercise in the first place, the
process of going through the analysis is often worthwhile in and of itself.

Starting from the realization
that the very definition of the word religion means a reconnection with
reality—we know that most organizations, religious or otherwise, rest on
unchallenged preconceptions. The whole
point of applying option theory and following through on the efficient frontier
is a recognition of the fact that not only situations but our conception of
situations changes as we go. To think
like a mathematician then is to—as Socrates rightly asserted—follow the
argument wherever it leads.

The Method of Moments
delineates dimensional deconstruction and reconstruction combined with fractal
analysis as the fundamental method of riskmodeling employed by The Bernoulli Model.

There is no more commonplace
statement than the world in which we live is a four-dimensional spacetime
continuum.

*—Albert Einstein*

An Essay by Christopher Bek

In 1975 the Polish
mathematician Benoit Mandelbrot posed the question—How long is the coastline of
Britain? Appealing to relativity,
Mandelbrot pointed out that it depends on one’s perspective. From space the coastline is shorter than it is to
someone walking it, because on foot the observer is exposed to greater detail and
must travel farther. According to
Mandelbrot, when the shape of each pebble is taken into account, the coastline
turns out to have infinite length. He
proposed a system for measuring irregular shapes by moving beyond integer
dimensions to the seemingly absurd world of fractional dimensions. Mandelbrot used a simple procedure involving
the counting of circles to calculate the fractal dimensionality. The coastline of Britain has a fractal
dimension of 1.58 while the rugged Norwegian coastline is 1.70. Coastlines fall in between one-dimensional
lines and two-dimensional surfaces. In
the three-dimensional world the fractal dimension of earth’s surface is 2.12
compared with the more convoluted topology of Mars estimated to be 2.43.
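
Mandelbrot's circle-counting procedure can be approximated with a box count: cover the shape with boxes of shrinking size and fit the slope of log-count against log-scale. The sketch below uses square boxes rather than circles, and a straight line as the test shape, so it is an illustration of the method rather than a reproduction of Mandelbrot's calculation.

```python
from math import log

def box_dimension(points, scales=(1.0, 0.5, 0.25, 0.125)):
    """Estimate fractal dimension by counting occupied boxes at several
    scales and fitting log N against log(1/eps)."""
    logs = []
    for eps in scales:
        boxes = {(int(x / eps), int(y / eps)) for x, y in points}
        logs.append((log(1 / eps), log(len(boxes))))
    # least-squares slope of log N versus log(1/eps)
    n = len(logs)
    sx = sum(x for x, _ in logs); sy = sum(y for _, y in logs)
    sxx = sum(x * x for x, _ in logs); sxy = sum(x * y for x, y in logs)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# A straight line has dimension one; a rugged coastline lands between 1 and 2.
line = [(i / 1000, i / 1000) for i in range(1000)]
print(round(box_dimension(line), 2))  # 1.0
```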

A fractal is a mathematical
Form having the property of self-similarity in that any portion can be viewed
as a reduced scale replica of the whole.
Fractal structures exist pervasively throughout nature because they are the
most stable and error-tolerant of structures. The
fractality of clouds is evidenced by the fact that they look the same from a
distance as they do up close. Mountains,
snowflakes, lightning, galaxy clusters, earthquakes and broccoli are just a few
of the naturally occurring phenomena that exhibit fractal qualities. The power of fractal analysis lies in its
ability to capitalize on self-similarity across scale by locating an eerie
kind of order lurking beneath seemingly chaotic surfaces. And not only is fractal analysis scalable
across applications but, owing to its mathematical foundation, it is also portable
between applications.

While the coastline conundrum
involves using fractal analysis to measure the complexity of geometrical
shapes, the British hydrologist Harold Hurst (1900-78) employed fractal risk
analysis in managing the Nile river dam from 1925 to 1950—with the goal being
the formulation of an optimal water discharge policy aimed at balancing
overflow risk with the risk of insufficient reserves—a job description not
unlike a treasurer’s. Hurst
initially assumed the influx of water followed a random walk—although he
abandoned that assumption in favor of a more robust fractal process. A random walk or Brownian motion is a
statistical process that has no memory and is represented by the normal
distribution. The fractal process is a
superset of the random walk where the fractal dimension ranges from zero to
one—with a value of 0.5 being the normal distribution. Hurst’s work on the project led him to
examine 900 years’ worth of records kept by the flood-weary Egyptians. Capitalizing on self-similarity, he analyzed
data under all available time-scales from phenomena including river and lake
levels, sun-spots and tree-rings in calculating a fractal dimensionality of
0.75.
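
Hurst's rescaled-range method can be sketched as follows: for windows of increasing length, divide the range of the cumulative mean-adjusted series by its standard deviation, then take the slope of log(R/S) against log(length). The window sizes and the white-noise test series below are illustrative, not Hurst's Nile data.

```python
from math import log, sqrt
import random

def rescaled_range(series):
    """Hurst's R/S statistic: range of the cumulative mean-adjusted
    series divided by its standard deviation."""
    n = len(series)
    mean = sum(series) / n
    cum, dev = 0.0, []
    for x in series:
        cum += x - mean
        dev.append(cum)
    r = max(dev) - min(dev)
    s = sqrt(sum((x - mean) ** 2 for x in series) / n)
    return r / s

def hurst_exponent(series, window_sizes=(16, 32, 64, 128, 256)):
    """Slope of log(R/S) against log(n), averaged over non-overlapping
    windows of each size."""
    pts = []
    for n in window_sizes:
        chunks = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
        rs = sum(rescaled_range(c) for c in chunks) / len(chunks)
        pts.append((log(n), log(rs)))
    k = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    return (k * sxy - sx * sy) / (k * sxx - sx * sx)

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(1024)]
print(round(hurst_exponent(noise), 2))  # roughly 0.5 for memoryless noise
```

A memoryless random walk comes out near 0.5, while a persistent fractal process such as the Nile influx pushes the estimate toward Hurst's 0.75.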

In normal science a
singularity is a breakdown in spacetime such that the laws of physics no longer
apply. Typical examples of singularities
include the big bang, black holes and one divided by zero. What physicists like Stephen Hawking, who
developed the concept of singularities, failed to realize is that a breakdown in
spacetime is just another way of saying a boundary of spacetime. Thomas Kuhn (1922-96) was a physicist and
historian concerned with the sociology of scientific change. In his 1962 book *The Structure of
Scientific Revolutions* he defines the term paradigm shift as a
transformation taking place beyond the grasp of normal cognitive
abilities. Scientists apply normal
scientific methods within a paradigm until the paradigm weakens and a shift
occurs. Most people eat up normal
science with a big spoon, but do everything possible to avoid the intense
metaphysical pain of paradigm shifts.
Hawking once said that a singularity is a disaster for science. But what he should have said is that a
singularity is a disaster for normal science—but normal for singularistic
science.

The range of the fractal
process maps isomorphically to a family of distributions known as the fractal
or stable Paretian—named after Vilfredo Pareto (1848-1923). There are explicit expressions for three
fractal distributions—the Bernoulli (ie. coin toss), the normal and the
Cauchy—corresponding to a fractal exponent of zero, 0.5 and one
respectively. The Bernoulli, named after James Bernoulli (1654-1705), converges
to the normal distribution when the number of coins becomes sufficiently
large. The Cauchy, named after Augustin
Cauchy (1789-1857), is interesting in that it possesses undefined moments—thus
making it singularistic. The first four
moments of a statistical distribution are the mean, standard deviation,
skewness and kurtosis—with kurtosis being a measure of both pointedness and
length of tails. The extremely long
tails of the Cauchy give rise to its undefined moments. The mean never converges because a value
sampled from the extreme of the tails shifts any previously established
mean. The Cauchy is related to the
normal in that it is a normal divided by a normal. And one can easily see this in Excel by
simulating a normal sample with the formula =normsinv(rand()). If the simulated denominator is very close to
the mean of zero—then the value of the Cauchy shoots off to the moon.
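
The same experiment can be run outside Excel; the sketch below draws Cauchy values as ratios of standard normals, the Python counterpart of the =NORMSINV(RAND()) formula above.

```python
import random

random.seed(1)

def cauchy_sample():
    """One Cauchy draw as a ratio of two independent standard normals,
    the Python counterpart of =NORMSINV(RAND())/NORMSINV(RAND())."""
    return random.gauss(0, 1) / random.gauss(0, 1)

# A near-zero denominator occasionally sends a sample off to the moon,
# which is why the running mean never converges the way a normal's would.
samples = [cauchy_sample() for _ in range(100000)]
print(max(abs(x) for x in samples))
```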

I developed the four-moment
Camus distribution—named after Albert Camus (1913-60) for his desire to be the
perfect actor—as a one-size-fits-all distribution to model the full range of
the fractal process. So whereas the
basic Bernoulli has a kurtosis of zero, the normal has a kurtosis of three and
the Cauchy has infinite kurtosis—the Camus with a fractal dimensionality of
0.75 has a kurtosis of six. What the
Camus does is interpolate between the Bernoulli and the normal or the normal
and the Cauchy—depending on the fractality.
The normal distribution with its fractality of 0.5 translates into
scaling according to the square-root-of-time.
Going from a one-month valuation period to a one-year valuation period
under a normal assumption results in a scaling factor of 3.5—ie. 12^0.5—while a
similar calculation with a Camus distribution and a kurtosis of six produces a
scaling factor of 6.4—ie. 12^0.75. The
rationale is that with higher kurtosis comes the greater potential for
larger jumps.
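
The scaling arithmetic is simple enough to verify directly; the helper name below is mine, not part of The Bernoulli Model.

```python
# Scale a one-period dispersion across several periods using the fractal
# exponent, where 0.5 recovers the normal square-root-of-time rule.
def scale_risk(one_period_risk, periods, fractality):
    return one_period_risk * periods ** fractality

print(round(scale_risk(1.0, 12, 0.50), 1))  # 3.5 (normal)
print(round(scale_risk(1.0, 12, 0.75), 1))  # 6.4 (fractality 0.75)
```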

The Hurst Model example involves components characterized by intertemporal
dependencies, while The Markowitz Model represents portfolio analysis involving
components characterized by contemporaneous dependencies.
The Bernoulli Model is a superset of both that includes intertemporal
riskmodeling as an approach representing data characterized by both intertemporal
and contemporaneous dependencies—such as energy prices and foreign exchange
rates. The word stochastic comes from
ancient Greece and is defined as skillful aiming. While the basic stochastic process is the
random walk, intertemporal riskmodeling expands along a multitude of moments
and dimensions. The random walk process
bifurcates into the Camus distribution and a mean-reverting process. And rather than being a static number, the
mean is itself a process composed of long-term signal and short-term wave
elements. The final element of noise is
determined by the distribution parameters including standard deviation,
kurtosis and correlation—which are themselves mean-reverting processes—known as
garch—also composed of signal and wave elements. In summary, the intertemporal riskmodeling
process deconstructs historical data into correlated signal, wave and
noise—each of which is separately forecast—and then reconstructs within a Monte
Carlo simulation environment in order to produce the forecasted portfolio
distribution.
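
A toy version of this deconstruction, with every parameter hypothetical, might combine a long-term signal, a short-term seasonal wave and mean-reverting noise, and then run the reconstruction as a Monte Carlo pass over many simulated paths.

```python
import math, random

random.seed(2)

def simulate_path(steps=120, x0=50.0, speed=0.2, sigma=2.0):
    """One simulated path: the mean is a long-term signal plus a short-term
    seasonal wave, and the path mean-reverts toward it with Gaussian noise
    added at each step."""
    x, path = x0, []
    for t in range(steps):
        signal = 50.0 + 0.05 * t                       # long-term trend
        wave = 5.0 * math.sin(2 * math.pi * t / 12.0)  # seasonal cycle
        x += speed * (signal + wave - x) + random.gauss(0, sigma)
        path.append(x)
    return path

# The Monte Carlo pass reconstructs the forecast distribution of the
# terminal value from many simulated paths.
terminal = [simulate_path()[-1] for _ in range(2000)]
print(sum(terminal) / len(terminal))
```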

The Markowitz Model uses the
mean to represent the forecast or reward and the standard deviation to
represent the dispersion or risk—thus laying the groundwork for risk-reward
efficiency analysis. The basic method of
moments is a simple procedure for estimating distribution parameters. The mean is the first moment of a
distribution and is calculated as the average value—and the standard deviation
is the second moment and is calculated as the average deviation about the
mean. Intertemporal riskmodeling simply
expands on this basic concept. The
Bernoulli Model also employs an expansion on the method of moments with the
Bernoulli moment vector (ie. BMV) relating to the portfolio distribution. The zeroth moment in the BMV represents
exposure and is simply the intuitive concept of initial value exposed to
change. The fifth moment is VaL and
represents a utilitarian translation of reward and thus an expanded definition
of reward. The sixth moment is VaR and
represents the confidence level and thus an expanded definition of risk.
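
The basic method of moments amounts to two averages; the sample returns below are invented for illustration.

```python
# The first moment (mean) is the average value; the second (standard
# deviation) is the average deviation about the mean -- the reward and
# risk axes of The Markowitz Model.
def first_two_moments(sample):
    n = len(sample)
    mean = sum(sample) / n
    std = (sum((x - mean) ** 2 for x in sample) / n) ** 0.5
    return mean, std

returns = [0.02, -0.01, 0.03, 0.00, 0.01]
print(first_two_moments(returns))  # mean 0.01, dispersion about 0.014
```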

The word Renaissance means
rebirth and was used to describe the era following the medieval period, which
lasted from the fourth to the sixteenth century.
It was René Descartes (1596-1650) who broke the logjam by founding
modern philosophy, modern mathematics and the Cartesian coordinates—all based
on his belief that one should formulate a simple set of rules and follow
them. The method of moments represents a
simple set of rules with the potential for advanced forecasting and risk-reward
efficiency analysis. Self-similarly, the
BMV is the new Cartesian coordinates of the four-dimensional space-time
continuum. One might then pose the
question—How long until the logjam breaks and the scientific management
Renaissance emerges?

Scientific Management follows
the development of relativity from Archimedes to Einstein—and then takes a
parallel line of reasoning in considering the development of scientific
management and portfolio theory.

Give me one fixedpoint and I
will move the earth.

*—Archimedes*

An Essay by Christopher Bek

Archimedes (287-212 BC) was the profoundly practical Greek
genius of mathematics and physics who foreran many modern scientific
discoveries. He is ranked along with
Gauss and Newton as one of the three greatest mathematicians of all time. The calculation of a sphere’s volume and the
realization of Archimedes’ principle—ie. Any object immersed in a fluid is
buoyed up by a force equal to the weight of the fluid displaced—count among his
greatest triumphs. It could even be
argued that his claim—Give me one fixedpoint and I will move the
earth—metaphorically represents the starting point for all of science. The need for a fixed starting point can also
be found in ancient folklore—For want of a nail, the shoe was lost. For want
of a shoe, the horse was lost. For want of a horse, the rider was lost. For
want of a rider, the battle was lost. For want of a battle, the kingdom was
lost.

The Greek Thales (624-546 BC) launched geometry as the very first
mathematical discipline. Thales in fact
made his monumental contribution to mathematics after having amassed a personal
fortune by first forecasting bumper olive crops and then purchasing options on
the usage of olive presses. The Greek
Euclid (350-300 BC) carried on
from Thales by developing Euclidean geometry as described by five simple
axioms. Euclid also made immeasurable
contributions to mathematics with his landmark thirteen-volume masterpiece *Elements*.

The French mathematician René
Descartes (1596-1650) wrote the brilliant *Discours
de la méthode* as a philosophical examination of the scientific method. An appendix of the text presents the
revolutionary idea of geometry as a form of algebra—ie. Cartesian
coordinates. After Descartes, the German
Mozart-of-mathematics Carl Gauss (1777-1855) laid the foundation for
non-Euclidean geometry by proving that additional geometrical systems exist in
which only four of the five Euclidean axioms hold—and by arguing that there is
no *a priori* reason for space not to
be curved.

The Italian physicist and
astronomer Galileo (1564-1642) laid down the fundamentals of the modern
scientific method by developing a comprehensive, empirical approach to solving
problems. He was sentenced to life
imprisonment at the age of sixty-nine for supporting the Copernican view that
the earth revolves around the sun—a view which had the effect of destroying the
notion of the earth as a fixedpoint. Sir
Isaac Newton (1643-1727) invented calculus, established the heterogeneity of
light, and formulated the three laws of motion and the universal law of
gravitation. Both Galileo and Newton
asserted, as far as relativity was concerned, that space itself was the
universal frame of reference within which the freewheeling of stars and
galaxies could occur.

In 1887, two Americans,
Albert Michelson (1852-1931) and Edward Morley (1838-1923), performed a
monumentally important experiment which established, beyond a doubt, that the
speed of light is invariably fixed at 186,282 miles per second—regardless of
relative motion. In 1904 the Dutch
physicist Hendrik Lorentz (1853-1928) formulated a group of algebraic
transformations relating to electricity.
Mathematicians use transformations as tools for revealing fundamental
underlying properties that remain invariant under transformation.

The Michelson-Morley
experiment presented a problem in that, according to Newtonian physics,
velocities are additive, thus contradicting the invariance of lightspeed. The young Albert Einstein (1879-1955)
resolved the dilemma in 1905 with his special theory of relativity by revealing
that space and time are variable, interrelated quantities. In paralleling Newtonian physics, Einstein
theorized that the laws of nature are the same for all uniformly moving
bodies. But unlike Newtonian physics,
which only concerns itself with mechanical laws, special relativity also
accounts for the behavior of light and other electromagnetic radiation. Einstein dismissed the separate notions of
space and time by replacing them with the combined, fixed notion of
spacetime. In other words, according to
special relativity, space and time are actually manifestations of each
other. Space becomes time and time
becomes space at speeds approaching lightspeed in accordance with the
transformations of Lorentz.
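
In modern notation the Lorentz transformations for motion along the x-axis read:

$$x' = \gamma\,(x - vt), \qquad t' = \gamma\left(t - \frac{vx}{c^{2}}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}$$

so that as the relative velocity $v$ approaches lightspeed $c$, the factor $\gamma$ diverges and the space and time coordinates mix into one another.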

At its inception, special
relativity was little more than a set of algebraic equations that made only a
modest impact. It was not until 1909
when Hermann Minkowski (1864-1909) presented a geometric interpretation of
relativity—as characterized by the four-dimensional spacetime continuum—that
the scientific community took notice.
Ironically, Minkowski was also Einstein’s university professor and had
described Einstein as a lazy dog who never bothered with mathematics at
all—which makes sense given that Einstein sought elemental conceptual pictures
first before considering mathematical complexities. Einstein eventually warmed up to the idea of
geometrization, and presented general relativity in 1915 as a geometric
representation of special relativity that incorporates the concepts of gravity
and curved space. With special
relativity, Einstein refurbished Newtonian physics in respect of uniformly
moving bodies traveling along straight lines.
General relativity then upgrades special relativity so as to account for
bodies traveling at varying speeds along curved lines.

Einstein had become
mathematically adept by the time the foundation of quantum theory was laid in
1925. But unlike relativity, which
concerns itself with the very large, quantum theory is directed towards
understanding the very small. In fact,
the great revelation of quantum theory is that atomic phenomena are indeterminate—thus
making it a statement of probability.
But Einstein did not believe in cosmic risk, as evidenced by his claim
that God does not play dice. And so in
dismissing the notion of risk, Einstein actually overlooked the makings of a
cosmic fixedpoint. He eventually strayed
from his original conceptual approach and spent much of the last thirty years
of his life withdrawn in the world of obscure mathematics and twisted
geometries.

Harry Markowitz developed
portfolio theory as a way of constructing optimally balanced portfolios that
maximize reward for given levels of risk.
Markowitz forever linked reward with risk in the same way that Einstein
linked space with time in that both the expected outcome and the attendant
uncertainty of outcome are required to complete the picture. Markowitz wrote a fourteen-page paper
entitled *Portfolio Selection* as a
graduate student at the University of Chicago in 1952—and then shared the Nobel
Prize for economics with two others in 1990.
His original approach employs simple matrix algebra to aggregate
risk—and applies linear programming algorithms to then determine optimal asset
allocations. In many ways the
discoveries of Einstein and Markowitz parallel each other. Both men were
unknowns in their mid-twenties who wrote brief, modest papers with far-reaching
implications. The main difference so far is that we have yet to realize the
potential of portfolio theory.

While Galileo and Newton
believed space itself to be the fixedpoint of the cosmos, most companies today
subscribe notionally to the market as capitalism’s fixedpoint. Yet a closer look at the market reveals it to
be mostly neurotic and chaotic—and anything but fixed. And so if companies genuinely wish to acquire
a fixed, internal decisionmaking frame of reference, they must resolve to
undertake the scientific work necessary to properly implement portfolio
theory. While Einstein lost the holy
grail of physics, unified field theory, in the math—companies today are losing
the holy grail of capitalism, shareholder value, in the balance sheet. The great challenge now lies in learning to
transform the entrenched accounting mindset into the new scientific management
mindset.

Consider how the notion of
connecting two personal computers with modems a few decades ago has managed to
become the all-powerful internet. Then
consider how corporate officers and directors can begin rolling out scientific
management by first nailing down scientific management conceptually in their
minds—never forgetting that science is not a spectator sport. Scientific management is firstly an exercise
in seeing the forest for the trees—and secondly an exercise in seeing the trees
for the forest. Portfolio theory gives
us the overall perspective of the forest, while scientific management tools
like Monte Carlo simulation, intertemporal riskmodeling and advanced
optimization algorithms have the potential, in the hands of scientists, to
provide us with an accurate representation of the trees.