THE AIM & GOAL OF ALL MODELING:  CLARITY IN NAVIGATING RISK & OPPORTUNITY

CLARITY IN NAVIGATION REQUIRES CLARITY OF OBSERVATION REQUIRES CLARITY IN MODELING …

What we know of life, whether that life is by divine design or material evolution or both, is framed largely by the sense of sight—by natural interaction of sunlight and the molecular material that forms the retina of a human eye. The eye is situated in a head for neural proximity to a brain to make mental sense of immediate visual input for the purpose of navigation. The head is placed on a mechanical body in touch with the ground with a sensitivity equal to that of sight, balancing for autonomous navigation in its environment, foraging for chemical stores of energy needed to sustain and reproduce the body, brain, and of course, the eyes, in a continual process of minding itself. Within this sensory spectrum mediating between the hot plasma of the sun and the cold dark soil of the planet, through an innate mix of instinctive, intuitive, and logical cognitive capacities, the mind naturally curates as it generates a variety of mental models of the individual’s interaction with and within its environment to aid in its natural, social, and subjective navigation. 

The cues for such navigation are mixed. Emotion—as the word implies, a mechanism for eliciting motion or evincing a change in state—is an instinctive capacity, motivated to look for natural and social cues in our environment in response to perceived risk and opportunity. Intuition is largely skewed toward an understanding of such motivation in other social beings like ourselves and therefore seeks cues to reliably gauge the intent behind the motivation of others. Logic is largely skewed toward understanding the natural form of things and the processes of their interaction in the environment and therefore tends to minimize the element of intent involved, even in social navigation, looking instead for reliable material cues for navigation.

When neither motivational intent nor inertial cause & effect is readily understood, the default is to rely on instinctive bias—on intuitive faith in interpreting essentially natural material events as intentionally motivated or on logical reason in interpreting the trajectory of socially motivated events as an effect of blind inertial forces. In the default extreme, faith becomes a religion and reason becomes a science, each a dogmatic model for human navigation. In truth, only reliance on an enlightened combination of faith and reason can maintain balanced continuity in an effective charting of the chasm between inertial and intentional change.

UNDERSTANDING THE SCIENCE AND FAITH OF AXIOMATIC MODELING

Among a scientific mix of these models, the standard model of particles as a quantum field theory and general relativity as a spacetime field theory have emerged to provide the most valid understanding, based on the successful application of modeling for the purpose of navigation. Missing is a clear understanding of the coupling of these models—of quanta & spacetime—of the interaction of energetic particles in a theater of inertial form and dynamic process in which human interaction has been able to flourish.

In an instinctive attempt to unify understanding, particle physics has logically turned to a theoretical examination of increasingly finer levels of energetic particle interactions, pursuing strings at the Planck scale, while general relativity has intuitively turned to the heavens with increasingly powerful telescopes, hoping to view the cosmos at its beginnings. Yet from such logic and intuition, instead of clarity we are stymied by an inability to access space at the Planck scale for verification of such particle structure and, with the latest reports of the James Webb Space Telescope, to access time at a redshift of early galactic structure for verification of a Big Bang.  

Conceptual emphasis on a high energy environment for observing and understanding the genesis of inertial structure—quanta from particle collisions & galaxies from big bang nucleosynthesis—is echoed in the pursuit of an economically viable approach to fusion. While the Planck scale and the redshift horizon may be beyond observation, the sun has been close and accessible for investigation and understanding its nature. It is therefore logical to apply the high energy of magnetic and inertial confinement to emulate the natural effects of solar gravitational confinement for the intended technological development of fusion in future power generation.

THE LOW ORDER–HIGH HEAT AND HIGH ORDER–LOW HEAT OF ENTROPIC STATES 

Looking at this from a thermodynamic perspective, it is natural to correlate high energy, or heat, with order, so that cooling becomes associated with entropy as disorder, with an increasing unavailability of useful energy for work. The surface of the earth is at greater entropy than the core of the sun, which will continue to burn hydrogen for several billion more years. In a sea of dense hydrogen gas, measured by the availability of energy for work per mass of particles involved, the potential for continued fusion of hydrogen to helium can be seen as a low entropic state, even though the sea of dense hydrogen gas as fuel for fusion might be seen as highly entropic in terms of its general uniformity and apparent lack of intrinsic order.

We might posit that the human brain, with the ability through collectively coordinated understanding and effort to release the hidden energy of the atomic nucleus—currently as fission, potentially as fusion—is of many orders of magnitude less entropic with respect to the dust of the earth’s surface, than the temperature at the core of the sun is to that of the earth. In fact, developed human consciousness, represented in the mind and brain, is of less entropy—therefore more ordered—than the elemental and molecular configuration of the solar core. Materially, physically, this is because the human ecosystem, like most of the ecosystems of life in general, is by necessity only operational in a well-ordered range of condensed matter, a range of ostensibly greater entropy than the sun, measured by heat and temperature alone. Given the well-ordered differentials of galactic spacetime–quanta, sun–earth, and brain–navigational environment, the living universe is of much lower entropy today than the quark soup from which all these dichotomies are currently modeled as appearing.

The potential to see and touch and otherwise sense change in one’s environment, and thereby intentionally form models of material nature in an intention-developed mind—as indicated by the ability to harness the power of nuclear fission—is only possible in a human being of low-heat entropy, working in an equally low temperature environment of highly evolved order. Instinctively, intuitively, or logically, we would not expect such human nature to be found—as materially evolved or divinely created—in either the vast voids of a cold spacetime with little particle interaction or in a solar interior with high heat of particle interactions. The potential for a manifestation of human nature requires a Goldilocks zone for its unfolding, for its evolution.

The seminal point is that the existential fact of an emerging human cognitive capacity with intentionality, witnessed in the increasingly ordered complexity of the living environment, is proof that the capacity for that emergence is inherently necessary as an essential—a potential—as a determinative factor of sentience, though such potential in itself does not appear to suffice for that emergence. Such emergence requires the activation of other necessary conditions—among them the presence of hospitable ambient conditions for life in form and process—to understand those conditions as both necessary and sufficient.  The eye and the brain and the body in a hospitable environment are all necessary, but it takes an understanding of the purposeful intent of one’s place in the environment to recognize its sufficiency, where intent of a conscious being is the focused aim and purpose is that being’s targeted goal of interaction in the environment. While intentional aim at a purposeful goal indicates sufficient capacity, reaching the goal requires tenacious work in an environment of sufficient objective conditions and, sometimes, agency of unknown providence.

WAVES AS STOCHASTIC INTENT & DETERMINISTIC INERTIA IN CONDENSED MATTER MODELING

It is mathematically logical and defensible to form a condensed matter model of physical phenomena, unifying general relativity and quantum field theory, one that is axiomatically deterministic and not based on random free parameters. This is not said as a disputation of stochastic determinism at certain stages in a model of quantum interactions. It is a disputation of the notion that any empirical model—mentally developed through the human capacity to recognize well-connected purposeful order at every turn in nature—can make logical sense without acknowledging as axiomatic the equally human capacity to understand a priori purpose and intent.

From this perspective, the principal, essential, and too often unrecognized axiom of any logical argument is the principle of continuity. Whenever we encounter a discontinuity or a boundary or an asymptote, it is generally a good idea to cogitate further to determine whether that apparent discontinuity is in fact a point of inflection, a path of least action, or a coordinate singularity rather than a terminus beyond which there is a meaningless void. For me, the necessary and most effective way to approach such a thinking process is to assume an inherently timeless, continuous potential in a field of observation and interaction that is initially devoid of any ‘thing’ yet still has the capacity to assume and morph into any form and process I might want, to satisfy my imagination.

My personal though certainly not unique way of pursuing this process is through the mathematical, geometric, topological, and innate capacity of the graphic imagination in 3D modeling using the logic of wave mechanics in a variety of oscillatory forms that can define the discreteness we observe in rest mass quanta without severing the continuity we observe as the leptonic emission of beta decay and the electromagnetic transmission of photons. This can be done while understanding the limits of Heisenberg uncertainty and the Pauli exclusion principle, defining wave activity on a wave potential modification of the density tenets of general relativity.

From a study of classical complex wave mechanics applied to quantum events; from a simple but effective analysis of Newton’s gravitational law; from an inherently continuous but variable inertial density in the spacetime of general relativity, quantizable through localized stress and strain under large scale dilation; in various monographs we derive a quantum basis of gravity as the convergence components and the spin and electromagnetic moments as the curl components of a dual tensor oscillation under isotropic stress. This emergent form is recognized as a baryonic particle, the neutron, for which the moments of maximum inductive and capacitive torque provide the quark phenomenology of the standard model. Being unstable in isolation, it decays under ongoing stress into the stable proton and electron. This model, which generates rest mass quanta as an emergent function of the Hubble rate and is linked at UniServEnt.org, effectively unifies general relativity and quantum field theory and answers most of the principal conundrums raised in current physical models.

THE COULOMB FORCE UNDERSTOOD AS A WAVE FORCE — COLD FUSION? … PERHAPS

Except for the addendum to monograph (3) in the Introduction and Cover Letter, this modeling was essentially completed by the spring of 2007 with the addition of further development of quantum gravity and the quantum metric for the neutron as a quantum sink or black hole. When understood, this model offers salutary possibilities for energy and nanotechnological development, including the theoretical potential for a scalable development of LENR, now generally seen as an example of a pathological science after the failure to replicate the reported results of cold fusion by electrochemists Martin Fleischmann and Stanley Pons in 1989.

The coulomb force as now understood is generally modeled as the interaction of severable ‘virtual’ point charges free to move within a field, requiring the application of exceedingly high energies from lasers or electromagnetic fields to effect inertial confinement and alignment of the deuterons for purposes of fusion. When that same coulomb force is modeled as the stress force of discrete waves in a continuum and not as a body force of point-like particle interactions—where the strong force is understood as a function of that same wave—the modulated confinement, alignment, and inelastic collision of deuterons without gamma radiation, in bound deuterium atoms that have been infused into a palladium lattice, becomes understandable; palladium and deuterium have the same electronegativity on the Pauling scale, 2.20. This is a well-ordered rather than a stochastic process producing fusion, as detailed in the enclosed material and in links to the YouTube video of the process.

I did not come across the continued online interest in research into cold fusion until 2015, when my modeling of condensed matter wave-based baryonic mechanics offered a different theoretical approach for understanding the coulomb force. In the process of researching the documentation of LENR, I centered on the nature of the palladium lattice, where palladium, in group 10, period 5 of the periodic table, is unique with its 5s⁰4d¹⁰ configuration in contrast to the anticipated 5s²4d⁸. The vacant 5s & filled 4d shells, coupled with the common Pauling factor for both deuterium and palladium as depicted in the video, provide a rationale for further research on the referenced modulation in light of this modeling and the continued reporting by independent researchers of anomalous heat and helium production without gamma radiation.

JWST AND THE PREDOMINANT NUCLEOSYNTHESIS OF HYDROGEN & HELIUM WITH NO BIG BANG

In the late 1990s, I began the analysis of physical phenomena outlined above. This research culminated in the last update with a 2021 addendum to monograph (3) pursuant to a thread by Stacy McGaugh, Department of Astronomy at Case Western Reserve, and Robert A. Wilson, Emeritus Professor of Pure Mathematics, Queen Mary University of London. This addendum states the exactness, within the standard uncertainty, of the following equation involving the rest masses of five fundamental particles: the neutron (n), proton (p), electron (e), tau (τ), and muon (μ); the first three recognized as fundamental within the domain of condensed matter physics, the last two as transitional states, principally in the study of plasma physics.

(1)                    5n = 3p + e + τ + μ

Equation (1) is stated by Robert Wilson as prior art without attribution. The analytical parsing of (2), developed in the addendum, is seminal as far as I know. (2)A adds the missing mass of beta decay, Δm, on the right, necessary to equal the mass of the neutron, n, on the left. (2)B repeats this addition of e + Δm twice on the right to equal the mass of two neutrons. At (2)C it follows necessarily that the unstated values added to (2)A and (2)B must be subtracted from the remaining particle masses of (1) in order for the third parsed statement to be valid.

(2) A.         1n = 1p + e + (Δm)

(2) B.         2n = 2p + (2e + 2Δm)

(2) C.         2n = τ + μ – (2e +3Δm)
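For readers who want to check the arithmetic, the following minimal sketch uses published CODATA/PDG rest-mass values in MeV (an assumption on my part; the addendum may reference slightly different figures) to test equation (1) and the internal consistency of the parsing in (2)A, (2)B, and (2)C.

```python
# Minimal numerical check of equation (1) and the parsing (2)A, (2)B, (2)C.
# Rest masses in MeV, taken from recent CODATA/PDG listings (an assumption;
# the addendum may use slightly different reference values).
n   = 939.565420    # neutron
p   = 938.272088    # proton
e   = 0.510999      # electron
tau = 1776.86       # tau lepton (quoted uncertainty ~0.12 MeV)
mu  = 105.658375    # muon

# Equation (1): 5n = 3p + e + tau + mu
lhs, rhs = 5 * n, 3 * p + e + tau + mu
print(f"(1)   5n = {lhs:.3f}   3p+e+tau+mu = {rhs:.3f}   diff = {lhs - rhs:+.3f} MeV")

# (2)A defines the missing mass of beta decay: dm = n - p - e
dm = n - p - e
print(f"(2)A  dm = n - p - e = {dm:.3f} MeV")

# (2)B is twice (2)A and holds by construction: 2n = 2p + 2e + 2dm.
# (2)C must then carry the remainder of (1): 2n = tau + mu - (2e + 3dm)
lhs_c, rhs_c = 2 * n, tau + mu - (2 * e + 3 * dm)
print(f"(2)C  2n = {lhs_c:.3f}   tau+mu-(2e+3dm) = {rhs_c:.3f}   diff = {lhs_c - rhs_c:+.3f} MeV")
```

With these reference values the residual in (1) is roughly 0.02 MeV, smaller than the quoted uncertainty of the tau mass, consistent with the addendum’s claim of exactness within the standard uncertainty; the same residual reappears in (2)C, since the parsing redistributes terms without changing the overall balance.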

The missing mass, Δm, represents a complex value as an amplitwist, a conformal amplification and twist in a manifold as developed in the work of Roger Penrose and Tristan Needham. The short quantum-wave explanation of (2) is this: 

  1. Conservation of spin angular momentum initially found in the neutron, n, requires a spin flip of either p or e, or of ½ in opposite directions for both. The ‘missing’ mass-energy of this flip is bound up in the amplitwist of Δm and does not register as a body-force interaction of any observed particle.
  2. The mass-energy of beta decay, (e + Δm), is the combined mass-energy shown here, where e is the residual mass-energy resulting from the isotropic stress force operating on the neutron at the time of that decay. It is readily shown that beta decay results from a differential stress force, dfe, on the neutron wave boundary, which divided by the wave speed squared is equal to the differential inertial density at that boundary as a wave strain, recognized as the Hubble rate, dimensionally considered a strain over time. Electron mass, e, is then a correlated—gauged—measure of the Hubble expansion/dilation rate.
  3. The mass-energy of the τ and μ are, respectively, much greater than and much less than the mass-energy of a neutron, but together equal the mass-energy of 2 neutrons plus 2 Hubble strains and 3 amplitwists. The τ and μ are extremely short-lived leptons, perhaps transitional bosons, that contribute the energy of 3 twists in response to the 2 Hubble differentials to produce 2 neutrons, which, based on this parsing, come in pairs. (2)C can be modeled as a quantum spallation in pairs from the boundary of pre-quantized concentrations of inertial source density occurring in regions of spacetime dilation. With (2)B, this pair production explains both the observed Hubble rate and the predominance of protium, deuterium, and helium found in the universe without recourse to a big bang singularity for explanation.

The modeling of this addendum foreshadows the observational data from the James Webb Space Telescope reported earlier this year. This data indicates a maturity of spiral galaxies that is not readily understood within the current timeline of big bang modeling. As understood, such maturity requires that a gravitational component of galaxy formation be in effect at a time prior to the occurrence of a big bang, a modeling contradiction.

OTHER POTENTIAL BREAKTHROUGHS

While I have yet to investigate it in any detail, the work of Mike McCulloch of the University of Plymouth, in which he references quantized inertia, features the Unruh effect in explaining observed galaxy rotation curves without resorting to dark matter as responsible for those observations. His explanation offers a theoretical possibility of technological development, including use as a means of transport. Some of the apparent axioms of his thinking imply an inherent inertial potential of spacetime and resonate with my own.

FROM PHYSICS TO POLITICAL ECONOMY — STOCHASTIC INERTIA & DETERMINISTIC INTENT

While my early interest was in math and the physical sciences, especially astrophysics, my degree work was in economics in the late 1960s at Duke. I learned much of the skill and approach to technical investigation from my father, who was a professional electrical and structural engineer. Starting in 1948, he worked for the TVA, then in the semiconductor and aerospace industries, before entering industrial design & construction as a private contractor in the mid-1960s. After graduation from college, I worked with him in the industrial design-build business until his passing in 1980.

In the early 1990s I took this skillset into the risk management and catastrophe property insurance claims business as an independent adjuster. This work can be lucrative and, being episodic in nature, has freed up time for various creative projects using an engineering approach to reverse-engineering problem solving. Leading up to and in the aftermath of the 2008 recession, I switched my investigative interest to political economics for an analysis of the effect of various policy initiatives on US domestic and global economic growth and structural change over the past 50 years.

THE GOLDEN MEAN IN VALUING CONSUMPTION, PRODUCTION, & HUMAN CAPITAL

The first of two unpublished investigations is a study entitled ‘The Browser Economy – An Analysis for determining the Optimization of Investment and Consumption Allocations according to their Valuation in a Market Economy’, and is linked on the UniServEnt website as listed in the cover letter.  A copy of the ‘The Browser Economy – Executive Summary’ of this research is included with the UniServEnt.org monographs. The intent of both studies is to gain a better understanding of the dynamics behind sector structural changes to provide a guide for policy recommendations and a gauge for instituted policy effectiveness.

In this study, we find that for a given level of liquidity, as quantified by expenditures on final consumption and on capital goods and services, where capital expenditures include both public and private sectors, an optimum equilibrium ratio of 0.618… as the golden mean, Φ, exists for

(1) final consumption over total expenditures

equal to that of

(2) capital over final consumption expenditures.

Conditions favorable to overall economic growth, meaning a rise in the general standard of living, are indicated by ratios somewhat below the optimum for (1) and above that figure for (2).
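As a check on the arithmetic behind the stated optimum, the two ratios can take the same value simultaneously only because Φ satisfies Φ² + Φ = 1, so that 1/(1 + Φ) = Φ. The short sketch below, with hypothetical expenditure figures chosen purely for illustration, makes the point explicit.

```python
# Illustration that ratios (1) and (2) coincide at the golden mean.
# The expenditure figures are hypothetical, chosen only for illustration.
PHI = (5 ** 0.5 - 1) / 2           # 0.6180..., the value the study calls the golden mean

consumption = 1000.0               # final consumption expenditures (arbitrary units)
capital = PHI * consumption        # capital expenditures set so ratio (2) = capital/consumption = PHI

ratio_1 = consumption / (consumption + capital)   # (1) final consumption over total expenditures
ratio_2 = capital / consumption                   # (2) capital over final consumption expenditures

print(f"PHI       = {PHI:.6f}")
print(f"ratio (1) = {ratio_1:.6f}")   # equals PHI because PHI**2 + PHI = 1, so 1/(1+PHI) = PHI
print(f"ratio (2) = {ratio_2:.6f}")
```

Growth-favorable conditions in the study then correspond to ratio (1) running a little below 0.618 and ratio (2) a little above it.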

Examination of World Bank data for the period 1970 to 2013 shows a ratio range for the world economy of a few percentage points below the optimum for (1), and a similar range for the OECD nation average, before rising above that optimum in 2009. Some notable economies trending several points above the target for this duration are Greece, Mexico, and, until 2004, Brazil and India. The U.S. trend for (1) rose above the optimum in 1982 during the Reagan administration with the implementation of supply-side policy and has risen gradually—except for most of the Clinton tenure, when it stabilized—to a current level of approximately 7 points above the mark.

More significant in this study is the inclusion of an accounting for human capital, 30% market (MHC) and 70% non-market (NHC) valued, in the domestic accounting of the US economy. The current absence of quantitative accounting for both MHC & NHC, and the denigration in the public square that results from not maintaining and enhancing the value of that human capital out of its own store of human value, produces the absurdity in the US and elsewhere of absolutism in fiscal valuation of money over humans.

The study shows that human capital is:

  • 10 to 20 times the value of real capital
  • 100 to 200 times the thin veneer of financial capital
  • 100,000 times the per capita value of monetary gold.

Similar accounting is presumed to be valid in the rest of the world economies.  

WEIGHTED ERGODICITY OF POSITIONED DECISION-MAKING IN ECONOMIC MODELING

In 2020, after the start of the Covid pandemic, I became aware of the work of Ole B. Peters, the founder of the London Mathematical Laboratory and an external professor with the Santa Fe Institute. With a PhD in physics, he has segued from statistical mechanics into the study of ergodicity economics, applying ergodic theory, which was first developed in the study of thermodynamics.

The ergodic condition states that some measurable value, if averaged over an extended timeframe or lifetime of an individual microstate (ma), studied either as an individual element or a grouping of related elements, will be equal to the average value of the entire macrostate (M) at any single point in time. The ergodic condition is therefore generally held to be conservative. For such to be true there must be an interconnectedness such that an increase in one half of M must be offset by a decrease in the other half; an increase in M of a unit, m1, must be offset by a decrease in some other unit, m2, or in the sum of units, m2 to mn. On the other hand, if all mn are increasing over time—perhaps some more than others—the value of M and its average will increase; the ergodic condition can still be true, but whether it remains conservative is in question. Either the values of ma are inflationary, or M is not a closed system; if not, the increase in the valuation of the units of ma is due to their qualitative growth, either as an inherently open function of ma or conditionally as a function of the openness of M.
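Stated compactly, in a standard textbook form rather than in the notation of the linked study, the ergodic condition described above is that the time average of an observable f along one microstate’s history equals the ensemble average over the N microstates of the macrostate at a fixed time:

```latex
% Ergodic condition: the time average of one microstate's history equals
% the ensemble (macrostate) average taken at a single instant t.
\[
  \lim_{T \to \infty} \frac{1}{T} \int_{0}^{T} f\bigl(m_a(t)\bigr)\, dt
  \;=\;
  \frac{1}{N} \sum_{i=1}^{N} f\bigl(m_i(t)\bigr)
  \qquad \text{for (almost) every microstate } m_a .
\]
```

In the conservative case the macrostate total is fixed, so a gain by one unit is offset elsewhere; if every unit grows over time the two averages can still match, but the system is then either inflationary or open, as noted above.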

A link to this study, ‘A Critique of Neoliberal Economics Part I – Quantitative Analysis & Assumptions, Capital as Power Position in Ergodic Economic Modeling’, is in the cover letter. Using US Federal Reserve 4Q 1989 & 4Q 2019 household income and net worth data, I have applied ergodic modeling to stocks and flows in checking for the effect of weighted decision-making based on the focused rationality of microstate decision-makers and/or on the hierarchical position of those decision-makers. However, rather than numerical value generation by an ‘infinite’ number of flips of an unconnected extant microstate with extrapolation to the macrostate, this modeling consists of a branching series of coin flips, where the result of each flip is a stage for the next iteration of flips, so that 5 iterations result in an evolved macrostate of 32 microstates starting from a single microstate flip. Five iterations are sufficient to establish a proof of concept of this approach for understanding the fundamentals of a system’s dynamics.
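A minimal sketch of that branching construction follows. It is my reading of the description above, not code from the linked study: each microstate flips a coin, both outcomes seed the next iteration, and the up/down multipliers (1.5 and 0.6, the usual ergodicity-economics illustration) are hypothetical placeholders for whatever weighting the study actually infers.

```python
# Sketch of the branching coin-flip construction: one starting microstate,
# each flip branching into two outcomes that seed the next iteration,
# giving 2**5 = 32 microstates after five iterations.  The up/down
# multipliers are illustrative placeholders, not values from the study.
UP, DOWN = 1.5, 0.6
ITERATIONS = 5

def branch(values):
    """Each microstate's value branches into an 'up' and a 'down' outcome."""
    return [v * m for v in values for m in (UP, DOWN)]

values = [1.0]                      # a single starting microstate
for _ in range(ITERATIONS):
    values = branch(values)

ensemble_average = sum(values) / len(values)     # macrostate average after 5 iterations
per_flip_ensemble = (UP + DOWN) / 2              # 1.05: the ensemble grows 5% per flip
per_flip_typical = (UP * DOWN) ** 0.5            # ~0.95: the typical single path shrinks

print(f"microstates: {len(values)}")                               # 32
print(f"ensemble average: {ensemble_average:.4f}")                 # (1.05)**5 ~ 1.276
print(f"median microstate: {sorted(values)[len(values) // 2]:.4f}")
print(f"per-flip growth, ensemble vs typical path: {per_flip_ensemble:.3f} vs {per_flip_typical:.3f}")
```

The gap between the ensemble growth and the typical path is the non-ergodic effect Peters emphasizes; the linked study then layers weighting by the evolved hierarchical position of each microstate onto this tree, which is not shown here.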

Subject to this methodology, which is essentially Bayesian, this analysis confirms by inference the weighting as being rationally focused, measured as an overall percentage for the macrostate while being individually weighted according to the evolved hierarchical position of each microstate. It points to the vapidity of economic theory that models each decision-making microstate as if it were poised in the marketplace with omnipresent position, omniscient market knowledge, and omnipotent control of monetary value. This linked study starts with the observable fact that every microstate has hierarchically limited market position, limited but more or less focused knowledge of that market, and limited ability to control the money, including the limited ability of government to control the value of money despite its omnipotence at the printing press.

CLARITY IN MODELING — ONTOLOGY VS EPISTEMOLOGY VS PHENOMENOLOGY

Modeling is a natural aspect of thinking that reduces and condenses an experience to its essentials in a mental map for the purpose of mental, social, or physical navigation. In a two- or three-dimensional representational form or process, using graphics, the symbolic language of mathematics & set theory, and toy animations & mockups, conscious direction and documentation of the thought process improves understanding on the part of the thinker and dialectical communication of that understanding with others. Such natural modeling finds expression as formal philosophical thinking, where the purpose of formality is to ensure logical clarity to the modeled thought.

Modeling that is ontological starts with stating what IS OBJECTIVELY TRUE in the mind of the modeler. It proceeds from that truth as an axiom to the development of a system of logic that embodies that truth. A hiker in the woods sees what looks like a bear in the distance, thinks, “THAT’S A BEAR”, and reroutes accordingly. Models that are instinctive, traditional, or are taught and learned by rote tend to be ontological. Such modeling often acts AS IF it is based on a position of omnipotence in defining the system.

Some modeling is epistemological and starts with stating what APPEARS TO BE TRUE, including the means and methodology of consistent validation of that appearance. It proceeds with what would follow or be derived from knowledge based on that consistent appearance and separates the conclusions from experimental data into what is objectively true, appears to be true, appears not to be true, is objectively not true, or is unknown. The same hiker in the woods sees what appears to be a bear in the distance, thinks, “THAT LOOKS LIKE IT MIGHT BE A BEAR,” reviews the possible paths for a closer look, determines if possible whether or not it is a bear, and reroutes accordingly. Models that update with experiential data grounded in a valid methodology tend to be epistemological. Such modeling often acts AS IF based on a position of omniscience in viewing the system.

Then there is phenomenological modeling. It starts from the modeler’s perception that all investigation of phenomena is based on the SUBJECTIVE EXPERIENCE of the modeler. The responsibility for recognizing and evaluating a phenomenon falls on the skill and experience of the individual in determining whether an existential phenomenon is objectively true/false or apparently true/false; it also adds the perspective of determining if a phenomenon is essentially true/false with respect to being existentially true/false. Thus, an essential falsehood will always be existentially, objectively, false, but can still be apparently true. An essential truth can be existentially, objectively or apparently, true or false. Our hiker, having spent many years as a naturalist in the woods and never having seen a bear, sees movement in the distance, thinks, “FROM MY EXPERIENCE, THIS IS WORTH A CLOSER LOOK,” being subconsciously aware that, ‘I AM ON THE ISLAND OF MAUI, WITH NO PREDATORY ANIMALS’, understands there is potential opportunity and no known essential risk to further investigation, reviews the possible paths for a closer look, and reroutes accordingly. This is my approach; such modeling acts AS IF based on a position of omnipresence in pursuing ontic and epistemic truth.

This indicates the importance of understanding how an axiom is phenomenologically stated as to its essential versus existential truth and how that axiomatic validation is made. Certain truths of medieval Christianity upholding the essential value of human life, promulgated along with models of natural processes as ontic truth, such as the literal six-day creation of an earth-centered universe, were essentially destined to collide with the epistemological framework of the scientific method at the dawn of the age of European exploration.

Now the epistemological scientific project has run its course and into the fundamental question of determining if conscious intent is an epiphenomenal function of natural selection as an apparent truth or is an objective expression of an essential truth about the inherent intentional nature of life. Contending with well-meant fear & ignorance, it must decide whether intention is a form of self-delusion—and if so, why—or a subjective capacity to initiate and direct change in a fundamentally counter-entropic project toward increasing order in a condensed matter ecology as required by life, sustained at a well-ordered distance by the fusion furnace of the sun.

Philosophical thinking, from ontological, to epistemological, to phenomenological, as with the biological forms and processes of the ecosystem, has evolved to provide increasingly objective and subjective clarity for navigating the risks and opportunities of life in which we all find ourselves. The technological success of scientific clarity in political and economic innovation brings us to a new inflection point requiring the same degree of clarity in understanding human intentionality for an implementation of enlightened policy.

CONTINUITY OF INERTIA & INTENT — WHAT TIES PHYSICS & ECONOMIC MODELING TOGETHER

MODELING PHYSICAL PHENOMENA AS INERTIAL

Physics as physical phenomenology exists as a study to differentiate inertia and intent—objective forms from subjective interactive processes. Notably started by the work of Galileo Galilei, refined by Rene Descartes, and codified by Isaac Newton in his three Laws of Motion, this historical project was motivated by the work of Nicolaus Copernicus, born in 1473, with the heliocentric model of the solar system, an inflection point in the understanding of celestial mechanics. It successfully refuted the common mechanical knowledge asserted by Aristotle 2,000 years before, that an inertial body—a body having the property of mass—will come to rest unless it continues to be forced along its path of motion. It implied that the motion of the planets did not require the application of continuous intentional force by a god to stay in motion. It was the start of the scientific project of using experimental methodology to amend common knowledge of life with quantitatively verifiable and technologically implementable mechanical models of physical understanding by intentionally removing subjective intention from the experimental frame of reference and instilling the objective inertial forms and processes of the inertial frame with a functional purpose as an embodied axiomatic logic.

It is gravitationally induced friction that requires action be maintained continually or intermittently in the form of work on a body in contact with the surface or atmosphere of the earth to keep it in motion. Work is the product of the force exerted by an external source in moving the inertial body times the distance over which that force is applied; such work is a measure of the energy expended in doing that work. Such work since antiquity has been associated with the intentional capacity of some sentient, animated being—a domesticated animal, a human, free or slave, or a god, spiritual or corporeal. Galileo, with his quantitative experimental study of bodies rolling down an inclined plane, correctly deduced that if the elements of gravity, friction, or other unrecognized interaction were removed from a model, once set in motion said bodies would continue in a straight line of motion indefinitely without any additional impulse or work required. It followed from Galileo’s experimental work that an essential intent of the setup must be to close a model by removing any extraneous or unrecognized element of working intent from the mechanism being studied in order to leave only inertial interactions as understandable, reproducible elements of the model for technological application.

We know that Aristotle was qualitatively correct in his observations within that philosopher’s historical frame of reference—on the surface of the earth, work is required to keep heavy things in motion—but his lack of quantitative investigation hampered the quantitative conclusions that would have allowed technological development. Descartes’ refinement at the start of the scientific revolution was to self-consciously branch this study with ‘Cogito, ergo sum’ into the two disciplines of physics and metaphysics, where physics became an empirical study of the field of space and time through experiential observation of physical forms and their interactive processes and metaphysics became a mental imaging study of recognition and manipulation of thought forms and processes embodying purposeful axiomatic logic and innate ideas, human and divine.

To synopsize Descartes, because ‘I think, therefore I am’ able to observe and recognize innate mental purpose represented in material forms independent of myself, to navigate and to choose if, when, and in what manner to interact with those forms represented in the world. Building on Galileo’s work, Descartes produced a law of inertia stating the natural capacity of an inertial body to remain at rest or to stay in motion at a constant velocity in a straight path unless acted upon by an external force that causes the position or velocity to change. As stated, such external body force could be administered by interaction from another material, inertial body or dynamically by a living, animated being. It could also be mediated by a number of extended fields as a strain inducing stress force.

Newton’s three Laws of Motion were a culmination of this study in terrestrial and celestial mechanics in which:

  1. The first law restates the law of inertia of Galileo and Descartes. 
  2. The second law dimensionally quantizes a body force as the product of the mass of one of these bodies and its change in velocity as an acceleration, positive or negative, over the duration of that interaction.
  3. The third law states that the interactional force from one body will be equal in magnitude and opposite in direction to that of the second body.
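A standard symbolic rendering of the three laws, added here only for reference and not taken from the monographs, is:

```latex
% Standard symbolic statement of Newton's three laws of motion.
\[
  \textbf{(1)}\;\; \mathbf{F}_{\mathrm{net}} = 0 \;\Rightarrow\; \mathbf{v} = \text{constant},
  \qquad
  \textbf{(2)}\;\; \mathbf{F} = \frac{d\mathbf{p}}{dt} = m\,\mathbf{a},
  \qquad
  \textbf{(3)}\;\; \mathbf{F}_{12} = -\,\mathbf{F}_{21}.
\]
```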

When applied to celestial mechanics, the laws of motion produced Newton’s Gravitational Law. Further development used Descartes’ notion of vortices of various sizes filling all space in modeling the gravitational field as an extension of substantial flows, or, by others, as a medium of transmission from a concentrated body source according to Newtonian mechanics, with a field potential that exists even in the absence of any receiver of that force. In this latter case the field potential might be continuous as a wave bearing stress or quantized as a property of a particle, or both, gauged by the potential of the field. To my understanding, Newton’s concept of particulate matter was not well developed, and the composition of celestial bodies was taken as the work of God, not evolved from particle aggregation due to gravity as now modeled. The field concept carried over naturally to the development of electromagnetism with Faraday, Maxwell, and others in the 1800s, where the notion of an electrical charge as an anion or cation was modeled before that of the electron or proton as the corresponding fundamental electrical charge carrier, with the beginnings of quantum mechanics.

With the start of the 1900s, building on Bernhard Riemann’s curved manifold geometry and the notion of a geodesic as equivalent to a straight-line inertial path in flat spacetime, and following a Galilean-Cartesian metaphysic, Albert Einstein removed the notion that a gravitational force is operating on a body moving on a geodesic path. In general relativity, spacetime itself is a continuous field devoid of inherent inertial properties; instead, it is the presence of mass–energy that bends an inherently flexible spacetime to mathematically produce the curvature of observed geodesic paths of bodies through that spacetime. However, there is no mechanical explanation of how mass–energy particles couple with the spacetime field to generate the curvature recognized as a gravitational force, unless we are to reverse the time–causality picture to show the mass–energy of particles as wave curvature resulting from a bending of inherently flexible spacetime.

Contemporaneously with Einstein, many empirically minded individuals developed quantum mechanics using probabilistic mathematics in a wave–particle duality, eventually growing the standard model of inertial interactions into a quantum field theory, again without understanding the essential inertial connection to the spacetime of general relativity. Significantly, if matter understood as mass-energy in the fundamental units comprising all evolving living and supporting forms is essentially inertial in nature, how does one explain on a quantum level the composite animated, purposeful presence of living individuals connecting to space and interacting over time in navigating the risks and opportunities of the biosphere? The several-billion-year project of terrestrial biological evolution, of increasingly ordered life forms and processes arising amid the high entropy of condensed matter, is ostensibly driven by an inherently counter-entropic survival intent, having expended a tremendous amount of work in overcoming the inertia of stellar dust in the process.

The semi-millennial project of scientific inquiry in differentiating inertia from intent is all but complete. The spacetime field of human observation and interaction is now scientifically modeled as a repository of inertially discrete, interacting, charged particles. Both intergalactic spacetime and collider quanta are modeled as devoid of intent except in the understanding of the axiomatic properties acronymically programmed into the models of JWST and CERN. The only thing left to do in completing this project is to model the gravitational force/curvature that links these quanta together. The linked monographs in this writing clearly show that gravitational force/curvature are expressions of an emergent quantum mechanism as localized rotational oscillations producing rest mass in response to dilation/contraction stress and strain. The spacetime of general relativity is the sole source of those quantum forms as a continuum field of elastic inertial density having a wave bearing gauged lattice potential, and its dilation/contraction is the sole source of that quanta’s power and energy. 

Also needed is a better understanding of the way quanta are biomechanically bound together as coherent, self-replicating life forms to constitute an individual intent for navigating the phenomenology of risk and opportunity found in the natural world. That understanding of individual conscious intent cannot be found in an understanding of a cosmos whose primary principle is unmotivated quantum or general relativistic inertia. With a nod to Gödel’s two incompleteness theorems, physical phenomenology states in the first theorem (1) that proof, that is, reification, of the completeness of the quantitative analysis of inertial properties is not possible from within such an inertially modeled system regardless of inertial consistency, meaning the reality of the intentional capacity of human beings cannot be based on the inertial properties of the system in which it is studied; and in the second theorem, which essentially says the same thing as the first from a different perspective, (2) that an inertial system observed to be quantitatively, inertially consistent cannot be proven, that is, reified, as an inertial capacity of the system alone. Such reification can only be performed by the subjective aspect in a phenomenological modeling of a system, AS IF that aspect were the peripatetic focus of a soul on the inherently monistic material, mental, and spiritual capacities of nature, without prejudice to the phenomenal nature of such a soul, i.e. whether it is essential and relatively immortal or existential and inherently mortal.

The point is not that the applicability of quantitative logic and set theory is constrained to a study of inertial, material systems. In fact, it is applicable to any field of study, including aggregate structures involving intentional beings in physical and economic modeling. Quantitative modeling at some point comes down to counting things in a qualitative set and so to adding or taking them out of such a set; a little imagination gives us the option of grouping things in a set into subsets by dividing a larger set into smaller groups or by multiplying a small set to form a larger set. Simple arithmetic numerical manipulation becomes so facile it is easy to forget that an adjective numeral without a qualitative noun to define and operate on a set is meaningless. Basic arithmetic logic reminds us you can only add like things together without changing the nature of a set. Apples plus oranges remain neither just apples nor just oranges, but fruit. Multiplication is different; you can only multiply or divide different things. Linear meters one way times linear meters another way are square meters. This is meaningful if we don’t equate one linear meter with one square meter. But apples times apples are what, square apples? Hardly meaningful. 

From the efforts of quantum investigations of the last century, the fundamental stable rest mass particles of condensed matter, protons and electrons, and the release of the dynamic interactional potential of the coulomb force are readily understood to be the result of a decay process from a more fundamental form of rest mass, the unstable neutron. Yet without the inertial instability of the neutron to enable the nuclear aggregation of protons within a related electron orbital configuration, the qualitative diversity of elemental material responsible for molecular structure and required for biological forms and energy transformation and utilization processes would never have occurred. This understanding is not a result of seeing the emergence of order from random inertial interactions. Given the predominance of hydrogen in the universe as the fundamental unit from which all other elements are comprised, it is a result of recognizing the deterministic mechanism of simple harmonic motion in nature as the dilation-driven neutron, which provides structural stability to all forms of matter and thereby to composite molecules with a variety of qualitative properties as selectable components for a multitude of evolving, living intents.

Again, echoing Gödel with respect to incompleteness and consistency in an understanding of proof, where proof is a conscious recognition of a quantitative conformance of an observed inertial condition with its modeled standard quality, there are two threads of continuity in this evolution. First is a continuity of increasingly ordered inertial forms and processes in an evolution along a spectrum of ordered intent, starting with a sea of protium and deuterium plasma at one end and culminating, so far to date, in the DNA of a condensed matter ecosphere as a macrostate at the other, a process that is inherently incomplete. Second is the individualized continuity of instinctive survival intent of a microstate that is naturally self-modeling and consistently (1) recognized in living beings, consciously or subconsciously, as a separate self, (2) identified with the inertial composite complexity of each individual as ‘their’ body, and (3) utilized in counter-entropic, leveraged work, specially fitted within a system of symbiotic forms, each representing a risk or opportunity to and from others in the biosphere, a system that is consistent with but not ‘provable’ to another self that is self-recognizing, identifying, and intentionally utilitarian. It is these two axioms of continuity, applied independently to the entire biosphere and to each individual focal point of awareness among a multitude of such foci, that frame the dynamic switching of navigational aim of a living being while moving toward a goal along a variety of spectra between inertia and intent.

REVERSE ENGINEERING MODELED INTENTIONALITY

To use a phenomenological game metaphor for GR and QFT, this is like opening a game board of 8 x 8 squares painted with alternating dark and light squares to play a game of chess and finding it furnished with 2 x 15 playing parts from a backgammon set that has more than enough boardmen for a game of checkers. Sure it will work, but if you grew up on checkers, what do you do with the three extra men for each side? If you grew up on backgammon, where do you place the men on the board to set up the game and how do they move? What if you grew up playing chess with the whole board in play and were expecting to find a hierarchy of players?

Neither the gameboard of general relativity nor the playing parts of quantum field theory of themselves direct the play of the game. That would require playing parts with an ability to intentionally aim at a purposeful goal on a well understood field of play. As it is, play is the prerogative of neither the board nor the boardmen, but of the players, though they are constrained in the play by both. Ontic assumptions of boardman placement on the board in setting up the game and epistemic assumptions concerning movement and areas of play may work well. In the case of checkers, it might reveal that the idea is to get to the other side of the board and take all the opposition players captive in the process while ignoring the reserved half of the area in the lighter squares of the board, as would readily be the case if a phenomenological approach to the question was pursued. 

The board is obviously intended for a type of game play and the uniformity and size of the boardmen—neatly fitting into the square borders—are obviously designed to function on the board. The extra boardmen? Spares? In the case of chess, for someone with an understanding of natural, social hierarchy, phenomenological intuition might suggest the potential intent to evolve into a system of pawns and rooks and knights and bishops and queens and kings with full range of the board—except of course in the case of the theologically and scientifically trained bishops who are each constrained by their indulgences to non-interacting domains.

MODELING ECONOMIC PHENOMENA AS INTENTIONAL

Political economy as an economic phenomenology exists as a study that reverses the order from that of physics in differentiating human intent and inertia. In this case, intent is the socially self-aware process of acquiring, producing, distributing, and utilizing the resources needed to sustain and reproduce human life through an understanding of the life processes of the biosphere. Inertia is the background of natural and developed resources and traditional systems, for use chiefly as they are understood to be required and intended to provide for the support of those processes.

The theoretical advances in physics and their technological application, starting around the time of Copernicus and Galileo, facilitated, and were facilitated by, the impetus for commerce and trade. Starting with the Portuguese a few years after the birth of Copernicus, John II sent his explorers, Bartolomeu Dias and Vasco da Gama, south along the coast of Africa with rapacious purpose in search of navigational information for the quickest route to the valued commodities of Asia, for slaves as reproducible productive value—but with unimagined intelligent capacity to assert their own intentions in time—and for gold as money, at the time the most inertially dense measure of material value.

It is worth noting that prior to the publication of his work on a heliocentric solar system, Copernicus published what is reported to be the first work on the quantity theory of money. This theory states that the nominal price level in an economy is directly related to the quantity of money in circulation. Hence, there is no inherent unit value of money as an equivalence measure for setting the purchase price of a basket of commodity goods or services. Over the short term in a market economy, the cost of producing those commodities will reflect their current ‘real’ pricing, but over time the selling price will reflect the quantity of the money in supply. As the quantity of money increases, ceteris paribus, the price of a basket of goods and services increases and a unit of value relative to the purchase price of those commodities decreases. The inverse is true over time; as the quantity of money decreases, the prices of commodities decrease and the unit value of money increases. 
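One conventional formalization of this mechanism is the equation of exchange, MV = PQ, relating the money supply M, its velocity of circulation V, the price level P, and real output Q. The sketch below uses purely hypothetical figures to show the ceteris paribus effect described above: with V and Q held fixed, the price level tracks the money supply and the unit value of money moves inversely.

```python
# Quantity theory of money in the conventional equation-of-exchange form M*V = P*Q.
# The figures are purely hypothetical, chosen only to illustrate the ceteris
# paribus effect: with velocity and real output fixed, prices track the money supply.
def price_level(money_supply, velocity, real_output):
    """P = M*V / Q, the price level implied by the equation of exchange."""
    return money_supply * velocity / real_output

V, Q = 4.0, 1000.0            # velocity of circulation and real output, held fixed
for M in (100.0, 150.0, 200.0):
    P = price_level(M, V, Q)
    print(f"M = {M:6.1f}  ->  price level P = {P:.2f}, unit value of money 1/P = {1 / P:.4f}")
```

Doubling M doubles the price level and halves the purchasing power of a unit of money, the inverse relation stated in the paragraph above.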

What should be clear from this logic is that the quantity of money as financial capital is not an inherent measure or determinate of a community’s productive value. It is not metaphorically applicable as aggregates of energy for work in physical modeling, as with the number of BTUs in a barrel of oil or a watt-hour of electricity. Productive value or productivity is the human capacity to produce the goods and services required for living, qualitatively so if the energy, health, and happiness needs of the community are customarily or otherwise properly maintained. Assuming a stable money supply, productivity is a combined measure of the human and real capital of a community, both of which require a commensurate level of collectively recognized intent, coordination, skill, and need via beneficial technological investment. Financial capital exists only as a socially recognized monetary methodology for allocating real goods and human services in the sustainable production of intermediate and final consumable goods and services. 

Monetary methodology evolved principally for that purpose, starting as a method of agreeably accounting and allocating, in an agrarian non-market community, for day-to-day differences between individual & family production and unmet consumption needs. Verbal agreements to lend and borrow with the expectation of recompense in the near term added, in time, the use of tokens of satisfaction and eventually durable coinage in circulation for work performed and for trade outside the immediate community. In time, fractional reserve banking produced a predominance of bank note circulation over precious metal coins, so the money in circulation came to represent a preponderance of debt over credit in the accounts of banks and on the ledgers of private enterprises involved in such banking. In the aggregate, some monetary accounts represent real capital and stock for things already produced, of indeterminate physical depreciation and worth, and some represent accounts for things that have yet to be produced and may never be produced or be fungible, with an economic impact on the community that has yet to be imagined. Financial assets are only as valuable as the current transactional intent of the asset owner operating within the intended agency of a governing authority. And as we know, banks fail, and when they do the insurer of last resort is the community.

While the intent of most monetary authorities may be to provide stability to the value of a unit of currency, a stability in the flow of that currency is also a concern. In a traditional feudal economy, except for the elites, money was a minor factor in the quotidian production, distribution, and consumption patterns of a community, but with the rise of global market economics, money has become an existential concern for everyone, and traditional patterns have all but vanished, particularly in the cities. For those whose employment skills are in surplus in the labor market, compensation is reduced over time to a commodity level, and savings vanish accordingly. Without a viable safety net, disruptions in the supply of money or its valuation produce liquidity and supply crises as experienced in the current Covid pandemic. 

There is nothing new in this knowledge. The work of Adam Smith in 1776, Karl Marx in 1867, and John Maynard Keynes in 1936 points in different ways to the importance of understanding the free exercise of individual intent in the production, distribution, and consumption of goods and services, through acknowledging the control over the producers, distributors, and consumers by parties positioned to direct the flow and value of money. Of significance is that such crises are portrayed as a result of maleficent or indolent intent on the one hand or as a lack of resources or collective inertia in dealing with their causes on the other, as if sufficient money in the right hands—or the left—would solve the problem.

But money is not the problem. The WAY money is controlled in flow and value by the positioned parties is the problem. In the US, taxation on income is used as a cudgel for partisan bickering in the current implementation of monetary and fiscal policy, fighting over a pool of money that the right sees as private and the left sees as public in its origin, when it is a mix of debt and credit in both private and public accounts. Both then complain of inflation when the liquidity surfeit of supply chain disruptions is compounded by meeting the unemployment demands for costs of living from a number of public and private sources, including ARM HELOCs, all of which is further compounded by rising Fed rates designed to cool down full employment, when what is needed is investment in addressing supply chain issues and in technology affecting growth needs in real and human capital. This includes pursuing the technological potentials of modern monetary policy and a universal basic income as a citizenship dividend of human capital in balancing liquidity needs while maintaining collective interest and engagement in the overall productivity and quality of life.

CONCLUSION AS TO WHAT TIES PHYSICS & ECONOMIC MODELING TOGETHER

An attempt to form a fundamental theoretical connection between the disciplines of physics and political economics for my own satisfaction has been framed by my understanding of history—from trying to look back in time at the nature of various macrostate conditions. Making history, like spinning yarns and weaving fabric for a tapestry or costumes, is a forward-facing process, partially patterned by the colorful, textured weft of individual microstates, fully constrained by the durable warp of the macrostate through which the individuals weave their lives. The storied fabric may not be executed according to any intended pattern, but each weft is purposeful in selecting the fibers it deems most suitable from whatever resource is available at the time to clothe its role in the body politic with a serviceable covering that will last. Likewise, the warp that provides the ecological, political tension required to support the weft as it unwinds over time from the warp beam is purposely drawn, as judged by the invariant continuity in that tension. As for the microstates, presumably little introspection is spent by most individuals envisioning an engineered ‘model’ of operation of the loom. As long as the durability and interesting color and texture of the cloth serves the intended function of the fabric, whether the material of the fibers in the yarn is defined primarily by the color and texture of the weft or by the economic good and social durability of the warp to the fabric is generally of interest to only a few.

When the material is not so durable or comfortable, nor the fit so functional, history becomes retrospective. Then more individuals are inclined to reverse-engineer a model of the cloth and the loom to understand separately the material nature of the fibers, the method of production and distribution of the fabric, and the purposeful function for which the material was cut, fitted, and sewn. Physics, particularly quantum physics, concerns itself with a technological understanding of the material nature of the fibers. Economics concerns itself with harvesting the fibers and with the production and distribution of the fabric. Politics, or the body politic itself, which for millennia has been organized and managed by individuals in varying degrees of hierarchy in church and state positions, concerns itself with determining how the suit is cut, fitted, and sewn, but has only tangentially addressed the need to understand the engineered structure of the loom. This began to change in the 15th century, when these positions became increasingly filled by individuals with academic and commercial credentials, though, as in the case of church and state, not always by individuals with the stated or reputed qualifications. 

Geographically born, navigational modeling of commercial and academic experience has specialized these general disciplines further over the past six centuries, with increasing refinement in mapping the investigative scale and operational breadth of the resulting innovations. This specialization has brought a significant decrease in the hierarchical position of church and state, with their perceived traditional wisdom, in deciding how to clothe the body politic, and a shift of influence toward a revolution of scientific decision-making in the academic and commercial vanguard. With such technological specialization comes a lexicon for each developing subject, each with its own set of assumptions and axiomatic understandings and a collegial deference to the expertise of other fields. In time, the university becomes divided into the humanities and the sciences, each excelling in its own right, if not in a form readily recognizable to the other; or, to those outside the academy, to either.

The concept that ties together these studies of political economy and physics for me is a fundamental element commonly misunderstood in most current economic theories and missing from most interpretations of the standard model of particle interactions. It is the essential element of human need and purposeful agency, generally unrecognized but continuously present in social and natural interactions. 

It is this unrecognized essential continuity that provides a grounding for the recognized existential ties connecting all economic decision-makers in a political economy, connecting all rest mass particles in an observed universe of differentiated plasma and condensed matter for a variety of higher, well-ordered purposes even if the intent is yet unimagined, and connecting the social and the material disciplines of modeling the phenomenal world in its laws of invariance and conservation. It is the axiom of continuity as an individual and group, moving from a sleepful rest of inertia to the full awareness of a working intent. 
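
To give the phrase ‘laws of invariance and conservation’ a concrete anchor, consider Noether’s theorem, sketched here only as a minimal, illustrative Lagrangian statement of the continuity being invoked, not as part of the argument itself:

\[
\frac{\partial L}{\partial q} = 0 \;\Rightarrow\; \frac{d}{dt}\frac{\partial L}{\partial \dot q} = 0,
\qquad\qquad
\frac{\partial L}{\partial t} = 0 \;\Rightarrow\; \frac{d}{dt}\!\left(\dot q\,\frac{\partial L}{\partial \dot q} - L\right) = 0 .
\]

That is, invariance of a system under displacement in space conserves its momentum, and invariance under displacement in time conserves its energy; each conservation law is the same statement of continuity viewed from the other side.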

Inertia is materially formed as rest mass particles of simple harmonic motion in physics and socially incorporated as the basic necessary routines of life, routines which by themselves are somehow insufficient for a full appreciation of the promises of economic and cultural tradition. In turn, intent evolves materially and socially in specialized form and process, spiritually and instinctively aimed in a logically understandable mechanism of increasing systemic order amid apparent environmental entropy, toward an intuitively recognizable goal of flourishing as individuals in a collectively organized and directed, life-enhancing ecosystem. In the pursuit of opportunity in this ecosystem, logic must embrace wisdom over ignorance, particularly when the ignorance is intentional. In avoiding the risks perceived in the recognized inertial habits and unknown intimidations of our fellow human beings, intuition counsels love and understanding, without taking leave of caution, over dwelling and mongering in fear. 

There is always a need for clarity in navigating the changing risk and opportunity of life. The need is heightened by accelerating change, driven by technological insight, that now appears to call for further material and social innovation on an increasingly disruptive scale in search of better, cleaner sources of energy and other resources for a sustainable quality of life. Such is the motivating intent, the aim and goal of modeling. 

This piece started as an expression of my independent, quantitatively structured investigation into physical and economic phenomena through my experienced understanding of the fundamental human natures of inertial, formal, and intentional capacity in dealing with our world of risk and opportunity while pursuing the satisfaction of basic human needs. In addition to food and shelter, chief among these needs are the technologically generated requirements of supplying energy and environmental sustainability, the holy grail of which is to access the abundant energy of the sun cheaply, without the deleterious effects of having that process occur at arm’s length; this means fusion at room temperature. My enlightened understanding of physical processes indicates that it is worth pursuing.

I may be wrong about the prospects for palladium-catalyzed cold fusion as an economically scalable energy technology, but I also know that no one to date has approached the subject with this integrated understanding of what constitutes quantum interactions, and until others with the necessary technical skill are willing to help with the theoretical vetting based on that understanding, I believe we will be hard pressed to garner the necessary experimental interest.

I trust that you will find this discussion appropriate and will know how best to proceed with its vetting and disclosure. I thank you for your time and welcome any well-defined, innovative work addressing our energy needs. 

Sincerely,

Martin Gibson

PS to all: I apologize for any problems with the website navigation due to ongoing upgrades, including those arising from attempts to monetize and then in part to demonetize the site. The UniServEnt site is a self-supported effort. I have attempted to make the material readily accessible online while asking a small fee for print access to all but the first monograph, ‘Unification’, a free print of which is offered to all at https://uniservent.org/pp02-printing/. In addition to the material referenced here, there are links on the site to other work. If you have questions, please let me know at the contact information above.