General Thoughts on Epistemology IV: The Difficulty of “Faith”

A man praying at a Japanese Shintō shrine.

First, I don’t want to step on anybody’s beliefs, but, well, here we go. -Brian Regan

Given an understanding of the world based on the scientific method—what I call the “wager” process—there are some tricky things that happen when we try to reconcile the concept of “faith.” In what follows, I will explain why the term is unintelligible unless it is used to describe a situation in which one believes something while ignoring evidence to the contrary.

To reiterate what I have explained in previous entries, the “wager” theory claims that coming to a belief always involves a kind of wager. When a wager is made in gambling, the bettor adopts the risk that the wager will fail. The wager is then confirmed or denied when the cards (or what have you) are revealed. In the same way, holding a belief carries the risk that it will be disproven, and it is confirmed or denied by physical or logical tests.

The Oxford Dictionary defines “faith” as “complete trust or confidence in someone or something.” The Stanford Encyclopedia of Philosophy breaks it into three broad categories: affective, cognitive, and volitional. The affective component refers to the psychological state or attitude often denoted in the phrase “losing one’s faith.” The cognitive component gets at the epistemic nature of the concept, suggesting that it is an actual cognitive faculty by which we can come to “know” things. It is perhaps most accurately categorized as a kind of belief. The volitional component adds the notion that one can choose whether or not to have “faith”—to accept this special kind of belief. Most of what I will address here concerns the linguistic aspects of the cognitive component.

There is a basic definition given in the New Testament’s Epistle to the Hebrews that has a number of translations. The English Standard Version is as follows: “Now faith is the assurance of things hoped for, the conviction of things not seen.” (Hebrews 11:1) Across the translations, two recurring elements of “faith” appear to be essential components: “hope” and a sense of certainty that goes beyond mere belief. First, combining “hope” with “certainty” does not appear to give us any semantic tool that we don’t already have or refer to some mental state that we cannot pinpoint otherwise. Moreover, the two concepts appear to be at odds. Saying that I hope x will occur usually requires that I am uncertain that x will occur. It is often a conscious hedging away from certainty. As aspects of “faith,” they also run into issues on their own.

“Hope” is primarily a stronger form of “desire.” When we hope for something to be the case, we desire for it to happen. If “faith” is another type of desire—a stronger version of “hope” perhaps—then it wouldn’t serve any epistemic purpose whatsoever. My desire for something to happen or be true has nothing at all to do with attempting to discover the actual outcome.

On the other hand, “certainty” does imply an epistemic relation. Usage seems to vacillate between absolute conviction and merely strong conviction. “Faith” does appear to have an absolute quality about it, since having only a strong conviction doesn’t quite cut it as a substitute for having faith in most circles. If “faith” is more of an absolute conviction, however, then it raises a series of questions: By what standard is it absolute? Is it ever logically permissible to have such a conviction? And if/when it is permissible, is “faith” a reasonable vehicle to get there?

I haven’t yet seen an acceptable set of answers that suggests “faith” is a helpful and coherent tool in epistemology. Even if one can muster up an adequate description and justification of “absolute” conviction, we are still left with showing how “faith” works and making the case that it is something reasonable to use.

The problem is that there really is no good explanation for how it works. “Faith” is considered quite mysterious, intentionally so. And because reasonable use is contingent upon there being at least some tenable understanding of its function, the chances that faith and reason coincide are slim at best.

There are some attempts to do this, however. One way would be to identify faith as a component of every belief, one that somehow accounts for the gap between our natural state of ignorance and the actions we take. According to this model, every action we take requires an extra push to get us there from our beliefs. But “faith” doesn’t appear to provide any explanatory power here. And once we add the parts we need to describe the basic epistemic process—placing proposition-based wagers and using actions and “intuition pumps” as tests to confirm or deny them—it becomes unnecessary to add any more components.

We could attempt to identify the concept as exactly the same thing as making a wager, but for “faith” proponents, this would probably be undesirable, for the element of certainty would be lost in this case. A bet (much like a scientific theory) holds strongly to the assumption that its failure is within the realm of possibility. Unless I am mistaken, it is clearly the opposite for matters of faith.
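To make the contrast concrete, here is a minimal Python sketch of my own (the scenario and the numbers are invented for illustration) using Bayes’ rule: a wager held with anything short of certainty moves when the evidence runs against it, while a conviction pegged at absolute certainty is mathematically immune to any contrary observation.

```python
def update(prior, p_obs_if_true, p_obs_if_false):
    """Bayes' rule: revise confidence in a proposition after one observation."""
    numerator = prior * p_obs_if_true
    return numerator / (numerator + (1 - prior) * p_obs_if_false)

# A strong wager: conviction, but failure stays within the realm of possibility.
wager = 0.95
# A "faith-like" stance: absolute conviction.
faith = 1.0

# Feed both five observations far more likely if the proposition is false.
for _ in range(5):
    wager = update(wager, p_obs_if_true=0.05, p_obs_if_false=0.95)
    faith = update(faith, p_obs_if_true=0.05, p_obs_if_false=0.95)

print(round(wager, 7))  # ~0.0000077 -- the bet is effectively conceded
print(faith)            # 1.0 -- no amount of contrary evidence can move it
```

On this toy model, “faith” as absolute conviction is precisely the prior that no test can touch.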

Another attempt would be to note that “faith” is more applicable when deciding whether or not to trust another human being, the thought being that we are so complex and unpredictable that the only viable option is to take some “leap.” If so, then it would imply that faith and rationality are quite separate, since the assumption is that we cannot use reason to predict what will happen.

However, the “wager” theory still acts as an intelligible substitute for this even if we cannot reasonably “predict” what will happen. One can effectively bet on what someone will do and hope that they do it. Nothing more is needed. To use “faith” as a replacement for these two operations still seems to fly in the face of the certainty requirement. To tack it on as an extra piece of the puzzle seems either unintelligible or contradictory (being certain and uncertain simultaneously).

There is still one definition of “faith” that fits neatly into this narrative, and that is “to believe a proposition despite evidence to the contrary.” In this case, “faith” would not be something that is part of the proper epistemic process. It would describe a decisively irrational function by which one may believe certain propositions to be true while ignoring conflicting test results for reasons external to the epistemic process. Although employing “faith” could not be rational on this account, it may still have some positive psychological effects for various people. And it may even bring more happiness to those lives than “proper epistemology” would. Even so, I would not subscribe to it. But that’s just me.

General Thoughts on Epistemology III: We Are All Cosmic Gamblers

The French Gambling Aristocracy (Photo credit: Wikipedia)

As a preface to this post, it is crucial for me to communicate that I am setting aside the discussion about “knowledge” as my primary pursuit. The reason, as I explained in part II, is that the use of the word has become muddled. My position is that philosophical skepticism about “knowledge” of the external (physical) world attempts to solve a presently unsolvable problem, and therefore, “knowledge” may not be the best term to use when attempting to describe how we gather and store data in our brains. Instead, I will focus on the interaction between human beings and the external world on its own terms. Hopefully what follows is indicative of this.

A while ago I wrote about Pascal’s Wager, contending that he removes the essential components from what it takes to justify a belief. His argument was that a cost-benefit analysis of belief in God shows that it is more beneficial to choose to believe. I reject his presumption that a person can rationally choose to believe something exists while disregarding all evidence for or against its existence. However, his idea that it is useful to equate believing and wagering is worthy of consideration. My suggestion is that the process of making a wager is the best model to describe what is going on when we decide that something is the case. When we claim to have a belief, faith, trust, or even knowledge, what we ultimately have is a form of bet. This does not answer the question of how we ought to come to a belief, but my inclination is to say that this theory will help explain what these kinds of claims are, fundamentally.

There are a few “armchairish” observations sometimes taken for granted that hint at my suggestion being a good one: Assuming that humans all function the same in the following ways, 1) we use observations and cognitive processes to form beliefs; 2) we take action based on our beliefs; 3) sometimes mistakes, ambiguity, and/or external factors outside of our control cause us to be incorrect in/about our beliefs; and 4) replacing the word “belief” with “wager” in (1)-(3) results in a fairly coherent progression of thought.

Put simply, when we decide that P is true given the data we have, we are simultaneously placing a wager on P. In other words, we not only think it is true now, but we are betting it will continue to be true as time goes on. Then, as we go about life taking action according to the wager, the proposition is tested by observations x, y, z, etc. If these observations appear to connect logically and/or causally with P (if everything appears consistent), then we confirm it to some degree, and the wager is not changed. The risk adopted by the bet is that the proposition may be proven incorrect, and there may be some undesirable consequence as a result of the action(s) based upon it.
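For illustration only, here is a toy sketch of that loop in Python (the proposition and the names are my own invention, not a formal theory): a wager pairs a proposition with the standing risk of refutation, and each observation either confirms it to some degree or reveals that the bet has failed.

```python
from dataclasses import dataclass

@dataclass
class Wager:
    """A proposition held true, carrying the risk of being proven wrong."""
    proposition: str
    confirmations: int = 0
    refuted: bool = False

    def test(self, observation_consistent_with_p: bool) -> None:
        # Observations that connect logically/causally with P confirm the
        # wager to some degree; an inconsistent one realizes the risk.
        if self.refuted:
            return  # the bet is already lost; a new wager is needed
        if observation_consistent_with_p:
            self.confirmations += 1
        else:
            self.refuted = True

# Betting on P and acting on it while observations x, y, z come in:
p = Wager("the ice on the lake is thick enough to walk on")
for consistent in (True, True, False):
    p.test(consistent)

print(p)  # two confirmations, then the wager fails
```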

This process is related to Jonathan Haidt’s psychological theory involving processes of the mind. He came to the conclusion that people have an initial reaction to some stimulus that consists of a snap judgment. Then, what follows is a series of rational thoughts that he says “supports” the initial judgment. I agree with this general theory, but if we also add that it is possible for the rational thought to deny the initial judgment, the theory has an even wider application.

The nature of the game is that whenever we consider some question, we have an open field of possible truths that is narrowed whenever we rule things out based upon testing, observation, and logic. This thought isn’t new. It is reminiscent of the scientific method, which began to take shape during the Renaissance and early colonial era. Most modern iterations of the scientific method assume that any theory is open and liable to change, not only because of more efficient and useful language, but also because the pool of data changes over time. A “wager,” I suppose, is most analogous to a “hypothesis.” But hypotheses are more consciously contrived, and my goal for the “wager” is to be broad enough to refer to unconscious behavior, in addition to all manner of predictions, theories, and conjectures.
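The narrowing itself can be pictured as plain elimination. A throwaway Python sketch, with an invented question and invented observations:

```python
# An open field of possible truths about one question:
# how many moons does this planet have?
candidates = {0, 1, 2, 3, 4}

# Each test -- an observation plus logic -- rules some candidates out.
tests = [
    lambda n: n >= 1,  # we spotted at least one moon
    lambda n: n < 3,   # a long survey turned up no third moon
]

for consistent_with in tests:
    candidates = {n for n in candidates if consistent_with(n)}

print(candidates)  # {1, 2}: narrowed, but still open and liable to change
```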

Note that merely “placing a bet” says nothing of my conviction in the outcome, the quality or quantity of information that is taken into account, the level of consciousness with which I make the bet, or the time-frame of my test. I may be unconsciously believing something ridiculous, or I could be making a detailed evaluation of a claim’s plausibility. Both would involve a wager of some form—a decision to hold some proposition to be true or at least to act as if it were true.

One might also point out that there are important distinctions between different kinds of wagers, namely temporal ones. There are wagers about what will happen (predictions), about what has already occurred (beliefs), and about that which is ongoing. Suppose my childhood friend and I see a squirrel darting through the street, and he says, “I bet that squirrel will get hit by a car!” Our inclination would be to label this as a prediction, since the event has not yet come to pass. Continuing the story, suppose I were to respond, “I do not believe that squirrel will be hit by a car.” It seems odd at first that I would use the word “believe” for what should be another prediction.

The reason someone might make such a mistake, I think, hints at a deeper underlying theory like the one I have proposed. If both beliefs and predictions are forms of a wager, then there is an inherent predictive aspect to both terms. “Belief” is easily mistaken for a prediction precisely because it carries this predictive quality. However, it still does not have the same set of qualities that “prediction” has. There is still room for distinction.

And the distinction is this: A prediction is a consideration of what has not yet come to pass; a belief considers what is ongoing or has passed. In my theory, they both still fall into the “wager” category. I can still comfortably replace both “I believe” and “I predict” with “I bet.”

The immediate worry that should arise is that it seems like I would be committed to saying that predictions, wagers, and beliefs are all the same thing. In response, I would say my claim is that wagers are broad enough to encompass all of these kinds of terms, and there is a predictive element to a wager. However, that element refers to a “first-order” qualification that beliefs, predictions, propositions, and the like have in common: an expectation of continued confirmation. And confirmation can only occur at points in time after a wager is made, regardless of its kind. Note that because people are continuously acting or not acting, each action must carry with it an implicit set of bets. Since this is always the case, it doesn’t seem that “beliefs” can be abstracted from “wagers.”

Secondary qualifications would make up the distinctions between the different kinds of wagers. For instance, expanding upon the distinction I made earlier: the content of a belief must concern an ongoing or past phenomenon; the content of a prediction concerns a future phenomenon.
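To put the two levels of qualification in one place, here is a purely illustrative Python sketch: the first-order expectation of continued confirmation lives in the base “wager” type, while the secondary, temporal qualifications are what set the kinds apart.

```python
from dataclasses import dataclass

@dataclass
class Wager:
    """First-order qualification shared by beliefs, predictions, and the like:
    an expectation of continued confirmation after the wager is placed."""
    proposition: str

    def expects_continued_confirmation(self) -> bool:
        return True  # holds for every wager, whatever its kind

@dataclass
class Belief(Wager):
    """Secondary qualification: content concerns an ongoing or past phenomenon."""
    tense: str = "past or ongoing"

@dataclass
class Prediction(Wager):
    """Secondary qualification: content concerns what has not yet come to pass."""
    tense: str = "future"

# Both substitute comfortably into "I bet that ...":
for w in (Belief("the squirrel was not hit by a car"),
          Prediction("the squirrel will get hit by a car")):
    assert w.expects_continued_confirmation()
    print(f"I bet that {w.proposition} ({w.tense})")
```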

To conclude, when we form beliefs and predictions, we are making bets on what we think is accurate. The discussion about whether we can verify knowledge globally is a bunch of bunk. It is pretty clear that sometimes we make mistakes, but our goal is to seek truth regardless. And ultimately, we all play the game. We are all cosmic gamblers.

General Thoughts on Epistemology II: Global Philosophical Skepticism

nihilism (Photo credit: stungeye)

The various forms of philosophical (p.) skepticism in the study of epistemology seek to question, deny, or limit the categories of what is possible to know. While it is useful to point out that we don’t have the data to confirm or deny some claims, many skeptics go so far that they would be forced to live a lifestyle that is inconsistent with their beliefs. I argue that questioning the “existence” of knowledge as a whole probably serves no purpose, since either answer will have no bearing on the everyday criteria for making decisions and taking action.

General p. skepticism questions the prospect of “knowledge.” There are several schools of thought that can be found on the family tree of p. skepticism. The extreme form is called epistemological nihilism, in which all “knowledge” is denied. There is also epistemological solipsism, a theory stating that “knowledge” about the external world is impossible, but “knowledge” about the agent’s own mind is possible. ‘Global’ p. skeptics claim that they hold no “absolute knowledge,” while ‘local’ p. skeptics question specific types of knowledge. My target is the multifarious forms of global (g.) p. skepticism, including nihilism and solipsism, and there are three main issues with these positions that I will touch upon.

First, this debate has been prolonged by a failure to give proper attention to semantics. The quotation marks around the word “knowledge” in my explanation are present because I have difficulty pinpointing exactly what g.p. skeptics are referring to when they use the term. It wouldn’t be just to completely fault the deceased thinkers who would have benefited from access to modern advances in neuroscience. Nevertheless, “knowledge” remains ambiguous even without modern scientific perspectives.

It could be that “knowledge” really is “Justified True Belief.” Or it could be some abstract thing we achieve when we fully “understand” such and such. Perhaps it could be as simple as a subconscious observation that we sense in our environment. Ever walk into a room with, say, blue wallpaper, but hardly pay it any mind? How conscious does one have to be in order to know that the wallpaper is blue?

A g.p. skeptic might respond, “Exactly. The point is that we have no clear idea of what knowledge is.”

But they forget that human beings are the authors of language. We get to decide exactly to what “knowledge” refers. Any label without a referent is a floating abstraction. The question should not be, “What is knowledge?”, but rather, “How should ‘knowledge’ be defined?” The question should not be “Does knowledge exist?”, but rather, “Is ‘knowledge’ a useful term, given its definition?”

The second issue is that some skeptics must act inconsistently with their beliefs in order to interact with their environment. Nihilists believe that there can be no “knowledge” about the external world, and that it therefore cannot be verified. Yet, if they want to do anything, they must act upon information they receive through their senses. If they believe nothing they receive should be characterized as “knowledge,” then fine; the discussion becomes semantic. Otherwise, unless the nihilist sits quietly until death (or is deaf, dumb, blind, etc.), the beliefs he/she holds will be violated.

As justification, I propose this thought experiment: Try to think of just one instance in which a normal, conscious human being can act physically without being aware of something in the physical realm.

Finally, there is an almost comical response to g.p. skeptics from a philosopher named G.E. Moore. He essentially says (not an actual quote), “Look. Here is my hand. I am perceiving a human hand right now. That is a fact that I ‘know.’ If you do not think I know it, then let’s say that I am acquiring sense-data. One sense-datum that I perceive is the surface of my hand.”

Although he employs an intriguing way of engaging g.p. skepticism, if we take the analysis one step further, we arrive at an important point. It is that perception of the so-called “external world” is the default state of human beings. There are all these lights and sounds and feels going on that we report on and verify through language. “Stuff” happens with such frequency that there is not really any reason to deny knowledge of the external world, especially on the basis that we could be wrong about individual bits of knowledge. (Proving a claim false presupposes knowing the truth that it is false.) The burden of proof is on the denier of the clear and obvious evidence that literally surrounds us at all times.

To reiterate, challenging “knowledge” as a useful term is fair game, but questioning the concept while presupposing an ambiguous meaning is problematic. Going too far with g.p. skepticism results in an inconsistent lifestyle: regularly acting in accordance with one’s sense-data while globally denying that very same data is an incoherent position. Therefore, the only option by default is to accept the external world as a given. Even settling for an agnostic position, abstaining from belief in the external world, may conflict somewhat with taking action. As I like to say, half-seriously, in all its Objectivist glory, “Reality doesn’t care about your nuanced opinion.”

General Thoughts on Epistemology I: A (Mostly) Platitudinous Introduction

The School of Athens

Everything I would like to discuss in the realm of epistemology may require a little background, especially for those who are new to this area of study. In addition, it was drilled into my brain that I ought to prove that I know a little bit about something before I enter the discussion, so it is perfectly permissible for the reader to see this as part of my self-consciousness made real. What follows is some mildly chronological context, sprinkled with some arguments and scare-quotes, all communicated a bit lazily.

Epistemology is supposed to be the study of knowledge and how we know. It is pretty simple as far as the general definition goes, but one will soon realize upon closer examination that writers have been complicating this area of study for years. The obvious pattern to follow through history is the longstanding debate between rationalists and empiricists, and less obviously, the debate between rationalists who tend to overcomplicate and empiricists who come dangerously close to implying (if not logically implying) that we cannot know anything at all. Here is a machine-gun history of epistemology:

Plato thinks that everything in the living world is a reflection of underlying reality, called the forms, with which we hang when our souls enter the land of the dead; he also “invents” the idea of “Justified True Belief” as the definition of knowledge.

Aristotle, with a slightly more scientific approach, develops his distinction between phainomena (observations as they appear) and endoxa (reasoned opinions about phainomena).

Fast forward about two thousand years, and we get René Descartes, who coins the phrase, “I think. [Therefore,] I am,” as the first thing we can know with certainty purely through reason. He also attempts to defeat any doubt about our experiences (that we may be dreaming or being fooled by an evil genius) by “proving” that God exists.

John Locke, a British Empiricist during the Enlightenment period, argues that experience is the “basis” for knowledge in opposition to Descartes. His Causal Theory of Perceptions puts focus on the interaction between the world, our perceptual organs, and our minds.

David Hume, another famous British empiricist, expands upon Locke and divides knowledge into two categories: relations of ideas (math/logic) and matters of fact (observation). His skepticism about the external world leads him to conclude that there is no rational justification for the existence of anything, that there is no such thing as “perfect knowledge,” and that metaphysics is stupid.

Hume finds an opponent in Immanuel Kant, who contends that metaphysics is not stupid and that it is possible to “know” things through reason alone. He frames this discussion with his introduction of the analytic-synthetic distinction (which I may explain at a later date). His overarching theory is called transcendental idealism, which postulates that there is a barrier between the mind and the external world created by our perceptions, marking yet another conceptual distinction between phenomenon and noumenon.

The spiritual successor to British empiricism was phenomenalism, the view that the existence of physical objects in the external world is not justifiable. According to a phenomenalist, when we speak about physical things, we are talking about mere sense-data.

In the early twentieth century, logical positivism went in a slightly different direction. Logical positivists pushed the verification principle, which states that a proposition is only “meaningful” if it is verifiable—if it can be proven true or false. A.J. Ayer famously defends this theory in his 1936 work, Language, Truth, and Logic.

In 1963, Edmund Gettier introduced a problem (previously raised by Bertrand Russell in 1912) with the “Justified True Belief” model of knowledge, explaining how a belief could be both justified and true but not be something that we would consider knowledge. The problem occurs when a person’s justified belief is coincidentally true, not by the original justification, but for different reasons. (I find fault with Gettier’s original problem, but there are many other examples that may or may not succeed.)

We also get some critical, agnostic thinkers, like Barry Stroud, who believe that none of the positive theories from these philosophers or movements are correct about epistemology, and we still haven’t solved the problem that Descartes elucidates—that we have no way of knowing if our experiences are real or just a dream.

Ayn Rand’s Objectivism similarly rejects the views of most mainstream philosophers like Hume and Kant, but instead of stopping at criticism, it presents a fairly straightforward way of defining knowledge and its processes. On the basis that reality exists self-evidently, Rand claims that knowledge is a conceptual organization of our perceptions of that reality. She defends the simple assumptions that we tend to accept about knowledge, but goes into detail in providing her framework for how it works. Unfortunately, she has trouble building a bridge between her philosophy and the established bodies of work, and there are a lot of holes that remain unfilled.

The most contemporary philosophers have the fortune of instant access to a wealth of present-day knowledge obtained via rapid advances in science and technology. Modern philosophers, like Daniel Dennett, now examine the nature of the human mind from a more scientific perspective in order to learn more about “knowledge” (or better yet, how we interact with the external world), as opposed to knowledge in and of itself. Now epistemological theories look like this.

So begins my foray into a somewhat outdated field in order to explain some things that the reader may already realize, but may not often think about.

My first order of business is to communicate this: the term “perfect knowledge” is unintelligible.

My second order of business will be to write the next post.

Objectivist Government: A Critique

Cowboy (Photo credit: Kevin Zollman)

Ayn Rand points to a few reasons why even a faultlessly moral society will still need a government structure: Honest disagreements between moral individuals will need a third-party arbiter and objective rules; pacifist societies will be at the mercy of the first bully to cross their path; and so forth. The underlying argument is that only a government structure could fulfill these needs. But even more crucial is that the Objectivist goal is to develop a rule of necessity that must be true in all cases and applicable to all human societies. There must be a principle based upon human nature and morality that tells us that the right to self-defense should always be monopolized. It is my understanding that A) Objectivism provides no such principle and B) defining “government” as a monopoly on force is a troubling position.

The Objectivist reasoning at issue in challenge (A) runs as follows: 1) A desirable society can only exist by recognizing civil rights. 2) Rights can only be violated through the use of physical force. 3) Therefore, eliminating the use of physical force between individuals is required in order to have a desirable civil society (one that must deal in reason). 4) In conclusion, “The use of physical force—even its retaliatory use—cannot be left at the discretion of individual citizens,” and there must be a government to monopolize it.

I happen to believe that, as it stands, the conclusion is a solid practical position for today’s society. For instance, I do not believe the current United States would benefit in the short-term from a sudden shift to anarchy. However, the problem here is that (4) does not follow from the premises. It makes sense that if we want a society that respects civil rights, we should try to rid ourselves of the use of force in every-day exchanges. But it does not follow necessarily that a government structure will always be the best means to accomplish this.

I do believe that a group of pacifists would likely be vulnerable to a bully, and that a post-apocalyptic anarchy or something similar would likely devolve into a brutal free-for-all. But Rand apparently believes that people need the threat of retaliatory force to be civil in more than just these cases. On her view, it is never enough to count on a society that will rationally and freely police itself.

When looking at the logical ramifications, there are some worrying conflicts here. If individuals—in general—are morally wise enough to put society under the “objective control” of a collectivized police force and justice system, then why wouldn’t they be wise enough to act justly without such institutions? On the other hand, if individuals generally lack the self-control to be without a police force and justice system, what will make the individuals in charge any better than the average immoral citizen?

The situation appears to be a paradox of sorts, but it is not necessarily true that a faultlessly moral society will have no use for governmental institutions. Nor is it necessarily true that a society generally lacking morality will always have institutions that are ineffective or morally questionable to the same degree. However, I find it difficult to predict otherwise in either case after taking history into account.

Let us examine this further. Take any area in which there is very little crime. You will notice a high correlation between low crime and institutional competence. One might expect it to be that the effective institutions cause the low level of crime. I would probably have to agree, but I would add that there is also a deeper causal relation that travels in the other direction. It is that the low level of crime, i.e., the relatively moral society, causes there to be effective institutions. If the two are truly interdependent, as I am led to believe, then it is no wonder that they are correlated.

Yet, I am also led to believe that one is more primary than the other. In a city full of thieves and killers, a strong institution can only do so much before it throws everyone in jail and empties the city of people to police (though one could hardly call what I describe a society in the first place). A weak institution could do little but become corrupt to some degree in order to survive. An effective institution requires a generally moral people. I think Mises and Bastiat would agree: individuals are ontologically prior to the groups they create. It should follow that the morality of a group is dependent upon the morality of the individuals that fill it.

Given that this is true, an effective solution may be, as Rand proposes, to codify a set of objective laws with a government to enforce them. Some anarchists would argue that the solution is to remove all central control because power is what corrupts. History shows us that both sides have a point. Nearly all governments grow into monstrosities over time, even those born with something close to objective laws. But true anarchy can lead to utter chaos, suggesting that there should be some degree of order.

Neither proposal fixes the problem that individuals may choose not to follow the objective laws. What are governments after all? They are still just people. A stable society with effective institutions depends in the long-term upon the rational and moral individuals that comprise it. In turn, the development of these individuals depends upon the proper ideas and memes that will foster moral and rational behavior. If this is true, the correct type of government is simply not the most crucial requirement to create the kind of idealistic society we all want to see (though it would affect it for the better). Not only would a rational standardization of the Law be necessary for long-term stability, but also our systems of morality and logic and so forth.

At this point, people really would be coming to all the same conclusions about the most important things. If most everyone knew what the Law was, agreed to it, and were rational enough to follow it, there would hardly be any need to enforce it by means of a police force or justice system. There would be no question as to when a criminal was breaking the Law, and any given citizen could carry out the sentence.

This may sound incredibly idealistic. It absolutely is. But I see no problem traveling here if Rand’s idea is to have a specific type of government limited to a justice system, police force, and military, funded entirely without taxes. There are many examples of unique semi-anarchic systems: small American towns and settlements in the Old West, medieval Iceland, various Native American settlements, ancient Mediterranean “colonies,” and so on.

This brings us to the definition of “government” and challenge (B). If Rand’s definition fits all of these examples, then she may be correct, because there will be no cases of successful societies without government. But I don’t believe her definition fits any of these examples, nor any of the examples today. Her definition is as follows: “A government is an institution that holds the exclusive power to enforce certain rules of social conduct in a given geographical area,” and, “A government is the means of placing the retaliatory use of physical force under objective control—i.e., under objectively defined laws.”

We might typically think that government has a monopoly on force. But governments almost never have a monopoly on force, or even on the enforcement of certain rules; what they have is usually only an attempt at a monopoly. A true monopoly has exclusive control or possession of something. The potential for the application of force exists in every individual, and it is applied every day by individuals both within and outside of various groups. Thus, the creation of governmental institutions is no more than the formation of an organized gang, one that we hope to hold to certain benevolent standards.

When it comes to government, there is little difference in kind between the giant institutions of today’s globalist West and a posse of citizens gathered in the Old American West to fight bandits. They are both groups of people gathered to exert (what should be) retaliatory force. The difference is that large institutions more often have a written set of rules. Even so, the United States Declaration of Independence holds that the people have a right to overthrow an unjust government, because it recognizes that all individuals are capable of being arbiters of force. It should not be the burden of the state to tell everyone what is just; this is ultimately the burden of the individuals that allow the state to exist.

The part of Rand’s work that is very important is the need in society for an objective set of laws. Where she goes astray, I think, is when she suggests that the only way for this to happen is for force to be monopolized. First, I don’t think force can ever be truly monopolized (given current technology). Exceedingly decentralized societies have existed and thrived in the past. If these kinds of societies include systems that would meet Rand’s criteria for being called “government,” then the problem may be a failure to address specific structural details in “The Nature of Government” or a disagreement on the definition of “monopoly.” Second, the effectiveness and competence of any given governmental system is somewhat dependent upon the society that it governs. If one could suddenly conjure a set of objective rules for government out of thin air and garner the will to implement it, I would be elated. But this is barely less idealistic than my desire to see a fully rational and moral society that has no need for government. There doesn’t seem to be enough justification to disregard one end as too idealistic without disregarding the other.

Nevertheless, there is a need for a government in many different societies, but the type of government should fit the circumstances of the society. Many individuals rely on the United States government today, and were it to cease abruptly, it would probably be catastrophic. But this isn’t to say that this reliance is a good thing in the first place. More on this in the future…