General Thoughts on Epistemology IV: The Difficulty of “Faith”

A man praying at a Japanese Shintō shrine.

First, I don’t want to step on anybody’s beliefs, but, well, here we go. -Brian Regan

Given an understanding of the world based on the scientific method—what I call the “wager” process—some tricky things happen when we try to reconcile that understanding with the concept of “faith.” In what follows, I will explain why the term is unintelligible unless it is used to describe a situation in which one believes something while ignoring evidence to the contrary.

To reiterate what I have explained in previous entries, the “wager” theory claims that coming to a belief always involves a kind of wager. When a wager is made in gambling, the bettor accepts the risk that the wager will fail. The wager is then confirmed or denied when the cards (or what have you) are revealed. In the same way, holding a belief carries the risk that it will be disproven, and it is confirmed or denied by physical or logical tests.

The Oxford Dictionary defines “faith” as “complete trust or confidence in someone or something.” The Stanford Encyclopedia of Philosophy breaks it into three broad categories: affective, cognitive, and volitional. The affective component refers to the psychological state or attitude often denoted in the phrase “losing one’s faith.” The cognitive component gets at the epistemic nature of the concept, suggesting that it is an actual cognitive faculty by which we can come to “know” things. It is perhaps most accurately categorized as a kind of belief. The volitional component adds the notion that one can choose whether or not to have “faith”—to accept this special kind of belief. Most of what I will address here concerns the linguistic aspects of the cognitive component.

There is a basic definition given in the New Testament’s Epistle to the Hebrews that has a number of translations. The English Standard Version is as follows: “Now faith is the assurance of things hoped for, the conviction of things not seen” (Hebrews 11:1). Across translations, two recurring elements appear to be essential components of “faith”: “hope” and a sense of certainty that goes beyond mere belief. First, combining “hope” with “certainty” does not appear to give us any semantic tool that we don’t already have, or to refer to some mental state that we cannot pinpoint otherwise. Moreover, the two concepts appear to be at odds. Saying that I hope x will occur usually requires that I am uncertain that x will occur. It is often a conscious hedging away from certainty. As aspects of “faith,” they also run into issues on their own.

“Hope” is primarily a stronger form of “desire.” When we hope for something to be the case, we desire for it to happen. If “faith” is another type of desire—a stronger version of “hope” perhaps—then it wouldn’t serve any epistemic purpose whatsoever. My desire for something to happen or be true has nothing at all to do with attempting to discover the actual outcome.

On the other hand, “certainty” does imply an epistemic relation. Usage seems to vacillate between absolute conviction and merely strong conviction. “Faith” does appear to have an absolute quality about it, since having only a strong conviction doesn’t quite cut it as a substitute for having faith in most circles. If “faith” is more of an absolute conviction, however, then it raises a series of questions: By what standard is it absolute? Is it ever logically permissible to have such a conviction? And if/when it is permissible, is “faith” a reasonable vehicle to get there?

I haven’t yet seen an acceptable set of answers that suggests “faith” is a helpful and coherent tool in epistemology. Even if one can muster up an adequate description and justification of “absolute” conviction, we are still left with showing how “faith” works and making the case that it is something reasonable to use.

The problem is that there really is no good explanation for how it works. “Faith” is considered quite mysterious, intentionally so. And because reasonable use is contingent upon there being at least some tenable understanding of its function, the chances that faith and reason coincide are slim at best.

There are some attempts to do this, however. One way would be to identify faith as a component of every belief, one that somehow accounts for the gap between our natural state of ignorance and the actions we take. According to this model, every action we take requires an extra push to get us there from our beliefs. But “faith” doesn’t appear to provide any explanatory power here. And once we add the parts we need to describe the basic epistemic process—placing proposition-based wagers and using actions and “intuition pumps” as tests to confirm or deny them—it becomes unnecessary to add any more components.

We could attempt to identify the concept as the exact same thing as making a wager, but for “faith” proponents, this would probably be undesirable, for the element of certainty would be lost. A bet (much like a scientific theory) holds firmly to the assumption that its failure is within the realm of possibility. Unless I am mistaken, it is clearly the opposite for matters of faith.

Another attempt would be to note that “faith” is more applicable when deciding whether or not to trust another human being, the thought being that we are so complex and unpredictable that the only viable option is to take some “leap.” If so, then it would imply that faith and rationality are quite separate, since the assumption is that we cannot predict what will happen with reason.

However, the “wager” theory still acts as an intelligible substitute for this even if we cannot reasonably “predict” what will happen. One can effectively bet on what someone will do and hope that they do it. Nothing more is needed. To use “faith” as a replacement for these two operations still seems to fly in the face of the certainty requirement. To tack it on as an extra piece of the puzzle seems either to be unintelligible or to create a contradiction (being certain and uncertain simultaneously).

There is still one definition of “faith” that fits neatly into this narrative: “to believe a proposition despite evidence to the contrary.” In this case, “faith” would not be part of the proper epistemic process. It would describe a decisively irrational function by which one may believe certain propositions to be true while ignoring conflicting test results for reasons external to the epistemic process. Although employing “faith” could not be rational on this account, it may still have positive psychological effects for various people. It may even bring more happiness to their lives than “proper epistemology” would. Even so, I would not subscribe to it. But that’s just me.

General Thoughts on Epistemology III: We Are All Cosmic Gamblers

The French Gambling Aristocracy (Photo credit: Wikipedia)

As a preface to this post, it is crucial for me to communicate that I am setting aside the discussion about “knowledge”; it is not my primary pursuit here. The reason, as I explained in part II, is that the use of the word has become muddled. My position is that philosophical skepticism about “knowledge” of the external (physical) world attempts to solve a presently unsolvable problem, and therefore “knowledge” may not be the best term to use when attempting to describe how we gather and store data in our brains. Instead, I will focus on the interaction between human beings and the external world on its own terms. Hopefully what follows is indicative of this.

A while ago I wrote about Pascal’s Wager, contending that Pascal strips away the essential components of what it takes to justify a belief. His argument was that, when looking at belief in God through a cost-benefit analysis, it is more beneficial to choose to believe in God. I reject his presumption that a person can rationally choose to believe something exists while disregarding all evidence for or against its existence. However, his idea that it is useful to equate believing and wagering is worthy of consideration. My suggestion is that the process of making a wager is the best model to describe what is going on when we decide that something is the case. When we claim to have a belief, faith, trust, or even knowledge, what we ultimately have is a form of bet. This does not answer the question of how we ought to come to a belief, but my inclination is to say that this theory will help explain what these kinds of claims are, fundamentally.

There are a few “armchairish” observations, sometimes taken for granted, that hint at my suggestion being a good one. Assuming that humans all function the same in the following ways: 1) we use observations and cognitive processes to form beliefs; 2) we take action based on our beliefs; 3) sometimes mistakes, ambiguity, and/or external factors outside of our control cause our beliefs to be incorrect; and 4) replacing the word “belief” with “wager” in (1)-(3) results in a fairly coherent progression of thought.

Put simply, when we decide that P is true given the data we have, we are simultaneously placing a wager on P. In other words, we not only think it is true now; we are betting it will continue to be true as time goes on. Then, as we go about life taking action according to the wager, the proposition is tested by observations x, y, z, etc. If these observations appear to connect logically and/or causally with P (if everything appears consistent), then we confirm it to some degree, and the wager is not changed. The risk adopted by the bet is that the proposition may be proven incorrect, and there may be some undesirable consequence as a result of the action(s) based upon it.
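To make the shape of this process concrete, here is a minimal sketch in Python. It is only an illustration of the wager model, not a formal theory; the `Wager` class, its `test` method, and the bridge example are hypothetical names invented for this post.

```python
# A minimal sketch of the "wager" model of belief. Everything here is
# illustrative: the Wager class and the bridge example are invented.

class Wager:
    """A belief held as a standing bet on a proposition P."""

    def __init__(self, proposition):
        self.proposition = proposition
        self.confirmations = 0   # observations consistent with P so far
        self.refuted = False     # set once a test contradicts P

    def test(self, consistent_with_p):
        """Each observation (x, y, z, ...) acts as a test of the bet."""
        if consistent_with_p:
            self.confirmations += 1   # confirmed to some degree; bet unchanged
        else:
            self.refuted = True       # the risk adopted by the bet is realized

# Betting that "the bridge will hold," then testing it by crossing.
belief = Wager("the bridge will hold")
for crossing_succeeded in (True, True, True):
    belief.test(crossing_succeeded)
print(belief.confirmations, belief.refuted)  # -> 3 False
```

The point of the sketch is only the structure: the bet is placed first, and every subsequent observation either confirms it to some degree or realizes the adopted risk.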

This process is related to Jonathan Haidt’s psychological theory about processes of the mind. He came to the conclusion that people have an initial reaction to some stimulus that consists of a snap judgment. What follows is a series of rational thoughts that, he says, “support” the initial judgment. I agree with this general theory, but if we also add that it is possible for the rational thought to deny the initial judgment, the theory has an even wider application.

The nature of the game is that whenever we consider some question, we have an open field of possible truths that is narrowed whenever we rule things out based upon testing, observation, and logic. This thought isn’t new. It is reminiscent of the scientific method, which began to take shape during the Renaissance and the early modern era. Most modern iterations of the scientific method assume that any theory is open and liable to change, not only because of more efficient and useful language, but also because the pool of data changes over time. A “wager,” I suppose, could be most analogous to a “hypothesis.” But hypotheses are more consciously contrived, and my goal for the “wager” is to be broad enough to refer to unconscious behavior, in addition to all manner of predictions, theories, and conjectures.
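The narrowing of the field can be sketched the same way. Again, this is purely illustrative; the wet-lawn hypotheses and the `rule_out` helper are invented for the example.

```python
# A minimal sketch of narrowing an open field of possible truths:
# candidates are ruled out whenever an observation contradicts them.

def rule_out(candidates, consistent):
    """Keep only the candidates consistent with an observation."""
    return {c for c in candidates if consistent(c)}

# Hypothetical possible explanations for a wet lawn.
candidates = {"rain", "sprinkler", "burst pipe"}

# Observation: the neighbor's lawn is dry, which rules out rain.
candidates = rule_out(candidates, lambda c: c != "rain")
print(candidates)  # -> {'sprinkler', 'burst pipe'} (set order may vary)
```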

Note that merely “placing a bet” says nothing of my conviction in the outcome, the quality or quantity of information that is taken into account, the level of consciousness with which I make the bet, or the time-frame of my test. I may be unconsciously believing something ridiculous, or I could be making a detailed evaluation of a claim’s plausibility. Both would involve a wager of some form—a decision to hold some proposition to be true or at least to act as if it were true.

One might also point out that there are important distinctions between different kinds of wagers, namely temporal ones. There are wagers about what will happen (predictions), about what has already occurred (beliefs), and about that which is ongoing. Suppose my childhood friend and I see a squirrel darting through the street, and he says, “I bet that squirrel will get hit by a car!” Our inclination would be to label this as a prediction, since the event has not yet come to pass. Continuing the story, suppose I were to respond, “I do not believe that squirrel will be hit by a car.” It seems odd at first that I would use the word “belief” for what should be another prediction.

The reason someone might make such a mistake, I think, hints at a deeper underlying theory like the one I have proposed. If both beliefs and predictions are forms of a wager, then there is an inherent predictive aspect to both terms. “Belief” is substituted for “prediction” because belief’s inherent predictive quality makes it easily mistaken for a prediction. However, it still does not have the same set of qualities that “prediction” has. There is still room for distinction.

And the distinction is this: A prediction is a consideration of what has not yet come to pass; a belief considers what is ongoing or has passed. In my theory, they both still fall into the “wager” category. I can still comfortably replace both “I believe” and “I predict” with “I bet.”

The immediate worry is that I seem committed to saying that predictions, wagers, and beliefs are all the same thing. In response, I would say my claim is that the “wager” is broad enough to encompass all of these terms, and that there is a predictive element to every wager. That element is a “first-order” qualification: beliefs, predictions, propositions, and the like have in common an expectation of continued confirmation. And confirmation can only occur at points in time after a wager is made, regardless of its kind. Note that because people are continuously acting or not acting, each action must carry with it an implicit set of bets. Since this is always the case, it doesn’t seem that “beliefs” can be abstracted from “wagers.”

Secondary qualifications make up the distinctions between the different kinds of wagers. For instance, expanding upon the distinction I made earlier: the content of a belief concerns an ongoing or past phenomenon, while the content of a prediction concerns a future phenomenon.
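To see how the first-order and secondary qualifications fit together, here is one more minimal sketch. As before, it is purely illustrative; the `Tense` enum and `Wager` dataclass are invented for the example.

```python
# A minimal sketch of the taxonomy: "wager" is the general category, and
# the temporal content of a wager (a secondary qualification) determines
# whether we would ordinarily call it a belief or a prediction.

from dataclasses import dataclass
from enum import Enum

class Tense(Enum):
    PAST = "past"
    ONGOING = "ongoing"
    FUTURE = "future"

@dataclass
class Wager:
    proposition: str
    tense: Tense
    # First-order qualification, shared by every kind of wager:
    # an expectation of continued confirmation after it is made.

    @property
    def kind(self):
        # Second-order qualification: temporal content distinguishes kinds.
        return "prediction" if self.tense is Tense.FUTURE else "belief"

squirrel = Wager("that squirrel will be hit by a car", Tense.FUTURE)
print(squirrel.kind)  # -> prediction (but still, at bottom, a bet)
```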

To conclude, when we form beliefs and predictions, we are making bets on what we think is accurate. The discussion about whether we can verify knowledge globally is a bunch of bunk. It is pretty clear that sometimes we make mistakes, but our goal is to seek truth regardless. And ultimately, we all play the game. We are all cosmic gamblers.

Emotional Control Through Rational Thought (Learning How to be a Robot)

Contrary to what may be inferred from the title of this post, I do not think an individual can immediately decide to feel a certain way about such and such in opposition to an initial feeling. Even if it is possible, I do not think it is easily accomplished. I do think, however, that an individual can condition oneself, over time, to react emotionally in one way or another.

One way to think about this is to examine Aristotle’s view, in which he divides the soul into three levels. The first amounts to what plants are capable of: basic nourishment and reproduction. The second level is that of animals, who have the powers of locomotion and perception. The third is the human level, which introduces the intellect (reason, rationality, or what have you). He uses this framework to explain how one should live a eudaimonic (or the best kind of) life. His belief is that it is virtuous to utilize one’s rational capabilities to the fullest, and at the same time, one must exhibit self-control when dealing with the lower-level functions of life like appetite and emotion.

This may not be the most detailed or accurate way to categorize life considering our modern-day understanding, but there are a few reasoned observations that suggest that Aristotle is on to something: 1) The human capacity for rational thought, or something similar, is probably the essential characteristic that makes humans different from other life-forms on this planet. 2) The use of logic through our rational thought allows us to come to accurate conclusions about the world around us. 3) Rational conclusions can be overturned by emotional desires and vice versa. 4) Humans have the capability to change how rational thought and emotion are involved in their thought processes.

IF all this is pretty much true; and IF humankind is the most advanced form of life in existence; and IF there is an Aristotelian “eudaimonic” life to be had, then MAYBE we should all aspire to become robots. These are some big “if’s” of course, hence the use of all caps… but no, I don’t actually advocate that we all aspire to become robots (right now, anyway). Why? Human functioning is actually far more complex than what any robot can do. It is complex enough that we cannot yet replicate it by artificial means. I would advocate that people spend more time on “logic-based thought” than “emotional thought,” however. Why? Because I think it does more good for the world.

Whatever degree of utility emotion has in human thought processes, there is no denying that it takes relatively little time for most people to have emotional thoughts. Emotions are reactive by nature. They are an automatic response that our bodies have to certain stimuli. We typically have very little control over these reactions, as they are hard-wired into our brains. Often they are explained as evolutionary survival mechanisms and thought to arise primarily from the limbic system. They fall squarely into the category of non-rational functioning.

Rationality, on the other hand, is characterized by conscious and deliberate thought processes. To reason about something is considered an exercise in human agency. We are doing it on purpose, and we have control. Its function is essentially to discover truth by logically analyzing our observations. Processes in this category, like differentiation and determining causal relations, occur largely in the frontal lobe. I am of the impression that an individual can use rational processes like these to alter emotional processes.

Because emotions are closely tied to memory via the limbic system, I think the first step toward effective emotional control is to recognize one’s causal patterns of behavior. It would be prudent to analyze the typical triggers that cause associated emotional memories to fire. The goal should be to pinpoint the exact underlying causes that elicit the feeling. This can be difficult when those causes are suppressed, but that is what your frontal lobe is there for. Taking the time to consciously face some of these issues might also require courage, but I don’t know how to help people with courage. Just don’t be a weenie, I guess.

The second step would be to learn how to counteract the emotional reaction brought on by the trigger. There are many ways to do this, but I strongly advise against ignoring the emotion if your goal is long-term control. The objective of this step is to create emotional memories that override and replace the current ones. This can be done through introspection, external exposure, or a combination of the two. For example, suppose that I fear speaking in public. One thing I can do is to expose myself to situations in which there is more pressure to speak, like taking a speech class. Perhaps I can create a parallel scenario in which I am speaking in front of friends as if I were speaking in public. These are very common remedies to a common problem.

One uncommon method, though, is to use introspection. A new perspective can be built by thinking through the different possible outcomes. The practice could involve imagining worst-case scenarios — those which would be most feared — and reconstructing the feeling in one’s mind. Doing this regularly may “wear the feeling out,” so that the individual can better accept the emotion, making its effect negligible. Another option is to contrast the trigger situations with other situations that are far worse, creating a logical connection that will eliminate the reaction. Eventually it is possible for the subject to adopt the perspective of the indifferent observer: “So what?”

There isn’t really a third step.

If there were though, it would probably be to practice doing this until you become a really well-adjusted person.

…Or if your dream is to become a robot, then have at it.

Optimus Prime (Photo credit: Devin.M.Hunt)

Moral Sentimentalism and Moral Rationalism

Moral Sentimentalism and Moral Rationalism are two epistemological theories of morality—theories of how we know what is right and wrong. Sentimentalists like Francis Hutcheson, David Hume, and Adam Smith have argued that knowledge of morality arises from our senses. On this view, morality has an emotional basis, and we understand it much as we understand beauty. Rationalists like Immanuel Kant and Samuel Clarke have argued that we gain knowledge of morality from rational thought. In this view, the way we understand morality would be similar to the way we understand mathematics. Although this is a massive subject, I will do my best to reduce it to the essentials in order to explain why this is a false dichotomy and how we can better understand what happens when we make moral judgements.

Math is beautiful (Photo credit: quinn.anya)

Prof. Michael B. Gill of the University of Arizona tells us that the two positions, taken as wholes, are incompatible. The standard rationalist view holds that moral truths are necessary truths: like “2+2=4,” they must be true in all possible worlds (alternate realities). If so, then judgements of morality are nothing like aesthetic judgements, because we can imagine possible worlds in which a thing is beautiful and other possible worlds in which it is not. Conversely, sentimentalists hold that believing something to be beautiful and having a favorable feeling towards it are identical (or at least necessarily connected). In the same way, holding a moral belief about a given action would be identical to having some feeling regarding that action. If this is the case, then there can be no analogy between morality and mathematics, because math doesn’t address how we feel.

Gill rejects the idea that Sentimentalism and Rationalism are mutually exclusive. I agree. I think the appearance of exclusivity stems primarily from a failure in the discussion to connect on an epistemological level. A number of premises about how we know things in general must be examined before we talk about morality. A theory that says we can know nothing through sentiment would eliminate sentimentalism completely, while a theory that says we do not use rational thought in obtaining knowledge would eliminate moral rationalism. Instead of “knowledge,” which serves to confuse the discussion, I think it is more useful to speak of “judgement.” The matter is much more intricate than this, but for my purposes, I will have to set aside most of the epistemological discourse.

I think that both camps attempt to address different aspects of morality that are explained more clearly in a psychological context. Jonathan Haidt talks about this in the first part of his book, The Righteous Mind: Why Good People Are Divided by Politics and Religion. His belief is that moral “intuitions” come first and “strategic reasoning” comes second. It is usually the case that people have an initial unconscious reaction to a moral situation first, and then they rationalize it.

Does this mean that emotion serves as the basis for moral judgement? Prof. Jesse Prinz of the University of North Carolina at Chapel Hill thinks so. In fact, he argues, as Hume did, that a judgement that something is wrong is the same thing as having a negative sentiment towards it. He goes on to argue that emotion is both necessary and sufficient for making a moral judgement. Studies show that brain areas associated with emotion light up when people make moral judgements. Other studies reveal that emotions unrelated to a moral dilemma affect the way we judge it. The data also suggest a correlation, in psychopathic subjects, between a lack of emotion and an inability to draw a distinction between morality and mere convention.

A joint study involving scholars from Harvard, Tufts, and UMBC remarks that the data are simply not enough to conclude that emotion is necessary and sufficient for moral judgement. More specifically, the data do not provide a precise enough understanding of the role that emotion plays in judgement. Neuroimaging shows only correlation between emotion and moral judgement, not causation. The effect of unrelated emotional inputs is not limited to moral judgement. And the research on psychopathic behavior actually shows that many psychopathic subjects still make the morality/convention distinction, only less often than normal subjects, which is not enough to confirm Prinz’s conclusion.

If one goal of moral judgement is to determine what is true, shouldn’t there be a key role for reason? The scholars from the joint study point towards an unconscious process that includes “causal-intentional representations,” which I take to be a form of reasoning. After all, a moral judgement is not meant to be a subjective statement. It is a statement that judges how things ought to be, suggesting that there should be a correct and objective answer. So framing the issue as reason versus “emotion” might not be the best way to describe what is happening.

Haidt says that there is a divide between two main types of cognition regarding morality: intuition (in place of emotion) and reasoning. He believes that intuitions come first and reasoning second, and he draws from this that Hume was right that the passions (intuitions) “trump” reason. Emotions, to him, are just another form of cognition (information processing), and they should be categorized somewhat differently than they have been before. He also retains the idea that conscious reasoning is still an important aspect of moral judgment. Ultimately, I think we would both agree that the standard sentimentalist/rationalist dichotomy is faulty.

If Hume and Haidt are merely pointing out what happens—that people usually intuit first—then the claim is unsurprising and uncontroversial. If the claim is about what is most important, however, Haidt’s conclusion about Hume is a bit odd. The problem, among other things, is that it wouldn’t follow that Hume is correct. Just because people intuit first and reason consciously afterward does not mean that intuitions are more important; it only means that they happen first. On this interpretation, neither appears to give any credence to the idea that perhaps what is most important is what is most effective. It also leads me to believe that Haidt would have to assume intuition and judgment are the same thing (what Hume argues) and disregard the notion that intuition could be connected to reason.

If this is not Haidt’s intention, we nevertheless can dig into the same discussion about how to address and think about morality. We may not immediately be thinking deeply in response to moral stimuli, but individuals can certainly change their habits in how they react over time by thinking consciously about them. I am convinced that if we can change what we think is moral, then we have some degree of choice that affects our “intuitions.” We are now faced, ironically, with a somewhat moral question about morality: “Should we try to rationalize morality as much as possible or just go with whatever we feel like?” Beginning with the next post, I will address questions like this in a series called The Morality of Moral Judgement.

Alone. Content.


What follows is grounded in anecdotal and personal experience. It was developed as a personal strategery for philosophization and is not guaranteed to relate to everyone’s experiences in life. I wrote this in the hope that it would hit home for some seeking guidance and be a positive influence on those individuals. Credit is given to Ayn Rand for being an influence on this train of thought (because she would flip in her grave if I didn’t say so).

In the wake of unfortunate shootings like those in Newtown, we look for things that we can do as a society to prevent them from happening. The reaction is only natural. If we perceive something as a problem, many of us have an urge to fix it. The more widely-publicized it is, the greater the reaction from society.

There is much talk about having stricter gun laws, placing guards in schools, or blaming violent video games. I find that blanket measures will likely result in public outrage and unforeseen costs. The key to finding a solution is to look at the root cause, which begins with people. Individuals kill other individuals. They do it for reasons that can only be explained psychologically. Psychology often dissects how people think about and perceive reality in order to find causes for their actions. This inevitably brings us to philosophy, which gives us the tools to discover how to think about and perceive reality.

My goal here is to examine a general method of rearranging one’s perception of reality in order to demonstrate the need for a sweeping philosophical shift in society. The method to which I am referring begins with the goal of learning how to be content and alone, but it ultimately reveals a reason to live. It employs philosophical thought processes to alter psychological behavior.

People who find themselves alienated from society for one reason or another tend to alienate themselves further. This may happen because of bullying, self-esteem issues, or any combination of intellectual rationales. They often find themselves in vicious cycles of negativity, which ultimately lead to nihilistic justifications for behavior. The thought is often that “nothing matters, so I may as well do something extreme.”

I would urge everyone, no matter who they are, to both teach and learn how to be content with themselves, while generally disregarding others. This is not to say that others should be treated poorly or ignored entirely, but that one should learn the art of being happily alone when the need arises. Many explanations for psychological issues ultimately reduce to social trauma as the cause, and social trauma only occurs as a result of other people. Therefore, in these cases especially, one must learn the skills to both recognize and disregard the irrational expectations and behavior of others in order to find contentment. This may require the individual to be isolated for an extended period of time in order to think, but not always.

Human beings have an evolutionary need to connect with other human beings. This can cause complications when attempting to detach oneself from others. It can be deadly to become detached without any measures to offset the pain created from the natural desire for belonging.

Therefore, the other half of the method is crucial: learning how to be content while detached from the influences of other people. This may require certain behavioral changes. It will require a shift in focus from people to things and ideas. One must learn how to find appreciation in life as a good in itself. For many, this will only occur when they look at their own lives introspectively in relation to the reality surrounding them. Being alone makes it easier to do this. They should realize that they have a choice: truly living and caring for themselves, or disregarding reality and rationality altogether.

This choice is a fundamentally philosophical one. It is between two simple beliefs: accepting that there is objective reality and throwing out the notion of objective reality. If one decides that there is no objective reality, there is no reason to do anything in particular because there is no such thing as value. If there is objective reality, there are things in life that are valuable because there are individuals who desire them. If an individual is not alive and conscious, there can be no value because there is no individual to conceptualize it. Thus, because the existence of the concept of value is dependent upon the existence of the individual, the individual’s life should be treated as holding some level of intrinsic value.

As a society, we typically accept that there is an objective reality, and we take it to be a rationally held belief. The other choice ends in absurdity and, in extreme cases, reality-denying actions like mass shootings and/or suicides. It is incoherent to take any action whilst simultaneously denying the existence of reality. It is incoherent to treat one’s own life as valuable whilst simultaneously claiming that said life is not valuable. It is irrational to believe that human beings exist whilst simultaneously believing that value does not exist (because human beings must hold desires, and desires entail value judgements). As a society, if we want people to live positive and fulfilling lives, we must make it clear that there is a choice and that one ought to choose to live in reality instead of wallowing in non-reality.

I believe that this choice is one of the most basic philosophical decisions that a person can make. In order for people to do this, however, they must recognize that it is a choice in the first place and that it will dictate the way they live their lives. Unfortunately, many people have not made this conscious decision, and therefore, they do not have any fundamental guiding principle to dictate how they live. These people have likely spent most of their lives making decisions on largely non-rational or instinctual bases.

Developing a logical justification for life can be deeply satisfying. It is literally having a reason to live. The contentment that this can bring is profound. Although some may not find immediate value in a justification for life, in time, one should realize that a human being has no choice but to seek truth in order to live at all. I must use my eyes to see my surroundings and comprehend what exists. I use the information I accumulate over the years to decide how to interact with those things that exist. As I have said, one cannot coherently deny reality while accepting it in practice.

With a new perspective on life, it becomes much easier to do the things that one values and to achieve one’s goals. Moreover, because the entire process of rationalizing and choosing life is done alone, there should be a newfound sense of self-reliance. If an individual is able to make it through the process, it is now clear to that person that he or she can think for him- or herself. Following the process, there should rarely be any need to rely on others for psychological motivation.

The method I have described broaches a massive subject and combines a number of disciplines. I am not sure I have done it justice. I am also not sure that it is available to everyone. There are appropriate periods of time to isolate oneself purposefully in order to think, but sometimes one is thrust into a situation of isolation. The decision to live is especially important for those people. In some cases, it can be an actual matter of life or death.