General Thoughts on Epistemology II: Global Philosophical Skepticism

nihilism (Photo credit: stungeye)

The various forms of philosophical (p.) skepticism in the study of epistemology seek to question, deny, or limit the categories of what is possible to know. While it is useful to point out that we don’t have the data to confirm or deny some claims, many skeptics go so far that they would be forced to live a lifestyle that is inconsistent with their beliefs. I argue that questioning the “existence” of knowledge as a whole probably serves no purpose, since either answer will have no bearing on the everyday criteria for making decisions and taking action.

General p. skepticism questions the prospect of “knowledge.” There are several schools of thought that can be found on the family tree of p. skepticism. The extreme form is called epistemological nihilism, in which all “knowledge” is denied. There is also epistemological solipsism, a theory stating that “knowledge” about the external world is impossible, but “knowledge” about the agent’s mind is possible. ‘Global’ p. skeptics claim that they hold no “absolute knowledge,” while ‘local’ p. skeptics question specific types of knowledge. My targets are the multifarious forms of global (g.) p. skepticism, including nihilism and solipsism, and there are three main issues with these positions that I will touch upon.

First, this debate has been prolonged by a failure to give proper attention to semantics. The quotation marks around the word “knowledge” in my explanation are present because I have difficulty pinpointing exactly to what g.p. skeptics are referring when they use the term. It wouldn’t be just to completely fault the deceased thinkers who would have benefitted from access to modern advances in neuroscience. Nevertheless, “knowledge” remains ambiguous even without modern scientific perspectives.

It could be that “knowledge” really is “Justified True Belief.” Or it could be some abstract thing we achieve when we fully “understand” such and such. Perhaps it is as simple as subconsciously registering something we sense in our environment. Ever walk into a room with, say, blue wallpaper, but hardly pay it any mind? How conscious does one have to be in order to know that the wallpaper is blue?

A g.p. skeptic might respond, “Exactly. The point is that we have no clear idea of what knowledge is.”

But they forget that human beings are the authors of language. We get to decide exactly to what “knowledge” refers. Any label without a referent is a floating abstraction. The question should not be, “What is knowledge?”, but rather, “How should ‘knowledge’ be defined?” The question should not be “Does knowledge exist?”, but rather, “Is ‘knowledge’ a useful term, given its definition?”

The second issue is that some skeptics must act inconsistently with their beliefs in order to interact with their environment. Nihilists believe that there can be no “knowledge” about the external world, and therefore that nothing about it can be verified. Yet, if they want to do anything, they must act upon information they receive through their senses. If they believe nothing they receive should be characterized as “knowledge,” then fine. The discussion becomes semantic. Otherwise, unless the nihilist sits quietly until death (or is deaf, dumb, blind, etc.), the beliefs he or she holds will be violated.

As justification, I propose this thought experiment: Try to think of just one instance in which a normal, conscious human being can act physically without being aware of something in the physical realm.

Finally, there is an almost comical response to g.p. skeptics from a philosopher named G.E. Moore. He essentially says (not an actual quote), “Look. Here is my hand. I am perceiving a human hand right now. That is a fact that I ‘know.’ If you do not think I know it, then let’s say that I am acquiring sense-data. One sense-datum that I perceive is the surface of my hand.”

Although he employs an intriguing way of engaging g.p. skepticism, if we take the analysis one step further, we arrive at an important point. It is that perception of the so-called “external world” is the default state of human beings. There are all these lights and sounds and feels going on that we report on and verify through language. “Stuff” happens with such frequency that there is not really any reason to deny knowledge of the external world, especially on the basis that we could be wrong about individual bits of knowledge. (Proving a claim false presupposes knowing some truth by which it is falsified.) The burden of proof is on the denier of clear and obvious evidence that literally surrounds us at all times.

To reiterate, challenging “knowledge” as a useful term is not a poor challenge, but questioning the concept while presupposing an ambiguous meaning is problematic. Going too far with g.p. skepticism will result in an inconsistent lifestyle. Regularly acting in accordance with one’s sense-data, while globally denying that very same data, is an incoherent position. Therefore, the only option by default is to accept the external world as a given. Even settling for an agnostic position, abstaining from belief in the external world, may conflict somewhat with taking action. As I like to say, half-seriously, in all its Objectivist glory, “Reality doesn’t care about your nuanced opinion.”

General Thoughts on Epistemology I: A (Mostly) Platitudinous Introduction

The School of Athens

Everything I would like to discuss in the realm of epistemology may require a little background, especially for those who are new to this area of study. In addition, it was drilled into my brain that I ought to prove that I know a little bit about something before I enter the discussion, so it is perfectly permissible for the reader to see this as part of my self-consciousness made real. What follows is some mildly chronological context, sprinkled with some arguments and scare-quotes, all communicated a bit lazily.

Epistemology is supposed to be the study of knowledge and how we know. It is pretty simple as far as the general definition goes, but one will soon realize upon closer examination that writers have been complicating this area of study for years. The obvious pattern to follow through history is the longstanding debate between rationalists and empiricists, and less obviously, the debate between rationalists who tend to overcomplicate and empiricists who come dangerously close to claiming (if not logically implying) that we cannot know anything at all. Here is a machine-gun history of epistemology:

Plato thinks that everything in the living world is a reflection of underlying reality, called the forms, with which we hang when our souls enter the land of the dead; he also “invents” the idea of “Justified True Belief” as the definition of knowledge.

Aristotle, with a slightly more scientific approach, develops his distinction between phainomena (observations as they appear) and endoxa (the credible opinions of the many or of the wise about those observations).

Fast forward about two-thousand years, and we get René Descartes, who coins the phrase, “I think. [Therefore,] I am,” as the first thing we can know with certainty purely through reason. He also attempts to defeat any doubt about our experiences (that we may be dreaming or being fooled by an evil genius) by “proving” that God exists.

John Locke, a British Empiricist during the Enlightenment period, argues, in opposition to Descartes, that experience is the “basis” for knowledge. His Causal Theory of Perception puts focus on the interaction between the world, our perceptual organs, and our minds.

David Hume, another famous British empiricist, expands upon Locke and divides knowledge into two categories: relations of ideas (math/logic) and matters of fact (observation). His skepticism about the external world leads him to conclude that there is no rational justification for believing in the existence of anything, that there is no such thing as “perfect knowledge,” and that metaphysics is stupid.

Hume finds an opponent in Immanuel Kant, who contends that metaphysics is not stupid and that it is possible to “know” things through reason alone. He frames this discussion with his introduction of the analytic-synthetic distinction (which I may explain at a later date). His overarching theory is called transcendental idealism, which postulates that there is a barrier between the mind and the external world created by our perceptions, marking yet another conceptual distinction between phenomenon and noumenon.

The spiritual successor to British empiricism was phenomenalism, the view that belief in physical objects existing over and above our sense-data is not justifiable. According to a phenomenalist, when we speak about physical things, we are really talking about mere sense-data.

In the early twentieth century, logical positivism went in a slightly different direction. Logical positivists pushed the verification principle, which states that a proposition is only “meaningful” if it is verifiable—can be proven true or false. A.J. Ayer famously defends this theory in his 1936 work, Language, Truth, and Logic.

In 1963, Edmund Gettier introduced a problem (previously raised by Bertrand Russell in 1912) with the “Justified True Belief” model of knowledge, explaining how a belief could be both justified and true but not be something that we would consider knowledge. The problem occurs when a person’s justified belief turns out to be true only coincidentally, for reasons unconnected to the original justification. (I find fault with Gettier’s original problem, but there are many other examples that may or may not succeed.)
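For readers who want it compact, the tripartite analysis that Gettier targets can be put schematically (my own shorthand, not notation from Gettier’s paper):

\[
K_S(p) \iff T(p) \wedge B_S(p) \wedge J_S(p)
\]

Here \(K_S(p)\) means “S knows that p,” \(T(p)\) that p is true, \(B_S(p)\) that S believes p, and \(J_S(p)\) that S is justified in believing p. Gettier cases are ones in which the right-hand side holds while, intuitively, the left-hand side fails.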

We also get some critical, agnostic thinkers, like Barry Stroud, who believe that none of the positive theories from these philosophers or movements are correct about epistemology, and that we still haven’t solved the problem that Descartes elucidates—that we have no way of knowing if our experiences are real or just a dream.

Ayn Rand’s Objectivism similarly rejects the views of most mainstream philosophers like Hume and Kant, but instead of stopping at criticism, presents a fairly straightforward way of defining knowledge and its processes. On the basis that reality exists self-evidently, Rand claims that knowledge is a conceptual organization of our perceptions of that reality. She defends the simple assumptions that we tend to accept about knowledge, but goes into detail in providing her framework for how it works. Unfortunately, she has trouble building a bridge between her philosophy and the established bodies of work, and there are a lot of holes that remain unfilled.

Contemporary philosophers have the good fortune of instant access to a wealth of present-day knowledge obtained via rapid advances in science and technology. Philosophers like Daniel Dennett now examine the nature of the human mind from a more scientific perspective in order to learn more about “knowledge” (or better yet, how we interact with the external world), as opposed to knowledge in and of itself. Now epistemological theories look like this.

So begins my foray into a somewhat outdated field in order to explain some things that the reader may already realize, but may not often think about.

My first order of business is to communicate this: the term “perfect knowledge” is unintelligible.

My second order of business will be to write the next post.

Emotional Control Through Rational Thought (Learning How to be a Robot)

Contrary to what may be inferred from the title of this post, I do not think an individual can immediately decide to feel a certain way about such and such in opposition to an initial feeling. Even if it is possible, I do not think it is something easily accomplished. I do think, however, that individuals can condition themselves to react emotionally in one way or another over time.

One way to think about this is to examine Aristotle’s view, in which he divides the soul into three categories. The first amounts to what plants are capable of: basic nourishment and reproduction. The second level is that of animals, who have the power of locomotion and perception. The third is the human level, which introduces the intellect (reason, rationality, or what have you). He uses this context in order to explain how one should live a eudaimonic (or the best kind of) life. His belief is that it is virtuous to utilize one’s rational capabilities to the fullest, and at the same time, one must exhibit self-control when dealing with the lower-level functions of life like appetite and emotion.

This may not be the most detailed or accurate way to categorize life considering our modern-day understanding, but there are a few reasoned observations that suggest that Aristotle is on to something: 1) The human capacity for rational thought, or something similar, is probably the essential characteristic that makes humans different from other life-forms on this planet. 2) The use of logic through our rational thought allows us to come to accurate conclusions about the world around us. 3) Rational conclusions can be overturned by emotional desires and vice versa. 4) Humans have the capability to change how rational thought and emotion are involved in their thought processes.

IF all this is pretty much true; and IF humankind is the most advanced form of life in existence; and IF there is an Aristotelian “eudaimonic” life to be had, then MAYBE we should all aspire to become robots. These are some big “ifs,” of course, hence the use of all caps… but no, I don’t actually advocate that we all aspire to become robots (right now anyway). Why? Human functioning is actually way more complex than what any robot can do. It is complex enough that we cannot yet replicate it by artificial means. I would advocate that people spend more time on “logic-based thought” than “emotional thought,” however. Why? Because I think it does more good for the world.

Whatever degree of utility emotion has in human thought processes, there is no denying that it takes relatively little time for most people to have emotional thoughts. Emotions are reactive by nature. They are an automatic response that our bodies have to certain stimuli. We typically have very little control over these reactions, as they are hard-wired into our brains. Often they are explained as evolutionary survival mechanisms and thought to arise primarily from the limbic system. They fall squarely into the category of non-rational functioning.

Rationality, on the other hand, is characterized by conscious and deliberate thought processes. To reason about something is considered an exercise in human agency. We are doing it on purpose, and we have control. Its function is essentially to discover truth by logically analyzing our observations. Processes in this category, like differentiation and determining causal relations, occur in the frontal lobe. I am of the impression that an individual can use rational processes like these to alter emotional processes.

Because emotions are closely tied to memory via the limbic system, I think the first step toward effective emotional control is to recognize the causal patterns of behavior. It would be prudent to analyze the typical triggers that cause associated emotional memories to fire. The goal should be to pinpoint the exact underlying causes that elicit the feeling. Sometimes it can be difficult when they are suppressed, but this is what your frontal lobe is there for. Taking the time to consciously face some of these issues might also require courage, but I don’t know how to help people with courage. Just don’t be a weenie I guess.

The second step would be to learn how to counteract the emotional reaction brought on by the trigger. There are many ways to do this, but I strongly advise against ignoring the emotion if your goal is long-term control. The objective of this step is to create emotional memories that override and replace the current ones. This can be done through introspection, external exposure, or a combination of the two. For example, suppose that I fear speaking in public. One thing I can do is to expose myself to situations in which there is more pressure to speak, like taking a speech class. Perhaps I can create a parallel scenario in which I am speaking in front of friends as if I were speaking in public. These are very common remedies to a common problem.

One uncommon method, though, is to use introspection. A solution can be found through creating a new perspective for oneself by thinking about the different possible outcomes. The practice could involve imagining worst case scenarios — those which would be most feared — and reconstructing the feeling in one’s mind. Doing this regularly may “wear the feeling out,” and the individual can better accept the emotion, making its effect negligible. Another option is to contrast the trigger situations with other situations that are far worse, creating a logical connection that will eliminate the reaction. Eventually it is possible for the subject to adopt the perspective of the indifferent observer: “So what?”

There isn’t really a third step.

If there were though, it would probably be to practice doing this until you become a really well-adjusted person.

…Or if your dream is to become a robot, then have at it.

Optimus Prime (Photo credit: Devin.M.Hunt)

Realists and Idealists

Realists VS Idealists (Photo credit: Emilie Ogez)

At some point in time you have probably been taught that there is a difference between a realist and an idealist. There are two chief understandings of this perceived dichotomy. The common usage is a description of human behavior, often seen as an explanation for political decisions. This is dependent upon the second, philosophical usage, however, which largely denotes an epistemic stance. My argument is that the philosophical version is a false dichotomy, and as a result, the common version is not a very useful mental construct.

Usually idealists are understood to take action based upon what they want to see as an ideal theoretical end. Sometimes we call people “idealists” when we observe that they think big without adequately taking into account the steps needed to achieve their goals. They are dreamers and eternal optimists.

Realists, on the other hand, are thought to be more pragmatic in their approach. They tend to be more pessimistic about the world and what can be accomplished, but they are coincidentally (if not causally) more often correct and may even live longer.  Realists supposedly see the world as it is, and they act more pragmatically without looking outside of their personal sphere to accomplish lofty, theoretical goals.

In a philosophical context, the respective meanings are different but related. In one sense, realism and idealism can be understood as metaphysical interpretations that may apply to any field of philosophy. Within every interpretation is a claim about the existence of something and to what degree it exists independent of our knowledge. Therefore the discussion is rooted in the most primary forms of philosophy: metaphysics and epistemology.

Generic Realism goes something like this: “a, b, and c and so on exist, and the fact that they exist and have properties such as F-ness, G-ness, and H-ness is (apart from mundane empirical dependencies of the sort sometimes encountered in everyday life) independent of anyone’s beliefs, linguistic practices, conceptual schemes, and so on.” So a theory of epistemological realism might make the claim that all things we know are generically real. This theory would be a subcategory of objectivism.

The theories in opposition to epistemological realism, labeled non-realist, are numerous. But the most widely referenced is—you guessed it—epistemological idealism. Plato was one of the first epistemic idealists, with his cave analogy and his famous theory of the forms. His key belief was that knowledge consists of “memories” that your “soul” recalls from its time in the underworld hanging out with the forms (which are supposedly perfect versions of all the “imperfect” knowledge we gather in the human world).

A more representative picture of current philosophical idealism can be seen in German idealists like Kant and Hegel, who are among the most influential. Kant posits that all we are capable of observing is the sense data we obtain through our experiences, and therefore, knowledge relies on a framework of universal, a priori truths in the human mind (like the logical implications of space and time) in order to understand our experiences. He frames this with a division into two realms: the phenomenal (experiential) and the noumenal (transcendental).

Hegel accepts Kant’s belief that knowledge begins with our experiences, but he rejects the idea that we can know anything transcendental. He argues that we can only be skeptical of such things. He does agree, though, that our experiences are mediated through the mind.

Part of the reason I say what follows is that I know there will be no recourse from dead men: most of these epistemological debates are just an intellectual pissing match. Their differences about the nature of knowledge are essentially unessential, and only the things they agree upon, for the most part, are important. Realists and the various idealists all agree that we have experiences by way of the senses, that we analyze them with our brains, and that by that general process we form “knowledge” (whatever its nature may be). Most of the disagreement results from a failure to clearly define knowledge and its characteristics. I suppose this makes me a semi-quietist.

Ultimately, generic epistemic realism and most forms of idealism are not actually in conflict. It may be that Kant’s framework of understanding is valid—that all we observe is sense data and that it is meaningful to (at least) distinguish between physical and nonphysical things. Perhaps Hegel is right that we should be skeptical about nonphysical things. In the end, the dispute serves no purpose.

What idealists have mostly done is to bicker about the degree to which realism can(not) be proven. But they fail to deny (or sometimes even to observe) that realism must be assumed in the actions of everyday life. Imagine living a life full of the worry that things will spontaneously phase out of existence if you pay them no attention. Along a similar line of thought, we make use of “transcendental” or metaphysical concepts all the time. We can disregard their idealistic origin should we so choose, but we must recognize their utility, for example, when we employ mathematics, geometry, and calculus to solve real-world problems.

The problem with this philosophical dichotomy is similar to its colloquial cousin. At most, “realist” and “idealist” could be used as labels for people who actually fit their narrow descriptions. Almost all people, however, operate according to the simple, functional framework that I just explained, and thus would not be categorized as such. Even those who use the labels regularly typically concede that the dichotomy should be understood in terms of a scale, in which an individual may favor one disposition over the other.

This practice, even with the concession, is still dangerous because it pigeonholes people into mental structures that limit their capabilities. If a person thinks he or she is predisposed to acting on ideals, then it will likely become a self-fulfilling prophecy, and that person may refuse to take certain realistic issues into account when it would not be difficult to do otherwise. And the reverse holds for people who think that they are “realists.”

The important thing for people to recognize is that there is no real utility to the mutual exclusivity between colloquial realism and idealism. They should strive to make use of both in concert, which, on a more accurate conceptual understanding, is what our brains already do.

The Morality of Moral Judgment II: Principle

John Stuart Mill (Photo credit: Wikipedia)

In my first post in this series, I attempted to provide a framework for thinking about how moral judgments work. I noted in the end that morality is something that we should try to be scientific about. In defense of this, I would attest that all prescriptive statements are factual claims by definition. They are claims about what people should do given certain circumstances and goals. If there is no attempt to seek a “correct” answer, then it is incoherent to make a moral claim.

If there are such things as right and wrong, and we want to argue that particular things are one or the other, we should try to develop a set of general principles that can guide us. Granted, the kinds of principles found in a moral discussion should probably not be held to the same standard of certainty as scientific laws. Yet, this doesn’t make them unimportant, for if there are no moral principles to speak of, then there is no morality.

There are two chief types of principles in the study of ethics. They are descriptive and normative. (There is also a third category called metaethics, within which one could categorize the content of my discussion in the previous installment.)

Descriptive principles are rules of observation, denoting how people look at morality or act according to morality. Psychologist Jonathan Haidt’s work serves as an example that includes these kinds of conclusions. For instance, he theorizes that there are five main moral values that shape our political views: harm/care, fairness/reciprocity, in-group/loyalty, authority/respect, and purity/sanctity. I analyzed some of this in Moral Sentimentalism and Moral Rationalism.

Normative principles are claims about moral truths. These are the kinds of principles that will be the target of my discussion and are the kinds of principles that must exist in order for morality to exist. (If this sounds confusing, one helpful analogy might be the notion that mathematics cannot “exist” without the “existence” of its parts—addition, subtraction, and so on.) Because there are a great many competing normative principles, I will address a few of the most prevalent.

One of the most timeless principles is known, by and large, as the Golden Rule. There are a number of variants, but in the United States at least, people are usually referencing the Christian Bible: “So whatever you wish that others would do to you, do also to them…” (Matthew 7:12). This original version is often criticized for its lack of universality. It is easy to see how it fails when imagining how sadomasochists would attempt to apply this principle. As a result, some attempt to follow the Platinum Rule, which urges individuals to treat others as those others would like to be treated. Again this runs into a problem when how people like to be treated is not the same as how they should be treated.

One might observe the difficulties in studying moral principles and conclude either that there is no right answer, or that we are not capable of discovering one if there is. It is true that the history of ethical studies is rife with disagreement and complication. This is reason enough to be extremely careful about adopting moral principles, but it is no reason to throw away universality as the goal. For instance, I follow the Silver Rule, based on Confucian teachings: “Zi Gong asked: ‘Is there a single concept that we can take as a guide for the actions of our whole life?’ Confucius said, ‘What about fairness? What you don’t like done to yourself, don’t do to others’” (Analects 15:24).

This inverted version of the Golden Rule is the less risky of the two. It defines moral duty by what is wrong to do and suggests inaction, whereas the “original” suggests that moral duty comprises actions. Upon analysis, I would argue that it is worse in the aggregate to take action and harm someone than to fail to act and not help someone, and as the scale of faulty action increases, the damage grows. There is more to say on this, but I will leave it there for now.
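To make that structural difference explicit, here is a rough formalization of my own (a sketch, not anything drawn from the Bible or from Confucius):

\[
\textbf{Golden: } \forall a \, \big( W(a) \rightarrow O(\text{do } a \text{ to others}) \big) \qquad
\textbf{Silver: } \forall a \, \big( \neg W(a) \rightarrow O(\neg \text{do } a \text{ to others}) \big)
\]

Here \(W(a)\) means “you would want act \(a\) done to you” and \(O\) marks an obligation. The Golden Rule generates positive duties to act, while the Silver Rule only generates duties to refrain, which is why I consider it the safer of the two.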

The Silver Rule, put into practice, may yet serve as a universal rule. Moreover, the Silver Rule is implicative of the Harm Principle. Often attributed to John Stuart Mill in his work, On Liberty, it is as follows: “That the only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent [physical] harm to others. [All else is permitted].” It would follow logically from the harm principle that morally impermissible actions comprise those that would cause undue physical harm to others, including the use of force absent proper justification.

Such principles may not provide answers for all questions, and they may not be the same principles to which all adhere. But they serve to limit the boundaries of our moral understanding and give a clearer view of what is within the realm of moral discussion. At a minimum, we can gather that an act of undue harm or force is a moral evil in itself. There is an obligation to abstain from heinous deeds. What remains is to examine when people commit such deeds and when it is permissible to judge them as such.