Preprint Article

Rethinking Human and Machine Intelligence through Kant, Wittgenstein, and Gödel

Submitted: 26 October 2023
Posted: 26 October 2023

Abstract
This paper proposes a new metaphysical framework for distinguishing between human and machine intelligence. By drawing an analogy from Kant’s incongruent counterparts, it posits two deterministic worlds -- one comprising a human agent and the other comprising a machine agent. Using ideas from Wittgenstein and Gödel, the paper defines “deterministic knowledge” and investigates how this knowledge is processed differently in those worlds. By postulating the distinctiveness of human intelligence, this paper addresses what it refers to as “the vantage point problem” – namely, how to make a qualitative distinction between the determinist and the universe where the determinist belongs.
Subject: Arts and Humanities - Philosophy

Introduction

This paper was motivated by questions that arose from the concept of amor fati (or “love of fate”) (Nietzsche, 1990, p. 99). This philosophy of life has inspired many of Nietzsche’s readers seeking guidance in life. However, his advice seems to harbor certain inconsistencies. For instance, how can someone learn to embrace fate if everything in her life has been predetermined? If determinism is true, would it not be accurate to say that even the act of learning to love her fate was also predetermined? Was Nietzsche himself, perhaps, predestined to encourage his readers to love their fate? When determinists assert that the universe is deterministic, one cannot avoid the impression that they are placing themselves outside the very universe to which they belong. Apparently, none of them has provided a convincing basis for justifying the significance of their assertion while situating themselves within the universe. [1] In other words, no qualitative distinction has been drawn between the act of declaring the universe deterministic and the events of the universe, which should also comprise that very act of declaration (this is the “vantage point problem”).
One may argue that such an issue has been addressed through compatibilism, which proposes that one’s perceived sense of free will is compatible with determinism. However, the compatibilist view of human nature still relies on causal determinism, which is rooted in the notion of causality. This may reinforce the idea that humans are not essentially different from computing machines. Therefore, compatibilism itself may not be particularly helpful in clarifying what significant distinction lies between the determinist’s mind and the events of the universe that are within the determinist’s scope. If compatibilism is true, it is possible that the human mind differs from computers or other physical events of the universe only in terms of complexity, all on the same hierarchical level.
To address this issue, this paper proposes a novel philosophical perspective by discussing two different types of deterministic worlds. Kant’s “incongruent counterparts” (hereinafter, “ICs”) provided an inspiration for the paper’s argument. The argument also builds upon Gödel’s proof strategy for his incompleteness theorem and Wittgenstein’s (1922) proposition that “the world is the totality of facts” (p. 25). Admittedly, the use of Kant’s ICs as an analogy may seem far-fetched, since his original purpose was to resolve an absolute versus relational space controversy. In addition, Gödel’s theorem belongs to the field of mathematics, so its connection to determinism might be initially difficult to grasp. This paper will address these issues and conclude by presenting a solution to the vantage point problem. [2]

1. The Incongruent Counterparts

Kant devised the concept of ICs to address the issue of absolute versus relational space (Kant, 1994, pp. 145-174). According to the theory of absolute space, even if the universe had only one body and nothing else, that body would still have a spatial background in which it could move (Asher, 1987). However, the relational view of space denies the existence of absolute space and defines motion only in relation to other bodies.
Kant begins his argument by imagining two worlds. One world includes only a left hand (“LH”). The other world includes only a right hand (“RH”). If the relational view is correct, there should be no difference between these two. However, from an external perspective, the two worlds are clearly different. Therefore, Kant concludes that the relational theory of space must be incorrect.
However, the aim of this paper is to use the IC analogy as a speculative tool for discussing the nature of determinism in relation to human reasoning. To achieve this, the paper will consider the following cases by building upon Kant’s concept.
LH1: A right hand cannot enter into an LH world. Also, the right hand is inconceivable in the LH world.
LH2: A right hand can enter into an LH world, and if it does, it will be perceived no differently than the existing left hand.
RH1: A left hand cannot enter into an RH world. Nevertheless, its attributes can be hypothesized in the RH world.
RH2: A left hand can enter into an RH world. Also, the RH world can hypothesize the attributes of the left hand before such entry takes place.

2. Deterministic Knowledge

This paper will use the following key definitions:
(1) Deterministic knowledge (D knowledge): A totality of facts associated with all the past, present, and future events in a deterministic world. The totality coincides with every time point of the world.
(2) Metaphysically open deterministic world: A deterministic world where there is a metaphysical sense in assuming a scenario in which its deterministic knowledge is provided to a cognitive agent of the world. [3]
The concept of D knowledge is similar to Carnap’s (1947) “one state-description” (he notes that this idea was inspired by Wittgenstein) (p. 10). Specifically, it “describes the actual state of the universe” and “contains all true atomic sentences and the negations of those which are false” (p. 10). However, Carnap primarily devised this concept in relation to a semantical system for linguistic analysis. Meanwhile, D knowledge relates to descriptions of a deterministic world. In this regard, these two notions are different. Nevertheless, following Carnap, we will assume that D knowledge is an entirety of atomic sentences that describe a deterministic world.
Regarding Definition (2), the idea of the cognitive agent receiving D knowledge bears a resemblance to the “circular-seeming idea of substituting a string’s own Gödel number into the string itself” (Nagel & Newman, 2001, p. 89). But what is a string? In a formalized system of mathematics, “postulates and theorems” are “‘strings’ (or finitely long sequences) of meaningless marks, constructed according to rules” (p. 26). Further, a Gödel number is a “unique number [assigned] to each elementary sign, each formula (or sequence of signs), [or] each proof (or finite sequence of formulas),” which “serves as a distinctive tag or label” (p. 69). The D knowledge specific to the universe can be likened to the Gödel number assigned to a string (or mathematical theorem). Also, the cognitive agent can be compared to a variable in the theorem. Just as the Gödel number for the theorem is plugged into the variable in the theorem, the D knowledge is fed back into the cognitive agent’s information processing mechanism. Put simply:
Gödel number (representing a theorem) → D knowledge (describing the universe)
Theorem → Universe
Variable (included in the theorem) → Agent (included in the universe)
Plugging the Gödel number into the variable → Providing the D knowledge to the agent
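To make the analogy slightly more concrete, the following Python sketch mimics the diagonal move of substituting a string’s own code number back into the string’s free variable. It is a toy illustration only, not Gödel’s actual arithmetization, and the names (encode, formula_with_variable) are hypothetical.

```python
# Toy sketch of the "diagonal" substitution the analogy gestures at.
# NOT Godel's actual arithmetization: it only illustrates a description
# being encoded as a number and that number being fed back into the
# description's own free variable.

def encode(formula: str) -> int:
    """Assign a unique number to a formula (here: its UTF-8 bytes read as one integer)."""
    return int.from_bytes(formula.encode("utf-8"), "big")

# A "formula" with one free variable, standing in for the agent inside the universe.
formula_with_variable = "the agent receives the description numbered {x}"

# Substituting the formula's own code number into its free variable mirrors
# feeding the universe's D knowledge back to a cognitive agent of that universe.
diagonalized = formula_with_variable.format(x=encode(formula_with_variable))
print(diagonalized)
```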
Undoubtedly, the idea of the cognitive agent receiving the D knowledge is unconventional and seemingly contradictory. How could someone know about her future if it was predetermined? One way of circumventing this contradiction might be to assume that a particular deterministic world is contained within a larger system and that there exists a mathematical probability that the descriptions in the D knowledge will be provided to the agent from the larger system at a particular time point. Technically speaking, however, that would be an indeterministic world. Accordingly, this paper proposes to examine the reception of D knowledge in a metaphysical sense only. [4]
Another unconventional aspect of this paper is the assumption of two apparently identical but different deterministic worlds. For example, Schwartz (2012) defines determinism as the view “that [possible] worlds cannot be the same up to a point and then diverge” (p. 216). However, in our thought experiment, it is possible for two deterministic worlds to be causally the same up to a point and then diverge when D knowledge is provided to them. If one contends that the human mind cannot be fully reduced to an algorithm, it becomes necessary to assume that such a divergence is possible. For further discussion, we define the following two metaphysically open deterministic worlds, which are established as “ICs.”
(i) The original world like ours.
(ii) A simulated world that replicates every aspect of the original world and emulates the human mind in a causal manner.
The paper will discuss information processing through computational concepts. According to Beraldo-de-Araújio, the essence of computation is “symbolic manipulation” and concerns a “mapping function between two sets of symbols” (Polak & Krzanowski, 2019, p. 6). A human agent’s symbolic manipulation, for instance, may take place through neural activities in the brain. Meanwhile, a machine agent performs symbolic manipulation by processing machine-readable symbols. By modifying Beraldo-de-Araújio’s definitions (p. 6), this paper defines computation as follows.
(1) A process is a function P: I → O such that its domain I is a set whose elements are called input events and its co-domain O is a set whose elements are called output events, while both I and O are subsets of a physical world. For all x∈I, y = P(x) (y∈O) is a corresponding output event.
(2) A computer is a function C: S → T from a set of input symbols S to a set of output symbols T, such that C(x̅) is the output produced by computing x̅, where x̅ is a symbolic representation of x. A process P: I → O is computational if P is generated by a computer C.
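As a minimal, hedged sketch of these two definitions, events and symbols can be modelled as plain strings, and a process counts as computational when it is generated by a computer operating on symbolic representations. All names below (such as make_process) are illustrative additions, not part of Beraldo-de-Araújio’s formalism.

```python
from typing import Callable

# Events and symbols are modelled as plain strings for illustration.
Process = Callable[[str], str]   # P: I -> O over (labels of) physical events
Computer = Callable[[str], str]  # C: S -> T over symbols

def make_process(computer: Computer,
                 represent: Callable[[str], str],
                 realize: Callable[[str], str]) -> Process:
    """Generate a computational process P from a computer C:
    the input event x is represented symbolically as x-bar, C manipulates
    the symbols, and the resulting symbol is realized as an output event y."""
    def process(input_event: str) -> str:
        symbol = represent(input_event)   # x -> x-bar
        output_symbol = computer(symbol)  # C(x-bar)
        return realize(output_symbol)     # symbol -> output event y
    return process

# Toy usage with a trivial computer and trivial encode/decode steps.
toy_computer: Computer = lambda s: f"C({s})"
P = make_process(toy_computer,
                 represent=lambda x: f"repr[{x}]",
                 realize=lambda t: f"event<{t}>")
print(P("the seminar starts"))  # event<C(repr[the seminar starts])>
```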
In the simulated world, we suppose that the mind is a “classical” von Neumann computer and that “its representation-bearers [are] data structures” (Frankish & Ramsey, 2012, pp. 31-32). [5] This world is intentionally designed to avoid being based on a “connectionist” model. [6] Specifically, it may not be feasible for this model to accurately emulate the human mind due to its comparatively high stochastic nature. Such a feature might hinder accurate realization of a scripted scenario in which probabilities may be occasionally assigned to counterfactual input events. [7] Although the classical model may be much less sophisticated, it can at least robustly emulate human behaviors in hindsight if all the relevant information is available.

2.1. Type 1

If the D knowledge specific to the simulated world were provided to its cognitive agent, the agent would process reception of the D knowledge simply as one of the existing potential input events. This suggests that the agent executes rigid processing, as it cannot process in any other way an input that it was not configured to receive. This world is trivially deterministic in that it is governed by a predefined type of D knowledge (i.e., Type 1) that dictates how things should occur.
Through the IC analogy, the simulated world can be physically characterized by “LH1.” Recall that a right hand cannot enter into LH1. Similarly, D knowledge cannot be provided to the simulated world. Additionally, the simulated world can be metaphysically characterized by “LH2.” If a right hand enters into LH2, it will be perceived no differently than the existing left hand. Likewise, even if the D knowledge were provided to the simulated world, its provision could not be identified by its cognitive agent as distinct from all the other existing potential input events.
See the following mappings.
I = {x1, x2, …, xn}
O = {y1, y2, …, yn}
Since this is a trivially deterministic world, only one of the input events from x1 to xn is to occur. The pairs other than the actual input-output pair serve to illustrate counterfactual cases. These cases are included in Type-1 D knowledge. Now suppose that reception of D knowledge occurs immediately before a particular event in the input event set does. Then:
xD = reception of D knowledge
xD = xk (xD is reduced to xk.) 1 ≤ k ≤ n
yD = yk
However, the above mappings are based on a non-stochastic model, which does not allow for indeterminacy. By supposing for now that the simulated world is indeterministic, we can establish the following mappings.
I = {x1, x2, …, xn}
O = {y1[1], …, y1[s1]}, {y2[1], …, y2[s2]}, …, {yn[1], …, yn[sn]}
xD = xk (xD is reduced to xk.) 1 ≤ k ≤ n
yD = One element from {yk[1], …, yk[sk]} (The probabilities assigned to these stochastic outcomes add up to 1.)
The above stochastic model suggests the following. Even in an indeterministic world, as long as the agent relies on rigid processing, its response to reception of the D knowledge could be nothing other than any one of the predefined output events.
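A minimal sketch of this point is given below, under the assumption that input and output events can be represented as string labels. The function names and the arbitrary reduction rule are hypothetical; the point is only that, whether the mapping is fixed or stochastic, an input the agent was not configured to receive is reduced to one of the predefined input events.

```python
import random

# Predefined (Type-1) mappings of the simulated world.
INPUTS = ["x1", "x2", "x3"]
OUTPUTS = {"x1": "y1", "x2": "y2", "x3": "y3"}     # non-stochastic model
STOCHASTIC_OUTPUTS = {                             # stochastic model
    "x1": [("y1", 1.0)],
    "x2": [("y2", 1.0)],
    "x3": [("y3[1]", 0.6), ("y3[2]", 0.4)],        # probabilities sum to 1
}

def reduce_to_known_input(received: str) -> str:
    """Rigid processing: an unconfigured input such as reception of D knowledge
    (x_D) is interpreted as one of the existing potential input events (x_k)."""
    return received if received in INPUTS else "x3"   # arbitrary reduction x_D -> x_k

def rigid_response(received: str, stochastic: bool = False) -> str:
    xk = reduce_to_known_input(received)
    if not stochastic:
        return OUTPUTS[xk]
    outcomes, weights = zip(*STOCHASTIC_OUTPUTS[xk])
    return random.choices(outcomes, weights=weights)[0]

print(rigid_response("reception of D knowledge"))                   # -> y3
print(rigid_response("reception of D knowledge", stochastic=True))  # -> y3[1] or y3[2]
```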
To illustrate the triviality of the simulated world, let us consider a hypothetical scenario involving a clinical psychologist named “Millicent” (or simply “Millie”). She loves coffee but often hesitates over whether to drink it. One morning, she decides to have a coffee anyway. She starts drinking it while watching a seminar video on a tablet device. In this case, suppose that there is a 60% chance that she will stop drinking her coffee if she happens to think that it will do her no good. The following event mappings are established for her in atomic-sentential form:
x1 = The seminar tires me.
x2 = The coffee does not make me worry about insomnia.
x3 = The coffee makes me worry about insomnia.
y1 = I stop watching.
y2 = I keep drinking.
y31 = I stop drinking.
y32 = I keep drinking.
However, since the world is deterministic, only a particular event such as x1 (which does not allow a stochastic outcome) would have been configured to occur. Meanwhile, in a metaphysical sense, it is possible to assume that specific descriptions in the D knowledge could be provided to her immediately before x1 happens. Suppose that her tablet displays not only the above mappings but also a short history of her activities in the morning and the events to unfold throughout the day. How would she respond?
From a humanistic perspective, there must be a distinct mental representation corresponding to the event of “I see the descriptions.” However, Millie’s rigid processing mechanism would only be able to interpret the sight of the display as one of x̅1 to x̅3. Recall that Millie’s mind follows the classical computer model whose representation-bearers are data structures. Since she only executes rigid processing, a bit structure corresponding to her symbolic representation of the event would most probably be translated into a particular bit structure corresponding to one of x̅1 to x̅3. Suppose that it is interpreted as x̅3. Then, her processing mechanism would output either y̅31 or y̅32. Given the 60% chance, it would probably output y̅31, which should be accompanied by y31. In other words, she would probably stop drinking her coffee in response to receiving the D knowledge.
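To make the Millie case concrete, here is a small sketch under the same assumptions as above. The reduction of D-knowledge reception to x̅3 follows the supposition in the text; the function name is hypothetical.

```python
import random

# Millie's event mappings from the text, in atomic-sentential form.
MILLIE_MAPPINGS = {
    "x1": [("y1: I stop watching.", 1.0)],
    "x2": [("y2: I keep drinking.", 1.0)],
    "x3": [("y31: I stop drinking.", 0.6), ("y32: I keep drinking.", 0.4)],
}

def millie_rigid_response(perceived_event: str) -> str:
    # Rigid processing: the sight of the D-knowledge display cannot be
    # registered as a new event; per the text, it is interpreted as x3.
    xk = perceived_event if perceived_event in MILLIE_MAPPINGS else "x3"
    outcomes, weights = zip(*MILLIE_MAPPINGS[xk])
    return random.choices(outcomes, weights=weights)[0]

print(millie_rigid_response("I see the descriptions on my tablet."))
# With 60% probability: "y31: I stop drinking."
```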

2.2. Type 2

If the D knowledge specific to the original world were provided to its cognitive agent, the agent would process the reception of the D knowledge as an input event distinct from all the other existing potential input events. This means that the agent’s processing mechanism exhibits emergent processing, as it can distinctly identify a particular input event that was not supposed to happen. This world is non-trivially deterministic. Using the IC analogy, it can be physically characterized by “RH1” and metaphysically by “RH2.” Further, it is possible (rather than necessary) that the D knowledge only reflects every physical event across time. Unlike Type 1, this type of D knowledge (namely, Type 2) does NOT include counterfactual cases. Also, this knowledge is compatible with the block universe theory.
In the block universe model, “[w]hether past, present or future, all events ‘lie frozen’ in the four-dimensional block, much like the scenes from a movie are fixed on the film roll” (Thyssen, 2020, p. 6). If one were to see the events of the universe like fixed scenes on a film roll from an omniscient viewpoint across time, she might be able to extrapolate to a certain extent counterfactual cases in relation to those events. However, the scenes themselves do not include such information. In that sense, Type-2 D knowledge only mirrors the physical events.
Meanwhile, we assume that emergence of a new output in response to D knowledge reception is necessary, considering that the agent’s processing mechanism is assumed to be governed by causality. However, the content of the new output may be deterministic or non-deterministic. This is highlighted by the question mark in the input-output mappings below. The pairs other than the actual input-output pair are provided as dummies whose contents are unknown (i.e., the counterfactual cases are unknown). “xn+1” (i.e., reception of D knowledge) is enclosed in parentheses to indicate that it is only a latent event in a metaphysical sense.
I = {x1, x2, …, xn,(xn+1)}
O = {y1, y2, …, yn, (?)}
xD = xn+1               yD = ?
If the Millie scenario happened in the original world, she might have emergently interpreted the reception of D knowledge, been struck to the core, and asked, “Am I living in a Matrix?”
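In contrast with the rigid-processing sketches above, a hedged sketch of emergent processing would register the reception of D knowledge as a genuinely new input event (x_{n+1}), with the content of its output left open. The dictionary and function names are hypothetical illustrations.

```python
# Known Type-2 mappings of the original world (counterfactuals unknown).
KNOWN_MAPPINGS = {"x1": "y1", "x2": "y2", "x3": "y3"}

def emergent_response(perceived_event: str) -> str:
    if perceived_event in KNOWN_MAPPINGS:
        return KNOWN_MAPPINGS[perceived_event]
    # The unconfigured event is distinctly identified as x_{n+1};
    # the content of the corresponding output y_D is not predefined.
    return "? (an emergent output, e.g. 'Am I living in a Matrix?')"

print(emergent_response("x2"))                        # y2
print(emergent_response("reception of D knowledge"))  # ?
```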

2.3. Type 3

Despite its assertion that the universe is deterministic, the block universe theory does not demand absolute causality. Polkinghorne (2007) notes that “[b]elievers in the block universe are not forced to commit themselves to a deterministic account of its causal structure” (p. 977). However, assume that the original world is thoroughly deterministic in a causal sense. Then, we can entertain the idea that its cognitive agent’s decision-making processes are strictly deterministic in a metaphysical as well as physical sense. Specifically, the agent should produce a new corresponding output (whose content is deterministic) in response to receiving D knowledge of Type 2. This hypothetical situation would generate a derivative version of D knowledge (namely, D’). Then, the agent should produce another output in response to receiving D’, thereby generating another derivative version of D knowledge (namely, D’’). To aid in understanding this somewhat complex scenario, let us go back to the Millie story. With regard to the Millie of the original world, D’ knowledge might state as follows:
“Millie responds to D knowledge. She utters ‘Am I living in a Matrix?’”
D’’ knowledge might state:
“Millie responds to D’ knowledge. She utters ‘I might need to take some medication to calm my caffeine-induced paranoia. Or maybe this world that I’m living in was monstrously rigged, and I must somehow survive by figuring out how I first reacted to… I don’t know, but it seems like this situation that I’m in happened already once before, and I must figure out whatever this evil gadget had said in the first place. Let me think… Whatever action I take right now, was that also predetermined?’”
See the following formal mappings:
I = {x1, x2, …, xn, (xn+1), (xn+2), … }
O = {y1, y2, …, yn, (yn+1), (yn+2), …}
xD = xn+1              yD = yn+1
xD’ = xn+2              yD’ = yn+2
…                            …
The above mappings may develop indefinitely. [8] All these potentially infinite counterfactual cases are included in Type 3. [9] Further, it can be said that this type of knowledge is generated by the first cause of the world. For instance, Tegmark (2008) argues that it is “plausible that our universe could be simulated by quite a short computer program” (p. 18). Based on the idea that “our universe is mathematics” (p. 1), he maintains that its realization only requires storage of “all the 4-dimensional data” (i.e., all the “[encoded] properties of the mathematical structure that is our universe”) (p. 18). He states that a “complete description” of a mathematical structure is “a specification of the relations between the elements” of the mathematical structure (p. 18). As such, the 4-dimensional data primarily relate to the abstract realm of mathematics. If his argument is true, we would not need any type of D knowledge in order to simulate a universe. Rather, D knowledge would be a byproduct of the mathematical structure and its specification.
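As a rough sketch of how the regress of derivative D knowledge could be generated step by step, consider the purely illustrative Python generator below; the response text is a stand-in, not part of the paper’s formalism.

```python
from itertools import islice
from typing import Iterator

def derivative_d_knowledge(initial: str) -> Iterator[str]:
    """Yield the agent's response to D, D', D'', ... indefinitely:
    each response is folded back into the next derivative description."""
    current = initial
    step = 1
    while True:
        response = f"y_(n+{step}): the agent responds to {current!r}"
        yield response
        current = "D" + "'" * step + " knowledge: " + response
        step += 1

# Sample only the first three steps of the potentially infinite regress.
for line in islice(derivative_d_knowledge("D knowledge"), 3):
    print(line)
```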

3. Verbal Information Processing

This section explores different causal characteristics exhibited by human and machine agents with regard to processing verbal information.

3.1. The “Retaining” Process

When a sentence is input into a machine agent, it is made to process the sentence through a mere concatenation of words. The machine has no sense of temporal flow when executing this process; it simply moves from one bit to another. On the other hand, when a sentence is presented to the human agent, the agent conjures up a mental image of the subject word and retains [10] it up to the point of recognizing the predicate. Ultimately, the images of the subject and predicate are combined to create a holistic image of the sentence itself. [11]
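The contrast can be caricatured in code. The sketch below is only a schematic illustration (the subject/predicate split and the image(...) labels are hypothetical simplifications), not a model of actual neural processing.

```python
def machine_concatenation(sentence: str) -> str:
    """A machine agent's processing: mere concatenation of word tokens,
    with nothing retained across the sentence."""
    return "".join(sentence.split())

def retaining_process(sentence: str) -> str:
    """A schematic 'retaining' process: the subject's image is held until
    the predicate is recognized, and the two are then combined."""
    words = sentence.split()
    subject, predicate = words[0], " ".join(words[1:])
    retained_subject_image = f"image({subject})"   # retained while reading on
    predicate_image = f"image({predicate})"
    return f"combine({retained_subject_image}, {predicate_image})"

print(machine_concatenation("Millie drinks coffee"))  # Milliedrinkscoffee
print(retaining_process("Millie drinks coffee"))      # combine(image(Millie), image(drinks coffee))
```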

3.2. Continuity of Space and Time

The process of “retaining” mental images raises the question of how any spatial/temporal transitions can occur if space and time are continuous. For instance, what does it mean to move within continuous space when no immediately subsequent coordinate can be defined with respect to an origin? This issue can be resolved through an “ontological” argument. Specifically, transitions can happen because they must happen in order for the notion of continuity to be established. As illustrated by Zeno’s paradox, continuity is discovered retroactively [12] through endless transitions. Without relying on these transitions, it is impossible to identify continuity. Therefore, transitions must exist. Moreover, the very initial distance between the two points, which is to be split in two again and again, ensures the presence of a discrete leap in real space. Ultimately, it can be proposed that the human agent’s cognitive mechanism proactively achieves a discrete leap in real space and time by retaining relevant information (e.g., perceptible spatial/temporal coordinates) along the way. This enables the human agent to process verbal information in a different way.
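For concreteness, Zeno’s dichotomy can be summarized by the familiar geometric series, cited here only as a standard illustration of how the infinitely many partial transitions retroactively add up to the single finite distance:

1/2 + 1/4 + 1/8 + … = Σ_{k=1}^{∞} 1/2^k = 1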

4. The Vantage Point Problem

This section explores how the “vantage point problem” stated in this paper’s introduction can be addressed by relying on the concept of D knowledge. Let us first look into two philosophical cases where this problem has not been properly addressed.
(1) Tegmark (2008) asserts that “[t]here exists an external physical reality completely independent of us humans” and that “[o]ur external physical reality is a mathematical structure” (p. 1). However, despite his convincing arguments, he still fails to address the vantage point problem. In footnote 3 on p. 5, he notes the problem of how to derive, from (i) “a mathematical structure” alone, both (ii) “an empirical domain” and (iii) “a set of correspondence rules which link parts of the mathematical structure with parts of the empirical domain.” He hints at a possibility of achieving this by introducing a “car” analogy. Specifically, “given an abstract but complete description of a car (essentially the locations of its atoms),” “someone” who wants “practical use of this car” might “be able to figure out how the car works and write her own manual” by “carefully examining the original description” (footnote 3 on p. 5). Put simply:
“Someone” → Mathematician [13]
Description of the car → Mathematical structure of the universe
Practical use of the car → Empirical domain of the universe
Knowledge of how the car works → Correspondence rules linking the mathematical structure with the empirical domain
While the mathematician is part of the universe, that “someone” is not part of the car. Therefore, the car analogy fails. In fact, the analogy well illustrates a common mistake made by scientists as well as philosophers – namely, the confusion that arises from the vantage point problem.
(2) Dennett (2003) notes that “confusion [over determinism] arises when one tries to maintain two perspectives on the universe at once” (p. 93). One perspective is the “God’s eye” perspective, and the other is the “engaged perspective of an agent within the universe” (p. 93). His description of the former perspective coincides with the Parmenidean view of the universe. Specifically, he states that “[f]rom the timeless God’s-eye perspective nothing ever changes” as “the whole history of the universe is laid out ‘at once’” (p. 93). Dennett appears to give equal weight to both perspectives but cautions against assuming them at the same time. However, he does not provide a philosophical scheme in which both perspectives can coexist.
The two cases show that scientists and philosophers alike struggle to reconcile the discrepancy between a human agent making a declarative statement (e.g., a deterministic worldview) about the universe at large and the universe to which the agent belongs. It is believed that this paper’s conceptual framework has resolved this issue to a certain extent. It has provided a basis for characterizing the human agent as capable of processing -- to use a bit of an oxymoron -- even “otherworldly but comprehensible” knowledge (i.e., D knowledge). [14] That a particular entity in an “otherworldly” realm is “comprehensible” to an agent means that the agent could potentially view the universe from a vantage point situated in that otherworldly realm. This peculiar dynamic between the agent and the universe is best described through a dialectic circle [15] that grows as the agent and the objects/events of the universe continue to encircle each other in an alternating manner. It is this dialectic circle that provides a holistic scheme for investigating the universe.

5. Conclusions

The major ideas of this paper can be outlined as follows.
(1) Deterministic knowledge
  • Type 1
    Dictates the world.
    Includes finite counterfactual cases.
  • Type 2
    Reflects the world.
    Includes no counterfactual cases.
  • Type 3
    Is generated by the world.
    Includes infinite counterfactual cases.
(2) Metaphysically open deterministic world
  • Trivially deterministic world
    Its agent executes rigid processing, which relies on concatenation of bits.
  • Non-trivially deterministic world
    Its agent executes emergent processing, which relies on the “retaining” process.
Based on the above conceptual scheme, this paper has sought to preserve the uniqueness of the human mind while allowing for hard determinism. Additionally, it has attempted to answer the question of how to establish a qualitative distinction between a human agent as an investigator of the universe and the universe to which the agent belongs.
However, this paper does have several limitations. For instance, section 2.3 explores the notion that the human mind may provide a distinct response to the reception of each of the infinite derivative versions of D knowledge. Some readers may find this idea implausible. In addition, the conception of D knowledge may encounter challenges from quantum physicists, who argue that describing physical events through exact spatial/temporal coordinates at the quantum level is inherently impossible. Finally, this paper cannot explain the phenomenon of qualia or the sense of agency and free will. These problems require further study.

Acknowledgements

The author thanks Jae-ok Lee and Jia-wei Li (“Mili”) for their feedback.

References

  1. Asher, W. O. (1987). Berkeley on absolute motion. History of Philosophy Quarterly, 4(4), 447-466. http://www.jstor.org/stable/27743831
  2. Carnap, R. (1947). Meaning and Necessity. University of Chicago Press.
  3. Dennett, D. C. (2003). Freedom Evolves. Penguin Books.
  4. Dodd, J. (2005). Reading Husserl’s time-diagrams from 1917/18. Husserl Studies, 21(2), 95-115.
  5. Feldman, J. A., & Ballard, D. H. (1982). Connectionist models and their properties. Cognitive Science, 6(3), 205-254.
  6. Frankish, K., & Ramsey, W. (Eds.). (2012). The Cambridge Handbook of Cognitive Science. Cambridge University Press. https://doi.org/10.1017/CBO9781139033916
  7. Kant, I. (1994). Concerning the ultimate ground of the differentiation of directions in space. In Symmetries in Physics (Vol. 3, pp. 145-174). Springer Netherlands.
  8. Maybee, J. E. (2020). Hegel’s dialectics. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Winter 2020 ed.). Stanford University. https://plato.stanford.edu/archives/win2020/entries/hegel-dialectics/
  9. Nagel, E., & Newman, J. R. (2001). Gödel’s Proof (D. R. Hofstadter, Ed.). NYU Press.
  10. Nietzsche, F. (1990). The Anti-Christ, Ecce Homo, Twilight of the Idols (R. J. Hollingdale, Ed., pp. 1-199). Cambridge University Press.
  11. Polak, P., & Krzanowski, R. (2019). Deanthropomorphized pancomputationalism and the concept of computing. Foundations of Computing and Decision Sciences, 44(1), 45-54.
  12. Polkinghorne, J. (2007). Space, time, and causality. Zygon, 41(4), 975-984.
  13. Schwartz, S. P. (2012). A Brief History of Analytic Philosophy: From Russell to Rawls. Wiley Blackwell.
  14. Sterelny, K. (1990). The Representational Theory of Mind: An Introduction. Basil Blackwell.
  15. Tegmark, M. (2008). The mathematical universe. Foundations of Physics, 38, 101-150.
  16. Thyssen, P. (2020). The Block Universe: A Philosophical Investigation in Four Dimensions [Doctoral dissertation, KU Leuven, Institute of Philosophy].
  17. Vihvelin, K. (2023). Determinism, counterfactuals, and the possibility of time travel. Philosophies, 8, 68.
  18. Wittgenstein, L. (1922). Tractatus Logico-Philosophicus. Project Gutenberg. https://www.gutenberg.org/ebooks/5740
  19. Žižek, S. (2014). The Most Sublime Hysteric: Hegel with Lacan. John Wiley & Sons.
Notes

[1] For instance, Wittgenstein (1922) states that the “[metaphysical] subject does not belong to the world” (p. 74).
[2] This paper consciously takes an emergentist approach to the human mind. Even a complete mathematical formulation of the neural correlate of consciousness might not be able to explain how it specifically gives rise to consciousness.
[3] Contemporary metaphysicians propose to “[leave] open the metaphysical possibility of time travel to the past and backwards causation” (Vihvelin, 2023, Abstract). This idea is implicitly involved in the definition.
[4] We assume that the cognitive agent receives only a “small breadth” of D knowledge that is associated with the agent. The entirety of D knowledge would be too immense to be processed by any agent.
[5] A representation-bearer is a means through which an object being represented is perceived by an agent. For details, see Frankish & Ramsey (2012, p. 9).
[6] Per Feldman & Ballard (1982), the premise of connectionism is that “individual neurons do not transmit large amounts of symbolic information” and that “they compute by being appropriately connected to large numbers of similar units” (p. 208).
[7] The Millie scenario to be discussed in section 2.1 includes stochastic outcomes in response to a counterfactual input event.
[8] Similarly, Sterelny (1990) states that the “ability to think about the world as it is and as it might be, to think indefinitely many and indefinitely complex thoughts” may be a “necessary condition on having intentional states” (p. 29).
[9] When considering the infinite counterfactual cases, we see that there can be no predefined type of D knowledge (i.e., Type 1) that dictates a non-trivial world.
[10] Husserl’s diagram can provide a useful illustration of how the “retaining” takes place (Dodd, 2005).
[11] One’s image of a word arises through its interconnections with other words in her subconscious corpus. How did she build the corpus from scratch? It started by matching a word with a physical object, and so on.
[12] The “ontological” argument was influenced by Žižek (2014), who mentions “a retroactive realization that the solution can be found in what we originally saw as the problem” (p. 29).
[13] Tegmark (2008) mentions “the bird perspective of a mathematician studying the mathematical structure” (p. 3).
[14] D knowledge can never be accessed but exists in comprehensible form. In that sense, it is unlike Kantian things-in-themselves, which are “otherworldly and incomprehensible.”
[15] This circle resembles Hegel’s “Absolute Idea” illustrated in Figure 2 in Maybee (2020, Section 1).