Spinoza's causal axiom is at the foundation of the Ethics. I motivate, develop and defend a new interpretation that I call the ‘causally restricted interpretation’. This interpretation solves several longstanding puzzles and helps us better understand Spinoza's arguments for some of his most famous doctrines, including his parallelism doctrine and his theory of sense perception. It also undermines a widespread view about the relationship between the three fundamental, undefined notions in Spinoza's metaphysics: causation, conception and inherence.
The Hyperuniverse Programme, introduced in Arrigoni and Friedman (2013), fosters the search for new set-theoretic axioms. In this paper, we present the procedure envisaged by the programme to find new axioms and the conceptual framework behind it. The procedure comes in several steps. Intrinsically motivated axioms are those statements which are suggested by the standard concept of set, i.e. the ‘maximal iterative concept’, and the programme identifies higher-order statements motivated by the maximal iterative concept. The satisfaction of these statements (H-axioms) in countable transitive models, the collection of which constitutes the ‘hyperuniverse’ (H), has remarkable first-order consequences, some of which we review in section 5.
Discussion of new axioms for set theory has often focused on conceptions of maximality, and how these might relate to the iterative conception of set. This paper provides a critical appraisal of how certain maximality axioms behave on different conceptions of ontology concerning the iterative conception. In particular, we argue that forms of multiversism and actualism face complementary problems. The latter view is unable to use maximality axioms that make use of extensions, while the former has to contend with the existence of extensions violating maximality axioms. An analysis of two kinds of multiversism, a Zermelian form and a Skolemite form, leads to the conclusion that the kind of maximality captured by an axiom differs substantially according to background ontology.
Formalizing Euclid’s first axiom. Bulletin of Symbolic Logic 20 (2014): 404–5. (Coauthor: Daniel Novotný)

Euclid [fl. 300 BCE] divides his basic principles into what came to be called ‘postulates’ and ‘axioms’—two words that are synonyms today but which are commonly used to translate Greek words meant by Euclid as contrasting terms.

Euclid’s postulates are specifically geometric: they concern geometric magnitudes, shapes, figures, etc.—nothing else. The first: “to draw a line from any point to any point”; the last: the parallel postulate.

Euclid’s axioms are general principles of magnitude: they concern geometric magnitudes and magnitudes of other kinds as well, even numbers. The first is often translated “Things that equal the same thing equal one another”.

There are other differences that are or might become important.

Aristotle [fl. 350 BCE] meticulously separated his basic principles [archai, singular archê] according to subject matter: geometrical, arithmetic, astronomical, etc. However, he made no distinction that can be assimilated to Euclid’s postulate/axiom distinction.

Today we divide basic principles into non-logical [topic-specific] and logical [topic-neutral], but this division too is not the same as Euclid’s. In this regard it is important to be cognizant of the difference between equality and identity—a distinction often crudely ignored by modern logicians. Tarski is a rare exception. The four angles of a rectangle are equal to—not identical to—one another; the size of one angle of a rectangle is identical to the size of any other of its angles. No two angles are identical to each other.

The sentence ‘Things that equal the same thing equal one another’ contains no occurrence of the word ‘magnitude’. This paper considers the problem of formalizing the proposition Euclid intended as a principle of magnitudes while being faithful to its logical form and to its information content.
The purpose of this paper is to challenge some widespread assumptions about the role of the modal axiom 4 in a theory of vagueness. In the context of vagueness, axiom 4 usually appears as the principle ‘If it is clear (determinate, definite) that A, then it is clear (determinate, definite) that it is clear (determinate, definite) that A’, or, more formally, CA → CCA. We show how in the debate over axiom 4 two different notions of clarity are in play (Williamson-style “luminosity” or self-revealing clarity, and concealable clarity) and what their respective functions are in accounts of higher-order vagueness. On this basis, we argue first that, contrary to common opinion, higher-order vagueness and S4 are perfectly compatible. This is in response to claims like Williamson’s that, if vagueness is defined with the help of a clarity operator that obeys axiom 4, higher-order vagueness disappears. Second, we argue that, contrary to common opinion, (i) bivalence-preservers (e.g. epistemicists) can without contradiction condone axiom 4 (by adopting what elsewhere we call columnar higher-order vagueness), and (ii) bivalence-discarders (e.g. open-texture theorists, supervaluationists) can without contradiction reject axiom 4. Third, we rebut a number of arguments that have been produced by opponents of axiom 4, in particular those by Williamson. (The paper is pitched towards graduate students with basic knowledge of modal logic.)
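The frame condition behind axiom 4 is transitivity of the accessibility relation: read as □p → □□p, the schema is valid on a Kripke frame exactly when accessibility is transitive. A minimal sketch, not from the paper (frame and function names are illustrative), checking this on two toy frames:

```python
from itertools import chain, combinations

def box(worlds, R, prop):
    """Worlds at which `prop` holds in every accessible world."""
    return {w for w in worlds if all(v in prop for v in R.get(w, set()))}

def axiom4_holds(worlds, R, prop):
    """Box p -> Box Box p at every world, for the valuation `prop`."""
    bp = box(worlds, R, prop)
    return bp <= box(worlds, R, bp)

def valid_axiom4(worlds, R):
    """Check axiom 4 under every valuation of p over the frame."""
    ws = list(worlds)
    vals = chain.from_iterable(combinations(ws, r) for r in range(len(ws) + 1))
    return all(axiom4_holds(worlds, R, set(v)) for v in vals)

worlds = {0, 1, 2}
R_transitive = {0: {1, 2}, 1: {2}, 2: set()}     # 0->1, 1->2 and also 0->2
R_nontransitive = {0: {1}, 1: {2}, 2: set()}     # 0->1, 1->2 but not 0->2

print(valid_axiom4(worlds, R_transitive))     # True
print(valid_axiom4(worlds, R_nontransitive))  # False (fails when p is true at 1 only)
```

On the non-transitive frame, with p true only at world 1, world 0 satisfies □p (its sole successor is 1) but not □□p (world 1 has a successor where p fails), which is the counterexample the brute-force check finds.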
In quantum theory every state can be diagonalized, i.e. decomposed as a convex combination of perfectly distinguishable pure states. This elementary structure plays a ubiquitous role in quantum mechanics, quantum information theory, and quantum statistical mechanics, where it provides the foundation for the notions of majorization and entropy. A natural question then arises: can we reconstruct these notions from purely operational axioms? We address this question in the framework of general probabilistic theories, presenting a set of axioms that guarantee that every state can be diagonalized. The first axiom is Causality, which ensures that the marginal of a bipartite state is well defined. Then, Purity Preservation states that the set of pure transformations is closed under composition. The third axiom is Purification, which allows one to assign a pure state to the composition of a system with its environment. Finally, we introduce the axiom of Pure Sharpness, stating that for every system there exists at least one pure effect occurring with unit probability on some state. For theories satisfying our four axioms, we exhibit a constructive algorithm for diagonalizing every given state. The diagonalization result allows us to formulate a majorization criterion that captures the convertibility of states in the operational resource theory of purity, where random reversible transformations are regarded as free operations.
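In the familiar quantum case, the diagonalization the axioms are meant to recover is just the spectral decomposition of a density matrix. A minimal numerical sketch for the simplest instance, a real 2×2 density matrix; the function name and setup are illustrative, not from the paper, whose axioms operate at the abstract operational level:

```python
import math

def diagonalize_qubit_state(a, b, c):
    """Spectral decomposition of a real symmetric density matrix
    rho = [[a, b], [b, c]] with a + c = 1 and rho positive semidefinite:
    returns (p1, v1), (p2, v2) with rho = p1*|v1><v1| + p2*|v2><v2|,
    where p1 + p2 = 1 and v1, v2 are orthonormal (perfectly distinguishable)."""
    mean, half = (a + c) / 2.0, (a - c) / 2.0
    r = math.hypot(half, b)
    p1, p2 = mean + r, mean - r                 # eigenvalues: a probability distribution
    theta = 0.5 * math.atan2(b, half) if r else 0.0
    v1 = (math.cos(theta), math.sin(theta))     # eigenvector for p1
    v2 = (-math.sin(theta), math.cos(theta))    # orthogonal eigenvector for p2
    return (p1, v1), (p2, v2)

# Example: a mixed state, decomposed into two orthogonal pure states.
(p1, v1), (p2, v2) = diagonalize_qubit_state(0.7, 0.2, 0.3)
# Reconstruction check: rho_00 and rho_01 recovered from the decomposition.
rho_00 = p1 * v1[0] ** 2 + p2 * v2[0] ** 2
rho_01 = p1 * v1[0] * v1[1] + p2 * v2[0] * v2[1]
print(round(rho_00, 10), round(rho_01, 10))  # 0.7 0.2
```

The eigenvalues (p1, p2) are exactly the probability weights whose majorization order the paper's resource-theoretic criterion compares.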
In the early 1900s, Russell began to recognize that he, and many other mathematicians, had been using assertions like the Axiom of Choice implicitly, and without explicitly proving them. In working with the Axioms of Choice, Infinity, and Reducibility, and his and Whitehead’s Multiplicative Axiom, Russell came to take the position that some axioms are necessary to recovering certain results of mathematics, but may not be proven to be true absolutely. The essay traces historical roots of, and motivations for, Russell’s method of analysis, which are intended to shed light on his view about the status of mathematical axioms. I describe the position Russell develops in consequence as “immanent logicism,” in contrast to what Irving (1989) describes as “epistemic logicism.” Immanent logicism allows Russell to avoid the logocentric predicament, and to propose a method for discovering structural relationships of dependence within mathematical theories.
In this article I develop an elementary system of axioms for Euclidean geometry. On one hand, the system is based on the symmetry principles which express our a priori ignorant approach to space: all places are the same to us, all directions are the same to us, and all units of length we use to create geometric figures are the same to us. On the other hand, through the process of algebraic simplification, this system of axioms directly yields Weyl’s system of axioms for Euclidean geometry. The system of axioms, together with its a priori interpretation, offers new views to the philosophy and pedagogy of mathematics: it supports the thesis that Euclidean geometry is a priori, it supports the thesis that in modern mathematics Weyl’s system of axioms is preferred to Euclid’s because it reflects the underlying a priori symmetries, and it gives a new and promising approach to learning geometry which, through Weyl’s system of axioms, leads from the essential geometric symmetry principles of mathematical nature directly to modern mathematics.
A description of consciousness leads to a contradiction with the postulate of special relativity that there can be no connections between simultaneous events. This contradiction points to consciousness involving quantum-level mechanisms. The quantum-level description of the universe is re-evaluated in the light of what is observed in consciousness, namely 4-dimensional objects. A new, improved interpretation of quantum-level observations is introduced. From this vantage point the following axioms of consciousness are presented. Consciousness consists of two distinct components, the observed U and the observer I. The observed U consists of all the events I is aware of. A vast majority of these occur simultaneously. Now if I were to be an entity within the space-time continuum, all of these events of U together with I would have to occur at one point in space-time. However, U is distributed over a definite region of space-time (a region in the brain). Thus, I is aware of a multitude of space-like separated events. It is seen that this awareness necessitates I to be an entity outside the space-time continuum. With I taken as such, a new concept called concept A is introduced. With the help of concept A a very important axiom of consciousness, namely free will, is explained. Libet's experiment, which was originally seen to contradict free will, is shown in the light of concept A to support it. A variation of Libet's experiment is suggested that will give conclusive proof for concept A and free will.
Second-order Peano Arithmetic minus the Successor Axiom is developed from first principles through Quadratic Reciprocity and a proof of self-consistency. This paper combines 4 other papers of the author in a self-contained exposition.
Ontology engineering is a hard and error-prone task, in which small changes may lead to errors, or even produce an inconsistent ontology. As ontologies grow in size, the need for automated methods for repairing inconsistencies while preserving as much of the original knowledge as possible increases. Most previous approaches to this task are based on removing a few axioms from the ontology to regain consistency. We propose a new method based on weakening these axioms to make them less restrictive, employing refinement operators. We introduce the theoretical framework for weakening DL ontologies, propose algorithms to repair ontologies based on the framework, and provide an analysis of the computational complexity. Through an empirical analysis made over real-life ontologies, we show that our approach preserves significantly more of the original knowledge of the ontology than removing axioms.
The independence phenomenon in set theory, while pervasive, can be partially addressed through the use of large cardinal axioms. A commonly assumed idea is that large cardinal axioms are species of maximality principles. In this paper, I argue that whether or not large cardinal axioms count as maximality principles depends on prior commitments concerning the richness of the subset-forming operation. In particular, I argue that there is a conception of maximality through absoluteness on which large cardinal axioms are restrictive. I argue, however, that large cardinals are still important axioms of set theory and can play many of their usual foundational roles.
We present an elementary system of axioms for the geometry of Minkowski spacetime. It strikes a balance between a simple and streamlined set of axioms and the attempt to give a direct formalization in first-order logic of the standard account of Minkowski spacetime in [Maudlin 2012] and [Malament, unpublished]. It is intended for future use in the formalization of physical theories in Minkowski spacetime. The choice of primitives is in the spirit of [Tarski 1959]: a predicate of betweenness and a four-place predicate to compare the squares of the relativistic intervals. Minkowski spacetime is described as a four-dimensional ‘vector space’ that can be decomposed everywhere into a spacelike hyperplane, which obeys the Euclidean axioms in [Tarski and Givant, 1999], and an orthogonal timelike line. The lengths of other ‘vectors’ are calculated according to Pythagoras’ theorem. We conclude with a Representation Theorem relating models of our system that satisfy second-order continuity to the mathematical structure called ‘Minkowski spacetime’ in physics textbooks.
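The intended model of the four-place interval-comparison primitive is the standard squared Minkowski interval. A small illustrative sketch, not from the paper (the function names are hypothetical):

```python
def interval2(p, q):
    """Squared Minkowski interval between events p = (t, x, y, z) and q,
    signature (+, -, -, -), in units with c = 1."""
    dt, dx, dy, dz = (qi - pi for pi, qi in zip(p, q))
    return dt * dt - dx * dx - dy * dy - dz * dz

def interval2_leq(p, q, r, s):
    """Illustrative four-place comparison predicate: the squared interval
    from p to q does not exceed the squared interval from r to s."""
    return interval2(p, q) <= interval2(r, s)

origin = (0, 0, 0, 0)
print(interval2(origin, (2, 1, 0, 0)))  # 3  (timelike separation)
print(interval2(origin, (1, 1, 0, 0)))  # 0  (lightlike separation)
print(interval2(origin, (1, 2, 0, 0)))  # -3 (spacelike separation)
print(interval2_leq(origin, (1, 1, 0, 0), origin, (2, 1, 0, 0)))  # True
```

The sign of the squared interval classifies the separation as timelike, lightlike, or spacelike, which is what lets the axioms carve out the spacelike hyperplanes and timelike lines mentioned above.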
In mathematics, if one starts with the infinite set of positive integers, P, and wants to compare the size of the subset of odd positives, O, with P, this is done by pairing off each positive with an odd, using a function such as O = 2P − 1. This puts the odds in a one-to-one correspondence with the positives, thereby showing that the subset of odds and the set of positives are the same size, or have the same cardinality. This counter-intuitive result ignores the “natural” relationship of one odd for every two positives in the sequence of positive integers; however, in the set of axioms that constitute mathematics, it is considered valid. In the physical universe, though, relationships between entities matter. For example, in biochemistry, if you start with an organism and you want to study the heart, you can do this by removing some heart cells from the organism and studying them in isolation in a cell culture system. But the results are often different from what occurs in the intact organism, because studying the cells in culture ignores the relationships in the intact body between the heart cells, the rest of the heart tissue and the rest of the organism. In chemistry, if a copper atom were studied in isolation, it would never be known that copper atoms in bulk can conduct electricity because the atoms share their electrons. In physics, the relationships between inertial reference frames in relativity, and between observer and observed in quantum physics, can't be ignored. Furthermore, infinities cause numerous problems in theoretical physics, such as non-renormalizability. What this suggests is that the pairing-off method, and the mathematics of infinite sets based on it, are analogous to a cell culture system or to studying a copper atom in isolation if they are used to study the real, physical universe, because they ignore the inherent relationships between entities. In the real, physical world, the natural, or inherent, relationships between entities can't be ignored.
Said another way, the set of axioms which constitute abstract mathematics may be similar but not identical to the set of physical axioms by which the real, physical universe runs. This suggests that results from abstract mathematics about infinities may not apply to physics, or should be modified for use there.
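The pairing-off argument in the passage above can be sketched directly; the map n ↦ 2n − 1 is the one-to-one correspondence, while the "natural" density relationship it ignores is visible in any finite initial segment:

```python
def nth_odd(n):
    """The n-th odd positive integer: pairs positive n with odd 2n - 1."""
    return 2 * n - 1

def which_positive(o):
    """Inverse of the pairing: recovers n from the odd number o."""
    return (o + 1) // 2

positives = range(1, 11)
print([nth_odd(n) for n in positives])  # [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
assert all(which_positive(nth_odd(n)) == n for n in positives)  # one-to-one

# The "natural" relationship the pairing ignores: among the first 2N
# positives, exactly N are odd, a ratio of one odd per two positives.
N = 10
print(sum(1 for k in range(1, 2 * N + 1) if k % 2 == 1))  # 10
```

Both facts coexist: the bijection witnesses equal cardinality, while every finite segment exhibits the 1:2 density the author takes to be physically relevant.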
In this article, a possible generalization of Löb's theorem is considered. The main result is: let κ be an inaccessible cardinal; then ¬Con(ZFC + ∃κ).
Axiom weakening is a novel technique that allows for fine-grained repair of inconsistent ontologies. In a multi-agent setting, integrating ontologies corresponding to multiple agents may lead to inconsistencies. Such inconsistencies can be resolved after the integrated ontology has been built, or their generation can be prevented during ontology generation. We implement and compare these two approaches. First, we study how to repair an inconsistent ontology resulting from a voting-based aggregation of views of heterogeneous agents. Second, we prevent the generation of inconsistencies by letting the agents engage in a turn-based rational protocol about the axioms to be added to the integrated ontology. We instantiate the two approaches using real-world ontologies and compare them by measuring the levels of satisfaction of the agents w.r.t. the ontology obtained by the two procedures.
The basic axioms or formal conditions of decision theory, especially the ordering condition put on preferences and the axioms underlying the expected utility formula, are subject to a number of counter-examples, some of which can be endowed with normative value and thus fall within the ambit of a philosophical reflection on practical rationality. Against such counter-examples, a defensive strategy has been developed which consists in redescribing the outcomes of the available options in such a way that the threatened axioms or conditions continue to hold. We examine how this strategy performs in three major cases: Sen's counterexamples to the binariness property of preferences, the Allais paradox of EU theory under risk, and the Ellsberg paradox of EU theory under uncertainty. We find that the strategy typically proves to be lacking in several major respects, suffering from logical triviality, incompleteness, and theoretical insularity. To give the strategy more structure, philosophers have developed “principles of individuation”; but we observe that these do not address the aforementioned defects. Instead, we propose the method of checking whether the strategy can overcome its typical defects once it is given a proper theoretical expansion. We find that the strategy passes the test imperfectly in Sen's case and not at all in Allais's. In Ellsberg's case, however, it comes close to meeting our requirement. But even the analysis of this more promising application suggests that the strategy ought to address the decision problem as a whole, rather than just the outcomes, and that it should extend its revision process to the very statements it is meant to protect. Thus, by and large, the same cautionary tale against redescription practices runs through the analysis of all three cases. A more general lesson, simply put, is that there is no easy way out from the paradoxes of decision theory.
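The Allais paradox mentioned above has a simple arithmetical core: under any utility assignment, the expected-utility difference between the first pair of lotteries equals the difference between the second pair, so the commonly observed preference pattern (A over B, but D over C) cannot be rationalized by expected utility. A sketch assuming the standard textbook payoffs in millions, not taken from the paper:

```python
def eu(lottery, u):
    """Expected utility of a lottery given as [(prize, probability), ...]."""
    return sum(p * u(x) for x, p in lottery)

# Standard Allais lotteries (prizes in millions):
A = [(1, 1.0)]                            # 1 for sure
B = [(5, 0.10), (1, 0.89), (0, 0.01)]     # 5 w.p. 0.10, 1 w.p. 0.89, 0 w.p. 0.01
C = [(1, 0.11), (0, 0.89)]                # 1 w.p. 0.11, 0 w.p. 0.89
D = [(5, 0.10), (0, 0.90)]                # 5 w.p. 0.10, 0 w.p. 0.90

# For ANY utility function u: EU(A) - EU(B) == EU(C) - EU(D)
# (both equal 0.11*u(1) - 0.10*u(5) - 0.01*u(0)), so preferring A to B
# while preferring D to C violates expected utility, whatever u is.
for u in (lambda x: x, lambda x: x ** 0.5, lambda x: -1 / (1 + x)):
    assert abs((eu(A, u) - eu(B, u)) - (eu(C, u) - eu(D, u))) < 1e-12
print("identity holds for all three utility functions")
```

The redescription strategy the paper discusses amounts to changing the prize descriptions (e.g. adding disappointment to the 0 outcome in B) so that this identity no longer pins the agent down.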
These incompleteness theorems concern the relation of (Peano) arithmetic and (ZFC) set theory, or, philosophically, the relation of arithmetical finiteness and actual infinity. The same relation is managed within set theory by the axiom of choice (equivalently, by the well-ordering "theorem"). One may discuss incompleteness from the viewpoint of set theory and the axiom of choice, rather than from the usual viewpoint taken in the proofs of the theorems. The logical corollaries of that "nonstandard" viewpoint for the relation of set theory and arithmetic are demonstrated.
CAT4 is proposed as a general method for representing information, enabling a powerful programming method for large-scale information systems. It enables generalised machine learning, software automation and novel AI capabilities. It is based on a special type of relation called CAT4, which is interpreted to provide a semantic representation. This is Part 1 of a five-part introduction. The focus here is on defining the key mathematical structures first, and presenting the semantic-database application in subsequent Parts. We focus in Part 1 on general axioms for the structures, and introduce key concepts. Part 2 analyses the CAT2 sub-relation of CAT4 in more detail. The interpretation of fact networks is introduced in Part 3, where we turn to interpreting semantics. We start with examples of relational and graph databases, with methods to translate them into CAT3 networks, with the aim of retaining the meaning of information. The full application to semantic theory comes in Part 4, where we introduce general functions, including the language interpretation or linguistic functions. The representation of linear symbolic languages, including natural languages and formal symbolic languages, is a function to which CAT4 is uniquely suited. In Part 5, we turn to software design considerations, to show how files, indexes, functions and screens can be defined to implement a CAT4 system efficiently.
The naive theory of properties states that for every condition there is a property instantiated by exactly the things which satisfy that condition. The naive theory of properties is inconsistent in classical logic, but there are many ways to obtain consistent naive theories of properties in nonclassical logics. The naive theory of classes adds to the naive theory of properties an extensionality rule or axiom, which states roughly that if two classes have exactly the same members, they are identical. In this paper we examine the prospects for obtaining a satisfactory naive theory of classes. We start from a result by Ross Brady, which demonstrates the consistency of something resembling a naive theory of classes. We generalize Brady’s result somewhat and extend it to a recent system developed by Andrew Bacon. All of the theories we prove consistent contain an extensionality rule or axiom. But we argue that given the background logics, the relevant extensionality principles are too weak. For example, in some of these theories, there are universal classes which are not declared coextensive. We elucidate some very modest demands on extensionality, designed to rule out this kind of pathology. But we close by proving that even these modest demands cannot be jointly satisfied. In light of this new impossibility result, the prospects for a naive theory of classes are bleak.
We introduce translations between display calculus proofs and labeled calculus proofs in the context of tense logics. First, we show that every derivation in the display calculus for the minimal tense logic Kt extended with general path axioms can be effectively transformed into a derivation in the corresponding labeled calculus. Concerning the converse translation, we show that for Kt extended with path axioms, every derivation in the corresponding labeled calculus can be put into a special form that is translatable to a derivation in the associated display calculus. A key insight in this converse translation is a canonical representation of display sequents as labeled polytrees. Labeled polytrees, which represent equivalence classes of display sequents modulo display postulates, also shed light on related correspondence results for tense logics.
“I am me”, but what does this mean? For centuries humans identified themselves as conscious beings with free will, beings that are important in the cosmos they live in. However, modern science has been trying to reduce us to unimportant pawns in a cold universe and diminish our sense of consciousness into a mere illusion generated by lifeless matter. Our identity in the cosmos is nothing more than a deception, and all the scientific evidence seems to support this idea. Or is it not so? The goal of this paper is to discard the current underlying dogmatism (axioms taken for granted as "self-evident") of modern mind research and to show that consciousness seems to be the ultimate frontier that will cause a major change in the way exact sciences think. If we want to re-discover our identity as luminous beings in the cosmos, we must first try to pinpoint our prejudices and discard them. Materialism is an obsolete philosophical dogma, and modern scientists should try to also use other premises as the foundation of their theories to approach the mysteries of the self. Exact sciences need to examine the world with a more open mind, accepting potentially different interpretations of existing experimental data in the fields of brain research, which are currently not considered simply on the basis of a strong anti-spiritual dogmatism. Such interpretations can be compatible with the notion of an immaterial spirit proposed by religion for thousands of years. Mind, it seems, is not the by-product of matter but the opposite: its master. No current materialistic theory can explain how matter may give rise to what we call “self”, and only a drastic paradigm shift towards more idealistic theories will help us avoid rejecting our own nature.
In Popper's Logik der Forschung, a theoretical system is a set of sentences that describe a particular sub-area of science, in particular of empirical science. The goal of axiomatizing a theoretical system is to specify a small number of "axioms" describing all presuppositions of the sub-area under consideration, so that all other sentences of this system can be derived from them by means of logical or mathematical transformations. The paper discusses two philosophical interpretations of these proper axioms. First, proper axioms stipulate the use of the signs for the basic concepts of the system. Consequently, the proper axioms turn out to be analytic relative to a class of interpretations of the underlying logic. Hence, they cannot be falsified by refuting their logical consequences, because these consequences are analytic as well. Second, proper axioms are synthetic, falsifiable and uncertain sentences. Hence, they are not immunized against falsification by refuting their logical consequences.
For centuries the case of Galileo Galilei has been the cornerstone of every major argument against the church and its supposedly unscientific dogmatism. The church seems to have condemned Galileo for his heresies just because it couldn't and wouldn't handle the truth. Galileo was a hero of science wrongfully accused, and now, at last, everyone knows that. But is that true? This paper tries to examine the case from the point of view of modern physics, and the conclusions drawn are startling. It seems that the contemporary church was too hasty in condemning itself. The evidence provided by Galileo to support the heliocentric system does not even pass simple scrutiny, while modern physics has long ruled against both heliocentric and geocentric models as depictions of the "truth". As Einstein eloquently said, the debate about which system is chosen is void of any meaning from a physics point of view. In the end, the selection of the center is more a matter of choice than a matter of 'truth' of any kind. And this choice is driven by specific philosophical axioms that have penetrated astronomy for hundreds of years. From Galileo to Hubble, the Copernican principle has slowly been transformed into a dogma followed by all mainstream astronomers. It is time to challenge our dogmatic adherence to the anti-humanist idea that we are insignificant in the cosmos and start making true, honest science again, as Copernicus once postulated.
Distributive justice concerns the distribution of goods within a group, where various distribution principles and outcomes are debated as possible ideals of such a distribution. These normative approaches are often formulated purely verbally, which frequently makes it difficult to apply them to the various concrete distribution situations whose justice is to be assessed. One way to capture finely graded justice judgments of different distributions precisely is the formal modelling of such ideals by means of measures or indices. The choice of a suitable measure intended to represent a certain ideal must in turn be grounded, which can be achieved by demanding well-founded axioms that a measure should satisfy. In the present work, such axioms for measures of distributive justice are introduced, using need-based justice as an example. Furthermore, exemplary measures of need-based justice are presented. This lays a first, debatable foundation for the assessment and modelling of measures of distributive justice.
Peano arithmetic cannot serve as the ground of mathematics, for it is inconsistent with infinity, and infinity is necessary for such a foundation. Though Peano arithmetic cannot be complemented by any axiom of infinity, there exists at least one (logical) axiomatics consistent with infinity. What is meant here is nothing other than a new reading and comparative interpretation of Gödel's papers (1930; 1931). Peano arithmetic nevertheless admits generalizations consistent with infinity, and thus with some addable axiom(s) of infinity. The most commonly used example of such a generalization is the complex Hilbert space. Any generalization of Peano arithmetic consistent with infinity, e.g. the complex Hilbert space, can serve as a foundation for mathematics to found itself and by itself.
The cognition of quantum processes raises a series of questions about ordering and information connecting the states of one and the same system before and after measurement. Quantum measurement, quantum invariance and the non-locality of quantum information are considered in the paper from an epistemological viewpoint. An adequate generalization of 'measurement' is discussed, involving the discrepancy, due to the fundamental Planck constant, between any quantum coherent state and its statistical representation as a statistical ensemble after measurement. Quantum invariance designates the relation of any quantum coherent state to the corresponding statistical ensemble of measured results. A set-theoretic corollary is a curious invariance with respect to the axiom of choice: any coherent state excludes any well-ordering and thus also excludes the axiom of choice. However, the above equivalence requires the coherent state to be equated to a well-ordered set after measurement, and thus requires the axiom of choice for that set to be obtainable. Quantum invariance underlies quantum information and reveals it as the relation of an unordered quantum "much" (i.e. a coherent state) and a well-ordered "many" of measured results (i.e. a statistical ensemble). It opens up a new horizon, in which all physical processes and phenomena can be interpreted as quantum computations realizing relevant operations and algorithms on quantum information. All phenomena of entanglement can be described in terms of the quantum information so defined. Quantum invariance elucidates the link between general relativity and quantum mechanics and thus the problem of quantum gravity. The non-locality of quantum information unifies the exact position of any space-time point of a smooth trajectory and the common possibility of all space-time points due to a quantum leap. This is deduced from quantum invariance.
Epistemology involves the relation of ordering and thus a generalized kind of information, quantum information, to explain the special features of cognition in quantum mechanics.
The link between higher-order metaphysics and abstraction, on the one hand, and choice in the foundation of set theory, on the other, can distinguish unambiguously the "good" principles of abstraction from the "bad" ones and thus resolve the "bad company problem" as to set theory. Correspondingly, it implies a more precise definition of the relation between the axiom of choice and the "all company" of axioms in set theory concerning abstraction directly or indirectly: the principle of abstraction, the axiom of comprehension, the axiom scheme of specification, the axiom scheme of separation, the subset axiom scheme, the axiom scheme of replacement, the axiom of unrestricted comprehension, the axiom of extensionality, etc.
Quantum invariance designates the relation of any quantum coherent state to the corresponding statistical ensemble of measured results. An adequate generalization of 'measurement' is discussed so as to involve the discrepancy, due to the fundamental Planck constant, between any quantum coherent state and its statistical representation as a statistical ensemble after measurement. A set-theoretic corollary is a curious invariance with respect to the axiom of choice: any coherent state excludes any well-ordering and thus also excludes the axiom of choice. The coherent state should be equated to a well-ordered set after measurement, and this requires the axiom of choice. Quantum invariance underlies quantum information and reveals it as the relation between an unordered quantum "much" (i.e. a coherent state) and a well-ordered "many" of measured results (i.e. a statistical ensemble). It opens up a new horizon, in which all physical processes and phenomena can be interpreted as quantum computations realizing relevant operations and algorithms on quantum information. All phenomena of entanglement can be described in terms of the so-defined quantum information. Quantum invariance elucidates the link between general relativity and quantum mechanics and thus the problem of quantum gravity.
A principle according to which any scientific theory can be mathematized is investigated. The theory is presupposed to be a consistent text, which can be exhaustively represented by a certain mathematical structure constructively. As used here, the term "theory" includes all hypotheses, whether as yet unconfirmed or already rejected. The investigation of the sketch of a possible proof of the principle demonstrates that it should rather be accepted as a metamathematical axiom about the relation of mathematics and reality. Its investigation needs philosophical means. Husserl's phenomenology is what is used, and the conception of "bracketing reality" is then modelled to generalize Peano arithmetic in its relation to set theory in the foundation of mathematics. The obtained model is equivalent to the generalization of Peano arithmetic by means of replacing the axiom of induction with that of transfinite induction. A comparison with Mach's doctrine is used to reveal the fundamental and philosophical reductionism of Husserl's phenomenology, leading to a kind of Pythagoreanism in the final analysis. Accepting or rejecting the principle, two kinds of mathematics appear, differing from each other in their relation to reality. Accepting the principle, mathematics has to include reality within itself in a kind of Pythagoreanism. These two kinds are called in the paper, correspondingly, Hilbert mathematics and Gödel mathematics. The sketch of the proof of the principle demonstrates that the generalization of Peano arithmetic as above can be interpreted as a model of Hilbert mathematics within Gödel mathematics, thereby showing that the former is not less consistent than the latter and that the principle is an independent axiom. An information interpretation of Hilbert mathematics is involved. It is a kind of ontology of information. Thus, the problem of which of the two mathematics is more relevant to our being is discussed.
An information interpretation of the Schrödinger equation is involved to illustrate the above problem.
The concept of formal transcendentalism is utilized. The fundamental and definitive property of the totality is that "the totality is all"; thus its externality (unlike that of any other entity) is contained within it. This generates a fundamental (or philosophical) "doubling" of anything referred to the totality, i.e. considered philosophically. That doubling, as well as the transcendentalism underlying it, can thus be interpreted formally as an elementary choice, such as a bit of information, and a quantity corresponding to the number of elementary choices can be defined. This is the quantity of information, defined both transcendentally and formally and thus both philosophically and mathematically. If one defines information specifically, as an elementary choice between finiteness (mathematically, any natural number of Peano arithmetic) and infinity (i.e. an actually infinite set in the meaning of set theory), the quantity of quantum information is defined. One can demonstrate that the so-defined quantum information and the quantum information standardly defined by quantum mechanics are equivalent to each other. The equivalence of the axiom of choice and the well-ordering "theorem" is involved. It can be justified transcendentally as well, in virtue of the transcendental equivalence implied by the totality. Thus, all can be considered as temporal insofar as anything necessarily possesses such a temporal counterpart. Formally defined, the frontier of time is the current choice now, a bit of information, furthermore interpretable as a qubit of quantum information.
An isomorphism is built between the separable complex Hilbert space (quantum mechanics) and Minkowski space (special relativity) by the mediation of quantum information (i.e. qubit by qubit). That isomorphism can be interpreted physically as the invariance between a reference frame within a system and its unambiguous counterpart outside the system. The same idea can be applied to Poincaré's conjecture (proved by G. Perelman), hinting at another way of proving it, more concise and physically meaningful. Mathematically, the isomorphism means invariance with respect to choice, the axiom of choice, well-ordering, and the well-ordering "theorem" (or "principle"), and can be defined generally as "information invariance".
The principle of maximal entropy (further abbreviated "MaxEnt") can be founded on the formal mechanism in which the future transforms into the past by the mediation of the present. This allows MaxEnt to be investigated by the theory of quantum information. MaxEnt can be considered as an inductive analog or generalization of "Occam's razor". It depends crucially on choice, and thus on information, just as all inductive methods of reasoning do. The essence shared by Occam's razor and MaxEnt is that the relevant data known so far are postulated as a sufficient foundation for the conclusion. That axiom is the kind of choice grounding both principles. Popper's falsifiability (1935) can be discussed as a complement to them: that axiom (or axiom scheme) is always a sufficient but never a necessary condition of the conclusion, thereby postulating the choice at the base of MaxEnt. Furthermore, the abstraction axiom (or axiom scheme) relevant to set theory (e.g. the axiom scheme of specification in ZFC) involves choice analogously.
This paper is about Poincaré's view of the foundations of geometry. According to the established view, which has been inherited from the logical positivists, Poincaré, like Hilbert, held that axioms in geometry are schemata that provide implicit definitions of geometric terms, a view he expresses by stating that the axioms of geometry are "definitions in disguise." I argue that this view does not accord well with Poincaré's core commitment in the philosophy of geometry: the view that geometry is the study of groups of operations. In place of the established view I offer a revised view, according to which Poincaré held that axioms in geometry are in fact assertions about invariants of groups. Groups, as forms of the understanding, are prior in conception to the objects of geometry and afford the proper definition of those objects, according to Poincaré. Poincaré's view therefore contrasts sharply with Kant's foundation of geometry in a unique form of sensibility. According to my interpretation, axioms are not definitions in disguise because they themselves implicitly define their terms, but rather because they disguise the definitions which imply them.
Distributive justice deals with allocations of goods and bads within a group. Different principles and results of distributions are seen as possible ideals. Often those normative approaches are framed solely verbally, which complicates their application to the various concrete distribution situations that are supposed to be evaluated with regard to justice. One possibility for framing this precisely, and for allowing a fine-grained evaluation of justice, lies in the formal modelling of these ideals by metrics. Choosing a metric that is supposed to map a certain ideal has to be justified. Such justification might be given by demanding specific substantiated axioms that have to be met by a metric. This paper introduces such axioms for metrics of distributive justice, shown by the example of needs-based justice. Furthermore, some exemplary metrics of needs-based justice and a three-dimensional method for the visualisation of non-comparative justice axioms or evaluations are presented. This provides a basis worth discussing for the evaluation and modelling of metrics of distributive justice.
The Principle of Ariadne, formulated in 1988 by Walter Carnielli and Carlos Di Prisco and later published in 1993, is an infinitary principle that is independent of the Axiom of Choice in ZF, although it can be consistently added to the remaining ZF axioms. The present paper surveys, and motivates, the foundational importance of the Principle of Ariadne and proposes the Ariadne Game, showing that the Principle of Ariadne corresponds precisely to a winning strategy for the Ariadne Game. Some relations to other alternative set-theoretical principles are also briefly discussed.
We examine some of Connes' criticisms of Robinson's infinitesimals starting in 1995. Connes sought to exploit the Solovay model S as ammunition against non-standard analysis, but the model tends to boomerang, undercutting Connes' own earlier work in functional analysis. Connes described the hyperreals as both a "virtual theory" and a "chimera", yet acknowledged that his argument relies on the transfer principle. We analyze Connes' "dart-throwing" thought experiment, but reach an opposite conclusion. In S, all definable sets of reals are Lebesgue measurable, suggesting that Connes views a theory as being "virtual" if it is not definable in a suitable model of ZFC. If so, Connes' claim that a theory of the hyperreals is "virtual" is refuted by the existence of a definable model of the hyperreal field due to Kanovei and Shelah. Free ultrafilters aren't definable, yet Connes exploited such ultrafilters both in his own earlier work on the classification of factors in the 1970s and 80s, and in Noncommutative Geometry, raising the question whether the latter may not be vulnerable to Connes' criticism of virtuality. We analyze the philosophical underpinnings of Connes' argument based on Gödel's incompleteness theorem, and detect an apparent circularity in Connes' logic. We document the reliance on non-constructive foundational material, and specifically on the Dixmier trace −∫ (featured on the front cover of Connes' magnum opus) and the Hahn–Banach theorem, in Connes' own framework. We also note an inaccuracy in Machover's critique of infinitesimal-based pedagogy.
Standard approaches to proper names, based on Kripke's views, hold that the semantic values of expressions are (set-theoretic) functions from possible worlds to extensions and that names are rigid designators, i.e. that their values are constant functions from worlds to entities. The difficulties with these approaches are well-known and in this paper we develop an alternative. Based on earlier work on a higher order logic that is truly intensional in the sense that it does not validate the axiom scheme of Extensionality, we develop a simple theory of names in which Kripke's intuitions concerning rigidity are accounted for, but the more unpalatable consequences of standard implementations of his theory are avoided. The logic uses Frege's distinction between sense and reference and while it accepts the rigidity of names it rejects the view that names have direct reference. Names have constant denotations across possible worlds, but the semantic value of a name is not determined by its denotation.
An axiom of medical research ethics is that a protocol is moral only if it has a "favorable risk-benefit ratio". This axiom is usually interpreted in the following way: a medical research protocol is moral only if it has a positive expected value, that is, if it is likely to do more good (to both subjects and society) than harm. I argue that, thus interpreted, the axiom has two problems. First, it is unusable, because it requires us to know more about the potential outcomes of research than we ever could. Second, it is false, because it conflicts with the so-called "soft paternalist" principles of liberal democracy. In place of this flawed rule I propose a new way of making risk-benefit assessments, one that does comport with the principles of liberalism. I argue that a protocol is moral only if it would be entered into by competent subjects who are informed about the protocol. The new rule thus eschews all pseudo-utilitarian calculation about the protocol's likely harms and benefits.
The text is a continuation of the article of the same name published in the previous issue of Philosophical Alternatives. The philosophical interpretations of the Kochen-Specker theorem (1967) are considered. Einstein's principle regarding the "consubstantiality of inertia and gravity" (1918) allows of a parallel between descriptions of a physical micro-entity in relation to the macro-apparatus, on the one hand, and of physical macro-entities in relation to astronomical mega-entities, on the other. The Bohmian interpretation (1952) of quantum mechanics proposes that all quantum systems be interpreted as dissipative ones and that the theorem be thus understood. The conclusion is that the continual representation of a system, by force or (gravitational) field between parts interacting by means of it, is equivalent to the mutual entanglement of those parts if the representation is discrete. Gravity (force field) and entanglement are two different, correspondingly continual and discrete, images of a single common essence. General relativity can be interpreted as a superluminal generalization of special relativity. The postulate exists of an alleged obligatory difference between a model and reality in science and philosophy. It can also be deduced by interpreting a corollary of the theorem. On the other hand, quantum mechanics, on the basis of this theorem and of Von Neumann's (1932), introduces the option that a model be entirely identified with the modeled reality and, therefore, that absolute reality be recognized: this is a non-standard hypothesis in the epistemology of science. Thus, true reality begins to be understood mathematically, i.e. in a Pythagorean manner, through its identification with its mathematical model. A few linked problems are highlighted: the role of the axiom of choice for correctly interpreting the theorem; whether the theorem can be considered an axiom; whether the theorem can be considered equivalent to the negation of the axiom.
How do axioms, or first principles, in ethics compare to those in mathematics? In this companion piece to G.C. Field's 1931 "On the Role of Definition in Ethics", I argue that there are similarities between the cases. However, these are premised on an assumption which can be questioned, and which highlights the peculiarity of normative inquiry.