By now we have an effectively computable propositional logic system (see PropLogicProject). It comes with an algebraic abstraction of propositional logic as a theory algebra, based on both the semantic and the syntactic order, rather than the usual, purely semantic reconstruction as a boolean algebra. As a next step, we generalize this bit-valued or propositional theory algebra to a general theory algebra of relations. This foundation project is a foundation in two directions. First, we use the propositional logic system as the sole tool for a reconstruction of the concept of computability. In other words, we develop a programming language based on propositional algebra, similar to the way functional languages are based on the lambda calculus. This dynamic exploitation of propositional logic starts off with a theoretificationism, which shows how theories, theory functions and theory relations can be seen as identical entities. In the other direction, we show how the more general theory algebras of relations provide a very appropriate and natural semantics for predicate logic. Moreover, there is a hierarchy of computable subsystems of these theory algebras, which enables the direct implementation of more or less general model constructors for first-order theories. As a consequence of this approach, the theorem of Church and Turing on the undecidability of first-order predicate logic could appear in a new light.
The individual parts
The whole Foundation project is a major project that currently comprises the following minor subprojects.
In the first place, a theory algebra was conceived as the algebraic abstraction of propositional logic, designed to overcome certain limitations of the traditional abstraction as a boolean algebra. For certain formalized applications, the purely semantic order structure preserved in the boolean algebra does not suffice, and the syntactic order needs to be considered as well. A theory algebra is thus the structure induced not by one, but by two interconnected quasi-order relations. Subsequently, it turned out that a generalization of the original propositional theory algebras, called theory algebras of relations, is a very suitable algebraization of predicate logic (see the corresponding minor project below).
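The interplay of the two quasi-orders can be illustrated with a small Python sketch. All concrete choices here are assumptions made for illustration only: a theory is a finite set of axioms over two atoms, the semantic order compares theories by entailment between their model classes, and the syntactic order is taken to be plain inclusion of axiom sets.

```python
from itertools import product

ATOMS = ["p", "q"]

def valuations():
    """All truth assignments for the fixed atom set."""
    return [dict(zip(ATOMS, bits))
            for bits in product([False, True], repeat=len(ATOMS))]

def models(theory):
    """Valuations satisfying every axiom of the theory.
    An axiom is a Python predicate on a valuation (illustrative choice)."""
    return [v for v in valuations() if all(ax(v) for ax in theory)]

def semantically_below(t1, t2):
    """Semantic quasi-order: t1 <= t2 iff every model of t2 is a
    model of t1, i.e. t2 entails t1."""
    return all(m in models(t1) for m in models(t2))

def syntactically_below(t1, t2):
    """One possible syntactic quasi-order (an assumption here):
    every axiom of t1 is literally an axiom of t2."""
    return set(t1) <= set(t2)

ax_p = lambda v: v["p"]
ax_q = lambda v: v["q"]

T1 = frozenset({ax_p})        # the theory {p}
T2 = frozenset({ax_p, ax_q})  # the theory {p, q}
```

Here `T1` lies below `T2` in both orders, while `T2` lies below `T1` in neither; in general the two orders need not agree, which is precisely why both are kept in the structure.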
A theoretificationism comprises a set of operators which turn theories (i.e. the elements of given theory algebras) into functions and relations on theories, and vice versa. In that way, it unifies the three basic ingredients of mathematical structures, namely sets, functions and relations, into a single form and representation.
A theoretificationism (see the previous project) defines how a theory acts as a function from theories to theories, and how binary theory relations (i.e. sets of theory pairs) are represented by single theories. When we concentrate on propositional theories (say, simple propositional formulas), this provides us with an encoding mechanism for data structures. Integers and their arithmetic, finite sets, sequences, hash tables etc. can be represented as propositional formulas, and we can adopt many of these encodings from the well-established digital techniques of standard computer system design. Once these primitive data types and operations are defined, we can design looping and iteration mechanisms by recursive theory applications and modifications. The goal is a Turing-complete programming language based on propositional logic, with a very minimal and elegant core system.
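To make the encoding idea concrete, here is a Python sketch of one such primitive data type: integers as bit vectors of propositional formulas, with addition taken from the standard ripple-carry adder of digital circuit design. The representation of a formula as a function from valuations to truth values is an illustrative assumption, not the project's actual data structure.

```python
# Bits are propositional formulas, represented here (illustrative choice)
# as functions from a valuation to a truth value.
FALSE = lambda v: False
TRUE  = lambda v: True

def var(name):   return lambda v: v[name]
def f_and(a, b): return lambda v: a(v) and b(v)
def f_or(a, b):  return lambda v: a(v) or b(v)
def f_xor(a, b): return lambda v: a(v) != b(v)

def const(n, width):
    """Encode integer n as a list of formula bits, least significant first."""
    return [TRUE if (n >> i) & 1 else FALSE for i in range(width)]

def add(xs, ys):
    """Ripple-carry adder on two equally long bit vectors of formulas,
    exactly as in standard digital circuit design."""
    out, carry = [], FALSE
    for x, y in zip(xs, ys):
        out.append(f_xor(f_xor(x, y), carry))
        carry = f_or(f_and(x, y), f_and(carry, f_xor(x, y)))
    return out + [carry]

def decode(bits, v=None):
    """Evaluate the bit formulas under a valuation and read off the integer."""
    v = v or {}
    return sum(b(v) << i for i, b in enumerate(bits))
```

Adding the encodings of 5 and 6, for instance, and evaluating the resulting bit formulas under the empty valuation yields 11; replacing constant bits by `var(...)` bits gives symbolic arithmetic over propositional variables in the same way.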
By now we have a fully elaborated Theory algebra of relations. Now we show how every model of a first-order theory turns into a member of a theory algebra (here the similarity of the two distinct "theory" notions unfolds again). In this respect, theory algebras accomplish the same goal as cylindric algebras, namely an algebraization of first-order logic. In cylindric algebras, variables are abstracted away, according to the traditional epistemology that does not allow one to think of symbols as part of the semantics. But that idea only generates unnecessary artificialities. Theory algebras provide a more "natural" semantics: each term and each formula turns into a schematic function, where the typed free variables make up the schema. And each entire model turns into a single schematic relation. It is possible to broaden this approach into an alternative relational foundation of mathematics itself, a third way next to the first, set-theoretical foundation and the second, more recent categorical or functional foundation.
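The idea that a formula with typed free variables turns into a schematic relation can be sketched in Python on a tiny finite model. The carrier {0, 1, 2} and the interpretation of the relation symbol `less` as the usual order are assumptions made for illustration.

```python
from itertools import product

# A small finite model (illustrative assumption): carrier {0, 1, 2},
# with the relation symbol `less` interpreted as the usual <.
CARRIER = [0, 1, 2]

def schematic_relation(formula, variables):
    """Turn a formula with free variables into the relation it defines:
    the set of all variable assignments (as tuples) satisfying it.
    The variable list is the 'schema' of the resulting relation."""
    return {vals for vals in product(CARRIER, repeat=len(variables))
            if formula(dict(zip(variables, vals)))}

# The formula less(x, y) with free variables x, y ...
less_xy = schematic_relation(lambda a: a["x"] < a["y"], ["x", "y"])
# ... turns into the schematic relation {(0,1), (0,2), (1,2)}.
```

In the same spirit, a term with free variables becomes a schematic function (an assignment-to-value map), and the whole model is the single schematic relation collecting all of its symbols' interpretations.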
The previous theory-algebraic semantics of predicate logic stimulates the design of model generators for first-order theories. By a model generator we mean an algorithm that takes a first-order theory as input and returns all its models (or at least one, or the smallest one), or a negative answer if the theory has no model at all. [A first-order theory is a signature together with an axiom set, which must be finite here. A signature is made of set symbols and constants, function symbols and relation symbols, which are typed on the given set symbols. An axiom is a closed formula in the language of the given signature.] At the moment, I am under the impression that a model generator in this general fashion might actually be possible, an idea that would refute the proofs of Church and Turing for the undecidability of predicate logic. But what is actually implemented by now is a more special model generator, which requires a finite carrier set (or its cardinality) as a second input parameter and returns all possible models of the theory on this given carrier set. (The corresponding program is implemented as a Haskell package and should be published at some future point.)
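The special model generator with a given finite carrier set can be sketched by brute force: enumerate every interpretation of the signature's symbols on the carrier and keep those that satisfy all axioms. The following Python sketch only illustrates the idea (the actual program is a Haskell package); restricting the signature to relation symbols and encoding axioms as Python predicates are simplifying assumptions.

```python
from itertools import chain, combinations, product

def powerset(xs):
    """All subsets of a finite collection, as tuples."""
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def finite_models(carrier, rel_arities, axioms):
    """Enumerate all interpretations of the relation symbols on the given
    finite carrier and yield those satisfying every axiom."""
    symbols = list(rel_arities)
    spaces = [list(map(frozenset,
                       powerset(product(carrier, repeat=rel_arities[r]))))
              for r in symbols]
    for combo in product(*spaces):
        interp = dict(zip(symbols, combo))
        if all(ax(carrier, interp) for ax in axioms):
            yield interp

# Example theory: strict linear orders, via one binary relation symbol R.
axioms = [
    lambda C, I: all((x, x) not in I["R"] for x in C),            # irreflexive
    lambda C, I: all((x, y) in I["R"] or (y, x) in I["R"]
                     for x in C for y in C if x != y),            # total
    lambda C, I: not any((x, y) in I["R"] and (y, x) in I["R"]
                         for x in C for y in C),                  # asymmetric
]
models = list(finite_models([0, 1], {"R": 2}, axioms))
```

On the carrier {0, 1}, these axioms admit exactly two models, the orders 0 < 1 and 1 < 0. The search space grows doubly exponentially in the carrier size, so the real implementation needs to be considerably more clever than this exhaustive sketch.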
Links and references