Working with Sets, Finite Sets, Intervals and Boundaries

Different ways to express and manipulate sets.

First with the builtin sets, and then using symbolic expressions.

In [2]:
from sympy import init_printing, latex
init_printing()
In [3]:
import sympy as sp
A,B,C = sp.symbols('A,B,C',positive=True)

Sets using the Python builtin sets.

In [4]:
s1 = set([A,B])
s2 = set([C])
s1.isdisjoint(s2)
Out[4]:
True

Basic sets using SymPy. The class sympy.sets.sets.Set is not meant to be used as a container of items like the Python builtin sets; it represents a mathematical set, and several classes related to intervals are based on it. As can be seen below, the boundary of an interval is a FiniteSet. It might be instructive to understand a little more about intervals, which is a topic from topology. https://docs.sympy.org/latest/modules/sets.html

In [5]:
sp.Interval(0,1).boundary
Out[5]:
$\displaystyle \left\{0, 1\right\}$
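A small sketch (the variable name `iv` is mine) of treating an Interval as a mathematical set rather than a container:

```python
import sympy as sp

iv = sp.Interval(0, 1)
# membership is a query, not container lookup
print(iv.contains(sp.Rational(1, 2)))        # True
# an open interval excludes its endpoints...
print(sp.Interval.open(0, 1).contains(0))    # False
# ...but its closure adds the boundary points back
print(sp.Interval.open(0, 1).closure)
```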

FiniteSet represents a finite set of discrete numbers. It can take the numbers directly or unpack them from a list. https://docs.sympy.org/latest/modules/sets.html#sympy.sets.sets.FiniteSet

In [6]:
sp.FiniteSet(1,2,3,4)
Out[6]:
$\displaystyle \left\{1, 2, 3, 4\right\}$
In [7]:
members = [1,2,3,4]
f1 = sp.FiniteSet(*members)
f1
Out[7]:
$\displaystyle \left\{1, 2, 3, 4\right\}$
In [8]:
f1-sp.FiniteSet(2)
Out[8]:
$\displaystyle \left\{1, 3, 4\right\}$

Rewrite a FiniteSet in terms of equalities and logic operators: as_relational expresses membership in the set as a disjunction of equalities on the given symbol.

In [9]:
s1f = sp.FiniteSet(1,2,3,4)
s1f.as_relational(A)
Out[9]:
$\displaystyle A = 1 \vee A = 2 \vee A = 3 \vee A = 4$
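`as_relational` works the same way on intervals, where it produces a conjunction of inequalities instead of a disjunction of equalities (the symbol name `t` here is mine):

```python
import sympy as sp

t = sp.Symbol('t', real=True)
rel = sp.Interval(0, 1).as_relational(t)
print(rel)   # a conjunction of the two endpoint inequalities
# the relational form can be evaluated by substitution
print(rel.subs(t, sp.Rational(1, 2)))   # True
print(rel.subs(t, 2))                   # False
```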

ConditionSet, written $\{x \in S \mid \mathrm{condition}(x)\}$, gives the set of elements of $S$ which satisfy a given condition.

In [10]:
x = sp.Symbol('x',positive=True)
sp.ConditionSet(x,x>=3,s1f)
Out[10]:
$\displaystyle \left\{3, 4\right\}$
In [11]:
sp.ConditionSet(x,x**2>4,sp.Interval(2,4))
Out[11]:
$\displaystyle \left\{x \mid x \in \left[2, 4\right] \wedge x^{2} > 4 \right\}$

Now for the intersection of some sets.

In [12]:
s1 = sp.FiniteSet(1,2,3,4)
s2 = sp.FiniteSet(4,5,6,7)
sp.Intersection(s1,s2)
Out[12]:
$\displaystyle \left\{4\right\}$
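Union and Complement follow the same pattern as Intersection (a small sketch reusing s1 and s2):

```python
import sympy as sp

s1 = sp.FiniteSet(1, 2, 3, 4)
s2 = sp.FiniteSet(4, 5, 6, 7)
print(sp.Union(s1, s2))        # all elements of either set
print(sp.Complement(s1, s2))   # elements of s1 not in s2
# the operators | and - are shorthand for the same operations
print((s1 | s2) == sp.Union(s1, s2))   # True
```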

Logical propositions and statements

Logic in Sympy. https://docs.sympy.org/latest/modules/logic.html

A proposition is a statement proposing an idea that can be either true or false; propositions can be combined with logical connectives to form new propositions.

In [13]:
x,y = sp.symbols('x,y')
y | (x & y)
Out[13]:
$\displaystyle y \vee \left(x \wedge y\right)$
In [14]:
x,y = sp.symbols('x,y',positive=True)
y | (x & y)
Out[14]:
$\displaystyle \text{True}$
In [15]:
x,y = sp.symbols('x,y',real=True)
y | (x & y)
Out[15]:
$\displaystyle y \vee \left(x \wedge y\right)$

The implication $x \Rightarrow y$ means that if x is true then y is true. It does not specify a causal relationship between x and y; it simply means that whenever x is true, y must also be true. The entire proposition is false if and only if x is true and y is false. A contingency is a proposition that is neither necessarily true nor necessarily false: it is not true for every possible valuation, nor false for every possible valuation. The implication is a contingency; its truth value depends on the values of the variables.
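One way to confirm the contingency claim is SymPy's SAT interface (a sketch; `satisfiable` returns a satisfying assignment, or False when none exists):

```python
import sympy as sp
from sympy.logic.inference import satisfiable

x, y = sp.symbols('x, y')
# a contingency: both the proposition and its negation have models
print(satisfiable(x >> y))        # some satisfying assignment
print(satisfiable(~(x >> y)))     # {x: True, y: False}, the only falsifier
# a tautology: its negation has no model at all
print(satisfiable(~((x >> y) | (y >> x))))   # False
```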

In [16]:
x >> y
Out[16]:
$\displaystyle x \Rightarrow y$
In [17]:
(x & y).subs({x:True,y:True})
Out[17]:
$\displaystyle \text{True}$
In [18]:
(x >> y).subs({x:True,y:False})
Out[18]:
$\displaystyle \text{False}$
In [19]:
(x >> y).subs({x:True,y:True})
Out[19]:
$\displaystyle \text{True}$
In [20]:
sp.S.true
Out[20]:
$\displaystyle \text{True}$

The following generates a truth table of sorts.

In [21]:
import itertools as it
from sympy import true,false
l1 = it.product([false,true],repeat=2)
prop1 = (x >> y)
results_list = list()
for i in l1:
    results_list.append([i[0],i[1],prop1.subs({x:i[0],y:i[1]})])
    print(i)
    print(prop1.subs({x:i[0],y:i[1]}))
(False, False)
True
(False, True)
True
(True, False)
False
(True, True)
True

Truth Table for Implication

In [22]:
import pandas as pd
df1 = pd.DataFrame(results_list,columns=['A','B',r'$A\implies B$'])
df1
Out[22]:
A B $A\implies B$
0 False False True
1 False True True
2 True False False
3 True True True
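The hand-rolled loop above can also be done with SymPy's built-in generator `sympy.logic.boolalg.truth_table`, which yields each input combination together with the evaluated result:

```python
import sympy as sp
from sympy.logic.boolalg import truth_table

x, y = sp.symbols('x, y')
# each row is a pair: [input values], result
rows = list(truth_table(x >> y, [x, y]))
for inputs, value in rows:
    print(inputs, value)
```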

The domain of each logic function in this case is represented by a list of true/false pairs.

In [23]:
list1 = [['T','F'],['T','F']]
import itertools
c1 = 0
for el in itertools.product(*list1):
    c1 += 1
    print(el)
print("count = ",c1)
# each one of these could be mapped to a logic function
input_list = list(itertools.product(*list1))
('T', 'T')
('T', 'F')
('F', 'T')
('F', 'F')
count =  4

Each pair would be mapped to an output by one of the functions in the list below. For every input pair from the list above there is a choice of output; the complete set of choices constitutes a function.

In [24]:
l1 = list(itertools.product([0,1],[0,1],[0,1],[0,1]))
print(l1)
print(len(l1))
[(0, 0, 0, 0), (0, 0, 0, 1), (0, 0, 1, 0), (0, 0, 1, 1), (0, 1, 0, 0), (0, 1, 0, 1), (0, 1, 1, 0), (0, 1, 1, 1), (1, 0, 0, 0), (1, 0, 0, 1), (1, 0, 1, 0), (1, 0, 1, 1), (1, 1, 0, 0), (1, 1, 0, 1), (1, 1, 1, 0), (1, 1, 1, 1)]
16
In [25]:
# select function 0
f = l1[0]
# lambdify it
lf = lambda x_idx: f[x_idx]
# input arguments
x_0 = input_list[0]
print("Input arguments are ",x_0)
# function output
print("Function result is ",lf(0))
Input arguments are  ('T', 'T')
Function result is  0
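The construction above can be carried out for all 16 functions at once: each 4-tuple of outputs, paired position-by-position with the 4 input pairs, defines one Boolean function (helper names here are mine):

```python
import itertools

input_pairs = list(itertools.product(['T', 'F'], repeat=2))   # 4 input pairs
output_columns = list(itertools.product([0, 1], repeat=4))    # 16 output columns
# each function is a lookup table from input pair to output bit
functions = [dict(zip(input_pairs, outs)) for outs in output_columns]
print(len(functions))   # 16 = 2**(2**2) Boolean functions of two variables
# AND, written as a lookup table, is one of them
and_table = {p: int(p == ('T', 'T')) for p in input_pairs}
print(and_table in functions)   # True
```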

A logic function maps inputs from the domain of true false value pairs to an output value.

In [26]:
l1 = lambda a,b:bool(a)&bool(b)
l1(*input_list[0])
Out[26]:
True
In [27]:
def convert_to_bool(in_str):
    if not isinstance(in_str,str):
        return None
    elif in_str.upper() == 'T':
        return True
    elif in_str.upper() == 'F':
        return False
    else:
        return None

for el in input_list:
    A,B = el
    A = convert_to_bool(A)
    B = convert_to_bool(B)
    print(l1(A,B))
True
False
False
False

How is proposition A by itself different from A given B? The truth values of the two propositions can differ, and if the truth values differ, that matters when determining the overall truth value of a proposition built from logical connectives. The question is how to combine these for plausibilities, that is, for propositions that map to real numbers.

What is the domain of the function we want? If it is just A|C and B|C, each can range freely, whereas with A|BC and B|C each is judged relative to the other: A|BC in effect holds B|C constant while A varies. A|C and A|BC are not the same function because they do not have the same domain. We are actually trying to map the implication; say A|BC came from the weak syllogism: if A implies B and we know A, then B ought to be certain, but if we only know that B is true, then A could be true or false.

The truth table for implication is:

| A | B | $A \implies B$ |
|---|---|---|
| T | T | T |
| T | F | F |
| F | T | T |
| F | F | T |

The domain for a logic function is $\{ (T,T),(T,F),(F,T),(F,F) \}$ represented by the symbols A and B.

The domain for the probability function is continuous, since each input can vary between 0 and 1. Each of the following maps to a real number, which then serves as the input to a product or sum. $\{ A|C,\, B|C,\, A|BC,\, B|AC \} \rightarrow \{x,y,z,v : x,y,z,v \in [0,1]\}$

These mappings should show a relationship: how A|C, B|AC, and AB|C interact when updated. See pg. 5 in Jaynes.

Looking at the implication again, and reading Jaynes page 5 concerning plausible reasoning: if A is true then B is true (the implication), and we observe that A is false; therefore B becomes less plausible. Looking at the truth table for the implication in the rows where A is false, there are two possible outcomes, B true and B false. Which one is it? There now seems to be some kind of weight that would pull more towards true or false depending on the evidence.

Plausibility Mapping

This shows probability or plausibility numbers attached to specific statements. Then a function maps each value via a mathematical function, so a mapping is occurring twice.

In [28]:
import math
list1 = {'A|C':0.5,'B|C':0.5,'A|BC':0.1,'B|AC':0.2}
def plaus(val):
    return math.exp(val)

# avoid shadowing the itertools alias "it" used elsewhere
for val in map(plaus,list1.values()):
    print(val)
1.6487212707001282
1.6487212707001282
1.1051709180756477
1.2214027581601699

Jaynes proposes the following form for the product rule: the plausibility of AB|C is a function of A|C and B|AC, or equivalently a function of B|C and A|BC. A|C, B|C, A|BC, and B|AC all map to positive numbers and will be represented as such. Let f be a mapping function that maps the members of the domain represented by the set $\{AB|C,A|C,B|C,B|AC,A|BC\}$ to positive real numbers. The output from each function will be represented as follows: $u = f(AB|C)$, $v = f(A|C)$, $w=f(B|AC)$, $x=f(B|C)$, $y=f(A|BC)$. Another function, F, maps a pair of positive numbers to another positive number; F must produce the plausibility of AB|C, that is, of A and B together given C.

In [29]:
u,v,w,x,y = sp.symbols('u,v,w,x,y',positive=True)
f1 = sp.Function('F')(x,y)
f2 = sp.Function('F')(v,w)
# both functions f1 and f2 are equal to u
f1, f2
Out[29]:
$\displaystyle \left( F{\left(x,y \right)}, \ F{\left(v,w \right)}\right)$
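Jumping ahead to the eventual solution as a sanity check (my own illustration, not from the text): with $w$ the identity, $F(x,y) = xy$ is one function with the required consistency property, since combining three plausibilities in either order agrees by associativity of multiplication:

```python
import sympy as sp

x, y, z = sp.symbols('x, y, z', positive=True)
F = lambda a, b: a * b   # candidate form of the product-rule function
# (AB)C and A(BC) must receive the same plausibility
print(sp.simplify(F(F(x, y), z) - F(x, F(y, z))))   # 0
```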

Making a function just to print the representation.

In [30]:
d1 = {u:'AB|C',v:'A|C',w:'B|AC',x:'B|C',y:'A|BC'}
def show_logical_symbols(sym1,sym2):
    print(d1[sym1],d1[sym2])

Jaynes asks: why couldn't the arguments of F be something other than x,y or v,w, say x,v? Well, x might be very probable and v might be very probable, but x and v together might not be very probable. So the alternatives do not make sense.

To generate all possible functions, we would generate all possible pairs of v,w,x and y.

In [31]:
import itertools as it
c = 0
for el in it.product([v,w,x,y],repeat=2):
    a,b = el
    if a != b:
        show_logical_symbols(a,b)
        c +=1 
print("Count = ",c)
A|C B|AC
A|C B|C
A|C A|BC
B|AC A|C
B|AC B|C
B|AC A|BC
B|C A|C
B|C B|AC
B|C A|BC
A|BC A|C
A|BC B|AC
A|BC B|C
Count =  12

Trying to understand why the arguments to the product rule couldn't be, for example, A|BC and B|AC. The plausibility of AB|C should be high if both could hold at the same time, but low if one or both were low. So when would A|BC be high and B|AC be high at the same time? The plausibility of A|BC is contingent on B, and B|AC is contingent on A, but there is no information about either A or B by itself; that seems to be missing information, and it would be needed to calculate the plausibility of A and B together. Likewise, combining A|C and B|C gives their individual plausibilities but no information about the relationship between the two; the plausibility of the two together could be very different.

I was trying to put together a sort of truth table for some scenarios to get some insight into the plaus calculation, not sure that it shed new light on it.

The truth table for implication and AND:

| $A$ | $B$ | $A \Rightarrow B$ | $AB$ |
|---|---|---|---|
| T | T | T | T |
| T | F | F | F |
| F | T | T | F |
| F | F | T | F |

According to the table above, if the implication is true and A is true, then B has to be true. Given that B is true, A could be true or false. Since we're working in the scenario where the implication is true, perhaps the implication should be the background information. If the implication is true and we look at f(B|AC), we would have to assign it certainty. If we look at f(A|BC), it cannot be as certain as f(B|AC), because in the rows where the implication is true and B is true, A can be either true or false. Given the implication as background information, reasoning about the plausibility of AB|C is certainly asymmetrical, since the reasoning differs depending on which proposition is known first. If $A \implies B$ and A is known, then B becomes certain, and AB|C should be certain. But if B is known to be true, it could have come from A being true, or perhaps not. This does not seem to be a calculation that would arrive at the same plausibility regardless of which fact was known. Come back to this later.

On pg 26, Jaynes points out that an increase in the plausibility of one argument that does not change the other must result in a greater-than-or-equal change in the plausibility of the combined result. Also, if the simple argument B|C got new information that did not change its plausibility, but the plausibility of A|BC increased based on new information, then the plausibility of AB|C should be equal to or greater than it was before. This leads to the idea that the function increases smoothly with its arguments, and hence to the assumption of a smooth, differentiable function.

In [32]:
# working on page 27 of Jaynes
x,y,z,u,v = sp.symbols('x,y,z,u,v',positive=True)
f1 = sp.Function('F')(x,y)
f2 = sp.Function('F')(y,z)
f3 = sp.Function('F')(f1,z)
f4 = sp.Function('F')(x,f2)
sp.diff(f3,x)
Out[32]:
$\displaystyle \frac{\partial}{\partial x} F{\left(x,y \right)} \frac{\partial}{\partial F{\left(x,y \right)}} F{\left(F{\left(x,y \right)},z \right)}$
In [33]:
sp.diff(f4,x)
Out[33]:
$\displaystyle \frac{\partial}{\partial x} F{\left(x,F{\left(y,z \right)} \right)}$
In [34]:
expr1 = f3-f4
d1 = sp.diff(expr1,x)
d1
Out[34]:
$\displaystyle \frac{\partial}{\partial x} F{\left(x,y \right)} \frac{\partial}{\partial F{\left(x,y \right)}} F{\left(F{\left(x,y \right)},z \right)} - \frac{\partial}{\partial x} F{\left(x,F{\left(y,z \right)} \right)}$
In [35]:
d2 = sp.diff(d1,y)
d2
Out[35]:
$\displaystyle \frac{\partial}{\partial x} F{\left(x,y \right)} \frac{\partial}{\partial y} F{\left(x,y \right)} \frac{\partial^{2}}{\partial F{\left(x,y \right)}^{2}} F{\left(F{\left(x,y \right)},z \right)} - \frac{\partial}{\partial y} F{\left(y,z \right)} \frac{\partial^{2}}{\partial F{\left(y,z \right)}\partial x} F{\left(x,F{\left(y,z \right)} \right)} + \frac{\partial}{\partial F{\left(x,y \right)}} F{\left(F{\left(x,y \right)},z \right)} \frac{\partial^{2}}{\partial y\partial x} F{\left(x,y \right)}$
In [36]:
factor1 = sp.S(1)/sp.diff(f1,x)
factor1
Out[36]:
$\displaystyle \frac{1}{\frac{\partial}{\partial x} F{\left(x,y \right)}}$
In [37]:
#(2.17) in Jaynes, pg 27 maybe come back and finish derivation later.
# Also note that differential equations in sympy could use Derivative instead of diff.
sp.expand(factor1*d2)
Out[37]:
$\displaystyle \frac{\partial}{\partial y} F{\left(x,y \right)} \frac{\partial^{2}}{\partial F{\left(x,y \right)}^{2}} F{\left(F{\left(x,y \right)},z \right)} - \frac{\frac{\partial}{\partial y} F{\left(y,z \right)} \frac{\partial^{2}}{\partial F{\left(y,z \right)}\partial x} F{\left(x,F{\left(y,z \right)} \right)}}{\frac{\partial}{\partial x} F{\left(x,y \right)}} + \frac{\frac{\partial}{\partial F{\left(x,y \right)}} F{\left(F{\left(x,y \right)},z \right)} \frac{\partial^{2}}{\partial y\partial x} F{\left(x,y \right)}}{\frac{\partial}{\partial x} F{\left(x,y \right)}}$

Testing out the Product Rule Solution

According to Jaynes pg 28, the solution looks like this. $w(F(x,y))=w(x)w(y)$ (2.26)

And so $F(x,y) = w^{-1}(w(x)w(y))$

Does that work?

Seems like a custom inverse function is needed.

Here is an example function, and then my attempt at an implementation.

In [38]:
class my_func(sp.Function):
    @classmethod
    def eval(cls, x):
        if x.is_Number:
            if x.is_zero:
                return sp.S.One
            elif x is sp.S.Infinity:
                return sp.S.Zero
    
    def _eval_is_real(self):
        return self.args[0].is_real

my_func(0)
f1.free_symbols
Out[38]:
$\displaystyle \left\{x, y\right\}$
In [39]:
class w_inv(sp.Function):
    w = True
    @classmethod
    def eval(cls, x_in):
        if isinstance(x_in,sp.Function) and getattr(x_in,'w',False):
            sm = x_in.args
            return sm[0]

w_inv(x)
Out[39]:
$\displaystyle \operatorname{w_{inv}}{\left(x \right)}$
In [40]:
isinstance(x,sp.Symbol)
Out[40]:
True
In [41]:
class w(sp.Function):
    w = True
    @classmethod
    def eval(cls, x):
        if isinstance(x,w_inv):
            sm = x.args
            return sm[0]
w(x)
Out[41]:
$\displaystyle w{\left(x \right)}$
In [42]:
w_inv(w(x))
Out[42]:
$\displaystyle x$
In [43]:
w(w_inv(x))
Out[43]:
$\displaystyle x$

The function $F(x,y) = w^{-1}(w(x)w(y))$

In [44]:
class F(sp.Function):
    @classmethod
    def eval(cls, x,y):
        return w_inv(w(x)*w(y))
    
    def _eval_is_real(self):
        return (self.args[0].is_real and self.args[1].is_real)
F(y,z)
Out[44]:
$\displaystyle \operatorname{w_{inv}}{\left(w{\left(y \right)} w{\left(z \right)} \right)}$
In [45]:
expr1
Out[45]:
$\displaystyle - F{\left(x,F{\left(y,z \right)} \right)} + F{\left(F{\left(x,y \right)},z \right)}$
In [46]:
F(F(x,y),z)
Out[46]:
$\displaystyle \operatorname{w_{inv}}{\left(w{\left(x \right)} w{\left(y \right)} w{\left(z \right)} \right)}$
In [47]:
F(x,F(y,z))
Out[47]:
$\displaystyle \operatorname{w_{inv}}{\left(w{\left(x \right)} w{\left(y \right)} w{\left(z \right)} \right)}$
In [48]:
F(F(x,y),z) - F(x,F(y,z)) == 0
Out[48]:
True
In [49]:
w(F(x,y))
Out[49]:
$\displaystyle w{\left(x \right)} w{\left(y \right)}$
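Any invertible positive function can play the role of $w$ in (2.26). As an illustration (my own choice, not from the text), take $w(x)=e^{x}$ with inverse $\log$; then $F$ becomes simple addition:

```python
import sympy as sp

x, y = sp.symbols('x, y', positive=True)
# F(x, y) = w^{-1}(w(x) w(y)) with w = exp, w^{-1} = log
F = sp.log(sp.exp(x) * sp.exp(y))
print(sp.simplify(F))   # x + y
# and w(F(x, y)) recovers w(x) w(y) as required
print(sp.simplify(sp.exp(F) - sp.exp(x) * sp.exp(y)))   # 0
```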

According to Jaynes pg. 29 the relation sought must take on the functional form.

$w(AB|C) = w(A|BC)w(B|C) = w(B|AC)w(A|C)$

Suppose that A is certain given C. Then AB|C has the same truth value as B|C: when A is declared true given C, the result of AB|C depends only on B|C.

As can be seen by the substitutions below, if AB|C is equal to B|C, plugging that into the product rule gives:

$w(B|C) = w(A|BC)w(B|C)$

Also $w(A|BC) = w(A|C)$, since knowledge of B doesn't change the fact that A is a certain event.

$w(B|C) = w(A|C)w(B|C)$

The resulting solution below shows that $w(A|C)=1$.

In [50]:
x0,x1,x2,x3,x4 = sp.symbols('x:5',positive=True)
d1 = {'AB|C':x0,'A|C':x1,'B|AC':x2,'B|C':x3,'A|BC':x4}
expr = w(x3)-w(x1)*w(x3)
expr_f = sp.factor(expr,w(x3))
expr_f
Out[50]:
$\displaystyle \left(1 - w{\left(x_{1} \right)}\right) w{\left(x_{3} \right)}$
In [51]:
expr_f.subs(w(x3),0)
Out[51]:
$\displaystyle 0$
In [52]:
expr_f.subs(w(x1),1)
Out[52]:
$\displaystyle 0$

Now suppose that A|C is impossible. Then AB|C should depend only on A, so w(AB|C) = w(A|C). Since A is known to be impossible, any knowledge of B wouldn't matter in the case of A|BC, so w(A|BC) = w(A|C).

$w(A|C) = w(A|C)w(B|C)$

In [53]:
expr = w(x1)*w(x3)-w(x1)
expr
Out[53]:
$\displaystyle w{\left(x_{1} \right)} w{\left(x_{3} \right)} - w{\left(x_{1} \right)}$
In [54]:
expr_f = sp.factor(expr,w(x1))
expr_f
Out[54]:
$\displaystyle \left(w{\left(x_{3} \right)} - 1\right) w{\left(x_{1} \right)}$

The equation can be zero if $w(x_1)$ is zero or if $w(x_3)$ is 1. Since $w(x_3) = w(B|C)$ and we do not assume that B|C is certain, taking $w(x_1) = w(A|C)$ to be zero solves the equation, and that fits the assumption that A|C is impossible.

In [55]:
expr_f.subs(w(x1),0)
Out[55]:
$\displaystyle 0$
In [56]:
# Make a symbol to subst to take the limit
w1 = sp.Symbol('w1',positive = True)
expr_w1 = expr_f.subs(w(x1),w1)
expr_w1
Out[56]:
$\displaystyle w_{1} \left(w{\left(x_{3} \right)} - 1\right)$

Taking the limit as $w(x_1)$, symbolized by $w_1$, approaches infinity.

In [57]:
sp.limit(expr_w1,w1,sp.oo)
Out[57]:
$\displaystyle \infty \operatorname{sign}{\left(w{\left(x_{3} \right)} - 1 \right)}$

The Sum Rule

Jaynes pg 30 shows the assumptions for the sum rule. It says that the plausibility that A is false must in some way depend on the plausibility that A is true. $\mathbf{u \equiv w(A|C)}$ $\mathbf{v \equiv w(\neg A|C)}$.

In [58]:
u,v = sp.symbols('u,v',positive = True)

Establish a function with the right properties. The domain of this function is the output of the w functions, so we use those outputs as the inputs of the S function we want to establish. Since SymPy already has something called S, we will choose a different name, S1.

In [59]:
x0,x1,x2,x3,x4,x5,x6,x7,x8 = sp.symbols('x:9',positive=True)
In [60]:
d1 = {'AB|C':x0,'A|C':x1,'B|AC':x2,'B|C':x3,'A|BC':x4,'A notB|C':x5,'notB|AC':x6,'notA B|C':x7,'notA|BC':x8}
d1
Out[60]:
{'AB|C': x0,
 'A|C': x1,
 'B|AC': x2,
 'B|C': x3,
 'A|BC': x4,
 'A notB|C': x5,
 'notB|AC': x6,
 'notA B|C': x7,
 'notA|BC': x8}
In [61]:
def lookup(val):
    key = next(key for key, value in d1.items() if value == val)
    return key
In [62]:
class S1(sp.Function):
    @classmethod
    def eval(cls, x):
        if x.is_Number:
            if x.is_zero:
                return sp.S.One
            elif x is sp.S.One:
                return sp.S.Zero
        if isinstance(x,w):
            if x.args[0] == d1['B|AC']:
                return w(d1['notB|AC'])
            elif x.args[0] == d1['A|BC']:
                return w(d1['notA|BC'])
    
    def _eval_is_real(self):
        return self.args[0].is_real

S1(u,evaluate=False)
S1(w(d1['B|AC']))
S1(w(d1['A|BC']))
Out[62]:
$\displaystyle w{\left(x_{8} \right)}$
In [63]:
expr2p37 = w(d1['AB|C']) - w(d1['A|C'])*w(d1['B|AC'])
expr2p37
Out[63]:
$\displaystyle w{\left(x_{0} \right)} - w{\left(x_{1} \right)} w{\left(x_{2} \right)}$
In [64]:
expr2p38 = w(d1['A notB|C']) - w(d1['A|C'])*w(d1['notB|AC'])
expr2p38
Out[64]:
$\displaystyle - w{\left(x_{1} \right)} w{\left(x_{6} \right)} + w{\left(x_{5} \right)}$
In [65]:
expr3 = expr2p37.subs(w(d1['B|AC']),S1(w(d1['notB|AC'])))
expr3
Out[65]:
$\displaystyle - \operatorname{S_{1}}{\left(w{\left(x_{6} \right)} \right)} w{\left(x_{1} \right)} + w{\left(x_{0} \right)}$
In [66]:
#TODO: Recreate eqn 2.39 in Jaynes pg 30
v1 = sp.solve(expr2p38,w(x6))
expr2p39 = expr3.subs(w(x6),v1[0])
expr2p39
Out[66]:
$\displaystyle - \operatorname{S_{1}}{\left(\frac{w{\left(x_{5} \right)}}{w{\left(x_{1} \right)}} \right)} w{\left(x_{1} \right)} + w{\left(x_{0} \right)}$
In [67]:
#TODO: Recreate equation 2.40 Jaynes pg 31
expr2p37a = w(d1['AB|C'])-w(d1['B|C'])*w(d1['A|BC'])
expr2p38a = w(d1['notA B|C'])-w(d1['B|C'])*w(d1['notA|BC'])
expr4 = expr2p37a.subs(w(d1['A|BC']),S1(w(d1['notA|BC'])))
expr4
Out[67]:
$\displaystyle - \operatorname{S_{1}}{\left(w{\left(x_{8} \right)} \right)} w{\left(x_{3} \right)} + w{\left(x_{0} \right)}$
In [68]:
v2 = sp.solve(expr2p38a,w(d1['notA|BC']))
expr2p39a = expr4.subs(w(x8),v2[0])
expr2p39a
Out[68]:
$\displaystyle - \operatorname{S_{1}}{\left(\frac{w{\left(x_{7} \right)}}{w{\left(x_{3} \right)}} \right)} w{\left(x_{3} \right)} + w{\left(x_{0} \right)}$

Finally, this is equation 2.40 from Jaynes.

In [69]:
expr2p40 = expr2p39-expr2p39a
expr2p40
Out[69]:
$\displaystyle - \operatorname{S_{1}}{\left(\frac{w{\left(x_{5} \right)}}{w{\left(x_{1} \right)}} \right)} w{\left(x_{1} \right)} + \operatorname{S_{1}}{\left(\frac{w{\left(x_{7} \right)}}{w{\left(x_{3} \right)}} \right)} w{\left(x_{3} \right)}$
In [71]:
d1
Out[71]:
{'AB|C': x0,
 'A|C': x1,
 'B|AC': x2,
 'B|C': x3,
 'A|BC': x4,
 'A notB|C': x5,
 'notB|AC': x6,
 'notA B|C': x7,
 'notA|BC': x8}

So now, as on pg 31 in Jaynes, we wish to simplify things.

$\mathbf{\operatorname{S}{\left(\frac{w{\left(A \wedge \neg B |C \right)}}{w{\left(A |C \right)}} \right)} w{\left(A |C \right)} + \operatorname{S}{\left(\frac{w{\left(\neg A \wedge B|C \right)}}{w{\left(B |C \right)}} \right)} w{\left(B |C \right)}}$

A necessary relationship to simplify some of the logic expressions, where D is any new proposition:

$\mathbf{\neg B = AD}$

The simplified expressions are,

$\mathbf{A \wedge \neg B = \neg B}$

$\mathbf{B \wedge \neg A = \neg A}$

So now we have,

$\mathbf{\operatorname{S}{\left(\frac{w{\left(\neg B |C \right)}}{w{\left(A |C \right)}} \right)} w{\left(A |C \right)} + \operatorname{S}{\left(\frac{w{\left(\neg A |C \right)}}{w{\left(B |C \right)}} \right)} w{\left(B |C \right)}}$

But first, to prove that those simplifications make sense.

In [72]:
from sympy.logic import simplify_logic
from sympy import S
A,B,D = sp.symbols('A,B,D')
B_not = A&D
A&B_not
Out[72]:
$\displaystyle A \wedge D$

The first thing is to show that $\mathbf{B \wedge \neg A = \neg A}$, by negating $\mathbf{\neg B}$, expanding with De Morgan's theorem, distributing out, and eliminating terms.

Expanding using De Morgan's theorem.

In [73]:
simplify_logic(~B_not)
Out[73]:
$\displaystyle \neg A \vee \neg D$

Establishing a basic fact.

In [74]:
~A&~A
Out[74]:
$\displaystyle \neg A$
In [75]:
A&A
Out[75]:
$\displaystyle A$
In [76]:
simplify_logic(~B_not&~A) == simplify_logic((~A&~A) | (~A&~D))
Out[76]:
True
In [77]:
simplify_logic(~B_not&~A) == simplify_logic((~A) | (~A&~D))
Out[77]:
True
In [78]:
simplify_logic(~B_not&~A) == simplify_logic(~A&(sp.S.One | ~D))
Out[78]:
True
In [79]:
simplify_logic((sp.S.One | ~D))
Out[79]:
$\displaystyle \text{True}$
In [80]:
simplify_logic(~B_not&~A) == simplify_logic(~A)
Out[80]:
True
In [81]:
simplify_logic(~B_not&~A)
Out[81]:
$\displaystyle \neg A$

Now to show that $\mathbf{A \wedge \neg B = \neg B}$, which is simple: $\mathbf{A \wedge A \wedge D = A \wedge D = \neg B}$.

In [82]:
simplify_logic(A&B_not) == simplify_logic(A&A&D)
Out[82]:
True
In [83]:
simplify_logic(A&B_not) == simplify_logic(B_not)
Out[83]:
True
In [84]:
expr2p40
Out[84]:
$\displaystyle - \operatorname{S_{1}}{\left(\frac{w{\left(x_{5} \right)}}{w{\left(x_{1} \right)}} \right)} w{\left(x_{1} \right)} + \operatorname{S_{1}}{\left(\frac{w{\left(x_{7} \right)}}{w{\left(x_{3} \right)}} \right)} w{\left(x_{3} \right)}$

In [85]:
d1
Out[85]:
{'AB|C': x0,
 'A|C': x1,
 'B|AC': x2,
 'B|C': x3,
 'A|BC': x4,
 'A notB|C': x5,
 'notB|AC': x6,
 'notA B|C': x7,
 'notA|BC': x8}

Assuming $\mathbf{S(w(B|C)) = w(\neg B|C)}$ and $\mathbf{S(w(A|C)) = w(\neg A|C)}$

In [86]:
w(x1)*S1(S1(w(x3))/w(x1)) - w(x3)*S1(S1(w(x1))/w(x3))
Out[86]:
$\displaystyle - \operatorname{S_{1}}{\left(\frac{\operatorname{S_{1}}{\left(w{\left(x_{1} \right)} \right)}}{w{\left(x_{3} \right)}} \right)} w{\left(x_{3} \right)} + \operatorname{S_{1}}{\left(\frac{\operatorname{S_{1}}{\left(w{\left(x_{3} \right)} \right)}}{w{\left(x_{1} \right)}} \right)} w{\left(x_{1} \right)}$
In [87]:
expr_s = x*S1(S1(y)/x)-y*S1(S1(x)/y)
expr_s
Out[87]:
$\displaystyle x \operatorname{S_{1}}{\left(\frac{\operatorname{S_{1}}{\left(y \right)}}{x} \right)} - y \operatorname{S_{1}}{\left(\frac{\operatorname{S_{1}}{\left(x \right)}}{y} \right)}$

After substituting 1 for y, it is clear that $S_1(S_1(x))$ has to equal x. That implies that $S_1$ is a self-inverse function (an involution).

In [88]:
expr_s.subs(y,1)
Out[88]:
$\displaystyle x - \operatorname{S_{1}}{\left(\operatorname{S_{1}}{\left(x \right)} \right)}$
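Jaynes's solution for $m=1$ is $S(x) = 1 - x$, and it is easy to check (my own verification) that it is self-inverse and satisfies the functional equation above:

```python
import sympy as sp

x, y = sp.symbols('x, y', positive=True)
S = lambda t: 1 - t   # the m = 1 solution of the functional equation
# self-inverse: S(S(x)) = x
print(sp.simplify(S(S(x))))   # x
# functional equation: x*S(S(y)/x) - y*S(S(x)/y) = 0
print(sp.simplify(x * S(S(y) / x) - y * S(S(x) / y)))   # 0
```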

$y = w(B|C)$ The proposition D is arbitrary. $\neg B = AD$ and $S_1(w(B|C)) = w(\neg B|C) = w(AD|C)$

$w(AD|C)=w(A|C)w(D|AC)$, by various choices of D we can make $w(D|AC)$ take on any value between zero and 1. So $0 \lt w(D|AC) \leq 1$.

Why $0 \lt S(y) \leq x$, where $x = w(A|C)$:

$S(y) = S(w(B|C)) = w(\neg B|C) = w(AD|C) = w(A|C)\,w(D|AC)$, and since $w(D|AC)$ can range over $(0, 1]$, $S(y)$ is greater than zero and less than or equal to $w(A|C)$, which is $x$.

What does this mean? The domain this function operates on has the property that D can be varied to see the resulting change in S(y).

The ideas that lead to the S function (S1 in our case) came from assuming that there is a function that takes the weight of a proposition as its input and outputs the weight of its denial. Substituting this S function into the product rule, and using the proposition D to simplify some symbols, gives a functional equation. The functional equation becomes a differential equation, which, with a simple assumption, yields the familiar form of the sum rule.
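The general solution is $S(x) = (1 - x^{m})^{1/m}$; defining $p = w^{m}$ turns it into the familiar sum rule $p(A|C) + p(\neg A|C) = 1$. A numeric spot-check for $m = 2$ (sample points chosen by me inside the valid domain, where $S(y) \leq x$ and $S(x) \leq y$):

```python
import sympy as sp

m = 2
S = lambda t: (1 - t**m) ** sp.Rational(1, m)   # general solution, m = 2
# self-inverse on (0, 1)
print(S(S(sp.Rational(3, 10))))   # 3/10
# the functional equation holds at valid sample points
x1, y1 = sp.Rational(9, 10), sp.Rational(4, 5)
print(sp.simplify(x1 * S(S(y1) / x1) - y1 * S(S(x1) / y1)))   # 0
```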

Application to syllogisms

P implies Q; therefore, if P then Q. Our background information C represents the major premise, that A implies B. AB|C equals A|C; the truth table shows why this works out. Put that into the product rule and what do you get? Show how the strong syllogism pans out (although the implication is weak in its own way, it is still stronger than the weak syllogism).
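The Boolean identity behind this can be checked with simplify_logic: conjoined with the major premise $A \Rightarrow B$, the proposition $A$ is equivalent to $AB$, which is why $AB|C = A|C$ (a sketch):

```python
import sympy as sp
from sympy.logic import simplify_logic

A, B = sp.symbols('A, B')
premise = sp.Implies(A, B)
# A together with the premise collapses to A and B together
print(simplify_logic(A & premise))   # A & B
# so, given the premise, AB and A are logically equivalent
print(simplify_logic(sp.Equivalent(A & B & premise, A & premise)))   # True
```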

The weak syllogism. C: A implies B; the assumption is that B is true. What does that do to the plausibility that A is true?

What are the weak syllogisms? They have certain premises and draw conclusions based on those premises. A syllogism consists of two statements known as the premises, and a third part, which is the conclusion.

What is the background information C?