
Probability Factors, Future and Predictions in Project Management:

An attempt to create a probabilistic method for time-critical processes and future-oriented projects (or predictions). L. Melcher – Updated 7/18/2007

Notes & Basic Notions

INDUCTION LOGIC

INDUCTIVE procedures must be established to predict the future. Time management is not done in space but in time, and is only a self-referential concept. There is no data in the future, since time is a process that we create. We try to separate Quantified Time (clock / calendar) from time-based processes, yet the calculations overlap. Can we construct a 'Meta-Time' to manage an ongoing project, one that would measure an imaginary length (space) of indeterminate and subjective sequences of actions? Or do we need a system, which would be time-based only, built into any project or goal?

Induction is sometimes framed as reasoning about the future from the past, but in its broadest sense it involves reaching conclusions about unobserved things on the basis of what has been observed: inferences about the past from present evidence, or inferences about the future from the present (similarity -> Markov).

Accomplices: recursion / iteration. See the relationship between induction, recursion, and iteration as members of the same finite class, as a family: all bear resemblances and are 'genetically' bound to the family. Any element of the major subsets constitutes the whole of the system.

Short reference list

Wittgenstein (all 2nd period); Kurzweil, The Age of Spiritual Machines (ASM); Mozart's Brain (MZB); Struck by Lightning (STBL); Levitin, This Is Your Brain on Music (TIY); Minimax theory (iteration), Nash / Von Neumann – see Strathern, Dr Strangelove's Game (DSG), chapter 12, p. 293; on de Moivre and Mandeville, the bell curve and regression to the mean, see DSG, chapter 3, pp. 78-82; Integrative Neuroscience (IN); Homer-Dixon, The Ingenuity Gap; RAD Development; Effective Project Management; A Mind of Its Own; Terror Management Theory; Quantum Elements (various reference books).

____________________________________________________

Induction: The many definitions (sample)

•  The premise precedes the conclusion

•  Any complex system that should be measured against primes

•  The causes of complex phenomena are interpreted in advance.

•  From the particular to the general, not the inverse

•  Based on observation

•  Based on repetition

•  Based on correlation

•  Based on averages

Recursion -> Iteration: an iteration is a self-referential system; it takes its output to measure the validity of its input. A function, process, or any activity can be iterative. In programming, for example: take a business requirement, go through phase 1, then check against the business requirements for any changes or discrepancies. If there are any, loop back and restart; if not, continue on to phase 2 (a sketch of this loop follows below). The brain itself is self-referential: it studies itself. This is what I call the 'Fishbowl effect' (suspend disbelief here): the fish in its water cannot see outside the bowl but makes analyses and deductions from within its bowl. It is a closed loop. Our minds are no different.
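As an illustration of this check-and-loop pattern, here is a minimal Python sketch. The phase names and the run_phase / requirements_met functions are hypothetical stand-ins for whatever a real project would use:

    # Minimal sketch of an iterative phase loop: each phase's output is
    # checked back against the business requirements before moving on.
    # run_phase and requirements_met are illustrative placeholders.

    def run_phase(phase, requirements):
        # Placeholder: do the work of this phase and return its output.
        return {"phase": phase, "satisfies": requirements}

    def requirements_met(output, requirements):
        # Placeholder check: compare the phase output to the requirements.
        return output.get("satisfies") == requirements

    def execute_project(phases, requirements):
        for phase in phases:
            while True:
                output = run_phase(phase, requirements)
                if requirements_met(output, requirements):
                    break  # no discrepancies: continue to the next phase
                # discrepancy found: loop back and redo this phase

    execute_project(["phase 1", "phase 2"], {"scope": "v1"})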

Regression -> from multiple causes to the root. Purpose? System analysis, fault tolerance, coherence and consistency of a closed system. Regression can be statistical (to the mean or average), BUT it can also be used to 'break' the recursion / iteration loop described above. How? The model is first referenced to itself; then, if 1 = 1, introduce 1 + n and regress to see if you add validity to your prediction (logical induction). Algorithmically, right or wrong (1 or 0) can drive a self-learning system based on rules stored (in databases) by experts and evaluated against those rules. For each right or wrong move, the system 'learns' and collects more data on each new pass (a sketch follows below). With enough iterations, these steps could be aggregated as chunks, as a 'mental' model. It could also be connected algorithmically to the concept of removing redundancies or look-alikes. The trained mind seems to perceive the answer immediately, without any conscious cognition: a series of juxtapositions of elements in space, related to each other, alternating, moving in a mind-time, like thinking about the movement of clouds in superposition, moving and developing, or chess moves, in your 'mind's eye'.
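A minimal sketch of such a right-or-wrong (1 or 0) learner in Python, assuming a simple weighted-rule scheme; the rule names and the update step are illustrative choices, not a specification from the text:

    # Sketch of a self-learning pass over expert rules: each rule carries
    # a weight that is reinforced on a "right" (1) outcome and weakened
    # on a "wrong" (0) one. Rule names and step size are invented.

    rules = {"task_well_defined": 1.0, "task_well_assigned": 1.0}

    def learn(rules, outcomes, step=0.1):
        # outcomes: a list of (rule_name, 1 or 0) observations
        for name, result in outcomes:
            if result == 1:
                rules[name] += step  # reinforce a rule that predicted well
            else:
                rules[name] -= step  # weaken a rule that predicted badly
        return rules

    observed = [("task_well_defined", 1), ("task_well_assigned", 0)]
    print(learn(rules, observed))
    # Over many passes, high-weight rules could be aggregated as 'chunks'.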

INDUCTION as method -> perception and psychology -> core of the inductive process: used in medicine, police work, etc. Make a plausible conclusion about the future and evaluate the premises. Keep the least possible number of cases.

Predicting the future, in probabilistic terms, can mean:

•  Reducing the total size of the 'population' (the how many). How?

•  Pr(A) = (events where A occurs) / (total number of events), restricted to events 'likely to happen'. Likely to happen? Is pure data = physical evidence? Average success rate per stage / event / task leading to the 'Future Goal'. Example: a task well defined and well assigned = 99% chance of success. Why: task breakdown into Elementary Tasks = the smallest element of a structure. Allocation:

1) To whom (taking into account needs and motivations). 2) How long – determined first by code and method review, then by interview. (A worked sketch of the per-task computation follows below.)
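One way to read this per-task success rate, sketched in Python; the assumption that elementary tasks succeed independently (so their rates multiply) is mine, and the numbers are invented:

    # Sketch: if each elementary task has an average success rate, and we
    # assume (my assumption) the tasks are independent, the chance of
    # reaching the Future Goal is the product of the per-task rates.

    from math import prod

    task_success_rates = [0.99, 0.95, 0.90]  # e.g. a well-defined task = 0.99

    goal_probability = prod(task_success_rates)
    print(f"Pr(reaching goal) = {goal_probability:.3f}")  # ~0.846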

Definition by Recursion

(Algorithmic technique) Definition: Recursion is an algorithmic technique where a function, in order to accomplish a task, calls itself with some part of the task. => A process by which a task 'learns' its viability and can expand. Note: every recursive solution involves two major parts or cases, the second part having three components.

Base case(s), in which the problem is simple enough to be solved directly, and

Recursive case(s). A recursive case has three components:

1. Divide the problem into one or more simpler or smaller parts,

2. Call the function (recursively) on each part, and

3. Combine the solutions of the parts into a solution for the problem.

Depending on the problem, any of these may be trivial or complex.
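A minimal Python sketch of the two cases and three components above, using a recursive sum over a list as a stand-in problem:

    # Recursive sum of a list, labeled with the parts named above.

    def recursive_sum(numbers):
        if len(numbers) <= 1:                # base case: solve directly
            return numbers[0] if numbers else 0
        mid = len(numbers) // 2
        left, right = numbers[:mid], numbers[mid:]  # 1. divide the problem
        left_sum = recursive_sum(left)              # 2. call on each part
        right_sum = recursive_sum(right)
        return left_sum + right_sum                 # 3. combine solutions

    print(recursive_sum([1, 2, 3, 4, 5]))  # 15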

Iteration (algorithmic technique): Definition: solve a problem by repeatedly working on successive parts of the problem (repetition of back <-> forth cycles).
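For contrast, the same stand-in problem solved iteratively, working through successive parts in a loop:

    # Iterative counterpart: repeatedly work on the next part.

    def iterative_sum(numbers):
        total = 0
        for n in numbers:  # one pass per part of the problem
            total += n
        return total

    print(iterative_sum([1, 2, 3, 4, 5]))  # 15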

BAYES MODEL (Also, see Hume's position)

Induction as seen by Bayes: an attempt to clarify. I) Bayesian inference: this uses probability theory as the framework for induction. Given new evidence, Bayes' theorem is used to evaluate how much the strength of a belief in a hypothesis should change. There is debate around what informs the original degree of belief. Objective Bayesians seek an objective value for the degree of probability of a hypothesis being correct, and so do not avoid the philosophical criticisms of objectivism. Subjective Bayesians hold that prior probabilities represent subjective degrees of belief, but that the repeated application of Bayes' theorem leads to a high degree of agreement on the posterior probability. They therefore fail to provide an objective standard for choosing between conflicting hypotheses. The theorem can be used to produce a rational justification for a belief in some hypothesis, but at the expense of rejecting objectivism. Such a scheme cannot be used, for instance, to decide objectively between conflicting scientific paradigms.

Edwin Jaynes, an outspoken physicist and Bayesian, argued that "subjective" elements are present in all inference: in choosing axioms for deductive inference, in choosing initial degrees of belief or prior probabilities, or in choosing likelihoods. He thus sought principles for assigning probabilities from qualitative knowledge. Maximum entropy (a generalization of the principle of indifference) and transformation groups are the two tools he produced. Both attempt to alleviate the subjectivity of probability assignment in specific situations by converting knowledge of features such as a situation's symmetry into unambiguous choices for probability distributions.

Cox's theorem, which derives probability from a set of logical constraints on a system of inductive reasoning, prompts Bayesians to call their system an inductive logic.
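A minimal numeric sketch of such a belief update via Bayes' theorem; the prior and likelihood values below are invented for illustration:

    # Bayes' theorem: posterior = likelihood * prior / evidence.
    # All numbers are invented for illustration.

    prior_h = 0.5          # initial degree of belief in hypothesis H
    p_e_given_h = 0.8      # likelihood of the evidence if H is true
    p_e_given_not_h = 0.3  # likelihood of the evidence if H is false

    evidence = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    posterior_h = p_e_given_h * prior_h / evidence
    print(f"belief in H after the evidence: {posterior_h:.3f}")  # 0.727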

Retroduction is similar to induction, but it is predicated on known or assumed relational rules and on observations that contain at least one of the predicates or predictors of the rules in question (one strong indicator / predictor known a priori). Another predicate of the relational rule is then generalized to the observation, due to the coincidence of the other predicates in both the observation and the rule.

This is commonly applied in police work to determine the initial suspects of a crime via means, motive, and opportunity, and in medical diagnostics via the patient's symptoms and established diagnostic decision trees.

•  Proposition 1 -> postulate (from retroduction): it will take 2 months to finish this cycle.

•  Proposition 2 -> find a relationship that confirms proposition 1. The most common forms of logic systems built up through retroductive reasoning involve, or are related to, complexity theory.

•  Inductive reasoning is the complement of deductive reasoning. Induction, or inductive reasoning, is a process of reasoning in which the premises of an argument are believed to support the conclusion but do not ensure it. It is used to ascribe properties or relations to a category of beings/elements on the basis of one or a small number of observations or experiences. This means: q, or a portion of class (q), gives meaning to contextual relationships based on external events (e.g., an object = a sum of properties), or relations (= a structure of links), or types (entities, formulating concepts about future events), based on tokens or samples (partial elements), grounded in experience / correlation / repetition. In short, based on a definition of a problematic, unscientific 'reality': reinforcement of these experiences creates a stronger probability, or weakens the original premise.

•  This correlation system is close to Popper's system of falsifiability (can you break it?) and provides a base rule for learning (coupled with recursion); a small sketch of reinforcement by repetition follows below.
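One classical way to model 'reinforcement creates stronger probability' is Laplace's rule of succession; choosing it here is my illustration, not the author's method:

    # Laplace's rule of succession: after s successes in n trials, the
    # probability of success on the next trial is estimated as
    # (s + 1) / (n + 2). Belief strengthens with repeated confirmation.

    def rule_of_succession(successes, trials):
        return (successes + 1) / (trials + 2)

    for n in (1, 10, 100):
        print(n, rule_of_succession(n, n))
    # 1 -> 0.667, 10 -> 0.917, 100 -> 0.990: more repetitions give a
    # stronger probability; a failure would weaken the premise again.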

Formulate laws based on limited observations of recurring phenomenal patterns. Induction is used, for example, in using specific propositions such as:

This ice is cold.

A billiard ball moves when struck with a cue.

...to infer general propositions such as:

All ice is cold.

Anything struck with a cue moves.

Induction Logic

NOTE: Induction should have no pretence of being 'scientific'. It merely uses human experience to test and measure future events or make 'predictions'.

Consider the classic example: every crow we have observed is black; therefore, all crows are black. This exemplifies the nature of induction: inducing the universal from the particular. However, the conclusion is not certain. Unless we are certain that we have seen every crow – something that is impossible – there may be one of a different color.

The conclusion cannot be strongly supported by induction from the premise. Using other knowledge, we can easily see that this example of induction would lead us to a clearly false conclusion. Conclusions drawn in this manner are usually overgeneralizations.

Validity: Formal logic, as most people learn it, is deductive rather than inductive. In contrast to deductive reasoning, conclusions arrived at by inductive reasoning do not necessarily have the same degree of certainty as the initial premises. The classic philosophical treatment of the problem of induction, meaning the search for a justification for inductive reasoning, was by the Scottish philosopher David Hume. Hume highlighted the fact that our everyday reasoning depends on patterns of repeated experience rather than on deductively valid arguments. For example, we believe that bread will nourish us because it has done so in the past, but this is no guarantee that it will always do so. As Hume said, someone who insisted on sound deductive justifications for everything would starve to death. Instead of approaching everything with unproductive skepticism, Hume advocated a practical skepticism based on common sense, where the inevitability of induction is accepted. We have something of the sort in John Ralston Saul's book "On Equilibrium", where common sense is defined as 'shared knowledge' and an integral part of society's major attributes.

Induction is sometimes framed as reasoning about the future from the past, but in its broadest sense it involves reaching conclusions about unobserved things on the basis of what has been observed. Inferences about the past from present evidence – for instance, as in archaeology, count as induction. Induction could also be across space rather than time, for instance as in physical cosmology where conclusions about the whole universe are drawn from the limited perspective we are able to observe (see cosmic variance); or in economics, where national economic policy is derived from local economic performance. Twentieth-century philosophy has approached induction very differently. Rather than a choice about what predictions to make about the future, induction can be seen as a choice of what concepts to fit to observations or of how to graph or represent a set of observed data. Nelson Goodman posed a "new riddle of induction" by inventing the property "grue" to which induction does not apply.

Types of inductive reasoning

Generalization

A generalization (more accurately, an inductive generalization) proceeds from a premise about a sample to a conclusion about the population. How much support the premises provide for the conclusion depends on (a) the number of individuals in the sample group compared to the number in the population, and (b) the randomness of the sample. The hasty generalization and the biased sample are fallacies related to generalization.
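A small Python sketch of points (a) and (b): estimating a population proportion from a random sample, with a rough standard error that shrinks as the sample grows. The population and the binomial standard-error formula sqrt(p(1-p)/n) are my illustrative additions:

    # Estimate a population proportion from a random sample; the rough
    # standard error sqrt(p * (1 - p) / n) shrinks as n grows.

    import math
    import random

    population = [1] * 600 + [0] * 400  # 60% of members have attribute A

    def estimate(sample_size):
        sample = random.sample(population, sample_size)  # random, not biased
        p = sum(sample) / sample_size
        se = math.sqrt(p * (1 - p) / sample_size)
        return p, se

    for n in (10, 100, 500):
        p, se = estimate(n)
        print(f"n={n}: estimate={p:.2f} +/- {se:.2f}")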

Statistical syllogism

A statistical syllogism proceeds from a premise about a proportion of a population to a conclusion about an individual: proportion Q of population P has attribute A; individual I is a member of P; therefore, I has attribute A with a probability corresponding to Q. The proportion in the first premise would be something like "3/5ths of", "all", "few", etc. Two fallacies can occur in statistical syllogisms: "accident" and "converse accident".

Simple induction = CoA (Criterion of Adequacy) from class member to class member; it proceeds from a premise about a sample group to a conclusion about another individual: proportion Q of the known instances of population P has attribute A; individual I is another member of P; therefore, there is a probability corresponding to Q that I has A.

This is a combination of a generalization and a statistical syllogism, where the conclusion of the generalization is also the first premise of the statistical syllogism.
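In code, that chain reads as two short steps; the observations are invented:

    # Simple induction as two chained steps (all numbers invented):
    # 1. generalization: estimate Q from the known instances of P;
    # 2. statistical syllogism: apply Q to another individual I of P.

    known_instances = [1, 1, 1, 0, 1]  # 1 = has attribute A
    q = sum(known_instances) / len(known_instances)  # step 1: Q = 0.8

    probability_i_has_a = q  # step 2: I is another member of P
    print(probability_i_has_a)  # 0.8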

Argument from analogy = CoA from Class member to Class member by Similarity: An (inductive) analogy proceeds from known similarities between two things to a conclusion about an additional attribute common to both things:

P ≈ Q; P -> A; therefore Q -> A.

An analogy relies on the inference that the properties known to be shared (the similarities) imply that A is also a shared property. The support which the premises provide for the conclusion is dependent upon the relevance and number of the similarities between P and Q. The fallacy related to this process is false analogy.

Causal inference

A causal inference draws a conclusion about a causal connection based on the conditions of the occurrence of an effect. Premises about the correlation of two things can indicate a causal relationship between them, but additional factors must be confirmed to establish the exact form of the causal relationship. A prediction draws a conclusion about a future individual from a past sample.
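A tiny sketch of 'correlation can indicate but does not establish causation', using the standard Pearson correlation; the two series are invented:

    # Pearson correlation between two invented series: a high value can
    # suggest a causal link, but establishing its exact form needs more.

    from statistics import correlation  # available in Python 3.10+

    series_a = [1, 2, 3, 4, 5]
    series_b = [2, 4, 6, 8, 10]

    print(correlation(series_a, series_b))  # 1.0: correlated, cause unproven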

Argument from authority

An argument from authority draws a conclusion about the truth of a statement based on the proportion of true propositions provided by a source. It has the same form as a prediction.

Also, see Carnap / Hume / K. Popper / B. Russell