There is only one underlying reality
Truth about the state of the world exists, defined as an accurate description of its current state, and we observe it through direct observation and indirect reasoning
Nothing about the world is entirely uniform, either in form or in distribution, and therefore things present in clusters
Even a minor deviation from perfect uniformity in a distribution results in clustering, since no distribution sits in a perfectly stable equilibrium against external stresses
Every idea of a “thing” involves identification of a collection of properties, the boundaries of which are necessarily fuzzy
The necessity of the fuzziness comes from the requirement for the possibility of a change to the definition of said “thing”
The definitions are artificial for ease of communication and are therefore flexible - i.e., it's epistemic uncertainty
A particular arrangement of clusters is not, in any meaningful sense, superior to another - whether it's seen entirely holistically or looked at as individual objects
Aggregation of these clusters together creates the world we see and experience at higher levels, linked together through causal interlinkages amongst all phenomena (and across multiple aggregation levels), which can be called causal-webs
These causal-webs have many ways of interacting with each other, as can be seen from the combinatorial number of possible pairings, which increases their inherent complexity
The relationships that exist between variables split between direct connections and indirect ones
The number of indirect connections will naturally tend to be much higher than the number of direct ones, which makes the propagation of a force through the causal-web harder to trace back to its initial conditions
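A toy sketch can make the direct-versus-indirect asymmetry concrete. The following Python snippet is an illustrative construction, not anything from the source: it counts direct edges against indirect paths (paths through at least one intermediate node) in a small hypothetical six-edge web.

```python
from itertools import permutations

def count_connections(edges):
    """Compare direct edges with indirect paths (length >= 2) in a toy causal web."""
    adj, nodes = {}, set()
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        nodes.update((a, b))

    def indirect_paths(src, dst):
        # depth-first count of simple paths passing through at least one intermediate node
        count, stack = 0, [(src, {src})]
        while stack:
            node, seen = stack.pop()
            for nxt in adj.get(node, ()):
                if nxt == dst:
                    if len(seen) >= 2:  # at least one intermediate node was visited
                        count += 1
                elif nxt not in seen:
                    stack.append((nxt, seen | {nxt}))
        return count

    direct = len(edges)
    indirect = sum(indirect_paths(a, b) for a, b in permutations(nodes, 2))
    return direct, indirect

# even a six-edge web has nearly twice as many indirect routes as direct ones
web = [("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D"), ("D", "E")]
print(count_connections(web))  # (6, 11)
```

The gap widens rapidly as nodes and edges are added, which is why backtracking a propagated force to its initial conditions gets harder as the web grows.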
The world is a latticework of facts and causal relationships that stack atop each other to create a complex architecture, in which extracting an individual fact or relationship in isolation is often practically impossible due to the sheer number of interrelationships
Statistical and probabilistic reasoning exists as a way therefore to learn more about the world, as a way to glean insight into truth at different levels of abstraction
Probabilities are our measure of the various uncertainties which reflect our lack of knowledge, and not necessarily a property of reality
Probabilities therefore are shortcuts that help us understand the multiple layers of the world through a simplification process applied to large sections of the causal-web
Component interactions aggregate together and sometimes form discernible large scale distribution patterns, which help with predictability of the overall system
Systems contained within the causal-web emerge from underlying interaction of simple components
Depending on the number and architecture of the components, the systems become more and more complex
Complexity of a system is an emergent property that manifests itself in multiple ways, with one specific one being that it makes it difficult to predict outcomes purely from component level analyses
As the number of interactions increases, as a function of both the number of components and their degree of interaction, the complexity of the system increases
As a system contains multiple feedback loops, both positive and negative, the complexity of the overall system increases further
There's a cost to complexity within the smaller parts of causal-webs, within systems, both in terms of overall function and in the increasing number of individual components
Systems have feedback loops within that usually guide actions, reactions and self correcting or enhancing behaviour
The loops can be self correcting usually through individual aspects of the complex system interacting with each other; they can also be mutually enhancing through these same interactions
As aspects of a complex system get more intertwined, so does the surface of exposure that shows the ways things could evolve, and the number of potential outcomes increases
An external pressure, applied over a period of time and varying enough in its details to allow self-correction, provides an adequate exploration of this potential surface space, resulting in a system that does not create unwanted outcomes
We navigate causal-webs through maps. Maps are not the territory, but they are all we have - any representation of a thing is necessarily less complex than the thing it represents
Individuals who aim to discover aspects of the world are guided by maps, self-created and provided by others, to illuminate individual pathways within the larger complex maze of facts and relationships
The only guidance that an individual has corresponds to a predictive hypothesis, or a sufficiently believable explanation for a phenomenon
When sufficiently underscored by detail, this process of analysis and search is science. Its inherent potential for error correction and continuous improvement through better prediction and feedback makes it a strong candidate for a way of thinking with high potential for eventual discovery of the world
The search for predictive hypotheses corresponds in size and complexity to the world itself over long enough timespans, but by itself it can also lead to models with varying error bars in terms of comprehensiveness
Our ability to create models of the world around us is slowly built up and improves over time
Causality is impossible to measure objectively, in the sense that you're never removed from the process of observing a causal interaction
Correlations can be spurious, especially as the complexity in a system grows through exponential increase in component parts and number of interactions
Identifying and removing spurious correlations requires multiple experimental workarounds, designed to identify and remove the other potential causal factors
Due to the level of interdependence and component interaction in most systems this becomes exponentially difficult as complexity increases
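A minimal simulation can show what a spurious correlation looks like and why controlling for a confounder works. This Python sketch is an invented illustration: a hidden variable z drives both x and y, which never interact directly, yet they correlate strongly until z's contribution is removed.

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
# z is a hidden confounder driving both x and y; x and y never touch each other
z = [random.gauss(0, 1) for _ in range(5000)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

raw = pearson(x, y)  # strong "spurious" correlation (theoretically 0.8)
# controlling for z = subtracting its known contribution before correlating
resid_x = [xi - zi for xi, zi in zip(x, z)]
resid_y = [yi - zi for yi, zi in zip(y, z)]
controlled = pearson(resid_x, resid_y)  # collapses toward zero
print(round(raw, 2), round(controlled, 2))
```

In real systems the confounder is rarely known and observable like z is here, which is exactly why the experimental workarounds described above are needed.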
Models, shaped by our inherent humanity, are therefore built with biases that reflect our preconceptions, ideas or worldviews, visible or not
This scientific reasoning, built upon hypothesisation, evidence gathering and experimentation, allows us to build models about the world that are factual and experimentally verified
However it neither protects us from false information, nor ensures that we have accounted for all the variables that affect the phenomenon in question
The only cure we have for this is to run multiple experiments, each as independent as possible, in the hope that together they cancel out any inherent biases
The models have varying degrees of validity, judged by their correspondence with the underlying reality and their ability to create accurate predictions
We can only ever know a few of the ways in which the variables within the world intersect with each other before the complexity grows so large as to make complete analytic calculations or estimations impossible
This implies that a certain degree of error bars is an inevitable feature of a complex world and cannot be undone
As such, any attempt at complete comprehensiveness is, philosophically speaking, a fool's errand
The entropy of information content regarding an object, a collection thereof or a phenomenon, suggests groupings that are fluid amongst fundamental constituent components
The only perfect thinking that exists is mathematical logic, which is tautological. For all others, reasoning remains imperfect as a mechanism to get to the truth
Inherent biases exist within our ways of thinking, often crafted through evolutionary pressures and fundamental properties that affect information input
Lack of information and/or false information can, knowingly or unknowingly, affect conclusions
All knowledge exists as probabilities which shift between 1 and 0 according to evidence, at the lowest levels of reality
Absolutist stands therefore are always incorrect, and situations need to be assessed on their own merits, taking into consideration the unique circumstances
This includes the proposition that absence of evidence is not the same as evidence of absence
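The idea that knowledge shifts between 0 and 1 with evidence can be sketched as Bayesian updating. The numbers below are arbitrary, chosen only for illustration: each piece of evidence is twice as likely under the hypothesis as against it, so the belief climbs toward, but never reaches, certainty.

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule: shift a belief (a probability between 0 and 1) on new evidence."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

belief = 0.5
# three pieces of evidence, each twice as likely if the hypothesis is true
for _ in range(3):
    belief = update(belief, 0.8, 0.4)
print(round(belief, 3))  # 0.889: each update doubles the odds, so 1:1 becomes 8:1
```

The belief never hits exactly 1 or 0 under finite evidence, which is the formal counterpart of the claim that absolutist stands are always incorrect.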
For any non-analytic process therefore, the difficulty of measurement, and consequent prediction, creates a degree of error
All measurement choices have associated errors, and consequently, thinking of measurement results as sacrosanct is incorrect
As you move from measurement to prediction the uncertainty bars widen significantly, and in most cases widen enough to become unhelpful
Since most types of decisions do not occur frequently enough to be truly amenable to statistical reasoning (at least not for those making the decision), this substantially increases the appeal of rhetoric and the temptation to downplay any statistical significance
This means that when outcomes are not statistically predictable with high enough frequency, casting aspersions on the method itself also becomes a reliable way of convincing others
Statistical reasoning emerges as a method of performing estimations in a world that's less amenable to complete analytic theories, either because the causal-webs involved are too complex and entangled to be decipherable, or because the underlying phenomenon itself has inherent properties that make causal arguments inapplicable
Complex systems, organic or otherwise, have an exponentially large interaction surface area where the effects may be seen or affected, across the network of interaction points amongst its individual factors
Large changes in degree of scale within a system are equivalent to changes in degree of scope, indicating a change in the system itself
Any particular component that changes and morphs over time therefore eventually moves towards the end of a spectrum, and into a fundamentally different construct
However part of it is also because phase shifts happen within particular phenomena, especially ones with feedback loops embedded inside
The complexity is built up of multiple layers of abstraction, each affecting the others through those feedback loops, and the knowledge we build through concepts and models consists of thin slices of it
Imagine a piece of rock
Now imagine you have a laser cutter that's so fine that you can slice multiple sheets of that rock
The rock is old and has multiple generations of strata within it
Every slice therefore reveals certain truths about the rock's origin
One shows how it was involved in a volcanic eruption
Another about how part of it is soil nourished by an ancient sequoia
And another about how the pressures varying in its surroundings created different minerals. And so on and on
Each facet is true. Each facet is accurate. Yet they are each incomplete.
This is our state when trying to understand any multidimensional interlinked object
Each assessment or analysis is like a thin slice of the object, revealing relationships and truths that are insufficiently comprehensive
As these networked systems increase in size, hierarchies emerge within the overall system
There are multiple types of networks that can evolve based on underlying phenomena and their interactions
A couple of the key factors affecting a network's performance are the types and strength of the connections (edges) amongst its nodes, distributed in a particular space
As an example, in a hub and spoke model the connections are highly centralised, whereas small world networks, where connectivity is more distributed, are more resilient to a breakdown in any particular part
For hierarchical systems where there are positive feedback loops to reinforce existing behaviours, the hierarchies are likely to ossify over time
The ossification brought about by the feedback loops will make it resilient to known factors and brittle to unknown external factors
Negative feedback loops give a system a way to maintain stability, and therefore build resilience
Positive feedback loops, however, push a system's growth to its natural limits, and tend to create systemic fragility, at least insofar as the system interacts with specific macro conditions
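The two loop types can be contrasted with a toy iteration, using made-up constants purely for illustration: a negative loop corrects a fraction of its error each step and settles at a target, while a positive loop compounds until it hits an external ceiling.

```python
def simulate(x0, steps, rule):
    """Iterate a one-variable feedback rule and return the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(rule(xs[-1]))
    return xs

target = 10.0
# negative feedback: each step removes 30% of the remaining error, damping toward target
negative = simulate(2.0, 50, lambda x: x + 0.3 * (target - x))
# positive feedback: 20% compounding growth until an external hard limit of 100
positive = simulate(2.0, 50, lambda x: min(x * 1.2, 100.0))
print(round(negative[-1], 2), round(positive[-1], 2))  # 10.0 100.0
```

The negative loop is self-stabilising regardless of the starting point; the positive loop's endpoint is set entirely by the external constraint, which is where the fragility lives.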
Parsimony is a global constraining factor within any system that creates explanatory power, and is simultaneously also a necessary result of increasing complexity within systems
Explanations necessarily have to summarise and elucidate with a smaller footprint than the phenomena - otherwise the map is the same as the territory
The complexity factor therefore becomes a limiting factor as explanations go up hierarchical levels
e.g., Within a Darwinian system if the amount of energy that can be consumed by an entity is constrained in some way, then there will be a push towards minimizing the energy required for mere continuation while maximizing the energy required for reproduction
Part of the issue is due to the imprecision of language, which tends to group concepts together in order to reduce the mental-model complexity required to converse at a higher level
Understanding any phenomenon fully requires understanding of its antecedents and potential causal factors, in essence its whole causal web
The model is essential to test both a fuller understanding of the simplified causal-web, and ideally therefore allow prediction of future movements based on specific input criteria
Not all factors are assessed equally; some have much larger weight and impact
Prediction depends primarily on specifying what is being predicted and on the complexity of the analytic processes required to compute it; for highly complex phenomena with vastly variant underlying distributions (e.g., economic markets), it is also difficult without clearly specifying the timeframe and the input characteristics
This also means that for almost all phenomena there is a declining curve in terms of prediction accuracy, the steepness of which depends on a) the complexity of the causal-web of the event being predicted, and b) changes in the underlying variables (including time)
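The declining accuracy curve can be illustrated with the simplest possible case, a noisy random walk, which is an invented stand-in for a complex phenomenon rather than a model from the text. The best point forecast of a driftless walk is its current value, so the forecast error is simply how far the walk wanders over the horizon.

```python
import random

def forecast_error(horizon, step_noise, trials=2000, seed=1):
    """Average absolute error of predicting a noisy random walk `horizon` steps ahead."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x = 0.0
        for _ in range(horizon):
            x += rng.gauss(0, step_noise)  # each step adds fresh, unpredictable noise
        total += abs(x)  # best forecast is the starting value 0, so error is |drift|
    return total / trials

errors = [forecast_error(h, 1.0) for h in (1, 4, 16, 64)]
print([round(e, 2) for e in errors])  # widens roughly with the square root of the horizon
```

Even in this benign case the error bars grow without bound as the horizon lengthens; phenomena with heavier-tailed or shifting distributions decay faster still.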
Fully untangling and describing this complex web of causation is, in most cases, computationally impossible
This stands true except perhaps highly academic or artificially limited cases
Most forms of experimentation do fall into this category, where we artificially limit contexts to thereby limit the size of the causal-web
Therefore we can only make best efforts to get a statistical answer to whether the causes identified have a large enough impact, causally, on the phenomenon
This means that for every phenomenon there exists a chance that we have not fully incorporated all relevant information and/or data into the model, though the chances might become infinitesimal as the phenomenon continues to be better understood
Mental models, necessarily, decrease the complexity of the event and focus only on the most salient features, with salience determined both by predictive ability regarding the phenomenon and relationship with other existing models
You can further disentangle this causal-web through exceptionally careful experimentation, but the efficacy of such efforts decreases exponentially as you reach for higher levels of conviction
There is limited visible difference between systems which are so complex that it's difficult to see parts that affect others, and systems where the number of actual actions that can be taken are constrained, though the actual mechanics are still incredibly complicated
It's not possible to fully understand the nuances of a phenomenon as the number of permutations, complications and interrelationships are extremely high
This implies that due to practical considerations there is a necessary level of detail that people can grasp before they are able to make summarising statements, which varies from subject to subject
Whether you have analysed a topic to sufficient depth, or dug through enough hierarchies within it, can be tested by making predictions from the summarised version you understand at any point and checking them against reality
Emergent complexity is also, in fact, an anti-fractal view of the system: as you zoom in to deeper components, the self-similarities break down rather fast, and it becomes operationally impossible to predict the larger behaviours from the simpler components
Optimisation adjustments within a system in response to its environment create self-similarity, which creates fractal properties, and fractal properties create increased dimensionality that increases efficiency
Applied in the case of an organism, evolutionary pressures, along with the environment, create selection pressures that act upon it (across multiple hierarchies) to increase its odds of survival
This implies that creating certain internal models can increase the survivability; and since most models are incomplete, the individual survivability of one model doesn't necessarily alter its overall viability
The pressures will shape its external appearance, its behaviours, and its ability to create sophisticated internal models
The success of a collection of the above characteristics can be judged by its ability to survive long enough to reproduce
However this also means that separate characteristics could be considered equally successful if they succeed equally in increasing the survivability of the organism until reproduction (if not necessarily afterwards)
Real-life phenomena often require algorithms to assess their own condition in order to create guiding principles and respond to stimuli and circumstances, which they do by creating an internal simulation that then helps the algorithm survive in the future
With complex systems, as long as there are both positive and negative feedback loops, the systemic entropy will be a function of those loops, which indicates an eventual lifespan for most systems - the lifespan being determined by when the entropy progresses far enough that the organism ceases input/output functioning
This points to the need for an active substrate to create a dynamic system
Otherwise the whole apparatus will be at rest with no factors propelling it forward to changing it in any way
Also, a substrate cannot be too dynamic, otherwise it'll overwhelm the system altogether
It's the small perturbations, which might on average cancel each other out, that give rise to the larger phenomena they seed
The changes that are wrought therefore help create the map; the map in turn guides the organism, and that is what creates a reinforcing loop
Within us, this process helps create an individual's belief structure, an output built on top of an entire edifice of underlying beliefs, lived history, facts, stories and hypotheses
Editing a belief requires a method of comparing existing belief structures with a proposed one, through some form of communication that exchanges information about the structures
Every statement made in this effort, for example, has to be regarded in terms of the new facts it brings into play to illuminate an observation, and the inherent logic by which it relates to other existing facts and observations
This means, for instance, that social interaction has to be separated from problem solving interaction in order to achieve the best results
Editing a belief therefore is equivalent to restructuring an entire thought-edifice
This means that when people differ, one way to find a middle ground is to try and find out the following (non-exhaustive) things:
How much of each statement is factual
How much of each position is relevant to solving the problem at hand
Any appeal to authority has to be backed by some evidence of the reliability of the authority
There are problems inherent in the comparison which cannot be seen or understood through first hand experience, and therefore requires specific expertise
This creates problems for belief, because it shifts the burden of belief from first hand to second hand experience
This also requires understanding of probabilistic systems of belief with the concomitant risk of being wrong vs deterministic systems of belief
Trying to solve this through reliance on experts creates its own issues, as it adds the difficulty of collectively or individually judging expertise to the problem
Collective mechanisms can compare track records or prior performance to gauge this expertise
Most people have an internal mental structure that determines their individual beliefs
Education is a mechanism to cultivate and shape this structure
Conversations and debates are ways to edit or prune branches of this tree in order to understand a topic, including the implications
A large part of all arguments stems from misidentification of things. Therefore precise definitions are necessary if a conversation is to progress anywhere.
However, precision of definition is rarely feasible to an accurate enough degree, since most clusters aren't tightly clustered enough
Our perception of the world around us, as reflected in our individual mental-maps, incorporates sufficient information and feedback from the world that any current zeitgeist is also part of the map
The zeitgeist being part of the mental map doesn’t necessarily cause it to be positively or negatively regarded, just as part of the existing infrastructure with the inertia that this implies
The world is not seen as a separate objective reality; rather, our perception of the world, the societal perception of the world, and the objective reality of the world are all commingled together through our perception systems
The mental-map is an internal representation therefore that includes a point of view on the world, and also meta-responses to the world, including self-referential feedback loops that explore own and third-party societal reactions to the world, and own beliefs regarding the world (i.e., the mental map itself)
There are multiple such loops including loops of loops, though practically speaking there doesn't seem to be an infinite number of them - while it might seem like it's turtles all the way down, it only seems that way and isn't actually that way
Belief therefore requires both presentation of evidence plus ability to integrate with existing mental structure, which results in assessing not just the evidence but also the evidence alongside its congruence with the mental-map as it exists, i.e., believability
To do this en masse would also require a relatively systematic way of imparting information about useful parts of the web
This is education, which both helps illuminate parts of the causal-web for all, and also helps provide an illuminated path to explore it further
Therefore most of education is an effort to instil a sufficiently accurate set of facts and principles which can form the basis of a shared foundation amongst most of society's mental-maps
Understanding of the world is crafted through the multiple layers of simultaneous inputs, each of which gives rise to a partial view that together makes up an internal representation of the outside world, i.e., an internal simulation we call a mental-map
From early on in life the creation of this mental-map comes about through the slow exploration of the world around and the consequent sequential creation of maps which are interlinked.
An example from what a baby goes through:
Starts with blurred vision, along with tactile and auditory input, mainly as a survival tool towards feeling closeness and hunger
Then moving one's limbs, haphazardly then purposefully, to act on small impulses, linking the visual and tactile systems together
Then control over the body, rolling over, sitting up, crawling, for spatial mobility
Talking, babbling, responding to faces and emotions, getting closeness through human connection
There is a continual demonstration of intentionality as all of these abilities continue to grow, linking together to enable asking for, getting, and responding to various things and people
This mental-map forms the basis of interaction with the outside world, and our prediction of potential interactions within the model is taken to be the yardstick to assume predictive ability in the world
The constraints in the map revolve around the number of potential options available for action based on the input + output + analysis
These available courses of action are also dictated by deeper internal drives, which are usually more primal needs, such as hunger, curiosity, boredom and so on, each of which create specific courses of action that can be taken
Truth, as it's understood, comes via connecting multiple points of this experiential system where a story, or a narrative, touches upon an underlying phenomenon and creates a level of congruence with the outside world as perceived
The mental maps created don't just comprise the totality of information relationships; they are also path dependent, and therefore contain variations due to the history of the acquisition of each node and edge
Mental map networks are malleable in all parts, meaning they are affected based on changes in the quantum, size and sequence of any inputs
However the creation of multiple interrelationships within it means that the networks are highly resilient to most external pressures in the form of information input
Editing a mental map network therefore requires a precise calibration of information dissemination that both controls the content and the delivery method in order to affect all reinforcement structures within the mental map simultaneously
Repetition of information helps to create malleability within an existing mental map, to force information through the network
The errors that creep through in our creation of internal mental map representations can have pernicious effects that aren't immediately visible
These 'cognitive' effects are impacted by inherent biases and shortcuts used in reasoning
Understanding these mental maps requires an effort to understand the intricacies of our internal representational hardware
Most of the time people are unaware of the exact representation and inherent processes underlying the phenomenon when it comes to any individual aspect within the mental map
Mental maps distinguish the world into categories and definitions (which are themselves fluid and flexible)
The world is better explained through differentiation amongst its constituent items, as with analogues in biology, where all living things are related to each other through a tree of interrelationships, rather than through dictionary definitions whereby we can define the boundaries of an object, a concept or a theory perfectly
For example, this means that while the difference between an American football game and a rugby game is obvious to some observers, there are a substantial number of ways in which they're similar, and the distinctions are still highly context dependent and path dependent
Defining any differences therefore becomes a norm built up through years of automated pattern recognition, and relies on large quantities of common references and base context
While not necessarily entirely path dependent, to an external observer there is a substantial amount of knowledge that isn't codified and therefore not easily transmissible to another
Perception also happens across categories simultaneously, through the same neural network albeit through different (and differentiated) pathways, each individually focused on specific levels of aggregation and finding or looking up meaning, to create a coherent picture together
This explains, as an example, why we read sentence by sentence or phrase by phrase, rather than letter by letter, though we're perfectly capable of doing so if willed
The maximum level of aggregation we're capable of incorporating is limited by a) our capacity to take in certain quantum of information, and b) our capacity to analyse that information
Analysis of information means comparing it, and components of it, to other, more established, prior pieces to understand similarity, doing logical operations on ensuing meaning
These same categories then underpin further iterative reasoning about the world, creating strengthened feedback loops for the underlying structure
The act of continued survival ensures that existing mental-map structures strengthen over time - though whether this is a strong foundation or pure ossification within each individual case is tough to determine a priori
The difficulty of explaining a mental-map to another is due to the lack of communication and comprehension methods adequate to the task, thus ensuring narrative forms are the best we have as map modifier modalities
And as a corollary, as methods of documentation and communication increases, the number of possible versions of ourselves that are documented increases
The outsourcing of thought and memory only serves to reinforce the information and also the narrative underpinning it
The creation of searchable databases still requires us to have metadata around the data that needs to be searched, however this doesn’t change the requirements to remember, recollect and create syntheses
Facts and beliefs are similar in being comprised of groups of sub-units which vary in their correspondence with the mental map
They differ in their level of accuracy against the real world, though beliefs get a pass insofar as they do not directly and immediately reduce their holder's ability to survive
Their ability to take hold within our minds is therefore also related to their inherent order and their potential for distorting an existing mental-map
Their functionality is closer to that of building bricks that strengthen an existing structure, with little appetite for choosing materials that would reduce its integrity
Axiomatic beliefs are required because most events are not repeatable for most people, which means that not all beliefs can be verified empirically, especially beliefs that directly impact the human 'subjective' experience
The difference between forms of belief (e.g., dogmatic belief vs not) is that though they start at the same point - an observation that needs to be made to fit into a coherent system of thought - when coherence is the overwhelming imperative, that system is then not subjected to further tests to prove or disprove it
These systems of thought, or beliefs, have varying degrees of truth in them, defined as correspondence to the underlying world and the ability to create predictions should we be able to follow through on the chain of reasoning
The aspects that are easiest to understand necessarily relate best to the human scale of being, while aspects that are much larger or smaller become harder to identify and observe
It's like viewing a slice of reality from a multidimensional object, with a coherent cohesive picture only emerging as individual points that comprise it are brought to light, with each one highlighted only by its correspondence to a particular theory or narrative of the world
Striving to survive, with its biological imperative, pushes towards the creation of an accurate mental map over evolutionary timescales
All knowledge, and therefore the internal mental maps, comes about as a result of embodied information gathering and analysis; the inputs, outputs and analyses through cognition, and any knowledge gained, are all irreducibly linked to an individual's body
Base goal sets, which for living organisms are expected to be survival and reproduction
Intermediate goal sets that get created through feedback loops between the base goal sets and the actions undertaken to help further them
Immediate environment scan and response algorithms, which provide higher survival potential, and also includes other non-conscious calculations required to function in the world, e.g., locomotion
Stimuli inputs through multiple channels, each of which has a different information density and requires specialized processing techniques to understand
Internal representation of an external world, onto which the stimuli inputs and processing hook, and which the system tries to keep up to date and adjust
Growth in this fashion can use existing neural pathways to craft specialized calculation centers for individual tasks when they are themselves complex, probably arranged fractally with respect to the overall schema or something equivalent
Moreover, communication amongst individuals results in exchange of information regarding all the states above, which becomes input to other neural networks
Without an imperative to guide action, aspects of the map remain static, with no striving to make the representation better
The imperative is therefore what causes any physical representation of a mental map to act in a directed fashion, since without an inbuilt mechanism to force a direction of activity maps remain static. For humans, the imperatives to survive and procreate push us forward
It isn't necessary for the outcomes of the evolution of mental models to be similar to the drive itself. Instead the imperative is useful mostly to push the entire process forward and provide the time and space where selection pressures can occur
If a particular model increases survivability and another helps generate adaptations that will increase survivability, the two are mutually beneficial: they are not antagonistic and are part of the same 'engine'
The pressures eventually drive towards discovering those spaces within the overall environment where the organism (which holds the model within it) finds a local maximum of survivability that satisfies a survivable path condition (the path taken to reach the local maximum must not have dips deep enough to prevent the organism from getting there), and through that process the organism crafts an external input/output mechanism for survival, adapted to energy acquisition and continuation, that mirrors the environmental conditions
The large number of potential solutions to such an optimisation problem means that the path taken cannot be easily predicted either, since it depends on the overall environment within which the optimisation calculation takes place
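The path-dependence described here can be sketched with a toy hill-climbing search. Everything in it (the two-peaked fitness function, the step size, the starting points) is an illustrative assumption, not anything the text specifies; the point is only that a climber constrained to survivable, non-decreasing steps ends at whichever local maximum its starting point leads to.

```python
# Toy fitness landscape with two peaks, near x = -1 and x = +1;
# the 0.5 * x term makes the right-hand peak the higher (global) one.
def fitness(x):
    return -(x * x - 1) ** 2 + 0.5 * x

def hill_climb(x, step=0.01, iters=10_000):
    """Greedy climber: only takes steps that strictly improve fitness."""
    for _ in range(iters):
        for candidate in (x + step, x - step):
            if fitness(candidate) > fitness(x):
                x = candidate
                break
        else:
            break  # no survivable uphill step remains: a local maximum
    return x

left = hill_climb(-2.0)   # settles near the lower peak at x ≈ -1
right = hill_climb(2.0)   # settles near the higher peak at x ≈ +1
```

Two starting points in the same environment settle on different maxima, and neither climber can tell from local information alone whether its peak is the global one, which is the sense in which the path taken cannot be easily predicted.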
Any activity taking place in a constantly evolving macro environment, especially one subjected to change through the behaviours and actions of the agents themselves, will find itself in a similar bind, as can be seen in multiple other human endeavours which all deal with complex systems
All mental models therefore also naturally adapt and evolve according to external feedback and stimuli
This process forces any model to remain aligned to reality in any population insofar as non-conformance would remove the model from existence
The fact that models strive towards accuracy in this fashion is coupled with the fact that they strive also to be energy efficient
Any behavioural imperative also acts as a selection criterion which adds to the pressure that forces the model to evolve and adapt
If the purpose of life is survival, happiness, and purpose, then those together drive internal impetus for all humans
The internal view of the output of the mental map, seen from one nested level of understanding to another, is the feeling we have of consciousness - where one aspect of our mental model observes the simulation of other aspects, including stepwise computational aspects, and "names" them; this particular arrangement of the mental-map within an individual defines an inner-state
This arrangement, in conjunction with the limitation of sensory input mechanisms, necessitates the existence of compensatory algorithms that pre-select areas of maximal “interest”, which consequently determines focus
The focus could be because of a) direct attention to help gather input to solve an issue, b) accidental scan to help gather relevant input if any based on no predetermined criteria, c) pre-selection to maximise a subconscious objective without clear visibility of the exact area of focus
The combination of focus, atop a particular inner-state, is what allows computation to occur and intelligence to emerge, with its tell-tale sign being the ability to create and follow a thread of reasoning through its branches and, crucially, to come to a conclusion, inductive or deductive, about courses of action
Human intelligence, however, incorporates key inputs into its focus algorithms and inner-state definitions at its base from being "embodied", and its results are consequently inextricable from the human body
It is therefore also the emergent properties of the physical body (and, being self-referential, their outgrowth) that constitute our culture
It's difficult to have proper experimentation in real life since there are too many competing factors, so the effect has to be artificially assessed by looking at the probability that the decision could be changed
All inner-states are subject to mutation based on all lived experience, and are subject to selection pressures across all hierarchies of information transmission
Inner models are the effective representations of the world that we respond to, with the input mechanisms primarily being ways to bring new information into the model to add to it or verify it
The inner-states exist as a compilation of various levels of agglomerations of 'nodes', which might be facts, associations atop individual facts, collections of facts, assumptions or deductions arising from combinations of facts or assumptions or opinions, and any combination of the above
As you increase the number of nodes and links, you increase the complexity of any specific phenomenon so identified
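This growth can be made concrete with simple arithmetic: the number of possible direct pairwise links alone grows quadratically with node count, and grows far faster once indirect paths are counted. The node counts below are arbitrary illustrative choices.

```python
from math import comb

# Possible direct (pairwise) links among n nodes: n * (n - 1) / 2.
links = {n: comb(n, 2) for n in (5, 10, 100)}
for n, k in links.items():
    print(n, "nodes ->", k, "possible links")
```

Going from 10 nodes to 100 multiplies the node count by ten but the possible links by more than a hundred, which is why complexity outpaces the growth of the network itself.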
The selection pressures that mould the overall shape of any process, and consequently the inner state or the mental map, have to work across multiple levels of the hierarchy
Therefore any applicable selection pressure has to be simple and common enough to apply globally across all hierarchies, rather than the local effects at any individual node
The major methods of bringing information in are via individual sense perception, which is defined per individual, and more importantly via communication with others, which brings different models and differing information directly in contact with each other
Survivability is therefore the ultimate selection criterion upon which ideas, the propensity to believe ideas, and the rest of the mental apparatus are selected
This selection pressure exists across all layers of analysis simultaneously, e.g., across each individual step of the neural network computation, and is not necessarily only a bottom-up or a hierarchically generated behaviour
We create systems of thought in order to create a pattern that enables us to understand the macroscopic behaviours that emerge from underlying interactions, which serve to explain specific aspects of underlying reality
Combining model accuracy-maximisation and surprise minimisation for an organism creates ways for efficient exploration and prediction within the environment
With directionality of actions provided through external means, and learning embedded in looped neural networks internally, there needs to be a low latency method of crafting feedback between action, anticipated or otherwise, and the necessary analysis thereof
This low latency feedback creation method creates the backbone for our emotions
While the clinical sense will distinguish the term emotion from other proto-senses that act as internal building blocks, we are assuming that the primacy of the concept of emotion is sufficient here
Without emotions we would need some equivalent feedback loop relating the explicit or implicit goals we hold in our minds to immediate outcomes, a linkage that enables speedier responses or creates the motivation that guides action
Emotions therefore act as selection criteria that could provide feedback loops to govern action
Each major emotion therefore can be seen as providing direct feedback of a specific nature. Some examples being:
Happiness is acceptance and joy, of something good happening and it needing to be repeated
Sadness is the reverse, an acknowledgement of something bad that happened, to be avoided if possible
Disgust is the reflex to avoid something
Anger is the impetus to change something in the world, especially when it runs afoul of the existing preconceptions against which we view the world
Fear is to stop exploration of any phenomena, in the prediction that it might be dangerous
While there are several more major emotions, they are only so because of commonly understood naming conventions; there also exist combinations and variations creating many minor emotions as well
The network of emotions therefore helps create a quasi-shortcut in understanding and responding to the external world
The main method of communication we have is comprised of methods by which you communicate information in as dense a form as possible, while still retaining enough flexibility w.r.t subject matter and decipherability. This describes language, and its interlinkages with all other modes of nonverbal communication and contextual understanding.
Transmission of information in any meaningful way requires an assessment of the transmission mechanism and an assessment of all other competing pieces of information that also need to be transmitted
This capacity to transmit information is the bandwidth
The requirements include conveying enough informational content while still optimizing the bandwidth required
Increasing information density in this scenario is easiest when there is a high degree of shared vocabulary or shared context so that you can reduce expository aspects
This naturally leads to specialized lingo in all fields of human endeavor, such as law, engineering, economics etc.
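The density gain from shared context can be demonstrated directly with a preset compression dictionary; the legal-jargon strings below are invented purely for illustration. The same message costs fewer bytes when sender and receiver already share vocabulary, because matches against the shared dictionary replace literal text.

```python
import zlib

# Hypothetical shared context: jargon both parties already know.
shared_context = (b"the defendant filed a motion for summary judgment "
                  b"pursuant to the statute of limitations")
message = (b"the defendant filed a motion for summary judgment "
           b"arguing the statute of limitations had expired")

def compressed_size(data, zdict=None):
    # zdict pre-loads the deflate window, so runs of text that also
    # appear in the shared context compress to short back-references.
    comp = zlib.compressobj(zdict=zdict) if zdict else zlib.compressobj()
    return len(comp.compress(data) + comp.flush())

without_shared = compressed_size(message)
with_shared = compressed_size(message, zdict=shared_context)
print(without_shared, ">", with_shared)
```

The receiver needs the same dictionary to decompress, which mirrors the point: specialized lingo is dense precisely because, and only when, the context is already shared.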
This also creates within any system of language a phenomenon where the same words/phrases or equivalents have multiple meanings, often only understandable through context
Communication therefore includes the key attributes of brevity, information density and mental resonance with the listener, judged through emotional similarity
Information to be transmitted comes in three different forms, often combined together in any individual communication
Factual -> which is information about the state of something, including both first and second hand information
Inferential -> where the information is a conclusion at the end of a thought process, and consequently is dependent on the entire web of thought to be counted as true. This also includes hypotheses, ideas or suppositions
Normative -> where the information is the description of a framework itself, including a meta-assessment of the way the world should be according to some a priori principles
Every communication is therefore a combination of a) the information transmitted, b) the communication medium, c) the communication source, including his/her/its history of veracity and therefore historic probability of accuracy, and d) the context surrounding the communication (reasons for communication, timing, and other metadata)
All communications, as received, are therefore also filtered through multiple layers, with a believability threshold applied at each layer and to the eventual decision
Language is comprised of language-snippets such as words, phrases or sentences, which together refer to a meaning-cloud that one can then attempt to transmit to someone else
The meaning-cloud is created through a) the object it references in the world, b) the generally understood perception as prevalent in the society at large which creates a context surrounding the usage of the word, and c) the individual mental representation of the object under question
The description of the cloud is necessarily process dependent, in that the creation of that cloud depends on the process followed
These disparities become more apparent the deeper down the information hierarchy you go, and the depth to which you go is defined by the impetus behind the conversation
To understand causality it's essential to go deeper into the causal factors to fully identify and describe the process
To have a social interaction with another requires only superficial alignment, which deeper questioning can only chip away at
This combination of factors is what creates the cloud, since words do not always have clearly defined definitional boundaries, and instead have fuzzier ones where applicability to an object becomes more or less apt depending on circumstance and usage
This difficulty of defining boundaries also creates issues with creating definitions, which thereby serve a function only in tautological or axiomatic scenarios
Therefore language, due to its imprecision when it comes to logic, definitions and relationships, will often fail to convey the full extent of an individual's thought process and its complex branches
Almost all communication includes an assumption of sufficient common ground, which remain as unspoken assumptions that all parties agree to
In the absence of this, or when this is assumed incorrectly, large parts of subsequent communication remains vapid
The chance of communicating successfully depends dramatically on the potential to find shared ground to build a conversation around
It is possible that there might be common axioms that exist to underpin certain beliefs, but their distance from the conclusion along the logic tree determines the likelihood of finding them and using them as the foundation of the overall argument
Since language is difficult as a medium to communicate complex ideas in a sufficiently crisp fashion, we use non-verbal communication overlays
This includes things like body language and demeanour to help the communication recipient feel a specific emotional state that makes them more receptive to certain messages
This also includes context, persona of the communicator, authority judgements etc
Understanding is always a continuous process
Understanding includes multiple self-referential loops of ensuring coherence in internal thought process within any defined mental-map
This necessarily also includes a loop that continually assesses the believability of a particular assertion or proposition
Propositions and assumptions are incorrect when they do not correspond to reality
Believability of a proposition is a function of its coherence with an existing mental-map
Believability also requires continuing congruence as the mental-maps are updated continually
This coherence is dependent on its relationship with reality, but not solely so
This means that it's perfectly possible to believe something incorrect, where being incorrect means it does not correspond with reality, if it fits the mental-map sufficiently well
E.g., it's possible to believe the sun rises in the West, if you also simultaneously believe that east and west are reversed. It's possible to believe we live on a flat world if you also believe in other compensating forces that creates illusion of gravity, and other theories to back up credible observations
The issue is not resolved through conversation because narratives are constructed relatively easily once you know the base conditions and the idea that needs justification
There is a clear preference for a simpler theoretical explanation over a complex one a priori, though this is in practice often difficult to distinguish in any case
This coherence is usually broken when a) there is evidence that directly and sufficiently strongly contradicts the belief, and b) the believability of that evidence is high
Belief in such propositions usually increases over time, as the absence of evidence over a period often has an impact on individuals similar to evidence of absence; this predilection is often useful for survival and for focusing efforts on immediate and visible phenomena
The propositions could continue to have high enough predictive potential regardless of their relationship with reality
Predictions in the real world often carry a high degree of error when looked at in isolation, without a concomitantly large number of scenarios to make statistical predictions stand out
This is because the number of potential outcomes predicted in any one situation is usually bounded, and may even be binary (e.g., the sun will rise tomorrow, or it will not), which increases the possibility of even a wrong propositional belief producing accurate outcomes
The increase in number of outcomes, at a more granular level, comes alongside a decrease in confidence level associated with predictions, since increase in granularity means that you need to go down several levels of aggregation
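A minimal simulation of the bounded-outcome point: a belief with no relationship to reality, guessing uniformly at random, still "predicts correctly" about half the time when there are only two outcomes, and its apparent accuracy collapses as the outcome space gets more granular. The trial count and seed are arbitrary choices for the sketch.

```python
import random

random.seed(0)

def chance_accuracy(num_outcomes, trials=10_000):
    # A model with no relationship to reality: both the "prediction"
    # and the "outcome" are independent uniform random draws.
    hits = sum(
        random.randrange(num_outcomes) == random.randrange(num_outcomes)
        for _ in range(trials)
    )
    return hits / trials

binary = chance_accuracy(2)      # ≈ 0.5: right half the time by luck
granular = chance_accuracy(100)  # ≈ 0.01: luck no longer suffices
```

This is why binary predictions say little about the accuracy of the underlying belief, while granular predictions discriminate between models much more sharply.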
Due to fuzziness inherent in object and word definitions, and the difficulty of ensuring a mental-map that’s congruent with reality in all aspects, the reactions to any event are likely to be as much about ensuring the congruence of the mental-map as questioning aspects of reality
This means that all analyses are subject to error bars due to a) them being "aggregate" assessments of underlying phenomenon, and b) the deviations from predicted values are unpredictable, i.e., the deviations from the predicted statistical shape happen spontaneously
The ability to generalise inferences is what enables us to understand what any "thing" is
This means, for example, that we don't learn what a baseball is purely by seeing several examples and connecting the visual or auditory input with the word itself
It starts rather from first understanding what an object is, or could be, then moving to creating internal belief in someone else's word as representative of an object, and associating the two together through multiple sub-groupings
For example, it would look at a hierarchy of an object > round objects > balls (used for play, of multiple sizes, and connected to sporting endeavours, which are separate to other round objects like oranges) > particular characteristic of baseball
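The hierarchy in this example can be sketched as a nested structure; the category names and nesting are taken from the example above, with "oranges" standing in as the non-ball round object, and the traversal function is a hypothetical illustration of walking from general to specific.

```python
# Concept hierarchy from general to specific, as nested dicts.
concepts = {
    "object": {
        "round objects": {
            "balls": {              # used for play, tied to sport
                "baseball": {},     # leaf: the specific characteristics
            },
            "oranges": {},          # round, but not a ball
        },
    },
}

def path_to(concept, tree, trail=()):
    """Return the chain of categories leading down to a concept."""
    for name, children in tree.items():
        here = trail + (name,)
        if name == concept:
            return here
        found = path_to(concept, children, here)
        if found:
            return found
    return None

print(path_to("baseball", concepts))
# -> ('object', 'round objects', 'balls', 'baseball')
```

A concept unknown to the hierarchy yields no path at all, which parallels the claim that a word only acquires meaning once it is placed within the sub-groupings.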
The representations of information as it fits into a narrative is what gets stored as memory, which itself lives on within the mental map linked to all similar concepts
This also indicates that there are multiple types of memory: a) ones causing changes in the mental map that are permanently wired, and b) ones which create representations that are more malleable. This is a difference in scale that ends up being a difference in scope
Later on, after learning and integrating several such associations, the ability to infer itself is generalised, thus moving from visual things like ball to conceptual ones like the solar system, for example
Application of any specific thought pattern onto an aspect of the hyperdimensional complex world might create potential to predict future movements, and that suggests a certain degree of "truth quotient" for that pattern
Our experience in inference however comes from the world, as do our laws and rules that are derived to also pertain to the world
There's no easy way to create finalized rules to guide behaviour that stay true through all scenarios, since the bounds of the scenarios are what's seen in the world around us
Our practicality that allows survival stops us from being mathematically rational in multiple calculated scenarios
Knowledge of a thing is a belief like any other, with limited distinction between fact and fiction if they're reasonably congruent with immediately visible reality
Theories have to be consistent with all available empirical information, not just a subset, which is how they generate a model
In the absence of a single fully descriptive or predictive model, which cannot exist because the map is not the territory, one is forced to rely on multiple overlapping models to help understand and describe any phenomenon
Increasing the number of models increases the potential ways in which they might overlap, and therefore the number of ways in which one could be wrong, due to increasing levels of error that compound as the models interact
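The compounding of error across interacting models can be made concrete under the simplest possible assumption, namely that each model is independently right with probability p and a conclusion needs all of them to be right; both the independence assumption and the 0.9 figure are illustrative only.

```python
def chained_accuracy(p, n):
    # If each of n independent models is right with probability p,
    # a conclusion relying on all n is right with probability p ** n.
    return p ** n

one = chained_accuracy(0.9, 1)    # 0.9
three = chained_accuracy(0.9, 3)  # 0.729
ten = chained_accuracy(0.9, 10)   # ~0.35: mostly wrong despite good parts
```

Ten individually reliable models chained together are wrong most of the time, which is the compounding the text describes.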
The human ability to reason is a mould that will continue to reshape the form through which we see the world even as we attempt to tinker with it
It isn't a logical deductive process but rather often an inductive process that tries to marry perfect narratives to a necessarily incomplete set of facts and weave that narrative into an overall story which seems to fit within the generally held viewpoint about the world
If multiple explanations are likely to explain certain set of facts as understood, then each mental-map would be biased towards choosing the explanation that best suits their individual priorities
This devolves the argument from being about a particular conclusion to being about whether you have taken all relevant facts into account to make a conclusion based on
It becomes particularly tricky when the facts themselves are conclusions of other processes, and therefore causes multiple iterative loops that affect the ultimate explanation in question
The structure of information, data and opinions, being hierarchically created maps as above, therefore create internal hierarchies of expertise which can credibly investigate different levels of knowledge or knowhow
This implies that distinctions that are explored in the world, such as science vs superstition vs religion vs philosophy, are often primarily conversations across hierarchies, and therefore doomed to a certain level of failure until there exists an understanding of a common base to work from
Any natural variation in individual ability, thought or information architecture lends itself to two phenomena: clustering, where variations form clusters that can thereafter be referred to by custom names, or ongoing divergence due to positive feedback loops that create a gap
The pursuit of completeness within any individual domain usually promotes the need to create an all-encompassing theory covering one's own and adjacent domains
The questions that are purportedly asked within the domains itself are defined normatively, and at least at the start are not emergent phenomena coming from the investigation itself
Each one of these domains would therefore have aspects of the knowledge-map that's well thought out and logically connected, while they create "loose" theories around areas that aren't directly under their purview
This creates conversations at cross-purpose amongst the varied domains quite often, without understanding that these are different forms of investigation that are, mostly, parallel to each other
To know whether your mental map is correct therefore requires comparison with reality to assess its congruence, and also to assess the "brittleness" of the decision to changes in the mental-map
This means that decisions don't just have to be right, post hoc, but rather have to be resilient to changes in assumptions; this is one way to identify whether what you're deciding is indeed a local maximum or a random outcome
Testing of the accuracy of a mental map can be done through predictions and observations to assess the outcomes of those predictions
The potential fallibility of the observations, both in the specific sense of being incorrect in the particulars it identifies or misidentifies, and in the fact that deciding what is to be observed in the first place is a meta-decision that’s subject to great bias, creates difficulties in the process of doing predictions and fine tuning a mental map
To properly understand another's reasoning you need to understand their mental-map to a sufficient degree
With sufficient complexity of existing mental-edifice, an output is indistinguishable from premeditated reasoning
Every communication is a synthesised representation at the end of a thought process
Therefore each communication contains multiple levels of information, not all of which is communicated in the same way, necessitating parallel processes of decryption
Incorporation of new information and generation of conclusions about the world rely on existing mental infrastructure to "run" the analysis through, and therefore are themselves path dependent
To create decision frameworks within this networked world will require guides to be created
To create a guide would need, ideally, the creation of rules that govern our actions
However following a simple rule is possible only if it unravels against the entire tapestry of the world
And since it's not easy to create rules that are specific enough to stand true across multiple permutations of the networked world, we have to rely on heuristics regarding navigational behaviors within the tree and simpler guides to behaviors which generally result in good outcomes, without overwhelming the inherent computational capacity
Shared belief amongst people is what gives rise to a common framework that can be used to create societally helpful fictions defining civilization
Shared beliefs emerge as a result of competing narratives that aim to simplify and synthesize a complex reality
We gain insights and lessons from data backed inductive and deductive analyses, i.e., logic, and stories which are metaphorical but indicative
These are different methodologies though they both affect belief and knowledge equally
The narratives most likely to succeed as the "best" in any scenario are the ones which have the highest potential for both explainability of all supposedly relevant facts and probabilistic potential to not disturb or destroy the existing mental-map that's been constructed
This also implies that "clichés", as they arise, are often a result of such accepted "wisdom" which contains a kernel of narrative truth
However it's only through lived experience that one can often fully understand the provenance of the narrative, as its internal structures have sunk invisible beneath the "smoothness" of the narrative
This is because a narrative ultimately always obscures parts of the overall phenomenon, creating focal points to attend to while suggesting other points be obscured
Arguments and debate are essential methods to understand reality as reflected in mental-maps in the absence of a method to discuss networks in its entirety
Debate amongst ideas works as a method to get to a mutually agreeable mental-map that's ideally predicated on the same set of facts, and therefore congruent with reality
Its primary power lies in forcing all parties to craft arguments and narratives, which helps weed out incorrect strands of argument
Due to the necessity of communicating only the synthesized aspects of any piece of knowledge, since full details are neither available nor communicable, arguments have the drawback of being linear narrations that necessarily simplify complexities of any situation
Conversations therefore not only communicate ideas and specific pieces of knowledge, but also outputs from and synthesis of internal models, and credentials to help communicate the veracity and accuracy of those models
They exist as successful methods only because humans think best when rationalizing viewpoints which work best as arguments
Individual arguments are by themselves only components because they rely on there not being external facts that impact the argument
This holds true as arguments are narratives which inherently favour linearity
This form of inductive argumentation therefore often works best since it's about crafting the most compelling narrative for any particular grouping of facts, thus increasing the seeming narrative explanatory power
We play amongst societal norms and structures that define the implicit environment which we then interact with and live within to create our lives
And in a sense since the environment becomes a participant in the creation of an individual life, the echoes of an action can have societal reverberations that go beyond an individual lifetime
This is indicative of how societal memories can be said to have evolved
This also creates the common framework that defines existing 'norms', which essentially defines what the shared belief is at any point, and how it evolves over time
Creating a linear narrative atop what is a complex adaptive system necessarily creates difficulties of interpretation, because the system itself is mutable while linear narratives are not necessarily similarly flexible
All narratives, especially the logically consistent ones, rely on the unspoken assumption that only the chain of events that are described in the narrative matters
When you don’t know enough details about any topic, with all its complex branches of information, it becomes difficult to create a meaningful opinion since you could always disagree about which parts of the network is most important to reach any conclusion
Articulation also forces a narrative structure on a set of facts and therefore makes aspects of life even more unreliable
Narratives therefore illuminate one particular sequence of logical deductions and rely both on the relevance of the nodes of information it touches, and the logic it holds as if true
Narratives are often created through the creation or identification of an inductive reasoning chain that is potentially related to a deductive logic chain
However the applicability of the narrative is not necessarily correlated to the existence of that inductive chain
In most instances there can exist multiple inductive chains, all of which are logically plausible for any set of nodes in the network, even if they're incongruent with reality because they do not seek to explain all the relevant variables
Metaphors and stories have power exactly because they are open, and not easily interpretable as expected - this forces the reader into constructing their own reality, where similar metaphors have to apply, and therefore acts as a shortcut whereby an entire worldview can be transposed, as opposed to purely a unit of information
There are epistemic challenges to understanding the very world we live in, and as such they present inferential problems to our ability to gather and develop knowledge about the world. However, despite the challenges, since there are extant laws that we're obeying, even if in ignorance, a certain degree of understandability follows, as evidenced by the fact that our predictive ability is greater than zero
All supposed knowledge exists at the very precipice of its own unknown mountain of buried and unanalyzed precepts, both created and borrowed, and this means that knowledge as it's supposed today is only the tip of the iceberg
The fact that we live in a certain environment and have our lives and perceptions around it means that it's an overall web within which we're situated
It's a novelistic dimension of how the world is - whether or not each individual segment is mathematically correct, it's the link that together creates a mind's tapestry
Cognitive and computational capabilities per node are limited, and require narrative definitions to carve out a space within the overall web which can then be sub-analysed
Narratives are influenced by our inner states, which affect both the focus that we pay to individually supposedly relevant facts, and affect our propensity to believe in certain narratives that solidify the existing mental-map
Since mental-maps are complex, having multiple interconnections in all directions, to run any "query" through it would require instructions to also guide the query
There exists within each decision the sum total of weightage that is crafted through the network, including its own historic antecedents
Emotions act as cognitive shortcuts to help make quick decisions regarding a particular circumstance without needing to make detailed computations
Emotions therefore have the potential to connect parts of the mental-map that's traditionally not close together, by ensuring the triggering of a particular sequence of nodes
Intellectual nuances are the exploration of branches within the mental-map that's related to any matter-at-hand, where the matter-at-hand is defined as the focal point of attention
Since emotion functions as a shortcut to decisions, it is naturally antithetical to most nuance
Incredulity by itself is not an adequate explanation for belief or lack of belief
This implies that narratives can be adjusted and arranged across multiple layers of information, themselves related to each other
The base layer is information regarding the state of the world, which is most likely shared across most of the populace
The secondary layer is assessment of the state of the world, expressed through emotional triggers, which is far more culture- and person-specific
Both of these are intertwined to create sophisticated models of reality inside our minds
Dealing with the external world is therefore subject to reinforcement pressures on existing mental-maps, alongside utility in specific, material terms
This implies that narratives that we hold on to as cherished or dear oftentimes serve the purpose of helping us reinforce existing mental-maps, rather than provide utility by improving predictability of the future and alignment with reality
The availability of information, or data, by itself cannot change a narrative, because narratives are not built only on one layer of information or data, but also include synthesised opinions on top of which the entire argumentation is built
As a jigsaw puzzle piece fits in a slot, a piece of knowledge we develop or create also fits into a slot against the shape of the outside world and environment. It's a mutual discovery of both the negative space and its constituent components
The ability to learn through narrative frameworks is the only way to create a sufficiently robust mental map and framework, though the narratives themselves are incorrect and incomplete
The coherence of the system is the biggest structural reason that allows ever larger levels of internal dependence amongst its factors
Due to the level of importance placed on internal system coherence, perceived unfairness in a system is punished more harshly than what could be expected purely based on rational calculations for any individual act or decision
This implies also the creation of multiple processes to construct a record of and keep intact the internal coherence of the system
There's a distinction between explicit information about a particular topic and the decision-making process around it, which makes it different from pure "expert" opinions, which are often formed within the black boxes of their own complicated thinking
The virtue inherent in explicitly stating assumptions and crafting a clear and auditable decision is not obvious or always extant
Since the underlying processes that the approaches are trying to mimic, or predict against, are themselves unknown, it's unclear whether you can say upfront which method is more liable to work
However it can be said that for more complex mental maps, or networked decisions, a linear or explicitly detailed assumption based method is liable to be less accurate
The qualia resulting from the type of understanding that comes from following any process of creating a particular sort of mental-map are indistinguishable from each other
This means that the qualia that emerge as the result of solving particular types of puzzles, or creating specific forms of explanatory narratives, are indistinguishable from each other purely on the basis of their congruence with reality
Implicit decision making, also called intuitive decision making, is mostly correlation-based, relying on pattern recognition and quick response, whereas explicit decision making is often causal in nature
Part of the causality could be determined by deep understanding of correlations amongst individual components which are collectively highly complex, but there are still multiple layers of abstraction of concepts and relationships
These are named and correlated to the real world itself, which creates a layer of understanding about the world, on top of which you can have 'bespoke' pattern recognition metrics
This complexity, and necessity of congruence with the real world with feedback loops to make sure that not only is the final output correct, but the intermediate steps are also correct, makes explicit decision making different, and also interestingly something the AI of today cannot do
Decision making is split between instinctive reactions, which arise from the external input, the existing mental-substrate that does the processing, and the immediate output, and more systematic and thorough investigations of a phenomenon to come up with explicit arguments
Immediate, instinctive, mental outputs, and more conscious, defined, cognitive processes, both work on the same mental-model substrate
The distinctions in their performance with respect to accuracy and speed indicate that these are distinct processes, which should nevertheless be used in conjunction with each other
We make decisions under highly complex and interdependent environments
We're conditioned to respond taking the entire environmental complexity into account
Therefore it's unsurprising that even "controlled experiments" in social sciences show evidence of evolutionarily adaptive behaviours, such as in-group bias
They're both emergent phenomena that arise from internal calculations, from whatever base substrate exists to process the calculations
The substrate doesn't change from one decision to another, since the underlying "infrastructure" doesn't change; conscious thinking, whether done in a sophisticated fashion or purely as a reflexive action, doesn't necessarily change the infrastructure, only the web of signals sent through the existing network
The different methods of thinking are more a function of internal reflection to assess the inherent logic of the decision, often a trade-off between time and the depth of the graph that's examined, and create a continuum of conscious/rational thinking
It's a paradox that while you have to guide intentional thoughts, they have massive limitations in what they are able to process, and the depths to which they can go
Therefore you need to train the unintentional thought process, the neural network in the brain, to respond accurately to the world as you perceive it
If you're able to temper the immediate impulse with a bit of a corrective push, then perhaps that can be called rational. Conscious correction is itself highly error prone, due among other things to limits on speed of computation, but that's also true of all computation
However this continuum is not necessarily aligned with increasing accuracy, since conscious thinking, just like subconscious processing, is subject to a myriad of biases, availability of data, and processing sophistication
Decisions involve not just an immediate maximisation amongst available alternatives, but also a temporal maximisation, with an implicit question about the likely long-term impact; together these determine any final changes in the mental map regarding belief
So your decision to spend money, as an example, can be amongst multiple alternatives, but also includes not spending it and/or investing it (in cash or kind)
There are a limited number of degrees of freedom available with respect to most decisions, since most decisions are not situated on an infinite continuum, but are rather discrete multiple choices
A belief, or a conclusion, is almost always the equivalent of choosing one amongst several alternatives, simultaneously judged against the probability of the thought-process that leads to the belief being accurate, and the outcomes that come from having the belief being acceptable
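This dual judgement can be given a toy sketch: score each discrete alternative by the probability that the reasoning behind it is accurate, weighted by how acceptable the outcome of holding it would be, and pick the maximum. The alternatives, names, and numbers below are entirely hypothetical.

```python
def choose_belief(alternatives):
    """Pick the alternative with the highest combined score:
    P(reasoning is accurate) weighted by how acceptable the
    resulting outcome would be if the belief were adopted."""
    return max(
        alternatives,
        key=lambda a: a["p_accurate"] * a["outcome_acceptability"],
    )

# Hypothetical alternatives for a single question
alternatives = [
    {"name": "belief A", "p_accurate": 0.9, "outcome_acceptability": 0.2},
    {"name": "belief B", "p_accurate": 0.6, "outcome_acceptability": 0.7},
    {"name": "belief C", "p_accurate": 0.3, "outcome_acceptability": 0.9},
]
print(choose_belief(alternatives)["name"])  # -> belief B
```

Note how the winner is neither the most accurate belief nor the one with the most pleasant outcome, but the one that best balances the two, which is the point of judging both simultaneously.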
Beliefs are built upon facts, and logical relationships amongst those facts, which together give rise to potential choice of actions
A fact is a representation of a particular stage in a mental-map, at a particular layer of aggregation, which is resilient to reality
The facts have to stand up to scrutiny across a wide variety of circumstances and experiments, which aim to verify whether the fact is able to provide predictive and descriptive rigor
The description which becomes a fact can be a node in any aggregation level - and the network that it represents becomes a fact, though necessarily with fuzzy boundaries since it's the reflection of a portion of a network, rather than an independent standalone fact
This implies that beliefs are most likely to be held when a) they are congruent with reality AND have a direct, visible and measurable impact on one's state of being, b) they provide an explanation that creates enough sense of logical connectivity that it seems to simultaneously satisfy the need for simplicity in one's theories while providing maximum potential predictive ability, for a certain value of mental input, or c) the existence of a world where the belief is true provides maximal benefit to you personally due to the knock-on impacts, both mental and actual, which also pushes one towards proselytization of that idea
Feelings and facts are not separate, but rather nodes in the same mental-map, with facts, hypotheses, ideas or narratives being networks that are expected to map to a part of reality, or a way of understanding a part of reality, and imbuing them with a sense-check of how they could impact the meta-framework that is then impacted by the fact, hypothesis, idea or narrative.
There are therefore multiple ways for you to be wrong, even assuming it’s along a continuum where wrong isn’t just one state but rather a series of possibilities - and since the number of potential decisions at any point are restricted, the reasons for being wrong are as important, or more, than the seemingly binary fact of being wrong
Unless the conclusions from a networked mental-map, for any particular inner state and with relevant focus, can bring forth a new, surprising, out-branch to enough people's disparate mental-maps, or fully disrupt existing pathways, communication by itself cannot help shift the patterns
The largest source of confusion is regarding definitions, both regarding questions and their potential answers
The "non-sense" nature of such statements engenders confusion, wherein at least one component of the statement has to be false, or unprovable, when looked at in a bottom-up fashion
The creation of such a vast array of groupings, across all dimensions, also creates multiple identities which themselves are under mutual construction and evolution
Your identity would therefore itself change and evolve through these macro evolutions that happen across hierarchical boundaries, with boundaries that aren't necessarily distinct, and could even be fractal
The co-evolution of multiple humans together also creates the rules which together form the overarching system within which we deign to operate
If your idea of yourself and your actions is not directly linked to consequential adherence to the rules themselves, then the rules get bent a little, which has negative consequences
Therefore we need counterfactual consequentialism to solve for it - however, its analytic burden means you need something easier to ascertain as a heuristic.
All reasoning is social
Doing a thing because it's always right or moral according to you also has to take its consequences into account, and your contracts with the rest of the web of humanity have to be taken into consideration
Development of this understanding is the fundamental role of philosophy
Since human thought is what is behind human philosophy, you have to craft philosophies that answer to essential human nature, and play a part in helping lay bare part of that contextual web within which we live
Analysing normative answers in the absence of an understanding of the antecedent social contract makes any idea output from it potentially unusable and even nonsensical
Doing immediate good in and of itself isn't a useful, if utilitarian, goal - e.g. while it might make some moral sense to give one of your kidneys to a stranger at a point in time, considering the obligations you have to the rest of your social circle, family and broader society to live life in a certain fashion, you shouldn't do it unless it fulfils those demands equally well
Otherwise the flourishing of humanity stops, and with it progress; the progress here has to come from the entire human web as an emergent phenomenon
Knowing your place in the overall web and being cognizant of the ability to affect specific nodes within that web means that you're also able to better focus efforts that have higher impact potential
Furthermore, due to the humanistic tendencies within us, developed through our history, the tangible and immediate unconsciously carry much higher premiums than low-probability but higher-impact actions, though mathematically they may be equivalent, since the collective numbers required to make the statistics work don't translate into individual lived experience
Social covenants that bind us have direct impact on our reasoning ability and create the bedrock belief system that then gets used and edited
This is because the inputs, the factors creating any impact, and any pre-assessment of the outputs are all social
The history and ethnography of the society can't be extinguished or swept aside as they're a key component of any future changes, since society is path dependent
This means that our obligations to each other and the demands that society in general, and individual decisions in particular, place on us are unlikely to sway the fundamental tendency
This means that unless there's a groundswell of heightened movement in specific tendencies, societies themselves don't move at their centre, which is why Overton windows are an important way to gauge societal adjustments to specific movements
Considering therefore the network effects and feedback potential within the societal structure the only optimal way to behave is to be a counterfactual consequentialist while retaining the epistemic humility that leads one to understand that not every factor is known or knowable
Before attempting any new course of action there has to be an attempt to identify its potential consequences
There is a difference between discovery, identifying mechanisms that impact or explain specific phenomena whose existence we can presuppose, and the invention of new ways of thinking that are completely unknown to date, where you could search with no hope of identifying even the key variables
The essence in either is to celebrate ignorance, specifically the ignorance of all possible outcomes, as that's the prerequisite for exploration
The question is how can we live well considering so much of the future is unknown, and indeed unknowable
The only ability therefore that we possess is to try to reduce the uncertainty through experiments or observation, and use this increase in epistemic certainty to push forward
In the absence of knowledge of the consequences themselves, knowledge of the shape of the space of possible consequences could itself be valuable
For instance, you might not be able to tell the outcome of a particular course of exploration, but you could make a better guess of which course is likelier to bear better fruit, or which courses are least likely to be dangerous
While not fully accurate, since we cannot make a map without knowledge of the territory, the meta-knowledge of the terrain itself could be valuable information for a counterfactual consequentialist
This creates an incentive to behave in a fashion that creates better outcomes for most through a measured and reasoned outcome, since that's what's been primarily selected for within the overarching evolutionary process
Since collaboration and social reasoning co-evolved with other primary needs for individual organism fitness, as evidenced by multiple instances of altruistic or socially beneficial behaviour, there is an incentive to behave ethically above and beyond the fact that we have evolved to have ethics and to be kind to our fellows, since there are negative consequences, short or long term, to behaving unethically
This fairness doctrine is simply a solution to the longer-term iterated prisoner's dilemma
The longer-term qualifier here also suggests that while it might be a theory optimised over the long term, there is significant movement potential within shorter time horizons, and since the game itself can be modified, it cannot be relied upon as a law
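A minimal simulation of the iterated prisoner's dilemma, with the standard assumed payoffs (temptation 5, reward 3, punishment 1, sucker 0), illustrates both points: mutual cooperation far outscores mutual defection over a long horizon, yet a defector still exploits a cooperator early on.

```python
# Standard assumed payoffs: (my points, their points) per joint move
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's last move
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy1, strategy2, rounds):
    """Run an iterated match and return total scores for both players."""
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strategy1(h2), strategy2(h1)
        p1, p2 = PAYOFF[(m1, m2)]
        score1 += p1
        score2 += p2
        h1.append(m1)
        h2.append(m2)
    return score1, score2

print(play(tit_for_tat, tit_for_tat, 100))     # (300, 300) - mutual cooperation
print(play(always_defect, always_defect, 100)) # (100, 100) - mutual defection
print(play(tit_for_tat, always_defect, 100))   # (99, 104)  - brief exploitation
```

The defector wins its one exploitative round but then locks both players into the low mutual-defection payoff, which is why fairness pays only when the horizon is long and the rules stay fixed.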
Counterfactual consequentialism, when it is expanded upon, shows the degree of freedom that people have in their actions
The degree of freedom can also be correlated with the eventual impact they can have
Both together creates a way to measure the potential for actions, and the ability to navigate the causal web to create changes
Due to the large number of incoming inputs, a new collection may be created, a concept, that puts together a large number of related network nodes and edges (or individual pieces of information and their relationships) into an internal mental model, and this creates separate feedback loops as well
So you might see people measuring their relative social positioning rather than absolute
Breaking norms, however, is a way within this network to break the rules of the game, which changes the playing field and network configuration, alters the nature of any existing preferential attachment, and consequently enables new outgrowths to emerge as a paradigm to follow
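Preferential attachment is a standard network-growth mechanism in which well-connected nodes attract new links in proportion to their degree; the sketch below is a minimal illustrative version under that assumption, not anything specified in the text.

```python
import random

def grow_network(n_nodes, seed=0):
    """Grow a graph where each new node links to an existing node
    with probability proportional to its degree, so early or
    well-connected nodes keep attracting more edges."""
    rng = random.Random(seed)
    edges = [(0, 1)]            # seed pair of connected nodes
    degree = {0: 1, 1: 1}
    for new in range(2, n_nodes):
        # Endpoints of existing edges appear in proportion to degree,
        # so sampling a random endpoint is degree-weighted sampling
        targets = [v for e in edges for v in e]
        target = rng.choice(targets)
        edges.append((new, target))
        degree[new] = 1
        degree[target] += 1
    return degree

degrees = grow_network(200)
print(max(degrees.values()), min(degrees.values()))
```

Running this produces a few heavily connected hubs while most nodes keep a degree of one or two, which is why changing the attachment rule, the norm-breaking above, reshapes which nodes can grow into new paradigms.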