COMPLEX THINKING & AI

OVERVIEW

The acceleration of increasingly complex interactions between a multiplicity of natural, biological, social and technological risks at multiple ecosystemic levels is driving multiple social and ecological crises, with impacts at both local and global levels. This increases the pressure on knowledge production systems to generate relevant understanding of the processes underlying such crises and of how to design and manage interventions and change processes. Recent advances in Artificial Intelligence (AI) open new possibilities for the management of and access to large amounts of information, which may be relevant for generating more complex knowledge. Large Language Models (LLMs) such as ChatGPT have also opened new possibilities for Human-AI interaction towards systems of augmented intelligence. On the other hand, the practice of complex thinking, organisationally congruent with the properties that organise complex natural and social systems, has been proposed as a mode of thinking more capable of tackling complex problems and of leading to more ecosystemically positive and sustainable outcomes. It is hypothesised that more complex modes of thinking may lead to creative and abductive leaps capable of guiding effective interventions and the process of managing change in “real-world” complex systems, in conditions of uncertainty and risk. The CT & AI project will explore the possibilities and limits of the interaction of a framework for the practice and promotion of Complex Thinking (CT) with AI tools based on LLMs (e.g. ChatGPT, Gemini). It will develop and evaluate preliminary protocols to guide the integration of methods and tools for promoting CT with the use of AI tools towards generating complex understandings for practice and research. Finally, it will explore the stances of stakeholders (policy-makers, practitioners, scientists/academics) regarding the use of AI in relation to CT.

Keywords: Complex thinking; AI; ChatGPT; Prompt engineering; Co-augmented intelligences

BACKGROUND

The acceleration of increasingly complex interactions between a multiplicity of natural, biological, social and technological risks at multiple ecosystemic levels is driving multiple social and ecological crises with impacts at both local and global levels. This state of affairs increases the pressure on knowledge production systems to generate significant advances in our understanding both of the processes underlying such crises and of how to design and manage interventions and change processes. The need for more complex modes of thinking has been affirmed and some guiding principles have been formulated (Morin, 2005, 2014). The development of methods and tools to support them is of critical importance. A theoretical framework for the practice of complex thinking has been proposed that identifies a set of 9 critical dimensions and 24 properties (Melo, 2020), informing the development of preliminary methods to support the practice of complex thinking applied to the management of change in relation to complex problems (Melo & Caves, 2020a, 2020b; Melo et al., 2023; Melo & Campos, 2022). Those properties can be conceived as thinking movements, where each property can be executed in isolation, with different levels of complexity, or choreographed in relation to the others, as in a dance, composing configurations of properties that also vary in their complexity. These configurations can be more or less associated with creative or abductive emergence capable of guiding effective actions in conditions of uncertainty and risk.
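Purely as an illustration of this conception, such a framework can be represented computationally as movements that are executed in isolation or choreographed into configurations of varying complexity. In the sketch below, the property and dimension names are hypothetical placeholders, not the actual dimensions and properties of Melo (2020), and the complexity heuristic is a toy:

# Illustrative sketch only: property/dimension names are hypothetical
# placeholders, NOT the actual framework from Melo (2020).
from dataclasses import dataclass, field

@dataclass
class ThinkingProperty:
    name: str          # a single thinking movement (hypothetical name)
    dimension: str     # the dimension the property belongs to
    complexity: int    # level at which the movement is executed (1 = low)

@dataclass
class Configuration:
    """A choreography of properties performed in relation to each other."""
    properties: list[ThinkingProperty] = field(default_factory=list)

    def overall_complexity(self) -> float:
        # Toy heuristic: configurations grow in complexity with the number
        # of properties combined and the level of each movement.
        if not self.properties:
            return 0.0
        mean_level = sum(p.complexity for p in self.properties) / len(self.properties)
        return len(self.properties) * mean_level

config = Configuration([
    ThinkingProperty("contextualising", "relational", 2),   # hypothetical
    ThinkingProperty("perspective-taking", "observer", 3),  # hypothetical
])
print(config.overall_complexity())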

Developments in the field of Artificial Intelligence have been explored in relation to the need to tackle complex challenges and to generate social good. As stated by Cowl and collaborators (Cowl et al., 2021, p. 114), “designed well, AI technologies can foster the delivery of socially good outcomes with unprecedented scale and efficiency”. Recent advances in Artificial Intelligence (AI) have opened the possibility of managing large amounts of information in ways that may contribute to building complex knowledge. Large Language Models (LLMs) such as ChatGPT have also opened new possibilities for Human-AI interaction towards the assemblage of systems of augmented intelligence. Large language models (LLMs), from the field of machine learning (ML), a sub-branch of Artificial Intelligence (AI), are statistical models (varieties of artificial neural networks, ANN) of the associations between words (in general, tokens) built from large text corpora (e.g. books, papers, web pages, social media). Generative LLMs are configured to generate output in response to a prompt. A milestone in LLMs came in November 2022 with the public release of ChatGPT (a chat tool based on a Generative Pre-trained Transformer LLM) developed by OpenAI, designed to produce text in response to user questions. The text generated is of a quality close to that produced by a human. This tool (based on GPT v3.5), trained on one of the largest datasets to date and including some human fine-tuning, crossed a threshold of acceptance/utility with users, and (along with other similar tools) has become very popular, ushering in the (latest) era of AI promise and offering new possibilities for Human-AI interaction (Roumeliotis & Tselikas, 2023). The capacities of this model, including its ability to learn, enable an intuitive type of communication through human-like text (Ib.). Goar et al. (2023) review some of the main advantages and limitations of ChatGPT in relation to other models: (i) it is highly customisable to particular domains; (ii) it can be highly effective in dealing with large amounts of text data, which can increase its accuracy; (iii) it is flexible to the extent that it can have several applications; (iv) it has high accuracy in relation to other models; (v) it is consistent; and (vi) it is accessible (Goar et al., 2023). Generative AI can create novel content that is not strictly text: other tools (e.g. DALL-E 2, DreamFusion, Codex) are capable of creatively working not only with text but also of converting text to image, audio, video or code, and vice-versa (Gozalo-Brizuela & Garrido-Merchán, 2023).
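To make this concrete, the following is a minimal sketch of prompting a generative LLM programmatically through OpenAI's Python client. It assumes the openai package is installed and an OPENAI_API_KEY environment variable is set; the model name is only illustrative:

# Minimal sketch: sending a prompt to a generative LLM and reading back
# the generated text. Assumes the `openai` Python package and an
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative; any chat-capable model works
    messages=[
        {"role": "user",
         "content": "Summarise the main drivers of ecological crises."},
    ],
)
print(response.choices[0].message.content)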

AI models are not without limitations. They pose different types of challenges and even risks (Saghiri et al., 2022). Several concerns have been raised about the implications of the widespread use of these tools, for example in relation to teaching or academic production (Susnjak, 2022; Castellanos-Gomez, 2023; Romero-Rodríguez et al., 2023). Ethical issues have also been raised (Ray, 2023; Zhuo, 2023), for example regarding data privacy and security, authorship, bias and fairness, transparency, misuse and even abuse, and influence on human decision-making, amongst others.

Goar et al. (2023) list limitations such as: lack of ‘common sense’ and inability to make decisions based on it; limited understanding of context (although this is an area of significant advance); bias in training data; difficulty in understanding complex questions and reasoning; lack of creativity; and difficulty in understanding emotions and in abstracting concepts, leading to inaccurate responses. The model also has difficulty maintaining the context of a conversation and dealing with ambiguity (Ray, 2023).

Research on the capabilities and limitations of ChatGPT is rapidly growing (Roumeliotis & Tselikas, 2023), with its applicability being tested in several domains. While inaccurate responses produced by ChatGPT are sometimes classified as hallucinations, research has shown that accuracy can be improved, for example through interfacing with another source of knowledge (Bang et al., 2023). The model calls for continuing improvements, and considerable progress is already taking place (Ray, 2023).

Improvements to tools such as ChatGPT can be performed at the level of training, but there are also possibilities to be explored related to the nature of the prompts (Sarrion, 2023). Prompt engineering (Ozdemir, 2023) is one way of improving the quality of responses, improving the user experience and guiding the AI to generate more effective and useful responses. Ray (2023) reviews some recommendations for its use: (i) start with clear and specific prompts; (ii) provide context and background information; (iii) specify format and structure; (iv) apply constraints and limitations; and (v) use iterative prompting. The applications of these kinds of models, with the necessary improvements, are multiple: from health services to business, content generation, education and training, tackling complex datasets, problem solving and hypothesis generation, amongst others (Ib.).
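As a minimal sketch of how these recommendations could be operationalised in code, consider the template below. Its wording and structure are our own illustration, not a protocol prescribed by Ray (2023):

# Sketch of a prompt built around the five recommendations reviewed by
# Ray (2023). The template wording is illustrative, not prescribed.

def build_prompt(task: str, context: str, fmt: str, constraints: list[str]) -> str:
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Task (clear and specific): {task}\n\n"             # (i)
        f"Context and background:\n{context}\n\n"            # (ii)
        f"Required format and structure: {fmt}\n\n"          # (iii)
        f"Constraints and limitations:\n{constraint_lines}"  # (iv)
    )

prompt = build_prompt(
    task="List three systemic drivers of coastal ecosystem degradation.",
    context="The analysis concerns a mid-sized European estuary.",
    fmt="A numbered list, one sentence per item.",
    constraints=["Cite no statistics", "Stay under 120 words"],
)
# (v) Iterative prompting: inspect the model's answer and re-prompt,
# refining the task, context or constraints on each pass.
print(prompt)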

State-of-the-art LLMs can be very expensive to build, both in terms of collecting and curating data and in terms of the computational (and human) resources required to train and tune the models. However, LLMs may be steered to be more domain-specific, either through the use of tailored prompts or by adding domain-specific data that can be used to condition the output of an existing model (e.g. Retrieval-Augmented Generation, RAG; Lewis et al., 2020), as sketched below. LLMs may be built using different types of data (e.g. text, images, voice, music); recent (multimodal) models combine these into one tool (e.g. Gemini). As these models are improved, the horizon of possibilities expands, and so do the challenges and pathways towards further improvements (Ray, 2023). Some of the innovations can affect research, with an increased capacity of these models to tackle vast amounts of information across domains. Some models will also be further trained to be more specialised in particular domains. Improvements in their understanding of context will increase their accuracy. Amongst others, the risk of over-reliance on models like ChatGPT can be offset by guidance for the user (i) to evaluate the complexity of the responses; (ii) to use prompts capable of eliciting more complex responses; and (iii) to strategically use the models to scaffold their own thinking processes and the performance of complex thinking properties.
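The following toy sketch illustrates the retrieval step behind approaches such as RAG: domain-specific passages are selected and prepended to the prompt, steering a general-purpose model without retraining it. A trivial word-overlap score stands in here for the learned dense retriever of Lewis et al. (2020):

# Toy sketch of retrieval-augmented prompting: a trivial word-overlap
# score stands in for the learned retriever of Lewis et al. (2020).

def score(query: str, document: str) -> int:
    # Count shared words between query and document (crude relevance).
    return len(set(query.lower().split()) & set(document.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Return the k highest-scoring passages.
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

corpus = [
    "Estuaries filter nutrients and buffer storm surges.",
    "Transformer models are trained on large text corpora.",
    "Wetland loss amplifies flood risk in coastal systems.",
]

query = "How does wetland loss affect coastal flood risk?"
passages = retrieve(query, corpus)

# The retrieved domain-specific passages are prepended to the prompt.
prompt = ("Answer using the context below.\n\nContext:\n"
          + "\n".join(passages)
          + f"\n\nQuestion: {query}")
print(prompt)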

These models might not, per se, generate sufficiently complex knowledge, and they are restricted in their mode of coupling with the wider world. Their interaction or usage, guided by principles of complex thinking, may support the assemblage of systems of (co)augmented intelligences with increased capacities to tackle complex problems. On the other hand, they may be useful tools to scaffold the complexity of human thinking when prompted in particular ways that either stimulate the performance of complex thinking by the human or embed some of its principles. Hence, there is a pressing need to explore the possibilities and limitations of these interactions and to move towards more complex tools and resources to scaffold the development of both human and artificial intelligence.
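As an illustration of the second possibility (prompts that embed complexity-oriented principles), consider the sketch below. The listed moves are generic illustrations of complexity-oriented prompting, not the actual properties of the complex thinking framework (Melo, 2020):

# Sketch of a prompt that embeds generic complexity-oriented "moves".
# These moves are illustrative; they are NOT the actual properties of
# the complex thinking framework (Melo, 2020).

CT_MOVES = [
    "Situate the problem across at least two ecosystemic levels.",
    "Make the observer's perspective and assumptions explicit.",
    "Identify relations and feedback loops, not only components.",
    "State the main uncertainties and what would reduce them.",
]

def ct_scaffolded_prompt(question: str) -> str:
    moves = "\n".join(f"{i}. {m}" for i, m in enumerate(CT_MOVES, 1))
    return (
        f"Before answering, work through these thinking moves:\n{moves}\n\n"
        f"Question: {question}"
    )

print(ct_scaffolded_prompt(
    "How should a municipality respond to recurrent urban flooding?"
))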

Different studies have identified a variety of factors that affect the acceptance of AI (e.g. perceived usefulness, attitudes, trust, effort expectancy, cultural factors, experience and facilitating conditions) (Kelly et al., 2023; Romero-Rodríguez et al., 2023), which are in turn affected by media discourses (Roe & Perkins, 2023). However, more research is still needed for a more complex understanding of the nature of the stances and relations established with AI tools, exploring the implications for potential programmes and interventions guiding users to effectively manage the potentialities and limitations of those tools. In this context, it is critical to explore the difference that a complex thinking perspective could make and how it could affect the stances of a variety of agents and observers (e.g. policy-makers, practitioners, scientists/academics) in relation to the use of AI models.

AIMS & OBJECTIVES

Overall, the Complex Thinking & AI project aims at:

A1. Exploring the possibilities and limits of the interaction of a framework for the practice and promotion of Complex Thinking (CT) with Artificial Intelligence (AI) tools based on Large Language Models (LLMs).

A2. Developing preliminary protocols to guide the integration of methods and tools for promoting CT with the use of AI tools in generating complex understandings to support practice and research.


Specifically, it aims at:

O1. Developing a general protocol to guide the integration of CT with LLMs (e.g. ChatGPT, Gemini) for practice and research;

O2. Developing protocols for an LLM-assisted Relatoscope method and tool for promoting CT, e.g. through “prompt engineering”;

O3. Developing protocols to guide the use of the AI tools informed by CT (complementary to O2);

O4. Mapping the complexity of the responses/conceptualisations resulting from AI-assisted, CT-guided processes;

O5. Exploring the requirements for the selection, use and/or design of visualisation tools to support effective CT-AI interaction;

O6. Identifying critical specifications and requirements towards the development of AI-assisted software, interface(s) and visualisation tool(s) for CT;

O7. Exploring the representations and stances of different stakeholders (scientists, citizens, practitioners, policy-makers) regarding the use of AI tools in practice and research, and the impact of a training workshop on AI and CT protocols on those stances.



FUNDING

Scientific Sponsorship by Stefan Pernar