Journal article | Is it time to take learning design to task?
Griffiths & Inman (2017) Journal of Adult & Continuing Education
Context drives learning design, but too often context is not considered within learning design literature. Reported skills gaps prompt questions about learning design and about the security, completeness and depth of the learning developed by learning designers in adult learning. Learning is frequently linked to context, and learning environments are often described as complex. However, studies rarely consider disorder, created by a lack of consideration for multiple contexts or learning domains (simple, complicated, complex and chaotic) and their governing principles. We draw upon literature from the fields of learning theory, complexity theory and risk governance to illuminate this challenge, suggesting that a lack of security, completeness and depth of knowledge attainment could impact the fitness of learners entering uncertain labour markets. After signposting gaps in existing thinking, we introduce a direct content analysis of a systematic review, comprising 384 aspects of literature from the fields of pedagogy and andragogy, to demonstrate a lack of conscious consideration for the multiple learning contexts that exist in adult learning design. This gap illustrates an opportunity for learning designers in the field of adult learning to improve the depth, completeness and security of learning experiences. To this end, we present a new Learning Awareness Framework as a feedback mechanism to guide learning design. Our approach is distinctive in the way we link literature in the field of learning to complexity theory and risk governance, where our perspective appears to be unique to the field of adult learning. Finally, our recommendation of a Learning Awareness Framework to guide learning design according to multiple contexts appears to be the first of its kind.
Keywords: learning design; knowledge domains; adult learning; Learning Awareness Framework
Our research is a rare exploration of context-driven (simple, complicated, complex and chaotic) learning design, as per Snowden and Boone’s (2007) widely cited Cynefin framework. Our study was stimulated by labour market challenges and their potential links to the security, depth and completeness of learning, in relation to learning design within tertiary education and work-based learning and development. Here, we found that adult learning literature often assumes most learning takes place in complex contexts (e.g. Blaschke & Hase, 2016). But we also found little, if any, conscious consideration of what this means regarding the nature of, or governing principles for, complexity. In speaking of governing principles, which we will discuss in detail later, we are referring, for example, to the way in which the content of learning creates context, which in turn influences feedback in learning design. The correct response to “2+2=?” is always “4”, so feedback is corrective, based on there being one correct answer; by contrast, the question “what impact will Artificial Intelligence have on the hospitality industry over the next ten years?” calls for advanced conceptual feedback, because the answer is not yet known (Hattie & Yates, 2014). Furthermore, there is seemingly even less discussion about how variation in the characteristics of simple, complicated and complex real-world contexts (e.g. the difference between teaching the correct way to conduct an emergency shutdown of a piece of machinery and conducting a strategic environmental analysis of a business) influences learning design. While multiple learning contexts are often alluded to in learning literature, the classification of these contexts, along with an articulation of their nature and implications for learning design, appears to be scarce.
Our research draws upon theory and practice to explore links between adult learning, complexity, knowledge domains or contexts, and risk governance. We discover gaps that bring us to question whether learning designers are unconsciously developing shallow, incomplete and insecure learning experiences. We also find, in an environment where adult learners are often reliant on self-directed learning opportunities, a need to create a greater awareness of the governing principles of these multiple learning contexts to improve the operationalisation of learning in real-world conditions. Our enquiry led us to a direct content analysis of a systematic literature review, incorporating 384 aspects of learning literature, to explore how often multiple learning contexts are considered, as an indicator of conscious understanding of their governing principles as a driver for learning design. If there is a lack of understanding, then it is fair to suggest that learning designers could be unconsciously developing graduates who will struggle to find fitness with uncertain labour markets.
Our findings lead to recommendations for a new Learning Awareness Framework as a feedback mechanism to guide learning design according to contexts driven by multiple real-world knowledge domains.
2. The problem
If context drives learning design and context is not being adequately considered in the design process, then we can expect to see indicators of shallow, incomplete or insecure learning (Hattie & Yates, 2014). Such indicators appear to exist in the general environment; we are not claiming poor learning design to be their root cause, but we do suggest that it could be a contributing factor. For example, Roegiers (2016) reports that the 2008 financial crisis resulted in the loss of 20 million jobs globally, where a predicted recovery has been limited by skills shortages. This position is supported by the Chartered Management Institute in the United Kingdom, which found that skills gaps result in 33% of working graduates being underemployed in low-paying jobs. Furthermore, the Office for National Statistics (ONS, 2016) found that UK productivity lags by 18% in comparison with other G7 nations and that 25% of people in employment report a gap between their skills and those required to be successful in their role. Significantly, the World Economic Forum (WEF, 2016) suggests that critical skills gaps exist, where employers, employees and graduates fail to recognise the fast-changing nature of the labour market landscape and the alignment of available skills.
A wide range of occupations will require a higher degree of cognitive abilities—such as creativity, logical reasoning and problem sensitivity—as part of their core skill set. More than [52%] of all jobs expected to require these cognitive abilities as part of their core skill set in 2020 do not yet do so today, or only to a much smaller extent. (p. 24)
What is concerning is that skills such as those outlined by the WEF (e.g. logical reasoning) should surely be developed as an outcome of learning experiences designed to ensure the fitness of adult learners for today’s labour market landscapes. If people are not finding fitness with such landscapes, which seems to be the case, then it seems logical to consider whether those who design learning experiences are developing appropriate control mechanisms (learning experiences) to prepare learners for the contexts they will experience in real-world conditions – in other words, are learners adequately prepared to recognise the simple, complicated or complex context they find themselves in and respond accordingly (e.g. apply appropriate learning strategies)?
2.1 Learning design
Here, we are interested in learning design related to adult learning, regardless of setting. In speaking of adults, we adopt Knowles, Holton, and Swanson’s (2015) psychological definition, where a person is considered an adult when they become self-directing or have an awareness of self-concept. In considering learning design, we are speaking of the design of learning activities that underpin the traditional Alignment Principle, specifically the alignment of curriculum, teaching methods, learning environment and methods for assessment in the design of learning (Biggs & Tang, 2011).
While definitions of Learning Design vary, the main elements tend to include greater focus on “context” dimensions of e-learning (rather than simply “content”), a more “activity” based view of e-learning (rather than “absorption”), and greater recognition of the role of “multi-learner” (rather than just single learner) environments. (Dalziel, 2003, p. 594)
Our assumption is that alignment happens by design when the governing principles, or structure, of the context for learning are understood by the learning designer or, as with self-directed adult learners (Knowles et al., 2015), the learner themselves. As such, we are seeking to discover how governing principles within a given context drive the design of learning to enhance the attainment of intended learning outcomes. Dalziel (2003) notes that learning design is mainly discussed in the context of e-learning, which, arguably, is still the case today. However, Knowles et al. (2015) contend that learning design principles are transferable and have relevance in both physical and digital environments, and, as such, we adopt learning design as the focus for our discussion. Moreover, learning design approaches contribute descriptive frameworks (e.g. the Larnaca Learning Design Framework) to guide teaching and learning activities, linked to the co-creation and review of learning experiences by educators and learners alike (Dalziel et al., 2016). In order to better understand context-driven learning design, where good design improves the depth, completeness and security of learning, we need to consider underpinning adult learning theory.
2.2 Learning theory
Knowles et al. (2015) argue that self-concept and self-directedness, or an understanding of being responsible for one’s actions, decisions and life, differentiates andragogy (adult education) from pedagogy (education of children). Furthermore, the authors set out six adult learning characteristics for consideration in adult learning design:
learner’s need to know why they need to learn
self-concept of the learner (autonomous, self-directing)
prior experience of the learner (mental models)
readiness to learn (real world context)
orientation to learning (problem centric)
motivation to learn (intrinsic value, personal payoff). (p. 4)
Kenyon and Hase (2001) extend andragogy through the concept of heutagogy, repositioning self-directedness as self-determinedness: learning that takes place beyond the realms of formal teaching, where learners learn how to learn and, linking to connectivism, “[learning] is concerned with learner-centred learning that sees the learner as the major agent in their own learning, which occurs as a result of personal experiences” (Hase & Kenyon, 2007).
There are several considerations here. First, if real world contexts influence adult learning design, then it seems logical to consider a need for learning designers to understand the governing principles of these contexts. Furthermore, if adult learners are self-directing, then, again, it appears logical to assert that learners need to understand the governing principles of the contexts within which they are operating; in doing so, they can develop better alignment between personal learning strategies and the nature of real world contexts. This awareness is critical, where authors such as Hyde and Phillipson (2014) contend that most adults are engaged in non-formal (assessed, but non-certified) or informal (non-assessed and non-certified) learning in the workplace. Beyond informal learning, education bodies, such as the Higher Education Academy (Ryan & Tilbury, 2013), have called for a greater focus on the co-creation of learning experiences between learners and educators, where learners, therefore, need to understand the governing principles of the context within which they are co-designing. Such calls require the learner to understand not only what is being learned, but how to learn (Hattie & Yates, 2014). It is, therefore, equally important to consider the way in which learners construct schema and mental models, where under-preparation of the learner to learn could negatively impact learning attainment (Knowles et al., 2015). Here we are suggesting that a lack of alignment between a learner’s mental model and the characteristics of the learning domain could inhibit the security, depth and completeness of learning. This could, in turn, contribute to reports of skills gaps in the labour market, where employers may be describing as a skills gap what is actually a failure on the part of the employee to transfer learning experiences, or to apply appropriate learning strategies, to the context they find themselves in.
A discussion on how mental models and schema influence learning leads to a consideration of cognitive load theory. In its simplest form, cognitive load theory proposes learning to be about the construction and automation of mental schemas (de Jong, 2010). Such schemas are regulated by the intrinsic load, or nature of the context, influenced by the level of interactivity or ambiguity between elements within a learning domain (Hattie & Yates, 2014). De Jong (2010) explains that low interactivity material, consisting of single or simple elements, can be learned in isolation. However, high interactivity material consists of multiple elements, which can only be understood via a holistic approach, where the complexity of learning is increased. Such discussion is significant, as it directs learning designers to consider variation in learning strategy or design according to the number of variables and their interactivity within the learning context. Therefore, if learning designers fail to consider variation in these governing principles (variation in the number of, and interactivity between, elements), it is fair to suggest that they could create disorder for the learner in relation to the development of schema and mental models.
The need to clarify context as a driver of learning and learning design has been widely discussed over the last two decades. For example, Herrington and Oliver (2000) cite multiple studies in reporting that knowledge gained in formal education settings is often not retrieved in real life because people ignore “the interdependence of situation and cognition” (p. 23); a challenge some are addressing via problem-based and project-based learning experiences (e.g. Krajcik & Blumenfeld, 2006). For example, an inability to retrieve knowledge could indicate incomplete learning, where the learner, faced with a real world situation, does not understand how to bridge the gap between what they know and its operationalisation in a real world setting, resulting in employers reporting skill deficiencies. Alternatively, a lack of knowledge retrieval might indicate misconceived or incorrect knowledge, where the learner’s beliefs do not align with real world contexts (Hattie & Yates, 2014). If Herrington and Oliver (2000) are correct, then it is necessary for the learner or learning designer to understand the conditions under which knowledge was misconceived (Chi, 2008), which leads us to question the potential causes for such a lack of congruence between what is known and its real world application. The problem is that when learning is incomplete or in conflict with real-world context, it is insecure (Hattie & Yates, 2014), which could be part of the reason for the challenges being reported in today’s labour markets. Such challenges of variation in context and, therefore, schema bring us to consider wider perspectives on learning and how they influence design. Greeno, Collins and Resnick (1996) signpost three key perspectives on learning: associationist, cognitive and situative.
Our interest is in how these perspectives influence learning design, as opposed to a critical analysis of perspectives concerning adult learning theory, which led us to a valuable overview provided by Mayes and de Freitas (2004).
The associationist perspective is grounded in behaviourism, exemplified by the work of Gagne (1985). Gagne focuses on task analysis, where learning tasks build according to the complexity of the learning context. For example, in Gagne’s approach to the teaching of intellectual skills, simple components are identified, and learning related to these components becomes a prerequisite for more complex tasks (Gagne, 1985). Gagne’s assumption seems to be that the context can be fully known and the variables for learning isolated and sequenced. We find such assumptions to be problematic, for reasons that will become clearer as our discussion progresses. For example, if de Jong (2010) is correct and complex, nonlinear, higher interactivity variables need to be viewed holistically, then how can learning be reduced to a linear sequence? Here, it would seem that Gagne’s approach aligns with the needs of low interactivity contexts, but could create disorder in high interactivity contexts.
The cognitive perspective draws upon cognitive research, focusing on the process of interpreting and constructing meaning through activity, which links to Dalziel’s (2003) definition of learning design. The cognitive focus is not about creating associations, but, instead, “knowledge acquisition [is] viewed as the outcome of an interaction between new experiences and the structures for understanding that have already been created” (Mayes & de Freitas, 2004, p. 9). What is interesting here is the focus on “structures for understanding” that already exist, where it seems fair to suggest that a lack of understanding of context on the part of the learner and learning designer could lead to insecure structures that would impact the ability of the learner to apply learning in the real world. Such considerations link to the situative perspective on learning, where it is posited that the security, depth and completeness of learning increase when learning is situated within real world contexts or social practice (e.g. project-based learning) (Mayes & de Freitas, 2004) – put simply, learning and its successful application in real world conditions are driven by the context being experienced by the learner. As we will come to explain, we find the emphasis on the alignment between learning design and the characteristics of real-world context to be important. Of note, Herrington and Oliver (2000) offer a useful framework for the operationalisation of situative learning design, to which we will return later:
Provide authentic contexts that reflect the way knowledge will be used in real life
Provide authentic activities
Provide access to expert performances and the modelling of processes
Provide multiple roles and perspectives
Support collaborative construction of knowledge
Promote reflection to enable abstractions to be formed
Promote articulation to enable tacit knowledge to be made explicit
Provide coaching and scaffolding by the teacher at critical times
Provide authentic assessment of learning within tasks (p. 25)
What is being established here is that context, which can vary according to the number of, and level of interactivity between, variables, is an important driver for learning design. If this is the case, then learning designers must consider whether there is a single learning context, with varying degrees of complexity, or, as we will argue, multiple learning contexts. Multiple learning contexts are often alluded to in literature but, equally often, are not explicitly identified or discussed, which leads us to consider whether there is a clear understanding of the influence of context on learning design.
2.3 Learning contexts
The context for learning is often described in adult learning literature as complex.
“In the twenty-first century, we are mostly faced with complex and chaotic environments in which events are rapidly changing and where the relationship between cause and effect is difficult to establish. This means that normal planning and problem-solving are inadequate” (Blaschke & Hase, 2016, p. 29)
We agree that from a macro perspective most learning contexts could be described as complex. However, we believe that multiple learning contexts are revealed at the micro level (i.e. a single piece of content within a wider course or programme). If we are correct, then there is a need for learning designers and self-directed learners to consciously consider the governing principles for such contexts in order to improve the depth, completeness and security of learning.
“When designing a learning environment…there are a multitude of design decisions that must be made. Many of these design decisions are made unconsciously without any articulated view of the issues being addressed or the tradeoffs involved. It would be better if these design decisions were consciously considered, rather than unconsciously made”. (Collins, 1996, p. 347)
To support our claim, we turn to the highly cited Cynefin model (Snowden & Boone, 2007), which illuminates variation in contexts according to simple, complicated, complex and chaotic knowledge domains. The authors also note a fifth domain, disorder, which occurs when people lack awareness of the context within which they are operating. As per Collins (1996), we argue disorder, in learning design, to be created through a lack of conscious awareness or articulated view of the governing principles of the learning context. Here, knowledge domains (Fig. 1) are governed by principles that, surely, must be consciously considered by learning designers interested in improving learning attainment and the trade-offs required to enhance the application of learning in real-world conditions. The following is an explanation of Snowden and Boone’s (2007) research and its relevance to our discussion.
Fig. 1: Snowden and Boone (2007) Cynefin Framework
In Fig. 1, the simple domain is an environment that is stable, outcomes can be known before action is taken, and cause and effect are apparent to all. This is a domain of repeating patterns and best practice. There is usually only one right answer (e.g. which side of the road should you drive on when in England?), which is seen as self-evident, widely accepted or undisputed. Here, we are speaking of low interactivity between variables, where learning can be designed according to the linear principles set out by Gagne (1985).
According to Snowden and Boone (2007), in the complicated domain outcomes can again be known before actions are taken. However, this is a branching dimension, where at each decision point there are better or worse workable solutions to a given context. Cause and effect are not clear to everyone but can be known, and therefore subject matter experts are used to guide people toward better responses or practice (e.g. under typical conditions, which is the fastest route when driving from New York to Los Angeles? In response, a person could engage a logistics expert who regularly negotiates such a challenge). As with the simple domain, learning here can be sequenced: variables are known and structured as part of a hierarchical design approach that progresses learning toward knowledge of more complicated tasks.
According to Snowden and Boone (2007), the complex domain is one of emergent answers, dictated by a dynamic (changing) environment with a high level of interaction between variables. Cause and effect are unclear, interactions between variables are nonlinear, and outcomes cannot be known beforehand. The nonlinear nature of the domain creates an environment of emergent outcomes. The use of a single subject matter expert limits understanding of this domain, where a single field of view cannot provide the holistic view required to make sense of the world. Instead, multiple experts or feedback loops are required to identify variables, their interactions and potential outputs (e.g. predict who will win any given sporting event at the outset of a tournament or season). “Though a complex system may, in retrospect, appear to be ordered and predictable, hindsight does not lead to foresight because the external conditions and systems constantly change” (Snowden & Boone, 2007, p. 70). As discussed earlier, these governing principles are problematic for the associative learning perspective and the work of Gagne (1985). However, Herrington and Oliver’s (2000) situative learning framework does seem to respond to the characteristics of the complex domain. For example, the authors acknowledge the need for multiple roles and perspectives in the construction of knowledge, which is useful to learning designers tasked with developing the security, depth and completeness of learning in this domain.
In the chaotic domain, searching for right answers is pointless, where “the relationships between cause and effect are impossible to determine because they shift constantly and no manageable patterns exist—only turbulence” (Snowden & Boone, 2007, p. 71) (e.g. predict the weather in London, for the month of December, five years from now). Snowden and Boone’s argument that the chaotic domain produces novel practice is interesting, but potentially misleading in the context of learning design. For example, there could be an argument that emergent practice in the complex domain also produces novel outcomes that require novel practice, which runs against the descriptor used by the authors. However, we find that the Cynefin framework still provides appropriate authority for our discussion. Moving forward, we will not be exploring the chaotic domain further, as the notion that the search for right answers is pointless leads to debate that, while interesting, is beyond the scope of our research.
If learning designers are to consciously consider design decisions, then we argue Snowden and Boone’s (2007) descriptors to be useful for differentiating learning design according to multiple contexts or knowledge domains. For example, there are clear differences between the simple domain (known outcomes and one best answer) and the complex domain (unknown outcomes and emergent answers). We contend that if general environments consist of multiple knowledge domains, the potential for unintended consequences or disorder exists where learning design or self-directed learning strategies lack alignment with the governing principles of these domains. Here, as per Collins (1996), we argue that variation in context needs to be consciously considered when engaging in learning design. Failing to engage consciously in learning design decisions means that learning designers are relying on unconscious instinct, where the potential for unintended consequences from the operationalisation of potentially insecure, shallow and incomplete learning surely increases.
Our reference to unintended consequences from learning borrows language from complexity theory, where actions taken without a wider sense of the environment increase the potential for unforeseen consequences (Taleb, 2007). We believe the term to be useful in discussing self-directed learning strategies and learning design. For example, Hattie and Yates (2014) claim that inefficient learners use learning strategies they are comfortable with, even though such strategies are overly rigid, fail to fit the context and produce inefficiencies. Within tertiary education, Treagust (2006) explores learning design in the sciences, finding that “research evidence also suggests that experienced teachers frequently do not appreciate the problems encountered by students in learning complex science concepts” (p. 6). However, Treagust’s response to the “complex” science challenge involves a two-tier multiple-choice examination, where such a decision seems to demonstrate an example of how learning design can create disorder through a lack of understanding of the characteristics of the complex domain. Treagust’s solution to the complex challenge sees the learner select an answer from multiple options and then, in the second phase of the question, explain the rationale for their choice. Given the nature of the complex domain (Fig. 1), it leads one to consider how a limited number of potential responses, with, apparently, one correct answer, aligns with a domain that, by nature, is one of emergent answers, which can only be known in hindsight. Conversely, if Treagust is correct and learning attainment can be appropriately tested using a single correct response to a multiple-choice question, then it is inappropriate to describe the science concept as complex.
What we are attempting to illustrate here is a situation from literature, where, as per the Cynefin framework, either the identification of the learning context is incorrect, or the context identification is correct, in which case the learning design fails to align with the context of the domain.
Considering Hattie and Yates’s (2014) signpost toward the risk of inefficient learning strategies on the part of self-directed learners, and taking Treagust’s (2006) misalignment of language or practice to be indicative of wider practice, there seems to be a need for a feedback mechanism to assist in driving alignment between the Cynefin knowledge domains (multiple contexts) and learning design/strategies. Such alignment is important when considering the calls for co-creation of learning between learning design experts and learners. For example, Hattie and Yates (2014) find that too often learners are expected to participate in high-level decision-making without being given guidance relating to the use of appropriate tools. The authors also find that feedback mechanisms create social learning models that are more durable, secure and valid. Hattie and Yates provide further support for our position, claiming such a feedback mechanism “enables the individual to move forward, to plot, plan, adjust, rethink, and thus exercise self-regulation in realistic and balanced ways” (p. 66). Dalziel et al. (2016), meanwhile, find that feedback mechanisms scaffold learning design through descriptive frameworks that guide the design of learning experiences.
To progress our thinking, we need to consider whether the current body of learning literature considers the alignment of learning design according to the contexts set out in the Cynefin knowledge domains. However, having introduced the potential for unintended consequences from learning, we first need to understand better the risk associated with the misalignment of learning design against Snowden and Boone’s (2007) Cynefin domains.
2.4 Risk and alignment of learning design
As discussed, many adult learning approaches are founded on the premise of self-determination. However, from adult learning literature, one could form the belief that all adult learners can engage in self-determined learning with little or no scaffolding from subject matter experts. For example, Hase and Kenyon (2007) argue, “[adult learning] is self-determined, the path to learning is defined by the learner and is not established by the teacher. As a result…learning happens in a non-linear format” (p. 28). Also, “the teacher might think that he or she can control the learning experience, but we think the teacher’s role is limited to the transfer of knowledge and skills” (pp. 112-113). Such thinking appears limited and potentially in conflict with Dalziel’s (2003) view on learning design. Furthermore, Knowles et al. (2015) caution against such beliefs, arguing that adults can exhibit very different learning behaviours according to the learning domain. For example, learners with a low tolerance for ambiguity could feel confident in a simple learning domain (Fig. 1), with its high levels of certainty and lack of conflict, but require support in a complex domain, with its high levels of ambiguity and conflict. Of greater concern, Hattie and Yates (2014), discussing high-performance teaching practice, find, “the idea that secure knowledge emanates automatically from personal discovery is flawed and incorrect” (p. 77). Here it could be useful to speculate as to what could happen if an adult learner took a self-determined approach to learning to drive a car. In imagining the potential risk associated with such an approach, is it inappropriate to provide subject matter experts to guide learning experiences?
We find ourselves asking whether such a self-determined approach develops appropriate feedback and alignment between learning strategy and the nature of the context or learning domain, or whether, as in the case of organisations, it slows the rate of learning, thereby impacting safety and quality (Lekka & Sugden, 2012). Instead of focusing on the value, or lack thereof, of teacher-led intervention, perhaps there is a need to inform the readiness of the learner to learn by providing a framework or feedback mechanism that illustrates the way in which the real-world context they find themselves in drives their learning strategy. Furthermore, it logically follows that adult learners and learning designers need to understand better the risk of insecure, incomplete or shallow learning as an output of misalignment. Such thinking brings us to the International Risk Governance Council’s (IRGC) Risk Governance Framework (2012).
The IRGC guides tactics for mitigating risk or uncertainty within what the Council refers to as simple, complex, uncertain and ambiguous domains. Specifically, the IRGC segments the general environment (Fig. 2) in a way that resonates with the Cynefin framework (Fig. 1). Moreover, the way in which the IRGC explores the mitigation of risk in each domain (Fig. 2) is congruent with the need to align learning design with the governing principles of multiple real-world contexts (Mayes & de Freitas, 2004).
Fig. 2: IRGC (2012, p. 20) stakeholder involvement framework
When comparing the IRGC framework to Cynefin, the complicated domain has been labelled complex and the complex domain uncertain (Fig. 2). We are not debating the IRGC labels here, but it is possible to map IRGC (2012) descriptors against Cynefin (Table 1). In Table 1, we have also considered the alignment of learning design, utilising Herrington and Oliver’s (2000) situative framework, introduced earlier. In contrasting Fig. 2 and Table 1, it is apparent that the scope of engagement with subject matter experts increases as the level of interactivity between variables rises with variation in context. We also believe it to be evident that these perspectives provide a framework for learning design activities. However, we argue that such links are far too discrete in adult learning literature, if they exist at all.
Table 1: learning alignment with IRGC (2012, p. 20) risk governance framework
IRGC (2012) recommendations for risk mitigation by domain (Fig. 2) drove us to a deeper consideration of the underpinning theoretical rationale for the governing principles of the multiple knowledge domains that drive learning design, which led us to Ashby’s (1956) Law of Requisite Variety.
Ashby (1956) emphasises the need to look at the whole, including the wider environment, to understand cues requiring a response from any mechanism designed to regulate a system. Here, we see learning design as the mechanism designed to regulate learning and cues being the level of interactivity between variables (simple, complicated and complex domains). We argue that if the cues alluded to by Ashby are not consciously understood by the learning designer or self-directed learner, the potential for insecure, shallow and incomplete learning increases, which could result in reports of skills gaps, where learners are not able to achieve fitness with labour market landscapes.
Ashby (1956) sets out the conditions necessary for learning design, as the regulating mechanism for learning, to be successful via his Law of Requisite Variety: “If a system is to be stable, the number of states of its control mechanism must be greater than or equal to the number of states in the system being controlled” (p. 207). Therefore, we propose that learning design which lacks the requisite variety, or identified preconditions (conscious consideration of the level of interactivity between variables), within its scope has the potential to underperform or fail. Our use of Ashby resonates with Collins (1996), where the multitude of design decisions facing learners and designers requires the conscious consideration and articulation of the implications of decisions taken. For example, referring to Treagust’s (2006) use of multiple choice questions, it again needs to be considered whether multiple choice assessment provides the requisite variety to assess knowledge and understanding of variables and their interactions within a complex context. When using the descriptors set out in Table 2, we fear the answer is no.
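As a toy illustration (ours, not Ashby’s or drawn from the learning literature), the law can be read as a counting constraint: each distinct state of the environment requires a matching regulator response, so a regulator with fewer states than the system it regulates must tolerate more than one outcome.

```python
import math

# Toy reading of Ashby's (1956) Law of Requisite Variety -- an illustrative
# sketch of ours, not a formula from the paper. If a regulator (here, a
# learning design) has fewer states than the system it must regulate, the
# best achievable outcome variety exceeds one, i.e. the goal outcome cannot
# be held for every context.

def min_outcome_variety(disturbance_states: int, regulator_states: int) -> int:
    """Lower bound on the number of distinct outcomes the regulator must tolerate."""
    return math.ceil(disturbance_states / regulator_states)

# A design offering two strategies against the four Cynefin contexts
# (simple, complicated, complex, chaotic):
print(min_outcome_variety(4, 2))  # 2: more than one outcome, so control fails
print(min_outcome_variety(4, 4))  # 1: requisite variety, goal attainable
```

On this reading, a learning design aligned with only some of the contexts it faces cannot, even in principle, hold learning outcomes secure across all of them.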
To illustrate the challenge further, we return to Blaschke and Hase (2016), who, in discussing learning design and emerging pedagogies, introduce Cynefin as an “elegant model” for learning design. The authors cite the work of Chattopadhyay (2014) as authority for their claim; however, this is a blog post containing little discussion of learning design in multiple contexts. Chattopadhyay claims that people “learn then work” in the complicated domain and “work then learn” in the complex domain, which is confusing, as it suggests that complexity only occurs in a work environment and that people do not learn before engaging in work. Chattopadhyay also claims that learning in the complicated domain involves top-down learning, which we believe to be unhelpful in guiding learners to better understand learning contexts and the alignment of personal learning strategies. Finally, Blaschke and Hase (2016) lay claim to “heutagogy as a holistic model for advancing lifelong learning within multiple contexts” (p. 27). However, we struggle to find any reference to differentiation in learning design according to the multiple contexts defined by the Cynefin domains, which appears to highlight a lack of understanding of the governing principles attached to these contexts and the risk of shallow, insecure and incomplete learning that could be brought about by following such approaches in learning design.
Such thinking brings us to consider whether learning design literature aligns with the nature of complex environments, particularly where complexity is put forward as a dominant context for learning. We believe an exploration of adult learning literature could reveal whether authors such as Collins (1996) are still relevant, in that design decisions are being made without conscious consideration of the context or the required trade-offs. If such authors are found to be still relevant, then it would suggest that learning design could be failing to align with the contexts that drive it. Such misalignment, brought about by a lack of requisite variety, could create shallow, incomplete or insecure learning experiences or learning strategies, which could increase the risk of conflict in, or unintended consequences from, the operationalisation of learning, and impair the ability of the learner to maintain fitness with labour market landscapes. We hope practitioners would agree that such an outcome, if avoidable, is unacceptable and, as such, a feedback mechanism to assist in aligning learning design with multiple contexts could be both necessary and important, a need supported by recent learning design literature (e.g. Dalziel et al., 2016).
2.5 Research questions
How do existing learning frameworks and teaching frameworks take into account alignment of teaching, learning and assessment according to the governing principles of multiple knowledge domains, as set out in the Cynefin framework (Snowden & Boone, 2007)?
How do existing learning design frameworks and personal learning strategy narratives take into account alignment of teaching, learning and assessment according to the governing principles of multiple knowledge domains, as set out in the Cynefin framework (Snowden & Boone, 2007)?
To hone our enquiry, we set three sub-questions.
When different knowledge domains or contexts are introduced in literature, which ones, in relation to the Cynefin framework, are discussed?
Where the complex domain is discussed in literature, does the definition of the nature of the domain align with Cynefin descriptors?
Where the complex domain is discussed in relation to learning, does the learning strategy, teaching framework or learning design align with IRGC (2012) recommendations for mitigating risk in the complex domain – specifically, engagement with a breadth of expertise?
3 Method

We adopted directed content analysis (Zhang & Wildemuth, 2009) to analyse a systematic literature review, conducted using Khan, Kunz, Kleijnen and Antes’s (2003) five stages: framing questions; identifying relevant publications; assessing study quality; summarising evidence; and interpreting findings.
During the research, we expanded the literature review to include all learning literature, as we discovered much of the extant literature did not distinguish between andragogy and pedagogy when discussing adult learning or learning in general. This lack of distinction resulted in a change to our research questions, where we removed specific references to adult learning. We believe this change increases the relevance of our study to those interested in pedagogy.
The following sets out the sampling process, coding method and inter-rater reliability. First, to confirm that such an enquiry did not already exist, we conducted a search using Google Scholar, BASE, CORE, CiteULike and Eric, using the search terms ‘systematic literature review of “learning domains”’; “systematic literature review of learning strategy”; and “systematic literature review of learning design”. None of the returns responded to our research questions.
3.1 Literature selection
The systematic literature review was developed using Google Scholar, BASE, CORE, CiteULike and Eric search engines, applying the following search terms, including the use of “AND” as a Boolean Operator to surface links between concepts within the terms: “teaching methods”; “learning frameworks”; “adult learning theory”; “andragogy research”; “heutagogy research”; “learning domains”; “learning strategy”; “learning strategy frameworks”; “teaching, learning and assessment practice”; and “learning design”. The search spanned the period 1990 to 2016, as we wanted to allow for the emergence of the situative learning perspective in the early 1990s (Mayes & de Freitas, 2004). We recorded the nationality of the lead institution/author, to explore the potential for differentiation in approach between countries. We then filtered the literature according to the following criteria:
The subject of the literature had to relate to learning strategy, teaching frameworks, theory building or learning design.
Literature relating to, for example, machine learning, physiology, psychological schema categorisation, and game learning was excluded.
Journal and conference papers were required to be peer reviewed.
Books or book chapters had to be published by mainstream publishers, with self-published books/chapters being excluded.
Duplicates, where multiple returns of a single source occurred, were removed from the data set.
Dew (2006) stresses that the cultural and historical bias of the researcher, or selectivity bias, could impact data selection. We attempted to overcome this through the breadth and depth of document selection and by employing a mixed sampling strategy. To enhance the credibility of our approach, we subjected our search parameters and coding protocol to expert audit triangulation (Patton, 2002), with no issues to report.
3.2 Sampling strategy
The size of the literature pool available to us was unknown. We therefore used a sample size calculator to ascertain the sample required to achieve a 95% confidence level with a 5% margin of error, within a population of unknown size. The calculation returned a requirement for 384 pieces of literature, which were collected and analysed between June 2014 and May 2016. We used a selective sampling strategy (Coyne, 1997), informed by “a decision made prior to beginning a study to sample subjects according to a preconceived, but reasonable initial set of criteria” (p. 628). To minimise selectivity bias, we employed elements of randomness within the literature selection process (Tranfield, Denyer & Smart, 2003). For example, where search engines such as Google Scholar provided multiple useful returns on a single page, a random number generator (www.randomizer.org) was employed to select 50% of the articles from that page, selecting 23 aspects of literature (6% of the study). This approach was limited, as much of the literature in our sample was researched through academic databases encompassing a broad variety of disciplines, which required us to hand-select articles.
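The paper does not name the calculator used, but the figure of 384 is consistent with Cochran’s formula for an unknown (effectively infinite) population; a minimal sketch, with our own function name and the conventional most-conservative assumption of p = 0.5:

```python
def required_sample(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
    """Cochran's sample size formula for an effectively infinite population.

    z: z-score for the confidence level (1.96 for 95%)
    p: assumed population proportion (0.5 maximises the required sample)
    e: margin of error (here +/- 5%)
    """
    # Online calculators typically round the raw value (384.16) to the nearest
    # whole number, giving the 384 reported in the study.
    return round((z ** 2) * p * (1 - p) / (e ** 2))

print(required_sample())  # 384
```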
3.3 Coding protocol
We applied Fereday and Muir-Cochrane’s (2006) six-stage coding framework: develop a coding manual; test the reliability of the codes; summarise data and identify initial themes; apply the template of codes and additional codes; connect codes and identify themes; corroborate coded themes. We developed descriptive codes (e.g. the literature acknowledges multiple knowledge domains) and In Vivo codes (e.g. the literature states that learning takes place in “complex” environments) (Saldaña, 2016). In developing the descriptive codes, we applied a constant comparative approach to our analysis (Corbin & Strauss, 2015).
We submitted the coding manual to expert audit triangulation (Patton, 2002), where two Higher Education subject matter experts reviewed it with no issues to report. As recommended by Mayring (2000), our coding protocol was further checked after 10% of the material had been coded, again with no issues to report.
Coding outputs were subject to triangulating analysis (Patton, 2002), where an additional analyst explored the same data set, producing 40 queries. All queries were found to be recording errors and were resolved through negotiation, with both analysts conducting a second reading of the material in question.
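As an aside, a simple percent-agreement figure (not a statistic the authors report, and assuming the 40 queries were spread across the 384 sampled items rather than counted per code) can be computed as:

```python
def percent_agreement(total_items: int, disagreements: int) -> float:
    """Share of coded items on which the two analysts agreed, as a percentage."""
    return (total_items - disagreements) / total_items * 100

# 40 queries across the 384 pieces of literature (our assumption; the paper
# does not state how queries map onto items):
print(round(percent_agreement(384, 40), 1))  # 89.6
```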
4 Findings and discussion
The systematic literature review returned 239 (62%) journal articles, 83 (22%) conference papers and 62 (16%) books or book chapters. The sample was divided into themes, with 60 (16%) aspects of literature being focused on learning strategies, 77 (20%) on theory building, 139 (36%) on teaching or delivery frameworks and 108 (28%) on the construction of learning environments/objects/assets. The literature included representations from North America (135 or 35%), Oceania (111 or 29%), Europe (96 or 25%), Asia (28 or 7%), Africa (13 or 3%) and South America (1 or 0.25%). We note a bias toward Northern Hemisphere literature and invite other researchers to assist in expanding the research to explore literature from Asia, Africa and South America. We tracked the date of publication of resources, but our analysis did not surface any discernible patterns for discussion when location, publication type or date were considered.
Of the 384 pieces of literature, only 147 (38%) discussed differentiation in learning contexts, which is concerning, given the variation in knowledge domains set out in the Cynefin and IRGC frameworks. An example of such discussion comes from Kendal and Stacey (2001), citing Thompson (1992), who stated that “although the complexity of the relationship between conceptions…and practice defies simplicity of cause and effect, much of the contrast in teachers’ instructional emphases may be explained by differences in their prevailing views…” (p.145). Taking this subset of 147, 90 (61%) discussed methods for differentiation in learning design practice. However, within this sample of 90, only 15 (9%) presented learning design recommendations that aligned with the governing principles of the complex domain, as defined by Snowden and Boone (2007), or practice for limiting the risk of incomplete learning, as per IRGC (2012) recommendations. This is concerning and led us to conduct a deeper exploration of literature that cited complexity in relation to learning.
Of the 384 aspects of literature, 166 (43%) were In Vivo coded as describing learning environments as “complex” or being informed by “complexity”. For example, “many instructional design guidelines argue for breaking down complex structures into smaller sizes” (Strobel, Loerisen, Cote, Abram & Bether, 2011, p. 801). Of these 166 pieces of literature, only 51 (31%) provided a definition or explanation of complexity that aligned with that of Snowden and Boone (2007). This lack of consideration for defining the governing principles of the complex domain is problematic, for if the domain is not understood, it is surely not possible for literature to guide learning designers and self-directed learners in making conscious decisions regarding the mechanisms they use to regulate the learning experience. Ultimately, in finding that 69% of this subset did not articulate a definition, we find authors such as Collins (1996) to be still relevant, in that consciously articulated views of complexity and the trade-offs involved in developing TLA alignment are lacking. With this being the case, we find there to be a gap in the knowledge base, where there appears to be a critical need for a feedback mechanism to assist learning designers and self-directed learners in aligning the design of the learning experience with the context of the learning domain. Furthermore, in investigating evidence for conscious consideration of other Cynefin domains, only 3 of 384 aspects of literature (1%) described learning environments as “chaotic”, 11 (3%) referred to “complicated” environments, and 39 (10%) were coded as discussing “simple” contexts. Interestingly, and potentially of concern, 165 (43%) did not consider learning design according to any named knowledge domain, and zero (0%) considered all of the knowledge domains set out in the Cynefin framework (Fig. 1).
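The nested percentages above can be checked against the stated counts (our own arithmetic on figures reported in the text, rounded to the nearest whole percent):

```python
def pct(part: int, whole: int) -> int:
    """Proportion of a subset, as a whole-number percentage."""
    return round(part / whole * 100)

total = 384
print(pct(147, total))  # 38: discussed differentiation in learning contexts
print(pct(90, 147))     # 61: of those, discussed methods for differentiation
print(pct(166, total))  # 43: In Vivo coded as "complex"/"complexity"
print(pct(51, 166))     # 31: of those, aligned with Snowden and Boone (2007)
```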
Finally, we return to the subset of 166 pieces of literature coded as explicitly acknowledging complex learning contexts. Within the 15 (9%) that provided a link to learning design practice aligned with IRGC (2012) recommendations for actions in such a domain, there was evidence of well-considered research. For example, Abraham and Jones (2011), in discussing authentic assessment in adult education, state, “There is…a need to fully communicate with students about the rationale for different assessment tasks and types” (p. 5). The authors cite McMillan’s (2000) framework for effective assessment (e.g., good assessment uses multiple methods) as the basis for their study, though, interestingly, McMillan (2000) does not align methods to the governing principles of the different knowledge domains. The assessment design itself, however, engaged multiple perspectives:
“[the assignment allowed students] to gather and present information in a variety of ways, and from the viewpoints of different individuals and different groups…Although each group had to come to a decision, there may not be a single, clear-cut solution, thus encouraging further enquiry and debate” (Abraham and Jones, 2011, p. 9).
Abraham and Jones’s (2011) assessment design also reflects practice recommended in the IRGC framework (Fig. 2) (i.e. engagement with multiple viewpoints), which also appears to reinforce the relevance of the IRGC framework to learning design practice.
Positives aside, we were disappointed to find Abraham and Jones’s (2011) example to be an outlier, where most considerations for alignment within the subset were discrete. For example, Hase and Kenyon (2007) discuss complexity, but their suggestion of “collaborative learning” (p. 115), while coded as acknowledging learning design needs in a complex environment, did not directly link learning design to the context of the complex domain. Therefore, we find that, in the main, even where the nature of complex environments is acknowledged, the risk of learning designers inducing insecure, incomplete or shallow learning remains high.
Our findings lead us to conclude that the existing adult learning literature does not adequately take into account the nature of multiple knowledge domains, as per the Cynefin framework, when discussing learning design. In the main, multiple knowledge domains are not articulated or consciously considered in existing literature, leading us to conclude that warnings from authors such as Collins (1996) are still relevant and warrant attention. Here, we suggest that learning designers could be disrupting the learning experience, creating disorder in the alignment between design and the learning context, leading to insecure, shallow and incomplete learning, which could negatively impact the ability of the learner to maintain fitness with labour market landscapes. Furthermore, though we do not claim it to be the sole cause, we posit that reports of labour market skills gaps could be an unintended consequence of a lack of conscious consideration for the number of states (the variation in interactivity between variables) that need to be governed by learning designers as the designers of the control mechanisms that guide learning.
Our findings suggest the need for a feedback tool that prompts the alignment of learning design and self-directed learning practice according to Snowden and Boone’s (2007) multiple knowledge domains.
By reconstructing the Cynefin framework (Fig. 1) as a Learning Awareness Framework (LAF) (Fig. 3), incorporating IRGC framework descriptors (Fig. 2), we believe it is possible to improve the alignment of learning design and limit the risk of shallow, incomplete or insecure learning. We believe the LAF to be relevant for use in education institutions, in situations where a learner is self-directed, or where the learner is required to operationalise learning where the tolerance for failure is limited, for example in high-reliability organisations, where there is a critical need to accelerate learning to mitigate the risk of catastrophic outcomes, such as the unnecessary death of a patient in a hospital (Lekka & Sugden, 2012).
Fig. 3: Learning Awareness Framework (adapted from Snowden & Boone, 2007; Hattie & Yates, 2014; IRGC, 2012)
To emphasise how the learning design feedback framework could inform the design process at an early stage, thereby preventing disorder, we return to Treagust’s (2006) example of multiple choice questions in assessment (Table 2). Though a simple example, the LAF clearly demonstrates that such assessment fails to align with the context of the complex domain and, therefore, should either not be used or be used with caution.
Table 2: LAF use case in considering multiple choice assessment questions as proposed by Treagust (2006).
Successful learning design improves the depth, completeness and security of learning. Yet we find little research in the area of multiple knowledge domains and their influence upon learning design practice. We contend that where learning design or learning strategies, as the governing mechanisms for the attainment of learning, are in conflict or misaligned within a given knowledge domain, there is an increased risk of developing outcomes that lack congruence with the contexts for the operationalisation of learning. Where this is the case, we argue that the risk of unintended consequences from the operationalisation of learning in real-world contexts, or of insecure, shallow and incomplete learning, increases.
We find that multiple knowledge domains or contexts drive learning design, which means they are an important and a necessary consideration when looking to enhance performance in tertiary education or work-based learning and training. To assist in this work we have developed a unique response to the challenge, in the form of a Learning Awareness Framework (LAF). The LAF is designed to help learning designers to consciously develop learning experiences that acknowledge context cues. We argue that by using the LAF, learning designers can decrease the risk of unintended consequences from the operationalisation of learning and, by improving the ability to understand the contexts for learning, improve the fitness of learners to meet the demands of uncertain labour market landscapes.
At a base level, we believe the LAF could help learning designers to improve the attainment of learning outcomes through context-driven learning design. For example, the LAF could assist heutagogical researchers in overcoming the challenge of developing a holistic response to multiple real-world contexts. Alternatively, it could be used by aspiring high-reliability organisations to accelerate learning and address challenges associated with safety. Furthermore, the LAF has the potential to guide self-directed, non-formal learning, which logically decreases the risk of unintended consequences from the operationalisation of learning. Importantly, the LAF contributes to existing learning design literature, for example the Larnaca learning design framework (Dalziel et al., 2016), where the LAF can guide the creation of learning experiences that align with the governing principles of real-world contexts or knowledge domains.
Our study has attempted to present a novel perspective on learning design in the context of adult learning. However, the principles we have introduced should remain constant and, therefore, transfer to all aspects of learning design, whether tertiary or work-based training or learning in, for example, high-reliability organisations. We, therefore, call upon researchers and practitioners to provide feedback on our work. We also believe that our findings provide direction for case studies relating to the operationalisation of the LAF as a feedback mechanism for the alignment of learning design in a variety of learning environments.
The authors received no financial support for the research, authorship, and/or publication of this article.
Abraham, A., & Jones, H. (2011). Using assignment scaffolding as a blueprint to support authentic assessment and learning in accounting education. Accounting Education or Educating Accountants: Proceedings of the 2nd RMIT Accounting Educators’ Conference: Melbourne, Australia, 14 November 2011.
Ashby, W.R. (1956). An introduction to cybernetics. London: Chapman & Hall
Biggs, J., & Tang, C. (2011). Teaching for quality learning at University, 4th ed. Berkshire: Open University Press.
Blaschke, L.M., & Hase, S. (2016). Heutagogy: a holistic framework for creating twenty-first century self-determined learners. In B. Gros & M. Maina (Eds.), The future of ubiquitous learning (pp. 25-40). New York: Springer.
Chattopadhyay, S. (2014). Heutagogy, self-directed learning and complex work. http://idreflections.blogspot.co.uk/2014/03/heutagogy-self-driven-learning-and.html. Accessed 12th January 2017.
Collins, A. (1996) Design issues for learning environments. In S. Vosniadou, E. De Corte, R. Glaser, & H. Mandl (Eds.) International perspectives on the psychological foundations of technology-based learning environments (pp. 347-361). Mahwah NJ: Lawrence Erlbaum Associates.
Corbin, J., & Strauss, A. (2015). Basics of qualitative research. 4th ed. California: Sage
Coyne, I. T. (1997). Sampling in qualitative research. Purposeful and theoretical sampling; merging or clear boundaries? Journal of Advanced Nursing, 26, 623–630.
Dalziel, J. (2003). Implementing learning design: the learning activity management systems (LAMS). In G.Crisp, D.Thiele, I.Scholten, S.Barker and J.Baron (Eds), Interact, Integrate, Impact: Proceedings of the 20th Annual Conference of the Australasian Society for Computers in Learning in Tertiary Education. Adelaide, 7-10 December 2003. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.130.4886&rep=rep1&type=pdf
Dalziel, J., Conole, G., Wills, S., Walker, S., Bennett, S., Dobozy, E., Cameron, L., Badilescu-Buga, E. & Bower, M. (2016). The Larnaca declaration on learning design. Journal of Interactive Media in Education, 2016(1).
de Jong, T. (2010). Cognitive load theory, educational research and instructional design: some food for thought. Instructional Science, 38(2), 105-134.
Fereday, J., & Muir-Cochrane, E. (2006). Demonstrating Rigour Using Thematic Analysis: A Hybrid Approach of Inductive and Deductive Coding and Theme Development, International Journal of Qualitative Methods, 5(1), 80-92
Gagne, R. (1985). The conditions of learning. New York: Holt, Rinehart & Winston.
Greeno, J.G., Collins, A.M., & Resnick, L. (1996). Cognition and learning. In D.C. Berliner & R.C. Calfee (Eds.), Handbook of educational psychology (pp. 15-46). New York: Simon & Schuster.
Hase, S., & Kenyon, C. (2007). Heutagogy: a child of complexity theory. Complicity: An International Journal of Complexity and Education, 4(1), 111-118.
Hattie, J., & Yates, G. (2014). Visible learning and the science of how we learn. New York: Routledge.
Herrington, J., & Oliver, R. (2000). An instructional design framework for authentic learning environments. Educational Technology Research and Development, 48(3), 23-48.
Hyde, M., & Phillipson, C. (2014). How can lifelong learning, including continuous training within the labour market, be enabled and who will pay for this? Looking forward to 2025 and 2040 how might this evolve? Government Office for Science. (Dated December 2014, Published July 2015). Retrieved from https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/463059/gs-15-9-future-ageing-lifelong-learning-er02.pdf
IRGC (2012) IRGC Risk Governance Framework. Retrieved from http://www.irgc.org/risk-governance/irgc-risk-governance-framework/
Kendal, M. & Stacey, K. (2001). The impact of teacher privileging on learning differentiation with technology. The International Journal of Computers for Mathematical Learning, 6(2), 143–165.
Kenyon, C., & Hase, S. (2001). Moving from andragogy to heutagogy in vocational education. Proceedings of Research to Reality: Putting VET Research to Work: Australian Vocational Education and Training Research Association (AVET), Adelaide, SA, 28-30 March. Retrieved from http://www.psy.gla.ac.uk/~steve/pr/Heutagogy.html.
Khan K.S., Kunz R., Kleijnen J. & Antes G. (2003). Five steps to conducting a systematic review. Journal of the Royal Society of Medicine, 96(3), 118–121.
Knowles, M., Holton, E.F., & Swanson, R. A. (2015). The adult learner, 8th ed. Burlington MA: Elsevier.
Kosbie, D., Moore, A. W., & Stehlik, M. (2017). How to prepare the next generation for jobs in the AI economy. Harvard Business Review online article, June 05, 2017. Retrieved from https://hbr.org/2017/06/how-to-prepare-the-next-generation-for-jobs-in-the-ai-economy
Krajcik, J. S., & Blumenfeld, P. (2006). Project-based learning. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 317–334). New York: Cambridge.
Lekka, C., & Sugden, C. (2012). Working towards high reliability: a qualitative evaluation. Hazards XXIII, 544-550.
Mayes, T., & de Freitas, S. (2004) Review of e-learning theories, frameworks and models. London. Joint Information Systems Committee. Retrieved from http://www.jisc.ac.uk/whatwedo/programmes/elearningpedagogy/outcomes.aspx
Mayring, P. (2000). Qualitative content analysis. Forum: Qualitative Social Research, 2, 1-28. Retrieved from www.qualitative-research.net/fqs-texte/2-00/2-00mayring-e.htm
McMillan, J. (2000). Fundamental assessment principles for teachers and school administrators. Practical Assessment, Research and Evaluation, 7(8), 1-8.
ONS (2016). International Comparisons of Productivity – Final Estimates: 2014. Statistical bulletin, February, 2016
Patton, M.Q. (2002). Qualitative Research and Evaluation Methods. 3rd ed. Thousand Oaks, CA: Sage Publications.
Roegiers, X. (2016). A conceptual framework for competencies assessment. Current and critical issues in the curriculum and learning, UNESCO Bureau of Education, June, 2016, 4
Saldaña, J. (2016). The coding manual for qualitative researchers. 3rd ed. Los Angeles, CA: Sage.
Snowden, D.J., & Boone, M.E. (2007). A leader’s framework for decision making. Harvard Business Review, 85(11), 68-76.
Strobel, J., Loerisen, G., Cote, R., Abram, P.C., & Bether, E.C. (2011). Modeling learning units by capturing content with ILS LD. In K. Klinger (Ed.), Instructional design concepts, methodologies and applications (pp. 789-808). New York: Information Science Reference.
Taleb, N.N. (2007). The Black Swan. 2nd ed. London: Penguin
Thompson, A.G. (1992). Teachers’ beliefs and conceptions: A synthesis of the research. In D.A. Grouws (ed.), Handbook of Research on Mathematics Teaching and Learning (pp. 127–146). New York: Macmillan.
Tranfield, D., Denyer, D., & Smart, P. (2003). Towards a methodology for developing evidence-informed management knowledge by means of systematic review, British Journal of Management, 14, 207-222.
Treagust, D. F. (2006). Diagnostic assessment in science as a means to improving teaching, learning and retention. In UniServe science – symposium proceedings: Assessment in science teaching and learning (pp. 1–9). Sydney, NSW: Uniserve Science. Retrieved from http://science.uniserve.edu.au/pubs/procs/2006/treagust.pdf
WEF (2016). The future of jobs: Employment, skills and workforce strategy for the fourth industrial revolution. Global challenge insight report, January, 2016
Zhang, Y., & Wildemuth, B.M. (2009). Qualitative analysis of content. In Wildemuth, B.M. (Ed.), Applications of Social Research Methods to Questions in Information and Library Science (pp. 308-319), Westport, CT: Libraries Unlimited.