Article

Varieties of Selective Influence: Toward a More Complete Taxonomy and Implications for Systems Identification

1 Department of Psychological and Brain Sciences, Indiana University Bloomington, Bloomington, IN 47405, USA
2 Department of Psychology, Vanderbilt University, Nashville, TN 37212, USA
* Authors to whom correspondence should be addressed.
Mathematics 2022, 10(7), 1059; https://doi.org/10.3390/math10071059
Submission received: 26 February 2022 / Revised: 17 March 2022 / Accepted: 21 March 2022 / Published: 25 March 2022
(This article belongs to the Special Issue Mathematical and Computational Models of Cognition)

Abstract

All science, including psychological science, is subject to what Townsend and Ashby have called the principle of correspondent change, which ensures that experimental manipulations act as informed agents with respect to predicting and testing critical theoretical features. Mostly, this type of program goes unspoken. Within the general field known as the information processing approach, S. Sternberg invented the additive factors method, in which the aforesaid feature plays a major and explicit role. We call this approach a theory-driven methodology because the scientist formulates a set of theories or models and then formulates experimental variables that will permit strong tests among the hypothetical alternatives. Our term for the general approach is systems factorial technology (SFT). Often, these tests can be accomplished with qualitative, non-parametric, distribution-free methods, but our so-called sieve method advocates, once the initial qualitative steps are accomplished, a move to assessing more detailed parametric versions of the model classes. Over the decades, the meta-theory underpinning SFT and like approaches has evidenced dramatic growth in both expanse and depth. Particularly, the critical assumption of selective influence, testable to some extent, has received extensive and sophisticated treatment. The various central allied concepts are interlinked but do not form a simple linearly-ordered chain. This study carries on exploration of the central concepts and relationships and their implications for psychological research.

1. Introduction

An essential aspect of mathematical models in psychology is that the structure and parameters of the quantitative ingredients be related to specific aspects of nature, that is, measurable features of stimuli and physiological or behavioral responses. Anything short of that smacks of triviality, metaphor or, at best, curve fitting. Although almost always unspoken, almost all hypothesis testing proceeds by attempts to provoke expected reactions via the effects of experimental manipulations on suspected structures and mechanisms. We have characterized this vital element as the principle of correspondent change [1].
In the early heydays following the birth of the information processing approach, one very popular direction hypothesized that a larger processing workload, such as presenting a longer list of n items for search in short-term memory, would produce a linearly increasing mean response time (hereafter RT) function of the load size n if the system was acting as a serial processor [2]. Scores of experiments were run on this basis, although it was soon pointed out that these tests were weak in the sense that reasonable models from the competing parallel class were mathematically equivalent to the serial models, and vice versa [3,4,5]. Despite the demonstrated frailty of this argument, identical experimental designs, with the exception that the probe was presented before the longer list, which appeared in a visual display, began to recur in vision science within two decades of the original S. Sternberg designs in short-term memory (e.g., [6]). An early warning of the pitfalls of that approach in a visual context had already been voiced in Atkinson, Holmgren and Juola [7]. However, once again, this track of faulted reasoning led to a plethora of conclusions of questionable validity (see also the discussion in [8,9,10]).
A much more redoubtable use of the principle of correspondent change was proffered, also by S. Sternberg, in the late 1960s: the additive factors method [11]. This version of the principle stipulated that if individual factors only influenced separate psychological processes arranged in a series that did not overlap in their processing times, then the mean RTs must be additive functions of the experimental factors. It stands as a ground-breaking exemplar of the principle of correspondent change alluded to above. Sternberg's conceptualization of selective influence operates at what we may refer to as a qualitative level. That is, specific parameterized models are avoided. However, parameterized models may be, and in modern days usually are, also subjected to tests of the principles of selective influence. For instance, models of signal detection assume that bias parameters should be influenced by experimental variables such as reward structures and frequency of stimulus presentations, whereas parameters reflecting sensitivity should only be affected by variables such as stimulus intensity, similarity and so on. This concept has been generalized and studied by R. Thomas [12]. A number of popular parameterized stochastic models capable of predicting both accuracy and response times have been subjected to tests of selective influence in an important study by Dutilh et al. [13].
The notion of selective influence is perhaps the deepest and most critical aspect of the later theory-driven methodology referred to as systems factorial technology, or simply SFT [14]. SFT adheres to the qualitative approach first espoused explicitly by S. Sternberg [15], as mentioned above. By now, SFT has been applied in almost all major disciplines of psychology and cognitive science. Introductions are now widely available, for instance, in Townsend, Houpt and Wenger [16], Algom et al. [8] and Harding et al. [17]. Up-to-date applications and surveys can be found in Little et al. [18] and Houpt, Little and Eidels [19].
Two principal targets of selective influence manipulations are mental architecture and decisional stopping rule. It has turned out that the notion of selective influence as seen in psychology has deep connections to important concepts in other regions of science and mathematics, including physics, for example, quantum theory (e.g., [20]). Many of these versions do not require that the random entities be positive real numbers. For that reason, although we use language and notation suggestive of time, we will not always require that the observables be positive numbers. Another valuable dimension is how alterations in required effort affect efficiency of performance, usually as indexed by response times (RT) or accuracy and reflected in the workload capacity index [21] or the assessment function [22]. This latter dimension will not be under consideration in this study.
As noted above, Sternberg's original primary objective was that of serial systems, wherein one and only one process could take place at a time [15]. Each constituent process, say the ith, starts immediately after process $i - 1$ completes, continues without interruption until it is finished, and then process $i + 1$ begins instantaneously. Of course, some of these axioms can be relaxed. For example, in some circumstances, a lag between successive processes might be present without harm to the major predictions, but the foregoing assumptions are now standard [8,16].
The notion of selective influence seems eminently straightforward. However, it turns out to be a much more subtle and fascinating concept that has continuously evolved over the years. In fact, we think it now makes sense to offer a menu of several types of selective influence. In addition, some might be more observable than others in any given observational context. This philosophy will help guide us in the present enterprise.
The typical approach to utilization of selective influence has been to define if and how an external factor, usually experimental, might alter some dependent variable associated with a psychological process. That observable variable has historically usually been RT. We shall therefore concentrate on RT and, in particular, the way individual processes are affected by such factors. However, we are then immediately faced with the question of, assuming stochasticity of the processes, what aspect of the probability distribution is so affected. Initially, S. Sternberg [15] assumed that the means (expectations) of processing times were manipulated through the factors, with each subprocess being uniquely affected by a single factor. This was sufficient to falsify or support the inference of serial processing. Sternberg assumed that an external factor A could slow down or speed up a process in such a way that the means would be affected in a detectable way. This operation could then be employed to affirm or deny serial processing. A huge number of studies were carried out applying the additive factors method (for limited reviews, see [1,8,23]).
Schweickert [24] subsequently applied the notion of selective influence to mean RTs of complex systems based on forward-flow graphs known in the engineering literature as PERT networks (literally, program evaluation and review technique). These were mostly deterministic systems, but there were extensions that included stochastic inequalities [25]. Egeth and Dagenbach [26] put the mean RT predictions for serial and parallel systems to effective use.
A stronger but also theoretically more powerful assumption is to simply assume that if two values of a factor, called A, are different, then the cumulative distribution functions (hereby designated by capital $F(t)$) are also different in some sufficiently potent way. In fact, it was discovered around the same time in the 1980s that factorial influence could occur at the distributional level in the sense that if $A_H > A_L$, where $H = high$ and $L = low$, then $F_A(t; A_H) > F_A(t; A_L)$; that is, the $F(t)$s would also be ordered [23,27,28]. However, it then transpired that other systems and their stopping rules could be distinguished at the level of mean RTs as well, now including not only parallel systems but PERT networks more generally [23,29,30].
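As a minimal numerical illustration (ours, not part of the original development), exponential completion times whose rate equals the factor level realize exactly this kind of distributional ordering:

```python
import numpy as np

# Hypothetical exponential processing times whose rate equals the factor level.
# If A_H > A_L, then F(t; A_H) > F(t; A_L) for every t > 0: the CDFs are ordered.
A_L, A_H = 1.0, 2.0                 # assumed low and high salience levels
t = np.linspace(0.01, 5.0, 500)     # time grid

F_L = 1.0 - np.exp(-A_L * t)        # CDF at the low level
F_H = 1.0 - np.exp(-A_H * t)        # CDF at the high level

assert np.all(F_H > F_L)            # the distributional ordering holds pointwise
```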
It was not observed until the late 1980s and 1990s that the same facets that could be employed to make predictions at the mean RT level actually propelled results at a much more formidable echelon. Thus, an assumption on the distributional orderings, given just above, led to predictions on the RT probability distributions themselves [14,31]. These findings provided tests of architectures and stopping rules that were, in several important cases, invisible in mean RTs alone.
The influence of factors on a distribution function is certainly key, but in the great preponderance of experiments meant to identify architecture, the experimenter cannot espy the underlying joint distributions on the multiple processes directly (but see [32,33] for experimental designs that can). Thus, what is visible in data usually is an RT distribution on an architecture with a decision rule that yields overall completion time (plus a random variable, usually stochastically independent of the other sequentially arranged subprocesses, representing a sum of early sensory and later motor intervals) as a function of the underlying, individual processing times. We shall later visit those statistics in the required detail.
The raison d'être, at least originally, that motivated the development of the concept of selective influence was the identification of mental architecture and stopping rule. Our treatment here implies a taxonomy that differs somewhat from the field's previous way of thinking about selective influence, including our own. The prior philosophy, whether conscious or not, seems to have focused pretty much on finding "the best" definition of selective influence. While Sternberg noted that stochastic independence along with manipulation of mean RTs could assess serial processing, he mentioned that, in some cases, distributions might be mutually dependent. Examples, though not really suited for processing times, are joint Gaussian distributions with non-zero covariance. There, the means might be affected by experimental factors, but not the covariances, thereby rendering nice observable predictions as long as the covariances are not influenced by the experimental factors. We shall have cause to explore multivariate Gaussian distributions a bit further in our discourse.
Our survey of current and past theoretical studies and surveys, in conjunction with the present venture, convinces us that there are several meaningful definitions of, or perhaps types of, selective influence. Each variety might have its own strengths and weaknesses and serve somewhat distinct purposes. We will see how this plays out. As theoretical efforts on selective influence have blossomed into regions rather remote from response times, this proposition becomes more and more tenable (see, e.g., [32,33]).
The purview of systems considered herein will be circumscribed in certain respects to lay bare the essence in readily communicable fashion, with the expectation of subsequent expansion. This has proven to be a useful tactic in previous research such as the assumption of stochastic independence with an arbitrary number of subsystems, channels, etc. (e.g., [34]).
Suppose then that there are two processes named $S_a$ and $S_b$. We note that these $S$'s are distinct from the later symbols we will use for the survivor functions, the plain "$S$". In addition, within the current investigation, we assume that the systems will not terminate until the completion of both processes. That is, the systems employ an exhaustive stopping rule.
The names for processes $S_a$ and $S_b$ will not distinguish their architecture. We will assume that readers are familiar with the basic concepts of serial and parallel processing, so we need not use distinct notation for a serial processing time vs. a parallel processing duration (for a more formal tutorial on these concepts, please see Chapters 2 and 3 of [1] or, more recently, [8]; those seeking an axiomatic treatment of fundamental concepts, which evinces the foundational principles at the level of measure theory, are referred to [35]).
$T_a$ and $T_b$ will be employed to denote the processing time random variables for the corresponding processes, regardless of whether they are in serial or parallel conformation. To render this important distinction, "$f(t)$" and "$F(t)$" will denote the density and cumulative distribution functions for serial architectures, whereas "$g(t)$" and "$G(t)$" will denote those for parallel architectures. When needed, $P_{PAR}(T_a \le t_a \cap T_b \le t_b) = P(T_a \le t_a, T_b \le t_b)$ can be used to indicate the joint distribution for a 2-channel parallel system, and also $P_{SER}(T_a \le t_a \cap T_b \le t_b) = P(T_a \le t_a, T_b \le t_b)$ for a serial system. Note that these look the same but are not, since in the parallel case the "t"s refer to processing that is ongoing simultaneously, whereas in the serial case they refer to sequential and non-overlapping times.
Moreover, if a system is serial, we further assume that $S_a$ always goes first. That is, its operation always precedes that of $S_b$. This assumption, that of a single processing order, specifies a special case of the more general situation where $S_a$ precedes $S_b$ with probability p, but $S_b$ precedes $S_a$ with probability $1 - p$. In general, it may be that $p \ne 1$ or $0$ (see [1], Definition 4.1, p. 50). Yet, in the current investigation, we introduce the key serial concepts confined to single-order systems; that is, in formal language, $p = 1$. We shall call this notion "1-Directionality". In fact, this class of two-stage serial systems will serve to erect the foundational scaffolding that generates several of our theoretical inferences.
We shall also almost always focus our attention on systems with only two processors, $n = 2$, and, without exception, assume that all our stochastic processes possess continuous densities defined on the positive real line, usually including the origin, 0, so $T \ge 0$. The exception will be a few examples where the random variable's support is the entire real line. These postulates will avert a lot of pedantic notation. We shall hereafter assume that experimental factors, A and B, are effective in the sense that the probability distributions for the random variables $T_a$ and $T_b$ actually change with any change in A, B, respectively, in the sense that, for example, on some measurable interval of time, $f(t_a; A_1) \ne f(t_a; A_2)$, and the same goes for the parallel densities. More consequential assumptions about how experimental factors affect our distributions will be made forthwith. Finally, in conditional probability elements, for example in $f_b(t_b; B, A \mid t_a)$, to be visited below, it will always be assumed that this term is a non-trivial function of $t_a$.

2. Definitions of Primary Concepts and Taxonomy

2.1. Selective Influence

We express canonical notions of serial vs. parallel processing in slightly different ways to emphasize the "forward motion" in seriality as opposed to simultaneity in parallelism. Of course, no harm could come from writing the terms in mathematically equivalent forms. Let the experimental factor that the researcher hopes will affect system $S_a$ be designated A, and that for system $S_b$, B.
Definition 1.
Canonical Serial Expressions for 1-Direction Systems. We say that the formula for the joint probability density function in a 1-direction serial system is in canonical form if it is written as:
$$P(T_a = t_a, T_b = t_b; A, B) = f_{a,b}(t_a, t_b; A, B) = f_a(t_a; A, B)\, f_b(t_b; B, A \mid t_a), \quad t_a, t_b \ge 0.$$
Rigorous measurement axioms are given in Houpt et al. [35] to ensure there is no measure that allows overlap in the serial processing durations.
Definition 2.
Canonical Parallel Expressions. We say that the formula for the joint probability density function in a parallel system is in canonical form if it is written as:
$$P(T_a = t_a, T_b = t_b; A, B) = g_{a,b}(t_a, t_b; A, B), \quad t_a, t_b \ge 0.$$
Deeper postulates that ensure $T_a$ and $T_b$ possess non-zero probability of occurrence, until one finishes, over a shared time interval are given in Houpt et al. [35].
This notation is explicitly compatible with that of Townsend and Ashby [1] and Townsend [23]. Observe that, so far, f a might be a direct function of both factors A and B and similarly for f b . Our treatment of the concept of selective influence continues the evolution of our thinking as expressed in Townsend, Y. Liu and R. Zhang [36]. We start our forward march by putting down the Sternbergian definition of selective influence [15].
Definition 3.
The Sternbergian Definition of Selective Influence.
Consider two processes $S_a$ and $S_b$ with processing time random variables $T_a$ and $T_b$. Sternbergian selective influence holds if and only if there exist two experimental variables, A and B, such that the mean of $T_a$ is affected only by A and the mean of $T_b$ is affected only by B.
It has long been tacitly acknowledged that the most quintessential form of selective influence occurs in the context of stochastic independence of the separate subprocesses. Our current thinking emphasizes that there may exist a spectrum of types of selective influence and therefore that selective influence cum independence should occupy a special place in this taxonomy.
Definition 4.
Pure Selective Influence
Pure selective influence is in force in the serial 1-direction systems if we can write
$$f_{a,b}(t_a, t_b; A, B) = f_a(t_a; A)\, f_b(t_b; B).$$
That is, the joint probability density function for $T_a$ and $T_b$ is independent in these variables and each term is a function only of its respective factor.
Pure selective influence is in force in the parallel systems if we can write
$$g_{a,b}(t_a, t_b; A, B) = g_a(t_a; A)\, g_b(t_b; B).$$
That is, the joint density function for $T_a$ and $T_b$ is independent in these variables and each term is a function only of its respective factor.
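To make Definition 4 concrete, here is a minimal simulation sketch (our illustration; the exponential channel densities are an assumption, not part of the definition) of a parallel system under pure selective influence:

```python
import numpy as np

rng = np.random.default_rng(1)

def pure_parallel(A, B, n=200_000):
    """Pure selective influence (Definition 4): independent channels, each
    driven only by its own factor. Exponential densities are assumed."""
    t_a = rng.exponential(1.0 / A, n)   # g_a(t_a; A)
    t_b = rng.exponential(1.0 / B, n)   # g_b(t_b; B)
    return t_a, t_b

# Changing B leaves the distribution of T_a untouched:
t_a_low_B, _ = pure_parallel(A=1.0, B=0.5)
t_a_high_B, _ = pure_parallel(A=1.0, B=4.0)
print(t_a_low_B.mean(), t_a_high_B.mean())   # both approximately 1/A = 1.0
```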

2.2. Nonselective Influence

We now move to present the major types of nonselective influence.
Definition 5.
Direct and Indirect Nonselective Influence.
In the 1-direction serial systems,
(1). If both direct and indirect nonselective influence are in force, the joint probability density function will be expressed in canonical form as
$$f_{a,b}(t_a, t_b; A, B) = f_a(t_a; A, B)\, f_b(t_b; B, A \mid t_a).$$
(2). If only direct nonselective influence is afoot in process $S_a$, then we have $f_a(t_a; A, B)$ for that stage. That is, $f_a(t_a; A, B)$ is a function of the "other" factor B. If only direct nonselectivity is afoot in $S_b$, we have
$$f_b(t_b; B, A \mid t_a) = f_b(t_b; B, A).$$
That is, $f_b(t_b; B, A \mid t_a)$ is a function of the other factor A but is stochastically independent of the processing-time variable of the other process, $S_a$.
(3). If only indirect nonselective influence occurs in a 1-direction serial system, it can, by definition, only happen in stage $S_b$, yielding $f_b(t_b; B \mid t_a)$.
In the parallel systems,
(1). If both direct and indirect nonselective influence are in force, we have
$$g_{a,b}(t_a, t_b; A, B) = g_a(t_a; A, B)\, g_b(t_b; B, A \mid t_a) = g_b(t_b; B, A)\, g_a(t_a; A, B \mid t_b).$$
Unlike in serial systems, similar remarks apply for the opposite order.
(2). If only direct nonselective influence is afoot in process $S_a$, then we have $g_a(t_a; A, B)$ as a function of B with no dependence on $t_b$. If only direct nonselective influence is afoot in process $S_b$, then $g_b(t_b; B, A)$ is a function of A with no dependence on $t_a$.
(3). If only indirect nonselective influence is present in Channel $S_a$, then we can write
$$g_a(t_a; A, B \mid t_b) = g_a(t_a; A \mid t_b),$$
which is a function of $t_b$ but not B, and similarly for Channel $S_b$.
The reader may have noticed that, due to our empirical distinction between seriality and parallelism, when only one order is possible in the former, we always write $f_a(t_a; A, B)\, f_b(t_b; B, A \mid t_a)$ rather than $f_a(t_a; A, B \mid t_b)\, f_b(t_b; B, A)$. This is a convention and has no impact on further developments.
Earlier work established how stochastic dependence of $T_b$ on $T_a$ (that is, indirect nonselective influence) could perpetrate a visible failure of selective influence in the sense that mean response times would be seriously affected. Thus, Townsend and R. Thomas [37] showed how such indirect influences could devastate the expected response time signatures of serial vs. parallel processing through failure of selective influence. In fact, the latter investigation demonstrated that, at the mean processing time level, one could obtain over-additivity or under-additivity by either negative or positive interactions and in either parallel or serial systems. Even if it turned out that a failure of indirect selective influence somehow failed to affect architecture identification, we now take the position that someday, and in some experimental designs, perhaps with the aid of neuroscientific procedures, it may be possible to detect such probabilistic perturbations (i.e., that $T_b$ is influenced by the current-trial value of $T_a$).
Dzhafarov [38] has chosen to use the term conditional selectivity for what we have always labeled indirect nonselective influence. He also proves a number of theorems on that topic, including conditions for its satisfaction. As far as names go, of course, neither is right or wrong. Logically, any restriction on the influences of the factors could be styled either as a type of selectivity or, in contrast, as a version of nonselectivity, as long as pure selective influence is not obeyed in our lexicon. We prefer to adhere to our original terminology while noting the chimerical essence of the concept. It is important not to conflate this concept with his much later new definition of selective influence, though both include the term "conditional". We introduce the latter momentarily.
At this juncture, we call up the definition of selective influence proffered by Dzhafarov [39]. To date, this definition is the most potent in terms of still perpetrating useful consequences, such as identification of architecture and stopping rule in a number of valuable architectures, while affording a type of stochastic dependence. It is particularly germane in the present context because it implies, but is not implied by, a primary target of our investigation—a type of marginal selective influence which will be dealt with in detail in what follows.
Definition 6
([39]). The Conditional Independence Definition of Selective Influence. In serial systems, two factors selectively influence $S_a$ and $S_b$ if and only if the joint density of the two associated times can be expressed as
$$P(T_a = t_a, T_b = t_b; A, B, C = c) = f_a(t_a; A \mid c)\, f_b(t_b; B \mid c)\, f_c(c),$$
where C is a random variable stochastically independent of $T_a$ and $T_b$ and functionally independent of A, B.
In parallel systems, two factors selectively influence $S_a$ and $S_b$ if and only if the joint density of the two associated times can be expressed as
$$P(T_a = t_a, T_b = t_b; A, B, C = c) = g_a(t_a; A \mid c)\, g_b(t_b; B \mid c)\, g_c(c),$$
where C is a random variable stochastically independent of $T_a$ and $T_b$ and functionally independent of A, B.
This definition clearly generalizes the demands of pure selective influence to include a certain type of dependence. Observe the distinction between this concept and his earlier term "conditionally selective influence", which is our "indirect nonselective influence". The earlier one admits any type of stochastic dependence between the random variables of the processes, while the latter allows dependence between the processes only through a third source. There exist other, but mathematically equivalent, ways of defining the concept (e.g., [33]). We view this one as the most pellucid and intuitive. The notion of conditional independence is found in many areas of applied statistics, including psychometrics in general and test theory in particular, as well as in machine learning (see the so-called naive Bayesian approach), economics and physics. Dzhafarov and colleagues (e.g., [33,39]) have established conditions that can test for selective influence (see the definition in [39]). However, these methods typically assume that the joint distributions of the random variables are observable. Given the quintessential importance of selective influence in identification of psychological mechanisms, those results are quite fundamental. Yet, in many psychological milieus, as intimated earlier, the joint distributions are not observable, and this has been the situation in most, if not all, cases where the uncovering of mental architectures is the goal through the employment of response times (e.g., as in [11,14,19,23,24,29,30,37,40]).
A simple example propaedeutic to Dzhafarov's definition offers some intuition. In fact, Townsend and Ashby ([1], Chapter 11, pp. 368–370) considered a special case of that concept, where we conceived of the random variable C as denoting capacity. This capacity was distributed between $S_a$ and $S_b$ in a serial system where A and B directly affected their appropriate processes and, just as in Dzhafarov's definition ([39]), $f_a$ and $f_b$ were, given $C = c$, conditionally independent. It was demonstrated that the canonical prediction on the mean interaction contrast ($MIC$) for serial systems with direct selective influence was fulfilled, namely $MIC = 0$ (see Propositions 8 and 9). Dzhafarov and colleagues subsequently proved their more general results and made conditional independence a cornerstone of their approach to factor selectivity.
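A small simulation sketch (ours; the uniform capacity variable and exponential conditional densities are illustrative assumptions) shows how Definition 6 yields dependent processing times that nonetheless respect marginal selectivity:

```python
import numpy as np

rng = np.random.default_rng(2)

def conditionally_independent(A, B, n=400_000):
    """Definition 6 sketch: given C = c, the times are independent and each
    depends only on its own factor (and on c). Functional forms are assumed."""
    c = rng.uniform(0.5, 1.5, n)            # f_c(c), free of A and B
    t_a = rng.exponential(1.0 / (A * c))    # f_a(t_a; A | c)
    t_b = rng.exponential(1.0 / (B * c))    # f_b(t_b; B | c)
    return t_a, t_b

t_a, t_b = conditionally_independent(A=1.0, B=2.0)
print(np.corrcoef(np.log(t_a), np.log(t_b))[0, 1])  # dependence via the shared C
t_a2, _ = conditionally_independent(A=1.0, B=8.0)
print(t_a.mean(), t_a2.mean())  # marginal of T_a is unaffected by B
```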
We next move on to one of the key concepts in the present discourse as alluded to above. It is called marginal selective influence, or marginal selectivity for short. It is implied by some other types of selective influence, most pointedly, the aforementioned Dzhafarov definition, and in some circumstances may allay certain avenues of nonselective influence in discriminating mental architectures. It seems to have made its first formal appearance in Townsend and Schweickert ([29]; also see [30]). It turns out to be an intriguing condition.
Its presence or absence may bear consequences for systems identification. In addition, though, it might be useful in its own right. For instance, when a person is asked to make two responses on experimental trials, the marginal selectivity would perforce be immediately available for inspection and offer critical information concerning the underlying processing mechanisms. A recent example of a new design in systems factorial technology which employs two responses is Howard, Garrett, Little, Townsend and Eidels [41].
Definition 7.
Marginal Selective Influence.
In 1-direction serial systems where $S_a$ always occurs before $S_b$, marginal selectivity is said to hold if and only if
(1) $f_a(t_a; A, B) = f_a(t_a; A)$, and
(2) $f_b(t_b; B) = \int_0^\infty f_a(t_a; A, B)\, f_b(t_b; B, A \mid t_a)\, dt_a$.
In parallel systems, marginal selectivity is said to hold if and only if
(1) $g_a(t_a; A) = \int_0^\infty g_a(t_a; A, B \mid t_b)\, g_b(t_b; B, A)\, dt_b$, and
(2) $g_b(t_b; B) = \int_0^\infty g_b(t_b; B, A \mid t_a)\, g_a(t_a; A, B)\, dt_a$.
We were surprised to discover, as in the next definition, that the concept of marginal selective influence can be further disassembled into two sub-types and that these differ in their implications for systems identification.
Definition 8.
Strong and Weak Marginal Selective Influence.
(1). Strong marginal selectivity restricts the effect of any factor to its impact on the appropriate marginal distribution.
(2). Weak marginal selectivity, in our sense, demands that one or more of the experimental factors influences the joint distribution outside their impact on the pertinent marginal distribution.
As was the case for indirect nonselective influence, the presence of marginal selectivity may be of import for other types of influence as well as for drawing inferences regarding mental mechanisms.

3. Key Propositions on Satisfaction of Marginal Selectivity

This section will focus on when marginal selectivity is or is not satisfied (Definition 7), without regard to strictness conditions (Definition 8). In preparation for Propositions 1A and 1B, and Propositions 2A and 2B, we continue reference to Townsend and R. Thomas [37]. In general, their manifold demonstrations of failure of selective influence, even at the level of means, implied that the distributions themselves were severely perturbed. That is, marginal selectivity was destroyed by their examples. We can exhibit the havoc, even at the distributional level, that can be wrought by indirect nonselective influence alone on $T_b$.
Example 1.
Suppose $A, B > 0$, $S_a(t_a; A) = \exp(-A t_a)$ and $S_b(t_b; B \mid t_a) = \exp(-B t_a t_b^2)$. Note that there is no direct affliction from A, but the second stage is a function of $t_a$, indicating indirect nonselective influence. Then, we can see that the longer stage $S_a$ takes, the faster stage $S_b$ will be, in a stochastic sense, provoking a negative correlation between these stages. We can intuit that since A makes the first stage faster, there should ultimately be discovered a negative relationship between A and $T_b$. Thus, deriving the marginal survivor function for $T_b$, we have
$$S_b(t_b; B) = \int_0^\infty A \exp(-A t_a) \exp(-B t_a t_b^2)\, dt_a = \frac{A}{B t_b^2 + A}.$$
Figure 1a shows how the survivor function for $T_b$ conditioned on $T_a$ is a function of $t_a$. Then, Figure 1b exhibits how the marginal for $T_b$ is a function of both A and B. Obviously, since $S_b$ is an increasing function of A, as we suspected, the slower we make $S_a$ go, the faster $S_b$ will go (because the survivor function on $T_b$ will be smaller), and this happens at the within-trial as well as the across-trial levels. The reader will note that an even simpler example works in principle, that is, where $S_b(t_b; B \mid t_a) = \exp(-B t_a t_b)$. However, that model suffers from not possessing a mean for the second stage.
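The integral in Example 1 is easy to verify symbolically; the following sympy sketch (ours) reproduces the marginal survivor function and its dependence on the wrong factor A:

```python
import sympy as sp

t_a, t_b, A, B = sp.symbols('t_a t_b A B', positive=True)

# Example 1: f_a(t_a; A) = A*exp(-A*t_a), S_b(t_b; B | t_a) = exp(-B*t_a*t_b**2)
f_a = A * sp.exp(-A * t_a)
S_b_cond = sp.exp(-B * t_a * t_b**2)

# Marginal survivor of T_b: integrate the conditional survivor against f_a
S_b = sp.simplify(sp.integrate(S_b_cond * f_a, (t_a, 0, sp.oo)))
print(S_b)   # A/(A + B*t_b**2): marginal selectivity fails (it depends on A)
```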
Proposition 1A(a) indicates the trivial-to-show but interesting feature that, with both 1-directional seriality and no direct nonselectivity from the second to the first stage in a 1-direction serial system, overall selectivity is found on $T_a$. Proposition 1A(b) exhibits the rather patent result that if pure selective influence (see Definition 4) is true, then marginal selectivity perforce occurs. Proposition 1A(c) establishes the somewhat curious but important outcome that if perturbation of selective influence is both direct and indirect, then marginal selectivity can still be valid. Finally, Proposition 1A(d) discovers that this type of cancellation can go in both directions, a result that we will see also pertains to parallel systems.
Proposition 1
(A). On Satisfaction of Marginal Selectivity in 1-Direction Serial Systems.
Under the assumptions of 1-directional seriality,
(a). If direct nonselective influence of B on $S_a$ is absent, then it is impossible that the marginal distribution on $T_a$ be a function of the "wrong" factor B in either the weak or strong sense. Hence, indirect nonselective influence on $S_a$ is ruled out and selective influence of A on $S_a$ is assured. Thus, $f_a(t_a; A)$ is no function of B and marginal selectivity holds for $T_a$.
(b). If neither indirect nor direct nonselective influence is present in the $S_a \to S_b$ or the opposite direction, then strong marginal selectivity on $T_b$ is true.
(c). Suppose that $f_b(t_b; B, A \mid t_a)$ is a non-trivial function of A, B and $t_a$. Then, even strong marginal selectivity on $T_b$ may succeed.
(d). We can construct a case where both factors cancel out and we are left with no influence, selective or nonselective, at all.
Proof. 
(a): For every value of $t_a$, integrating $f_b(t_b; B, A \mid t_a)$ over all values of $t_b$ yields 1, leaving the marginal distribution of $t_a$ a function only of A. So, marginal selectivity is determined in Stage $S_a$ if there is no direct nonselective influence there.
(b): Obvious, because the hypothesis is tantamount to stochastic independence plus direct selective influence; that is, pure selective influence is in power. This could be seen as a corollary to part (a).
(c): It is helpful to picture the integral of the product $f_a(t_a; A)\, f_b(t_b; B, A \mid t_a)$ over all values of $t_a$ as a linear transformation $\mathcal{T}$ in continuous function space, arising from the Markov kernel $f_a(t_a; A)$. Then, we require that the marginal on $T_b$ be invariant over the factor A: $f_b(t_b; B, A) = f_b(t_b; B) = \int_0^\infty f_b(t_b; B, A \mid t_a)\, f_a(t_a; A)\, dt_a$.
An example shows that this is possible. One is straightaway found by setting $f_a(t_a; A) = A \exp(-A t_a)$ and $S_b(t_b; B, A \mid t_a) = \exp(-A B t_a t_b^2)$. Integration over the interval from 0 to $\infty$ finds the marginal survivor function to be $S_b(t_b; B, A) = \frac{1}{1 + B t_b^2}$, which is obviously a function only of $t_b$ and B and therefore satisfies marginal selectivity. Rewriting the joint distribution function using the copula trick of inverting the marginals (detailed in the next section) shows that no factorial influence outside of the marginal distributions resides.
(d): Prove with an example. Let $f_a(t_a) = A B \exp(-A B t_a)$ and $S_b(t_b \mid t_a) = \exp(-A B t_a t_b)$. Now, calculate the marginal survivor of $T_b$:
$$\int_0^\infty S_b(t_b \mid t_a)\, f_a(t_a)\, dt_a = \int_0^\infty \exp(-A B t_a t_b)\, A B \exp(-A B t_a)\, dt_a = A B \int_0^\infty \exp[-A B t_a (1 + t_b)]\, dt_a = \frac{1}{1 + t_b},$$
which is a function of neither factor A (no direct nonselectivity), B (no direct selectivity) nor $t_a$ (no indirect nonselectivity). □
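Both examples in the proof check out symbolically; a brief sympy sketch (ours):

```python
import sympy as sp

t_a, t_b, A, B = sp.symbols('t_a t_b A B', positive=True)

# Proposition 1A(c): direct and indirect nonselectivity offset each other.
f_a = A * sp.exp(-A * t_a)
S_b_c = sp.integrate(sp.exp(-A * B * t_a * t_b**2) * f_a, (t_a, 0, sp.oo))
print(sp.simplify(S_b_c))    # 1/(1 + B*t_b**2): A has vanished from the marginal

# Proposition 1A(d): both factors cancel entirely.
f_a_d = A * B * sp.exp(-A * B * t_a)
S_b_d = sp.integrate(sp.exp(-A * B * t_a * t_b) * f_a_d, (t_a, 0, sp.oo))
print(sp.simplify(S_b_d))    # 1/(1 + t_b): neither A nor B remains
```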
Figure 2 illustrates Proposition 1A(c) by exhibiting that the conditional survivor $S_b(t_b; B, A \mid t_a)$ is a function of A, B and $t_a$, but then reveals that its marginal form is a function only of B.
Now, taking up parallelism: since processing is going on simultaneously, we cannot obtain the hypothesis in Proposition 1A(a) through any principle such as causality. However, we can simply impose something like it by fiat in Proposition 1B(a).
Proposition 1
(B). On Satisfaction of Marginal Selectivity in Parallel Exhaustive Systems.
Under the assumptions of parallel exhaustive processing:
(a). If $T_a$ is afflicted by neither indirect nor direct nonselective influence, then its marginal distribution is selective. As intimated in the prologue just above, we simply impose an absence of direct and indirect nonselective influence on $T_a$. Then, integrating over $T_b$ yields a marginal on $T_a$ that is selective.
(b). If neither indirect nor direct nonselective influence is present, then marginal selectivity on $T_a$ and $T_b$ is true.
(c). Suppose that either $g_a(t_a; A, B \mid t_b)$ is a non-trivial function of A, B and $t_b$, or $g_b(t_b; B, A \mid t_a)$ is a non-trivial function of A, B and $t_a$; then, even strong marginal selectivity may succeed.
(d). As in Proposition 1A(d), if both channels are directly and indirectly influenced by the "other" factor, then the result could have complete freedom from either type of influence on either channel.
Proof. 
(a): The marginal distribution of $T_a$ is:
$$\int_0^\infty g_{a,b}(t_a, t_b; A, B)\, dt_b = \int_0^\infty g_a(t_a; A)\, g_b(t_b; B, A \mid t_a)\, dt_b = g_a(t_a; A) \int_0^\infty g_b(t_b; B, A \mid t_a)\, dt_b = g_a(t_a; A),$$
where the factorization uses direct and indirect selectivity on $T_a$.
(b): Obvious, because the hypothesis is tantamount to stochastic independence plus direct selective influence; that is, pure selective influence is in power.
(c): We can employ an example similar to that used in Proposition 1A. Let $g_a(t_a; A) = A \exp(-A t_a)$ and the survivor function $\bar{G}_b(t_b; B, A \mid t_a) = \exp(-A B t_a t_b)$. Then, the marginal survivor for $T_b$ is:
$$\bar{G}_b(t_b; B, A) = \int_0^\infty \bar{G}_b(t_b; B, A \mid t_a)\, g_a(t_a; A)\, dt_a = \int_0^\infty \exp(-A B t_a t_b)\, A \exp(-A t_a)\, dt_a = \frac{1}{1 + B t_b} = \bar{G}_b(t_b; B),$$
which proves marginal selectivity. An analogous tactic to that in Proposition 1A(c) works to infer that this is the strong form of marginal selectivity.
(d): We perform the demonstration for $S_b$; the other direction is the same. Let $g_a(t_a; A, B \mid t_b) = A B \exp(-A B t_b t_a)$ and $\bar{G}_b(t_b; B, A \mid t_a) = \exp(-A B t_a t_b)$. Then, the marginal survivor for $T_b$ is:
$$\int_0^\infty \exp(-A B t_a t_b)\, A B \exp(-A B t_b t_a)\, dt_a = \frac{1}{2 t_b},$$
which is a function of $t_b$ only. □
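The double-cancelling offset of part (d) can likewise be confirmed symbolically (our sketch):

```python
import sympy as sp

t_a, t_b, A, B = sp.symbols('t_a t_b A B', positive=True)

# Proposition 1B(d): both parallel channels directly and indirectly afflicted.
g_a_cond = A * B * sp.exp(-A * B * t_b * t_a)   # g_a(t_a; A, B | t_b)
S_b_cond = sp.exp(-A * B * t_a * t_b)           # conditional survivor of T_b
marginal = sp.integrate(S_b_cond * g_a_cond, (t_a, 0, sp.oo))
print(sp.simplify(marginal))                    # 1/(2*t_b): A and B both gone
```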
We find intriguing the way in which the direct effect of factor A from $S_a$ can counteract the perturbing indirect effect of $T_a$ in both parallel as well as serial systems. We deem it of sufficient import to endow it with a name: offset.
Definition 9.
Offset.
The offset effect refers to an instance where the direct and indirect nonselective influence from a process, say $S_a$, on the other process, say $S_b$, cancel each other.
The reader may observe that, as intimated earlier, in a parallel system there is the possibility that a double-cancelling offset can occur, ending with no influence in the system at all (Proposition 1B(d)). Within the 1-direction serial systems, the next Proposition 2A(a) indicates the fairly evident fact that direct nonselective influence by itself can cause failure of marginal selectivity. Proposition 2A(b) states that marginal selectivity can fail if both indirect and direct nonselective influence intrude. That is, offset is far from mandatory. Finally, Proposition 2A(c) strengthens the Townsend and R. Thomas [37] developments by proving the rather startling result that marginal selectivity in the second stage is impossible in 1-direction serial systems if indirect nonselectivity does occur but direct nonselectivity does not. That is, without offset, and with the presence of contamination from $T_a$, marginal selectivity is impossible. This will be the most curious, yet vital, result of the next proposition.
Proposition 2
(A). On Failure of Marginal Selectivity in 1-Direction Serial Systems.
Marginal selectivity of either type can fail through:
(a). Direct nonselective influence only on $S_a$, or only on $S_b$, or both.
(b). Indirect and direct nonselectivity on $S_b$ with direct selective influence on $S_a$.
(c). It must fail when only indirect nonselectivity on $S_b$ occurs. Note that we also assume $T_a$ is neither directly influenced by B nor indirectly by $T_b$.
Proof. 
(a): Obvious.
(b): Here, we see the intuitive statement that offset is by no means guaranteed. Again, let $f_a(t_a; A) = A \exp(-A t_a)$, but now let the conditional survivor function on $S_b$ be $S_b(t_b; B \mid t_a) = \exp(-B t_a t_b^2 / A)$. It will be found that the marginal survivor on $S_b$ is $S_b(t_b; B, A) = \frac{A^2}{B t_b^2 + A^2}$, revealing a critical dependence in $S_b$ on the wrong factor A. In fact, now, A slows down the second stage even more than in the example at the beginning of this section.
(c): Due to its importance and its additional subtlety, we briefly rephrase this issue and proceed with care.
Suppose $f_b(t_b; B \mid t_a) \ne f_b(t_b; B)$; that is, indirect nonselective influence is present, but A is absent from $f_b(t_b; B \mid t_a)$, indicating that $T_b$ is not directly nonselectively affected by A. Then, marginal selectivity of $T_b$ is impossible. Consider the transformation $\mathcal{T}[f_b(t_b; B \mid t_a)] = \int_0^\infty f_b(t_b; B \mid t_a)\, f_a(t_a; A)\, dt_a$. We can interpret $f_a(t_a; A)$ as a Markov kernel which maps $A \times t_a$ into the measure space on $B \times t_b$. We can also regard the latter measure as coming immediately from the parametric density $f_b(t_b; B)$.
Let the symbol $\mathbf{1}$ represent the constant function $f(t) \equiv 1$. It is easy to see that $\mathcal{T}\mathbf{1} = \mathbf{1}$, because that is just the integral of the ordinary density $f_a(t_a; A)$ over the domain of $t_a$ from 0 to $\infty$.
Now, assume marginal selectivity to be true, that is, $\int_0^\infty f_b(t_b; B \mid t_a)\, f_a(t_a; A)\, dt_a = f_b(t_b; B)$, and observe that the right-hand side is constant across values of A. Then, it follows that $\mathcal{T}[f_b(t_b; B \mid t_a)] = f_b(t_b; B)\, \mathbf{1}$. We can re-express this fact as $\mathcal{T}[f_b(t_b; B \mid t_a)] - f_b(t_b; B)\, \mathbf{1} = \int_0^\infty [f_b(t_b; B \mid t_a) - f_b(t_b; B)]\, f_a(t_a; A)\, dt_a = 0$.
Since $\mathcal{T}$ does not kill any function that is not already zero almost everywhere, it follows that $f_b(t_b; B \mid t_a) = f_b(t_b; B)$. That is, this outcome contradicts our hypothesis that $t_a$ is effective in $f_b(t_b; B \mid t_a)$. Therefore, we conclude that marginal selectivity is false. □
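The counterexample in part (b) is again quickly verified with sympy (our sketch):

```python
import sympy as sp

t_a, t_b, A, B = sp.symbols('t_a t_b A B', positive=True)

# Proposition 2A(b): no offset here; marginal selectivity fails.
f_a = A * sp.exp(-A * t_a)
S_b_cond = sp.exp(-B * t_a * t_b**2 / A)
S_b = sp.integrate(S_b_cond * f_a, (t_a, 0, sp.oo))
print(sp.simplify(S_b))   # A**2/(A**2 + B*t_b**2): the wrong factor A survives
```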
An analogous proposition for the parallel exhaustive systems is presented in Proposition 2B.
Proposition 2
(B). On Failure of Marginal Selectivity in Parallel Exhaustive Systems.
Marginal selectivity of either type can fail through:
(a). Direct nonselective influence only on $S_a$, or only on $S_b$, or both.
(b). Indirect and direct nonselectivity on $S_b$ with direct selective influence on $S_a$; or indirect and direct nonselectivity on $S_a$ with direct selective influence on $S_b$.
(c). It must fail when only indirect nonselectivity occurs, either on $S_a$ or $S_b$.
Proof. 
Proofs are similar to those in Proposition 2A. □
Again, we emphasize part (c) of the above propositions: in the presence only of indirect nonselectivity, marginal selectivity is impossible, as exemplified in the earlier examples.
Sometimes, things that work for continuous functions do not for discrete, even finite cases. For instance, difference equations with certain properties expressed as functional equations do not always generalize to their “natural” continuous counterparts expressed in terms of differential equations [42,43]. In the present situation, the proven statement is actually a bit stronger in the discrete version, as we show in Appendix A (Proposition A1). The next observation, though obvious, is so important that we render it as a proposition.
Proposition 3.
Indirect Nonselectivity and the Dzhafarov Definition of Selectivity.
(a). In 1-direction serial systems $S_a \to S_b$, if indirect influence only afflicts $S_b$, then the Dzhafarov definition of selective influence fails.
(b). In parallel systems, if indirect influence only afflicts either $S_a$ or $S_b$, then the Dzhafarov definition of selective influence fails.
Proof. 
The falsity of marginal selectivity implies the failure of the Dzhafarov definition. □
Previously, we chose exponential distributions for the $T$s in the proofs of Propositions 1A(c) and 1B(c) to show that systems having both direct and indirect nonselective influences can still exhibit marginal selectivity. This is because the exponential distribution possesses high intuitive value in response time modeling and enjoys an exceedingly rich history in that venue (e.g., [1,37]). Next, we show that a Gaussian example can also accomplish precisely this type of result.
In Sternberg's original statements [15], it was allowed that selective influence on $T_b$ might still occur even with stochastic dependence on $T_a$. This is a natural impulse because of one of the backbones of classical statistical analyses: multivariate Gaussian distributions possess mathematically independent parameters for means and covariances. Gaussian distributions, though ill-suited for response times (due to being defined for $T < 0$), figure heavily in theory and applications to non-response-time data (e.g., [38,39,44]). This leads to an example where there exists both direct and indirect nonselective influence, yet marginal selectivity is found. For instance, consider a bivariate Gaussian distribution where the means are taken as the experimental factors and there is a non-zero correlation. The correlation could be a function of both factors as well. This is a case where we should consider the $T$s as other than time variables so that they can take on negative values.
A key component of linear regression is that the distribution of $T_b$ conditional on $T_a = t_a$ is just the Gaussian $f_b(t_b; B, A \mid t_a)$, distributed as $N(\mu_b + \rho (t_a - \mu_a), (1 - \rho^2) \sigma^2)$, where $\mu_a = A$, $\mu_b = B$ are the respective factors, $\sigma$ is the mutual standard deviation and $\rho$ is the correlation coefficient. Notice that the conditional distribution is a function of the experimental factors A and B and is dependent on $t_a$. Therefore, under our defined structure, both direct and indirect nonselective influence are present. Yet, integrating over $T_a = t_a$ leaves $T_b$ a function only of its mean and variance parameters ($\mu_b$ and $\sigma^2$), thus satisfying marginal selectivity. Therefore, we have another instance of offset here. That is, a case of both direct as well as indirect nonselective influence, yet overall marginal selectivity, is found in the end.
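A quick numerical check of the Gaussian offset (ours; the parameter values are arbitrary): the conditional distribution of $T_b$ moves with A through $t_a$, yet the marginal of $T_b$ stays $N(\mu_b, \sigma^2)$:

```python
import numpy as np

rng = np.random.default_rng(3)

def bivariate_gaussian(A, B, rho=0.6, sigma=1.0, n=500_000):
    """Means are the factors (mu_a = A, mu_b = B); rho and sigma are arbitrary."""
    cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
    return rng.multivariate_normal([A, B], cov, size=n)

# Direct and indirect nonselective influence are present in the conditional of
# T_b given T_a = t_a, but the marginal of T_b is N(B, sigma^2) for any A:
for A in (0.0, 3.0):
    sample = bivariate_gaussian(A=A, B=1.0)
    print(A, round(sample[:, 1].mean(), 3), round(sample[:, 1].std(), 3))
```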
Finally, for this section, a brief excursion shows how the Dzhafarov conditional independence conception of selective influence can be viewed as a case of offset with regard to marginal selectivity. To conjoin the concept of conditional independence to our general line of attack, let us write the critical conditional probability of the $S_b$ duration on $T_a$ in the context of the former condition. It will not do serious harm to the generality of our inference to assume that the random variable C is defined on the positive real line. Let us check the status of indirect nonselective influence:
$$f_b(t_b; B, A \mid t_a) = \frac{\int_0^\infty f_b(t_b; B, A \mid t_a, c)\, f_a(t_a; A, B \mid c)\, f_c(c)\, dc}{\int_0^\infty f_a(t_a; A, B \mid c)\, f_c(c)\, dc}.$$
This expression reveals that, in general, $T_b$ given $T_a = t_a$ is not independent of $t_a$ or, equivalently, $T_b$ is a non-trivial function of $t_a$, regardless of whether or not direct nonselective influence is present. How about the latter? That is, is $f_b(t_b; B, A \mid t_a)$ a function of A in general, after integration over values of $C = c$, as opposed to indirect dependence through integration over $t_a$?
Apparently, even if $f_a(t_a; A, B \mid c) = f_a(t_a; A \mid c)$ for all $t_a$, and even if $f_b(t_b; B, A \mid t_a, c)$ is not a function of A, still, in general, $f_b(t_b; B \mid t_a)$ can be a function not only of B but also of A. We rewrite the above integral with these restrictions to facilitate the reader's scrutiny:
$$f_b(t_b; B, A \mid t_a) = \frac{\int_0^\infty f_b(t_b; B \mid t_a, c)\, f_a(t_a; A \mid c)\, f_c(c)\, dc}{\int_0^\infty f_a(t_a; A \mid c)\, f_c(c)\, dc}.$$
The reader will notice that $f_b(t_b; B, A \mid t_a)$ is a function of both A as well as $t_a$ and therefore meets the requirement for offset (Definition 9).

4. Canonically Shaped Distribution Interaction Contrasts and Marginal Selectivity

As observed in the introduction, an absolutely essential aspect of almost any discussion of selective influence is that of its implications for identification of psychological structure and mechanisms and its own experimental testability. As also mentioned in the introduction, the root of SFT, via its leveraging of selective influence in experiments, lies in the original additive factors method as invented by S. Sternberg [15]. We have already limned the way in which his early notion of selective influence not only provided for hundreds of studies testing for serial processing in mean RTs but instigated a deep and massive evolution of that concept.
A key attendant axiom in almost all applications of selective influence in SFT since the late 1970s is the added requirement that an experimental factor acts to order the cumulative distribution functions on processing times of the associated entity (e.g., a perceptual, cognitive or motor subprocess involved in a task; see [23,27,28]). While the advent of distribution ordering turned out to carry with it much more penetrating consequences for system identification than did mean orderings alone, it is far less demanding than other, even more intense forms of ordering that put more constraints on nature. For example, an ordering of hazard functions implies the distributional ordering but not vice versa, and a monotone likelihood ratio implies the hazard ordering but, again, not the reverse [28].
The present section will provide new developments regarding the relationships of distinct notions of selective influence on the canonical signatures of architecture recapped below. For the authors, the place in the armamentarium of SFT that marginal selectivity enjoys was especially intriguing. Candidly, we had long thought, without any proof whatsoever, that it might be quite formidable in its consequences for systems identification, particularly through the canonical classical SFT predictions. In the early stages of the present theoretical developments, we had anticipated that this tenet would be false. It simply seemed too coarse of a condition. As the story unfolds, we will find that the true nature of marginal selectivity vis-a-vis the survivor interaction contrast (or equivalently, the distributional interaction contrast, see below) predictions seems more complex and subtle than we had anticipated, and the denouement is not yet complete.
At this point, we need to repeat once again, for the general reader, the basic concept of the double difference or so-called interaction contrast. Using only a finite number of statistical functions, it is straightforward to implement with data. However, it turns out to be highly useful, in certain theoretical contexts, to utilize its continuous analogue, the second-order mixed partial derivative. This tactic permits the immediate employment of elegant tools from the calculus.
The mean interaction contrast ($MIC$; Figure 3) has been employed since its inception in the 1960s to test for the presence of serial processing. If selective influence holds in serial systems, then the $MIC$ is predicted to be 0, irrespective of the attendant stopping rule [15]. As long as the means are ordered through the experimental factors, even without an ordering of the distributions themselves, this prediction is in force. However, even at the mean response time level, other architectures necessitate more impactful means of influence (e.g., [23]).
Definition 10.
Mean Interaction Contrast.
$$MIC = [E(T_{LL}) - E(T_{HL})] - [E(T_{LH}) - E(T_{HH})],$$
where L = low salience level and H = high salience level.
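As a minimal computational rendering (ours), the $MIC$ is a single double difference over the four factorial cell means; additive cell means, as predicted for serial systems under selective influence, give exactly zero:

```python
def mic(m_LL, m_HL, m_LH, m_HH):
    """Mean interaction contrast (Definition 10) from the four cell means."""
    return (m_LL - m_HL) - (m_LH - m_HH)

# Additive means, e.g., serial stages with selective influence: MIC = 0.
print(mic(m_LL=900.0, m_HL=800.0, m_LH=700.0, m_HH=600.0))   # 0.0
```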
Next, it is required to recall the vital statistical function that differentiates various stopping rules and architectures, the so-called survivor interaction contrast ($SIC(t)$). The survivor function itself is straightforwardly defined as $S(t) = 1 - F(t)$, where $F(t)$ is the well-known cumulative distribution function as used above. That is, $F(t) = P(T \le t)$. Thus, interpreted at the level of $F(t)$ and $S(t)$, the faster the processing time, the larger $F(t)$ is and the smaller $S(t)$ is for a given t. As formally defined in Definition 11, $SIC(t)$ is just a double difference of the survivor functions under distinct combinations of the two experimental factors at different salience levels, low (L) and high (H) [14].
Definition 11.
Distributional Contrast Functions.
(a). The survivor interaction contrast: Let A, B be the experimental factors and write the survivor function on the underlying processing times as $S_{AB}(t)$. Then, the survivor interaction contrast function is defined as:
$$SIC(t) = [S_{LL}(t) - S_{HL}(t)] - [S_{LH}(t) - S_{HH}(t)], \quad t \ge 0.$$
(b). The distribution interaction contrast: Using notation similar to that in (a), we can exhibit the distribution interaction contrast ($DIC(t)$) as:
$$DIC(t) = [F_{HH}(t) - F_{HL}(t)] - [F_{LH}(t) - F_{LL}(t)] = -SIC(t), \quad t \ge 0.$$
Figure 4 shows the well-known signatures for the prototypical 1-direction serial and parallel architectures with a conjunctive (AND, exhaustive) stopping rule under the assumption of pure selective influence. Note that the total processing time of the system is the sum of all stages for serial and is the maximum time for parallel.
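A simulation sketch (ours; the independent exponential stage and channel times are an illustrative assumption) reproduces the Figure 4 signatures: the serial-AND SIC is negative and then positive, while the parallel-AND SIC stays non-positive:

```python
import numpy as np

rng = np.random.default_rng(4)
rate = {'L': 1.0, 'H': 2.0}              # assumed salience-to-rate mapping
t_grid = np.linspace(0.05, 6.0, 120)
n = 200_000

def survivor(rts):
    return np.array([(rts > t).mean() for t in t_grid])

def sic(total_time):
    """Empirical SIC(t) (Definition 11a) under pure selective influence."""
    S = {}
    for a in 'LH':
        for b in 'LH':
            t_a = rng.exponential(1 / rate[a], n)
            t_b = rng.exponential(1 / rate[b], n)
            S[a + b] = survivor(total_time(t_a, t_b))
    return (S['LL'] - S['HL']) - (S['LH'] - S['HH'])

sic_serial = sic(lambda t_a, t_b: t_a + t_b)   # serial AND: stage times add
sic_parallel = sic(np.maximum)                 # parallel AND: slower channel decides

print(sic_serial.min() < 0 < sic_serial.max())   # S-shaped: negative, then positive
print(sic_parallel.max() < 0.01)                 # non-positive up to sampling noise
```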
Another highly beneficial tool from previous work, as intimated earlier, is the substitution of second-order mixed partial derivatives for the second-order mixed differences (even used to some extent by [14]). Basically, as intimated just above, we assume that what is going on at the macro level extends in the limit to the local-level derivatives for well-behaved processes. Thus, the signatures for the various architectures and their stopping rules follow the qualitative properties of the finite differences and vice versa.
Before advancing to in-depth exploration of marginal selectivity, we pause to exhibit a strong consequence of the Dzhafarov definition of selective influence (see Definition 6) which will play an intriguing role in what follows.
Proposition 4.
Single Signed Distribution Interaction Contrast. Consider any joint distribution on $T_a$, $T_b$ and any well-defined function of $T_a$, $T_b$, say $R(T_a, T_b)$. Suppose pure selective influence for the architecture and stopping rule produces an R that is single signed over all values of $T_a$, $T_b$. Under the Dzhafarov definition of selective influence, if the second-order mixed partial derivative (mixed partial difference) on $R(T_a, T_b)$ is single signed, with the same sign over $T_a$, $T_b$ values, for every value of the Dzhafarov mixing variable C, then the same holds for the marginal distribution over $C = c$, and the canonical $DIC(t)$ function is preserved.
Proof. 
Consider $P(R(T_a, T_b) \le r; A, B \mid C = c)$ and then $\partial^2 [R(t_a, t_b; A, B, c)] / \partial A\, \partial B \ge 0$ without loss of generality. Then, the overall signature is the marginal, $\int \partial^2 [R(t_a, t_b; A, B, c)] / \partial A\, \partial B \; dc \ge 0$, as well. □
This rather obvious result has powerful consequences and immediately implies the canonical predictions, under the Dzhafarov condition, for parallel systems. However, by the same token, we will see that the absence of a constant sign bears portent for systems identification as well. The theorems of Dzhafarov, Schweickert and Sung [45] cover cases of distributional double differences which are not themselves single signed but whose suitably formulated integrals are. However, we shall adduce some new theorems below which broaden the spectrum of canonical $DIC(t)$s delivered through the Dzhafarov [39] condition.
Next, in order to explore the consequences of marginal selectivity for the signatures of architectures cum stopping rules, it is helpful, if not obligatory, to be able to generate compliant joint distributions without independence. As the reader may be persuaded by such attempts, it can be taxing to come up with distributions with non-trivial dependencies that do satisfy marginal selectivity.
Mercifully, a deus ex machina takes the stage in the form of copula theory. Copula theory, as we will shortly see, does not mandate marginal selectivity, but it does offer a rather crafty way of producing it, at least in parallel systems. Copula theory has become a valuable strategy in applied mathematics areas such as finance, actuarial theory, reliability engineering, turbulence and even medicine. It was recently introduced into psychology and cognitive science by H. Colonius and colleagues [46,47,48]. The Colonius doctrine has contributed much in the way of rigorous theory and methodology to the field of redundant signals research in general and of multi-sensory perception in particular. He formulated powerful theorems regarding workload capacity, comparing response times when an observer is confronted with, say, one vs. several signals and detection of any signal determines a decision, by assuming that the marginal distributions remain invariant when moving from one to several signals ([48]; see also [49]). This is tantamount to marginal selective influence when the number of signals is interpreted as an experimental factor [36]. However, the usage to which he put copulas does not broach the strong vs. weak distinction. In any event, we again repurpose copulas here in such a way as to help provide answers to some of our critical issues, although not all derivations require their usage.
We have no space for even a tutorial here, but suffice it to say, this approach permits the development of joint distributions with specified marginals. Let $G_a(t_a; A)$ be the marginal cumulative distribution function for $S_a$, and similarly for the other process, $S_b$; then, by copula theory, the joint cumulative distribution function may be expressed as $P_{ab}(T_a \le t_a, T_b \le t_b; A, B) = F_{ab}(t_a, t_b; A, B) = H[G_a(t_a; A), G_b(t_b; B)]$. This result, known as Sklar's theorem (e.g., see [50]), can be viewed as a multi-dimensional generalization of the so-called probability integral transform, since the arguments of $H$ can be interpreted as random variables $U_a, U_b$, each uniformly distributed on $[0, 1]$ (see, e.g., [51]). The dependency structure of the original joint distribution $P_{ab}$ is inherent in $H$. Conditions for the existence of $H$ are extremely weak, and we shall make the further common assumption that $P_{ab}$ is built upon a continuous joint density function. The marginals of $H$ are naturally just $G_a$ and $G_b$, and the reader will observe that in the above formula, we have imposed our rule that the marginal distributions conform to marginal selectivity a fortiori. This constraint will be relaxed in some of what follows.
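As a concrete, hedged illustration of the Sklar construction (our sketch; the FGM copula and the exponential marginals are assumptions, not the only possible choices), the following builds a joint CDF whose marginals each depend only on their own factor:

```python
import numpy as np

def fgm_copula(u, v, alpha=0.5):
    # Farlie-Gumbel-Morgenstern copula H(u, v); |alpha| <= 1 tunes dependence
    return u * v * (1 + alpha * (1 - u) * (1 - v))

def G_a(t, A):   # marginal CDF of process a: a function of factor A only
    return 1 - np.exp(-A * t)

def G_b(t, B):   # marginal CDF of process b: a function of factor B only
    return 1 - np.exp(-B * t)

def joint_cdf(ta, tb, A, B, alpha=0.5):
    # Sklar construction: F_ab(ta, tb; A, B) = H[G_a(ta; A), G_b(tb; B)].
    # All dependence lives in H, so marginal selectivity holds by construction.
    return fgm_copula(G_a(ta, A), G_b(tb, B), alpha)

# Sanity check: letting tb -> infinity recovers the marginal G_a(ta; A).
print(joint_cdf(1.0, 1e9, A=2.0, B=3.0), G_a(1.0, 2.0))
```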
Recall that the distribution interaction contrast function for parallel models with exhaustive processing is always greater than or equal to 0. The very next result makes good use of copulas and might mislead the theorist into hoping for very great benefits with the satisfaction of marginal selectivity. There are, indeed, substantial benefits but perhaps not so all-encompassing as the theorist might wish.
The distinction between strong and weak marginal selectivity is instrumental in Propositions 5 and 6.
Recall that the canonical mixed partial difference (or derivative) of the cumulative distribution function for exhaustive processing is greater than or equal to 0. The implications from Proposition 4 are all to the good. The less appealing results emerge below.
Proposition 5.
When Experimental Factors Affect the Joint Distribution of Parallel Exhaustive Systems only Through the Marginal Distributions: The Case with Strong Marginal Selectivity. Consider the class of joint parallel distribution functions.
(a). Every parallel system with a joint distribution whose copula indicates that the experimental factors influence the distribution only through the marginals (strong marginal selectivity) predicts that the second order mixed partial derivative of the joint distribution itself will be non-negative.
(b). Every parallel system enjoying strong marginal selectivity predicts that the second order mixed partial derivative of the cumulative distribution on the maximum processing time will be non-negative. Thus, this situation entails the canonical D I C function.
Proof. 
(a). First, construct the copula form of the joint cumulative distribution of a 2-channel parallel system, $G_{ab}(t_a, t_b; A, B)$. It can be expressed as $H[G_a(t_a; A), G_b(t_b; B)]$. Then,
$$\frac{\partial^2 G_{ab}}{\partial t_a\, \partial t_b} = \frac{\partial^2 H}{\partial t_a\, \partial t_b} = \frac{\partial^2 H}{\partial G_a\, \partial G_b}\, \frac{\partial G_a}{\partial t_a}\, \frac{\partial G_b}{\partial t_b} = g_{ab}(t_a, t_b) > 0,$$
where $g_{ab}(t_a, t_b)$ is the actual joint probability density function of $T_a$ and $T_b$. Since $\frac{\partial G_a}{\partial t_a}$ and $\frac{\partial G_b}{\partial t_b}$ are the (positive) marginal densities, it must be that $\frac{\partial^2 H_{ab}}{\partial G_a\, \partial G_b} > 0$. By hypothesis, the experimental factors influence the distribution only through the marginals, so, again using the chain rule, we have
$$\frac{\partial^2 H_{ab}}{\partial A\, \partial B} = \frac{\partial^2 H_{ab}}{\partial G_a\, \partial G_b}\, \frac{\partial G_a}{\partial A}\, \frac{\partial G_b}{\partial B} > 0.$$
If the support is not the entire positive real line, then outside the support $\frac{\partial^2 H_{ab}}{\partial A\, \partial B} = 0$.
(b). Obvious. □
The next corollary can be regarded as part (b) of Proposition 5. We restate it to emphasize its interpretation in terms of distributional contrast functions.
Corollary 1
(1). The second mixed partial derivative of the CDF $P(T_a \le t, T_b \le t; A, B) = P(\max(T_a, T_b) \le t)$ satisfies
$$\frac{\partial^2 G_{MAX}(t_{max})}{\partial A\, \partial B} > 0.$$
Proof. 
Obvious. □
In addition, the next corollary simply states the implied result for the finite D I C ( t ) function.
Corollary 1
(2). The $DIC(t)$ for $\max(T_a, T_b)$ is
$$DIC(t) = [G(t, t; A_H, B_H) - G(t, t; A_H, B_L)] - [G(t, t; A_L, B_H) - G(t, t; A_L, B_L)] > 0, \quad \forall t > 0.$$
Proof. 
Obvious. □
The upshot is that the canonical $DIC$ (or $SIC$) holds for any, possibly dependent, parallel model that satisfies marginal selectivity and in which selective influence works only through the marginal distributions.
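A quick numerical spot-check of this upshot is possible (again our own sketch; the FGM copula with a fixed, factor-free dependence parameter and exponential marginals are assumptions): for such a copula, strong marginal selectivity holds by construction, and the exhaustive-rule $DIC(t)$ should never dip below zero.

```python
import numpy as np

def fgm(u, v, alpha=0.9):
    # FGM copula with a fixed, factor-free dependence parameter; for
    # |alpha| < 1 its copula density is positive, as Proposition 5 requires
    return u * v * (1 + alpha * (1 - u) * (1 - v))

def max_cdf(t, A, B):
    # Exhaustive (AND) rule: P(max(Ta, Tb) <= t) = H[G_a(t; A), G_b(t; B)]
    return fgm(1 - np.exp(-A * t), 1 - np.exp(-B * t))

def dic(t, AL, AH, BL, BH):
    # DIC(t) = [F_HH - F_HL] - [F_LH - F_LL]
    return ((max_cdf(t, AH, BH) - max_cdf(t, AH, BL))
            - (max_cdf(t, AL, BH) - max_cdf(t, AL, BL)))

t = np.linspace(0.001, 10.0, 1000)
print(dic(t, AL=1.0, AH=2.0, BL=0.5, BH=1.5).min())  # >= 0 everywhere
```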
The reader will espy that Proposition 5(a) establishes an even stronger result than the one we sought: namely, the mixed partial differences are always positive (or 0) even for the precise joint distribution function. Due to obscuration of processing times by residual processes and the like, we do not usually have these available, but they might be in some of Dzhafarov and colleagues' designs (see, e.g., [52,53,54]) and other future extensions such as the RT designs with two responses on every trial, as intimated earlier.
This is all to the good—strong marginal selectivity forces the canonical signature for exhaustive parallel processing if the only locus where the experimental factors influence speed is through the marginal distributions. We must confess pleasant surprise by this finding. However, we soon discover that relinquishing that seemingly innocent constraint unleashes destruction on the canonical signatures.
Proposition 6.
When Experimental Factors Affect Joint Distributions in Ways Outside the Marginals: The Case with Weak Marginal Selectivity.
Consider the class of joint parallel distribution functions.
(a). If the experimental factors satisfy only weak marginal selectivity, then it is possible that canonical parallel predictions can fail.
(b). If weak marginal selectivity holds, then it may be that the canonical parallel predictions still are in force.
Proof. 
(a). We prove this with a counterexample. If the canonical prediction on $DIC$ holds for the parallel system when experimental factors affect the joint distribution, then for any $H_{ab}(t_a, t_b; A, B)$ which is monotone in $A$ and $B$ (and, of course, also in $t_a$ and $t_b$), its second mixed partial derivative must be > 0.
Gumbel's copula ([55], Vol. 2) is:
$$C_\alpha(G_a, G_b) = G_a G_b [1 + \alpha (1 - G_a)(1 - G_b)], \quad 0 \le G_a, G_b \le 1, \ |\alpha| \le 1,$$
where α is some function of A and B.
We derived that:
$$\frac{\partial^2 H_{a,b}(t_a, t_b; A, B)}{\partial A\, \partial B} = \frac{\partial^2 \alpha(A, B)}{\partial A\, \partial B} D(G_a, G_b) + \frac{\partial \alpha(A, B)}{\partial A} \frac{\partial D(G_a, G_b)}{\partial G_b} \frac{\partial G_b}{\partial B} + \frac{\partial \alpha(A, B)}{\partial B} \frac{\partial D(G_a, G_b)}{\partial G_a} \frac{\partial G_a}{\partial A} + \frac{\partial^2 C_\alpha(G_a, G_b)}{\partial G_a\, \partial G_b} \frac{\partial G_a}{\partial A} \frac{\partial G_b}{\partial B},$$
where $D(G_a, G_b) = \frac{\partial C_\alpha(G_a, G_b)}{\partial \alpha}$.
Let us have the experimental factors influence both the joint distribution and the marginals by taking $\alpha(A, B) = \sqrt{A + B}$, $G_a(t_a; A) = 1 - e^{-\theta(1+A)t_a}$ and $G_b(t_b; B) = 1 - e^{-\theta(1+B)t_b}$. Following the derived formula for $\frac{\partial^2 H_{a,b}(t_a, t_b; A, B)}{\partial A\, \partial B}$, we have:
$$\begin{aligned} \frac{\partial^2 H_{a,b}(t_a = t, t_b = t; A, B)}{\partial A\, \partial B} ={}& -\frac{1}{4}(A + B)^{-3/2}\, G_a G_b (1 - G_a)(1 - G_b) \\ &+ \frac{1}{2}(A + B)^{-1/2}\, G_a (1 - G_a)(1 - 2G_b)\, \theta t\, (1 - G_b) \\ &+ \frac{1}{2}(A + B)^{-1/2}\, G_b (1 - G_b)(1 - 2G_a)\, \theta t\, (1 - G_a) \\ &+ \frac{\partial^2 C_{\sqrt{A+B}}(G_a, G_b)}{\partial G_a\, \partial G_b}\, \theta^2 t^2 (1 - G_a)(1 - G_b), \end{aligned}$$
where $\frac{\partial^2 C_\alpha(G_a, G_b)}{\partial G_a\, \partial G_b} \ge 1 - \frac{\alpha}{4}$, $\alpha \ge 0$.
If $A + B \to 0$, then $\frac{\partial^2 H_{a,b}(t_a = t, t_b = t; A, B)}{\partial A\, \partial B} \to -\infty$. If $A + B = 1$ and $\theta \to \infty$, then $\frac{\partial^2 C_\alpha(G_a, G_b)}{\partial G_a\, \partial G_b} \ge \frac{3}{4}$ and $\frac{\partial^2 H_{a,b}(t_a = t, t_b = t; A, B)}{\partial A\, \partial B} \to +\infty$. Thus, the second mixed derivative $\frac{\partial^2 H_{a,b}(t_a = t, t_b = t; A, B)}{\partial A\, \partial B}$ undergoes a sign inversion across different values of the experimental factors $A$ and $B$, which violates the canonical prediction on parallel systems.
(b). We prove this with an example. Let us set up a parallel exhaustive system whose joint cumulative distribution follows Gumbel's distribution with the joint dependence term $\alpha(A, B) = B/A$:
$$P(T_a \le t, T_b \le t; A, B) = G_{a,b}(t; A, B) = 1 - e^{-At} - e^{-Bt} + e^{-At - Bt - \frac{B}{A}t^2}.$$
In such a system, the joint distribution is influenced by both experimental factors, yet each marginal is still influenced only by its pertinent factor. That is, weak marginal selectivity holds. As shown in Figure 5, the $DIC(t)$ produced by this system follows the canonical pattern of the parallel exhaustive system.
This concludes the proof. □
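A finite-difference spot-check of the sign inversion established in part (a) can be run as follows (our sketch; $\theta = 1$, the step size and the evaluation points are illustrative choices rather than anything prescribed by the proof):

```python
import numpy as np

theta = 1.0   # illustrative time scale

def H(ta, tb, A, B):
    # Gumbel copula with factor-dependent alpha(A, B) = sqrt(A + B)
    # (requires A + B <= 1 so that |alpha| <= 1)
    Ga = 1 - np.exp(-theta * (1 + A) * ta)
    Gb = 1 - np.exp(-theta * (1 + B) * tb)
    alpha = np.sqrt(A + B)
    return Ga * Gb * (1 + alpha * (1 - Ga) * (1 - Gb))

def mixed_diff(ta, tb, A, B, h=1e-5):
    # Finite-difference estimate of the mixed partial d^2 H / dA dB
    return (H(ta, tb, A + h, B + h) - H(ta, tb, A + h, B)
            - H(ta, tb, A, B + h) + H(ta, tb, A, B)) / h**2

t = 0.3
print(mixed_diff(t, t, A=0.001, B=0.001))  # large and negative as A+B -> 0
print(mixed_diff(t, t, A=0.5, B=0.5))      # positive near A+B = 1
```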
The two foregoing propositions imply that if factors influence a parallel joint distribution only through the marginals and strong marginal selectivity is present, then all is well; matters may not be so if the influence spreads beyond the marginals, as in weak marginal selectivity.
We next have a corollary regarding implications for the mean interaction contrast ( M I C ).
Corollary 2.
Implications of where the Experimental Factors Affect the Joint Distribution for Mean Interaction Contrast Statistics in Exhaustive Parallel Processing.
Consider the class of parallel joint distribution functions.
(a). If the conditions in Proposition 5 are met, that is, the factors only influence the distributions through the marginal distributions, then the mean interaction contrasts are canonical: M I C < 0 .
(b). If the conditions in Proposition 6 are met, that is, the factors influence the joint distribution outside of the marginal distributions, then the canonical mean interaction contrast sign may also be violated.
Proof. 
(a). First, recall that the mean processing time is just the integral from 0 to $\infty$ of the survivor function: $E(T) = \int_0^\infty S(t)\, dt$. Moreover, as shown in Equation (2), the distributional interaction contrast is simply the negative of the survivor interaction contrast: $DIC(t) = -SIC(t)$. Then, since the $DIC(t)$ is canonical and single signed under the conditions of Proposition 5, so is the $SIC(t)$. Thus, the mean interaction contrast's canonical nature ensues: $MIC < 0$.
(b). We prove this with a counterexample. Construct the parallel joint cumulative function following Gumbel's distribution (see Equation (3) in the proof of Proposition 6) with $A_L = 0.005$, $A_H = 0.01$, $B_L = 0.1$, $B_H = 0.8$ and $\theta = 5$. Then, computing the $MIC$ following Equation (1), the result is positive (Figure 6), which contradicts the canonical prediction on $MIC$ for parallel exhaustive systems. □
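The $MIC$ computation in this counterexample is easy to reproduce numerically (our sketch; we use the Equation (3) form directly and omit the scale parameter $\theta$, which, under the natural reading where it rescales time, cannot change the sign of the $MIC$):

```python
import numpy as np
from scipy.integrate import quad

def joint_cdf_diag(t, A, B):
    # Equation (3) on the diagonal: P(Ta <= t, Tb <= t; A, B)
    return (1 - np.exp(-A * t) - np.exp(-B * t)
            + np.exp(-(A + B) * t - (B / A) * t**2))

def mean_rt(A, B):
    # E[T] for the exhaustive (max) time = integral of the survivor function
    return quad(lambda t: 1 - joint_cdf_diag(t, A, B), 0, np.inf, limit=200)[0]

AL, AH, BL, BH = 0.005, 0.01, 0.1, 0.8
mic = (mean_rt(AL, BL) - mean_rt(AH, BL)) - (mean_rt(AL, BH) - mean_rt(AH, BH))
print(mic)   # positive, violating MIC < 0 for parallel exhaustive systems
```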
Now, the previous two propositions and their corollaries not only specify a range of cases in which parallel systems can and cannot predict canonical $DIC$ functions, they tell us how to formulate models which do vs. do not obey marginal selectivity. We can take note especially from Corollary 2(b) that even the $MIC$s are not assured to be proper if the conditions in Proposition 5 are violated (as in Proposition 6).
How do these latest results relate to the Dzhafarov definition? We include known results from Dzhafarov and colleagues for completeness' sake.
Proposition 7.
Relationship of How the Experimental Factors Affect the Joint Distribution and the Dzhafarov Definition of Selective Influence in Parallel Systems. Consider the class of parallel systems with an exhaustive stopping rule.
(a). The Dzhafarov definition of selective influence implies strong marginal selectivity.
(b). Strong marginal selectivity does not imply the Dzhafarov condition.
(c). The Dzhafarov condition does not imply weak marginal selectivity.
(d). Weak marginal selectivity does not imply the Dzhafarov condition.
Proof. 
(a). The proof ([33]) that the Dzhafarov definition of selective influence forces marginal selectivity in fact entails the strong version. This can be seen through the fact that the latter only allows factor effects through the pairwise coupling of each random variable with its own factor; there is no other accompanying factorial influence.
(b). Observe that many joint distributions in copula form may have dependencies not expressible simply via conditional independence.
(c). If this were not the case, then the absence of weak marginal selectivity would imply falsity of the Dzhafarov condition, but we know this is not true because strong marginal selectivity and weak marginal selectivity are incompatible.
(d). If weak marginal selectivity implied the Dzhafarov condition, then weak marginal selectivity would imply canonical $DIC$s, but we know this is wrong. □
At first glance, one might think that Proposition 7 contradicts Proposition 4. However, the insertion of the extra perturbations caused by influences of factors on non-marginal aspects of the joint distributions, in fact, violates the original terms of Dzhafarov’s definition.
Now that we have valuable information concerning parallel systems, what is the situation for serial systems? At this point, we have not been able to effectively utilize copula theory for assistance with serial models. Compared with certain other topics, even independent serial processing (thereby fomenting convolutions) has not been much treated, and we are not aware of any results regarding stochastically dependent serial events in relation to selective influence. We have been able to discover some facts as follows, but the story is far from complete as we will learn.
Because the canonical signature for exhaustive serial systems is not single signed but rather guarantees an odd number of crossovers (see [56]), we fully expected to find that even in the presence of the Dzhafarov ([39]) condition, we could find violating D I C functions. We were rather astonished to discover that all 1-direction serial systems that obey the Dzhafarov dictum perforce generate canonical D I C signatures. The following result goes beyond previous theorems regarding Dzhafarov’s definition of selective influence.
Proposition 8.
1-Direction Serial Systems and the Dzhafarov Definition of Selective Influence. Consider the class of 1-direction, two-stage serial systems which meet the Dzhafarov ([39]) definition of selective influence. Then, the $DIC$ will be canonical; that is, the number of crossovers will be odd.
Proof. 
Following Dzhafarov's definition, the joint cumulative function of $T$ for the 1-direction serial systems is: $F_{ab}(t; A, B, C = c) = \int_0^t F_b(t - t_a; B \mid t_a, C = c)\, f_a(t_a; A \mid C = c)\, f_c(C = c)\, dt_a$.
Then, the D I C ( t ) is:
$$\begin{aligned} DIC_{\text{1-direction, serial}}(t) ={}& \int_0^t F_b(t - t_a; B_H \mid t_a, c)\, f_a(t_a; A_H \mid c)\, f_c(c)\, dt_a - \int_0^t F_b(t - t_a; B_H \mid t_a, c)\, f_a(t_a; A_L \mid c)\, f_c(c)\, dt_a \\ &- \int_0^t F_b(t - t_a; B_L \mid t_a, c)\, f_a(t_a; A_H \mid c)\, f_c(c)\, dt_a + \int_0^t F_b(t - t_a; B_L \mid t_a, c)\, f_a(t_a; A_L \mid c)\, f_c(c)\, dt_a \\ ={}& f_c(c) \int_0^t \big[F_b(t - t_a; B_H \mid t_a, c) - F_b(t - t_a; B_L \mid t_a, c)\big]\big[f_a(t_a; A_H \mid c) - f_a(t_a; A_L \mid c)\big]\, dt_a. \end{aligned}$$
Since $f_c$ is a density function for $C$, $f_c(c) > 0$. By definition, $F_b(t - t_a; B_H \mid t_a, c) - F_b(t - t_a; B_L \mid t_a, c) > 0$. When $t < t^*$, where $t^*$ is the crossover point of the density functions, $f_a(t_a; A_H \mid c) - f_a(t_a; A_L \mid c) > 0$. Thus, for small $t$ with $0 < t < t^*$, $DIC(t)$ must be positive. In addition, following the proof of Proposition 2.1 in H. Yang et al. ([56]), we can show that as $t \to \infty$, $\int_0^t [F_b(t - t_a; B_H \mid t_a, c) - F_b(t - t_a; B_L \mid t_a, c)][f_a(t_a; A_H \mid c) - f_a(t_a; A_L \mid c)]\, dt_a$ must approach 0 from the negative side. Therefore, the $DIC_{\text{1-direction, serial}}(t)$ of systems satisfying Dzhafarov's definition of selective influence must be a function that is positive for small $t$ and converges to 0 from the negative side as $t \to +\infty$, for every value of $c$, which implies that it must have an odd number of crossovers.
Any continuous function which starts with one sign and ends with the opposite sign must cross zero an odd number of times. To see that this carries over to the present circumstance, recall that the Dzhafarov definition of selective influence basically postulates a probability mixture over the pertinent density functions. Each component of the mixture starts with a positive departure, so the mixture must as well. The reverse holds at the other end: every component of the mixture ends negative, and so, therefore, does the mixture. □
Together with Proposition 5, we have the pleasant result that the Dzhafarov selective influence condition ensures that both parallel and serial systems will evince canonical D I C signatures, as long as pernicious influences do not arise via affecting aspects of the joint (parallel) distributions outside of the marginal. Figure 7 takes an earlier example of a 1-direction serial system which obeys the Dzhafarov definition and demonstrates that its D I C function is canonical.
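For readers who wish to reproduce Figure 7 qualitatively, the following sketch builds a conditionally independent serial system by hand (the uniform mixing variable and the exponential stage rates are our illustrative assumptions) and checks the canonical pattern:

```python
import numpy as np

def conv_exp_cdf(t, r1, r2):
    # CDF of the sum of two independent exponentials with rates r1 != r2
    return 1 - (r2 * np.exp(-r1 * t) - r1 * np.exp(-r2 * t)) / (r2 - r1)

def serial_cdf(t, A, B, c_grid):
    # Mixture over the common source C (uniform grid: our assumption); given
    # C = c the stages are independent with rates A*c and B*c, so each stage
    # depends only on its own factor and on c (conditional independence).
    return np.mean([conv_exp_cdf(t, A * c, B * c) for c in c_grid], axis=0)

t = np.linspace(0.01, 8.0, 400)
c_grid = np.linspace(0.5, 1.5, 101)
F = {(a, b): serial_cdf(t, ra, rb, c_grid)
     for a, ra in [('L', 1.0), ('H', 2.0)]
     for b, rb in [('L', 1.3), ('H', 2.6)]}
dic = (F['H', 'H'] - F['H', 'L']) - (F['L', 'H'] - F['L', 'L'])
big = dic[np.abs(dic) > 1e-10]          # ignore the near-zero tail
crossings = int(np.sum(np.diff(np.sign(big)) != 0))
print(dic[0] > 0, crossings)   # positive early; an odd number of crossovers
```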
The next derivation, featured in Proposition 9, shows that marginal selectivity is sufficient to evince canonical M I C s with serial exhaustive processing.
Proposition 9.
Marginal Selectivity and 1-Direction Serial Systems. If even weak marginal selectivity is in force, regardless of satisfaction of the Dzhafarov condition, mean interaction contrast is canonical: M I C = 0 .
Proof. 
We provide the proof for exhaustive processing.
For exhaustive systems, the mean RT is:
$$\begin{aligned} E(T) &= E(T_a + T_b) = \int_0^\infty \!\! \int_0^\infty (t_a + t_b)\, f_a(t_a; A)\, f_b(t_b; B, A \mid t_a)\, dt_a\, dt_b \\ &= \int_0^\infty t_a f_a(t_a; A)\, dt_a + \int_0^\infty t_b f_b(t_b; B)\, dt_b \quad (\text{marginal selectivity}) \\ &= E(T_a; A) + E(T_b; B). \end{aligned}$$
Taking the factorial combinations of A L , A H , B L , B H and figuring the double difference clearly yields:
$$MIC = E(T_a; A_L) + E(T_b; B_L) - \big[E(T_a; A_H) + E(T_b; B_L)\big] - \big[E(T_a; A_L) + E(T_b; B_H)\big] + \big[E(T_a; A_H) + E(T_b; B_H)\big] = 0. \ \Box$$
So, weak marginal selectivity, a rather coarse type of influence to say the least, is all it takes to guarantee a proper mean interaction contrast result in a 1-direction serial system. Naturally, strong marginal selectivity will also do the trick, but we do not bother to make that a formal statement.
Our final question for this study is still an enigma: Does strong marginal selectivity imply canonical D I C s in exhaustive serial systems? We do not know the answer. On the one hand, even strong marginal selectivity would seem to be incapable of forcing canonical D I C s. Why should a condition where the dependencies are integrated out imply something formidable on a generalized convolution (i.e., the sum of the two random variables but with stochastic dependence) of the joint distribution? Although it certainly did with the class of exhaustive parallel systems, the sum function appears less directly affected, for example, than in the parallel models (recall that just letting T a = T b = T altered the joint distribution into the distribution on the maximum function). In addition, the relative ease with which Townsend and R. Thomas ([37]) came up with dependencies that disrupted even the canonical M I C s would not seem to suggest very robust D I C s.
On the other hand, we have analyzed and performed computations with numerous dependent serial systems that obey strong marginal selectivity but that (so far) produce canonical D I C s. Some of the classic distributions in copula form, and made to satisfy strong marginal selectivity, form impressive examples. In addition, we do have our earlier example from Proposition 1A(c) which indicates a felicitous type of offset and evinces proper D I C s. In fact, we begin our final exploration by exhibiting the D I C s for several parameter settings of this rather classic example of offset, in Figure 8. In each of these cases, as predicted, the M I C s equal 0.
Next, we outline our reasoning with respect to features of what a counter example (i.e., possessing strong marginal selectivity but failing to reveal a canonical D I C ) might logically possess and end with an example discussing what happens when we try that with a modification of a classic copula-interpreted distribution. It ends in failure, but perhaps a reader may see their way through to a counterexample or even to a proof to the contrary.
One way in which to try to perturb canonicality would be to simply predict an incorrect $MIC$. However, that is a non-starter since, as already demonstrated, a canonical $MIC$ in the case of seriality (and specifically here, exhaustive seriality) is guaranteed by even weak marginal selectivity. To more readily appreciate the logic, consider first the special version where we can separate the first $S_a$ density from that for $S_b$ and assume direct selectivity on $f_a(t)$. In addition, we start by assuming no direct nonselective influence of $A$ on $S_b$. We know that this cannot end with marginal selectivity.
First, set $P(T_a = t_a, T_b \le t_b; A, B) = f_a(t_a; A) F_b(t_b; B \mid t_a)$. Next, we transform this to the conditional convolution: $P(T_a + T_b \le t; A, B) = \int_0^t f_a(t_a; A) F_b(t - t_a; B \mid t_a)\, dt_a$, and then
$$DIC(t) = \int_{t_a = 0}^t \big[f_a(t_a; A_H) - f_a(t_a; A_L)\big]\big[F_b(t - t_a; B_H \mid t_a) - F_b(t - t_a; B_L \mid t_a)\big]\, dt_a.$$
Observe that the second difference in the integrand is always positive. In addition, in order to produce $F_a(t_a; A_H) > F_a(t_a; A_L)$ for small $t_a$, the first difference must also be positive there. Under these circumstances, the $DIC$ begins its trajectory as a positive function. We believe this type of machinery has to be overcome if a counterexample is to be forthcoming.
Of course, when we insert direct nonselectivity in the second stage here:
$$DIC(t) = \int_{t_a = 0}^t \Big\{ f_a(t_a; A_H)\big[F_b(t - t_a; A_H, B_H) - F_b(t - t_a; A_H, B_L)\big] - f_a(t_a; A_L)\big[F_b(t - t_a; A_L, B_H) - F_b(t - t_a; A_L, B_L)\big] \Big\}\, dt_a,$$
the expression becomes even more complicated.
Now, we currently know of no algorithm to even produce marginal selectivity from such expressions, although we do know that both indirect as well as direct nonselectivity must be present to induce marginal selectivity.
One natural tactic, given the success with copulas with parallel processing, might be to take such joint distributions but now interpret the random variables as serial duration rather than parallel processing times. We can then convert the expression to a generalized convolution as just above. One starts with the joint distribution and differentiates with regard to t a . Then, substitute t t a for t b and integrate the expression (integrand) over t a from 0 to t.
This approach possesses the advantage of generating a number of cases with automatic marginal selectivity characteristics. Nonetheless, so far, we have not been able to find a counterexample or to use the copula technique to find a proof that even strong marginal selectivity entails canonical D I C functions. Yet, all our considered examples have turned out to engender canonical D I C s.
We now offer an example endowed with machinery that we had hoped might perturb the initial positivity alluded to above. Basically, we must transform the general expression $P(T_a \le t_a, T_b \le t_b; A, B)$ to $P(T_a = t_a, T_b \le t_b; A, B)$ via differentiation with respect to $t_a$, then convert to our generalized convolution and take the double difference.
$$DIC(t) = \int_0^t \Big\{ \big[P(T_a = t_a, T_b \le t - t_a; A_H, B_H) - P(T_a = t_a, T_b \le t - t_a; A_H, B_L)\big] - \big[P(T_a = t_a, T_b \le t - t_a; A_L, B_H) - P(T_a = t_a, T_b \le t - t_a; A_L, B_L)\big] \Big\}\, dt_a.$$
The present copula distribution is slightly modified from the classic Gumbel’s bivariate exponential distribution (e.g., see [48]). Set:
$$P(T_a \le t_a, T_b \le t_b; A, B) = 1 - e^{-At_a} - e^{-Bt_b} + e^{-At_a - Bt_b - q(A,B)\, t_a t_b}, \quad \text{where } 0 \le q(A, B) \le 1.$$
Then, we take the derivative of the joint CDF with respect to t a to calculate P ( T a = t a , T b t b ; A , B ) :
$$\frac{\partial P(T_a \le t_a, T_b \le t_b; A, B)}{\partial t_a} = P(T_a = t_a, T_b \le t_b; A, B) = A e^{-At_a} - \big(A + q(A, B)\, t_b\big)\, e^{-At_a - Bt_b - q(A,B)\, t_a t_b}.$$
The cumulative distribution of the 1-directional serial exhaustive system: P ( T a + T b t ; A , B ) is calculated by replacing t b with t t a and integrating P ( T a = t a , T b t b ; A , B ) with respect to t a from 0 to t:
$$P(T_a + T_b \le t; A, B) = \int_0^t \Big\{ A e^{-At_a} - \big(A + q(A, B)(t - t_a)\big)\, e^{-At_a - B(t - t_a) - q(A,B)\, t_a (t - t_a)} \Big\}\, dt_a.$$
We now try to find a counterexample by countering the earlier tilt toward an initially positive $DIC$. We do this by searching for a function $q(A, B)$ that satisfies its constraint but tends to lessen the early density's ($t_a = 0 + \epsilon$, for small $\epsilon$) tendency to be larger for $A_H$ than for $A_L$. We performed calculations of the $DIC$ assuming, say, $q(A, B) = B/A^n$ such that $B/A^n \in [0, 1]$. Several of these are given in Figure 9. Clearly, this track did not pay off.
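For completeness, here is how such calculations can be carried out numerically (a sketch of the generalized-convolution recipe above; the choice $n = 2$ and the quoted factor levels from Figure 9 are the assumed settings):

```python
import numpy as np
from scipy.integrate import quad

def gumbel_serial_cdf(t, A, B, n=2):
    # P(Ta + Tb <= t) for the modified Gumbel bivariate exponential with
    # q(A, B) = B / A**n (which must lie in [0, 1] for the chosen levels)
    q = B / A**n
    def integrand(ta):
        tb = t - ta   # the generalized convolution substitutes tb = t - ta
        return (A * np.exp(-A * ta)
                - (A + q * tb) * np.exp(-A * ta - B * tb - q * ta * tb))
    return quad(integrand, 0, t)[0]

def dic(t, AL, AH, BL, BH):
    return ((gumbel_serial_cdf(t, AH, BH) - gumbel_serial_cdf(t, AH, BL))
            - (gumbel_serial_cdf(t, AL, BH) - gumbel_serial_cdf(t, AL, BL)))

ts = np.linspace(0.05, 3.0, 60)
vals = np.array([dic(t, AL=3, AH=12, BL=1, BH=4) for t in ts])
print(vals[0] > 0, vals[-1] < 0)  # the canonical serial pattern (cf. Figure 9)
```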
Appendix B presents other distributions that possess well-known copula form (and thereby immediately satisfy marginal selectivity). Putting them into the proper form of 1-direction serial systems, we find that they also induce canonical $DIC$s. We still suspect there must be a counterexample there. Two other examples based on the copula strategy are offered in the appendix. Perhaps one or more readers will solve this puzzler.

5. Summary and Discussion

Our introduction outlines some of the previous history and scientific philosophy regarding selective influence. We began with the apparently indisputable claim that any theoretical venture in psychology that goes beyond pure description or, in a quantitative sense, curve fitting, must impart some notion of what we have called the "principle of correspondent change". The latter, of course, includes the symmetry and conservation laws of modern physics, which can arguably be referred to as the linchpins of the science.
Due to the inextricable relationship of selective influence and mental architectures, we began our more rigorous developments with definitions of canonical serial and parallel systems (see Definitions 1 and 2). On seriality, we confine ourselves to "1-direction serial systems", which assume a single processing order, say, "$S_a \to S_b$". We believe this tactic helps unveil essential structures and serial-parallel distinctions in a reasonably clear and cogent fashion. When in the realm of initial forays into complex territories, we have found this approach to be useful, as in the investigation of $DIC$s (recall, distributional interaction contrasts) for numbers of channels or stages greater than two. Multi-order serial processes, self-terminating stopping rules (e.g., single-target self-terminating and minimum time) and more-than-two subprocesses in the system (see [56]) await extension of the present results.
The groundbreaking concept of selective influence was first formally proposed by S. Sternberg ([15]). The formulation indicated that if experimental factors selectively influence two or more successive and temporally non-overlapping processing durations, then the mean response times would be an additive function of those experimental factors. Our next definition (Definition 3) captures what we now refer to as Sternbergian selective influence. We move to our definition (Definition 4) of “pure selective influence” which is more general than the Sternbergian version in that it pertains to either a parallel or a serial architecture but, unlike the latter, explicitly assumes stochastic independence.
Perturbations caused by nonselective influences come in two major forms, direct and indirect, which are defined in Definition 5. Direct nonselective influence occurs when a probability distribution on processing time is an explicit, deterministic function of the “other” factor acting as parameter. Indirect nonselective influence is propagated by a process duration being stochastically dependent on the duration of a separate process.
When we give up pure selective influence, it might seem as if the proverbial doors to perdition itself might be opened. However, a robust generalization of the conception of selective influence, which permits a type of stochastic dependence, was proposed by Dzhafarov ([39]). We refer to this as "conditional independence selective influence" (Definition 6). We reviewed some of what is known about its ability to expose architectures and stopping rules and proved some novel results. The reader is referred to the numerous references to his and his colleagues' work for detail.
Yet another species of selective influence comes in the guise where the marginal distributions are functions only of the "proper" factor, irrespective of potential nonselectivity of either the direct or indirect sort (Definition 7). The basic intuition is that, although the processes may be stochastically dependent and possibly even subject to direct nonselectivity, somehow the pernicious influences average out. It was lurking in the background at least as far back as Townsend ([23]) and, as admitted earlier in the present study, we incorrectly (and implicitly) believed that if it held, then canonical $DIC$s would perforce always be propagated. Subsequently, it began to be explicitly named "marginal selective influence". For the case of more than two subprocesses, Dzhafarov ([39]) introduced a stricter definition of marginal selectivity that implies the version for $n = 2$. It turns out that there is another way to further deconstruct marginal selectivity which can impact systems identification, as exhibited in Definition 8. Definition 8 breaks down the concept of marginal selectivity into two separate and mutually exclusive cases, strong vs. weak marginal selectivity. Marginal selectivity in either guise is not ordinarily accessible in standard RT experiments. However, there exist designs where it can itself be assessed (e.g., [52,54]). Its relationships with other forms of selectivity and structural features turn out to be intriguing as well.
Returning to the concept of indirect nonselectivity, Dzhafarov ([38]) investigated a number of fundamental properties and consequences of this concept (i.e., nonselectivity caused through stochastic dependence). However, its relationship to marginal selectivity remained unknown. Propositions 1A and 1B establish how the latter can be satisfied. The most intriguing aspect is that it can still be true even if both indirect and direct nonselective influence hold. This ability of direct nonselective influence to compensate for indirect perturbations is then defined as "offset" (Definition 9).
Propositions 1A and 1B limn the conditions that suffice for marginal selectivity in serial and parallel systems. Propositions 2A and 2B do the same regarding its failure. The importance of offset in allowing (though not forcing) marginal selectivity is sealed by Proposition 2A(c), which proves that, lacking offset and in the presence of indirect nonselective influence, marginal selectivity is impossible. An immediate inference deriving from Propositions 2A and 2B comes with Proposition 3: since the Dzhafarov definition of selective influence implies marginal selectivity, without offset but with indirect nonselective influence, the Dzhafarov condition must fail.
After deliberation and explication through more examples comes the inevitable issue of the impact of all these results on the identification of architecture, the backbone of systems factorial technology ([14]). The next section ensues with reminders of the foundational definitions of the double difference statistics which, under proper conditions of selective influence, permit such identification. These form the distinguishing characteristics of the mean interaction contrast ($MIC$) and the much more potent distributional interaction contrast ($DIC$, or its complement, the survivor interaction contrast, $SIC$). Definitions 10 and 11 remind the reader of these.
We consider the qualitative features of the architecture (and the pertinent stopping rule) predicted under pure selective influence to be characterized by the term "canonical". For instance, under pure selective influence, exhaustive parallel systems produce survivor interaction contrasts that are always negative ([14]). Exhaustive serial systems' survivor interaction contrasts start negative, cross the abscissa an odd number of times and end positive ([56]), and so on.
Proposition 4 states the fact, employed earlier by Dzhafarov, Schweickert and Sung ([45]), that if the $DIC$ is single signed, then the Dzhafarov definition of selective influence preserves the canonical $DIC$ signature. In the case of some architectures, integration was called upon by these authors to produce the single signed property. Although very useful, this device can also bring forth more mimicking of $DIC$ signatures. We revisit our new theorem on serial exhaustive processing below.
As acknowledged in the earlier text, we had, for a long spell, supposed that marginal selective influence would force canonical $DIC$ functions. One of the stark and surprising outcomes of the present theoretical enterprise was the finding that this belief might not be universally valid. On the one hand, we discovered for exhaustive parallel processing that, with marginal selectivity and the imposed condition that factors influence the joint distributions only through the marginals, canonical results do indeed emerge. That is, strong marginal selectivity is well named. Yet, if the factors can exert influence in ways outside the marginals, as with weak marginal selectivity, there can be (and probably usually will be) failure of the classical predictions. These findings are captured in Propositions 5 and 6. These discoveries were made possible through the use of copula theory, introduced into mathematical psychology by Colonius ([46,47,48]). Presently, we have not found a path to their exploitation with serial systems except for the means.
Proposition 7 explores the relationships of the two tiers of marginal selectivity to the Dzhafarov ([39]) definition of selective influence. Moving on to serial systems, we observed that the earlier Dzhafarov, Schweickert and Sung ([45]) results did not include the canonical serial predictions, perhaps because these are not single signed $DIC$ functions. We frankly expected that the canonical serial predictions would therefore not be valid even when the Dzhafarov condition was in force. Therefore, we were surprised to espy a simple proof that exhaustive serial systems enjoying selective influence of the Dzhafarov variety would indeed effectuate the canonical signature. This is Proposition 8.
A final, fairly patent result may also be viewed as a partial, if weak, justification for our earlier belief in the impact marginal selectivity can have on system identifiability: namely, marginal selectivity does imply the canonical $MIC$ result for exhaustive serial systems, $MIC = 0$.
We finally investigated the linkages of marginal selectivity and distributions on the serial sum of times in serial exhaustivity, with very minor success. We trust that future work by mathematical psychologists can cut the Gordian knot on this one.
The seemingly simple unadorned notion of selective influence has turned out to be rather more complex and multifaceted than perhaps earlier appreciated. We now advocate for more of a rich, multi-tiered and not necessarily linearly ordered-by-implication set of concepts. Various types of influence may come to be most appropriate or observable in somewhat distinct circumstances. We look forward to further advances in this quest in the near future.

Author Contributions

Conceptualization, J.T.T.; Formal analysis, J.T.T. and Y.L.; Investigation, J.T.T. and Y.L.; Methodology, J.T.T.; Project administration, J.T.T.; Validation, J.T.T. and Y.L.; Visualization, Y.L.; Writing—original draft, J.T.T.; Writing—review and editing, J.T.T. and Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors wish to express their appreciation for discussion and useful ideas relative to Proposition 5 from Hari Bercovici of Department of Mathematics, Indiana University, and to Proposition 6 from Philippe Barbe, formerly of the French National Center for Scientific Research.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proposition 2A(c) in the Finite Case

All of our own previous theoretical work and almost all of the others’ theoretical and empirical applications of SFT to response times have assumed continuous densities. However, some investigators have engaged discrete probability distributions in various venues (e.g., see a number of these in the review by [57]; see also [58]). The following offers a proof of Proposition 2A(c) in the finite case. Readers with a background in linear algebra may also find the following demonstration more intuitive. We also lay out the steps in even more detail. The greater strength lies in the fact that the discrete settings permit well-behaved inverses under general conditions. Inverse functions will exist under analogous conditions in continuous function space, but they themselves may not be well behaved (e.g., continuous almost everywhere, etc.). We assume non-degenerate probability mass distributions and effective action of the factors, just as before.
Proposition A1.
In 1-directional serial systems, suppose the dependent random variables $T_a$ and $T_b$ take on finite sets of $n$ distinct real values, $T_a \in \{t_{a1}, t_{a2}, \ldots, t_{a,n-1}, t_{an}\}$ and $T_b \in \{t_{b1}, t_{b2}, \ldots, t_{b,n-1}, t_{bn}\}$. Indirect nonselective influence is present but not direct nonselective influence, so that $f_a(t_{ai}; A, B) = f_a(t_{ai}; A)$ for all $i$ and $f_b(t_{bj}; A, B \mid t_{ai}) = f_b(t_{bj}; B \mid t_{ai}) \ne f_b(t_{bj}; B)$ for all $i, j$. Note that, mathematically, we could prove the more general result for Lebesgue integrable functions and then specialize, in the form of a corollary, to the finite result, but here we pursue the opposite course to optimize comprehensibility.
Further, assume that there exist $n$ distinct values for $A = \{A_1, A_2, \ldots, A_{n-1}, A_n\}$ and $B = \{B_1, B_2, \ldots, B_{n-1}, B_n\}$ and that all functions are non-trivial functions of their independent variables. Finally, assume that the square matrix operator:
$$\Lambda_{n \times n} = \begin{pmatrix} f_a(t_{a1}; A_1) & f_a(t_{a2}; A_1) & \cdots & f_a(t_{a,n-1}; A_1) & f_a(t_{an}; A_1) \\ \vdots & \vdots & & \vdots & \vdots \\ f_a(t_{a1}; A_n) & f_a(t_{a2}; A_n) & \cdots & f_a(t_{a,n-1}; A_n) & f_a(t_{an}; A_n) \end{pmatrix}$$
is nonsingular. Moreover, consider the vector $\langle f_b(t_{bj}; B \mid t_{ai}) \rangle_{1 \times n}$, where $j$ is fixed at some $j = 1, 2, \ldots, n-1, n$ and, for each fixed $j$, $i$ varies over $i = 1, 2, \ldots, n-1, n$. Then, marginal selectivity is impossible.
Proof. 
First, observe that $\sum_{j=1}^n f_a(t_{aj}; A_i) = 1$ for each $i$, so that $\Lambda_{n \times n} \langle 1, \ldots, 1 \rangle_n^T = \mathbf{1}_{n \times 1}$. This implies that $\langle 1, \ldots, 1 \rangle_n^T$ is an eigenvector of $\Lambda_{n \times n}$ corresponding to eigenvalue 1, whence:
$$\Lambda_{n \times n}^{-1} \langle 1, \ldots, 1 \rangle_n^T = \langle 1, \ldots, 1 \rangle_n^T. \tag{A1}$$
Next, observe that:
$$\begin{aligned} \Lambda_{n \times n} \langle f_b(t_{bj}; B \mid t_{ai}) \rangle_n^T &= \begin{pmatrix} f_a(t_{a1}; A_1) f_b(t_{bj}; B \mid t_{a1}) + f_a(t_{a2}; A_1) f_b(t_{bj}; B \mid t_{a2}) + \cdots + f_a(t_{an}; A_1) f_b(t_{bj}; B \mid t_{an}) \\ f_a(t_{a1}; A_2) f_b(t_{bj}; B \mid t_{a1}) + f_a(t_{a2}; A_2) f_b(t_{bj}; B \mid t_{a2}) + \cdots + f_a(t_{an}; A_2) f_b(t_{bj}; B \mid t_{an}) \\ \vdots \\ f_a(t_{a1}; A_n) f_b(t_{bj}; B \mid t_{a1}) + f_a(t_{a2}; A_n) f_b(t_{bj}; B \mid t_{a2}) + \cdots + f_a(t_{an}; A_n) f_b(t_{bj}; B \mid t_{an}) \end{pmatrix} \\ &= \begin{pmatrix} \sum_{m=1}^n f_{ab}(t_{am}, t_{bj}; A_1, B) \\ \sum_{m=1}^n f_{ab}(t_{am}, t_{bj}; A_2, B) \\ \vdots \\ \sum_{m=1}^n f_{ab}(t_{am}, t_{bj}; A_n, B) \end{pmatrix} \end{aligned}$$
Now, let us assume that indirect selectivity also holds, so that marginal selectivity is satisfied. This means that $\sum_{m=1}^n f_{ab}(t_{am}, t_{bj}; A_i, B) = f_b(t_{bj}; B)$ for all $j = 1, 2, \ldots, n$ and all $B$; notice that $f_b(t_{bj}; B)$ is identical across the different $A_i$.
Then, it follows that:
$$\Lambda_{n \times n} \langle f_b(t_{bj}; B \mid t_{ai}) \rangle_n^T = f_b(t_{bj}; B) \langle 1, \ldots, 1 \rangle_n^T \;\Longrightarrow\; \langle f_b(t_{bj}; B \mid t_{ai}) \rangle_n^T = f_b(t_{bj}; B)\, \Lambda_{n \times n}^{-1} \langle 1, \ldots, 1 \rangle_n^T = f_b(t_{bj}; B) \langle 1, \ldots, 1 \rangle_n^T \quad (\text{substituting with Equation (A1)}).$$
It follows that if marginal selectivity holds, then $f_b(t_{bj}; B \mid t_{ai}) = f_b(t_{bj}; B)$ for all $i, j = 1, 2, \ldots, n$ and all $B$.
Hence, we arrive at the degenerate constraint: since $f_b(t_b; B)$ cannot depend on the value of $A$, the probability density for any $t_{bj}$ cannot depend on $t_a$, contradicting the assumed indirect nonselective influence. Therefore, marginal selectivity is impossible if $A$ and $t_a$ are effective. □

Appendix B. DICs Simulated from Two 1-Directional Serial Exhaustive Systems Using the Copula Strategy

The example is constructed from the 1-directional serial exhaustive systems that have the Farlie–Gumbel–Morgenstern (FGM) bivariate cumulative distribution (see [48], p. 259):
$$P(T_a \le t_a, T_b \le t_b; A, B) = F_a(t_a; A) F_b(t_b; B)\big\{ 1 + \alpha [1 - F_a(t_a; A)][1 - F_b(t_b; B)] \big\}.$$
We set the marginals $F_a(t_a) = 1 - e^{-At_a}$ and $F_b(t_b) = 1 - e^{-Bt_b}$, which satisfy marginal selectivity. Then, take the partial derivative of the CDF with respect to $t_a$:
$$P(T_a = t_a, T_b \le t_b; A, B) = \frac{\partial P(T_a \le t_a, T_b \le t_b; A, B)}{\partial t_a} = -A\alpha\, e^{-At_a - Bt_b}(1 - e^{-At_a})(1 - e^{-Bt_b}) + A e^{-At_a}(1 - e^{-Bt_b})\big(1 + \alpha\, e^{-At_a - Bt_b}\big).$$
The CDF of the total processing time of the system, $P(T_a + T_b \le t; A, B)$, is obtained by replacing $t_b$ with $t - t_a$ and integrating with respect to $t_a$ from 0 to $t$. The simulated $DIC$ is shown in Figure A1, and $MIC = 0$ in all cases.
Figure A1. Estimated $DIC(t)$s simulated from a 1-direction serial exhaustive system having the FGM CDF with different $\alpha$ values. $A_L = 3$, $A_H = 12$, $B_L = 1$, $B_H = 4$.

References

1. Townsend, J.T.; Ashby, G.F. Stochastic Modeling of Elementary Psychological Processes; CUP Archive: Cambridge, UK, 1983.
2. Sternberg, S. High-speed scanning in human memory. Science 1966, 153, 652–654.
3. Townsend, J.T. Mock parallel and serial models and experimental detection of these. In Purdue Centennial Symposium on Information Processing; Purdue University Press: West Lafayette, IN, USA, 1969; pp. 617–628.
4. Townsend, J.T. A note on the identifiability of parallel and serial processes. Percept. Psychophys. 1971, 10, 161–163.
5. Townsend, J.T. Some results concerning the identifiability of parallel and serial processes. Br. J. Math. Stat. Psychol. 1972, 25, 168–199.
6. Treisman, A.M.; Gelade, G. A feature-integration theory of attention. Cogn. Psychol. 1980, 12, 97–136.
7. Atkinson, R.; Holmgren, J.; Juola, J. Processing time as influenced by the number of elements in a visual display. Percept. Psychophys. 1969, 6, 321–326.
8. Algom, D.; Eidels, A.; Hawkins, R.X.; Jefferson, B.; Townsend, J.T. Features of response times: Identification of cognitive mechanisms through mathematical modeling. In The Oxford Handbook of Computational and Mathematical Psychology; Oxford University Press: Oxford, UK, 2015; pp. 63–98.
9. Wolfe, J.M. Visual search revived: The slopes are not that slippery: A reply to Kristjansson (2015). i-Perception 2016, 7, 2041669516643244.
10. Little, D.R.; Eidels, A.; Houpt, J.W.; Yang, C.T. Set size slope still does not distinguish parallel from serial search. Behav. Brain Sci. 2017, 40, 3.
11. Sternberg, S. Memory-scanning: Mental processes revealed by reaction-time experiments. Am. Sci. 1969, 57, 421–457.
12. Thomas, R.D. Processing time predictions of current models of perception in the classic additive factors paradigm. J. Math. Psychol. 2006, 50, 441–455.
13. Dutilh, G.; Annis, J.; Brown, S.D.; Cassey, P.; Evans, N.J.; Grasman, R.P.; Hawkins, G.E.; Heathcote, A.; Holmes, W.R.; Krypotos, A.M.; et al. The quality of response time data inference: A blinded, collaborative assessment of the validity of cognitive models. Psychon. Bull. Rev. 2019, 26, 1051–1069.
14. Townsend, J.T.; Nozawa, G. Spatio-temporal properties of elementary perception: An investigation of parallel, serial, and coactive theories. J. Math. Psychol. 1995, 39, 321–359.
15. Sternberg, S. The discovery of processing stages: Extensions of Donders' method. Acta Psychol. 1969, 30, 276–315.
16. Townsend, J.T.; Wenger, M.J.; Houpt, J.W. Uncovering mental architecture and related mechanisms in elementary human perception, cognition and action. Stevens' Handb. Exp. Psychol. 2018, 3, 429–454.
17. Harding, B.; Goulet, M.A.; Jolin, S.; Tremblay, C.; Villeneuve, S.P.; Durand, G. Systems factorial technology explained to humans. Tutorials Quant. Methods Psychol. 2016, 12, 39–56.
18. Little, D.R.; Altieri, N.; Fific, M.; Yang, C.T. Systems Factorial Technology: A Theory Driven Methodology for the Identification of Perceptual and Cognitive Mechanisms; Academic Press: Cambridge, MA, USA, 2017.
19. Houpt, J.W.; Little, D.R.; Eidels, A. Developments in Systems Factorial Technology: Theory and applications. Spec. Issue J. Math. Psychol. 2019, 27, 11.
20. Dzhafarov, E.N.; Jordan, J.S.; Zhang, R.; Cervantes, V.H. Contextuality from Quantum Physics to Psychology; World Scientific: Singapore, 2015; Volume 6.
21. Townsend, J.T.; Wenger, M.J. On the dynamic perceptual characteristics of gestalten: Theory-based methods. In Oxford Handbook of Perceptual Organization; Oxford University Press: Oxford, UK, 2014.
22. Townsend, J.T.; Altieri, N. An accuracy–response time capacity assessment function that measures performance against standard parallel predictions. Psychol. Rev. 2012, 119, 500.
23. Townsend, J.T. Uncovering mental processes with factorial experiments. J. Math. Psychol. 1984, 28, 363–400.
24. Schweickert, R. A critical path generalization of the additive factor method: Analysis of a Stroop task. J. Math. Psychol. 1978, 18, 105–139.
25. Schweickert, R. The bias of an estimate of coupled slack in stochastic PERT networks. J. Math. Psychol. 1982, 26, 1–12.
26. Egeth, H.; Dagenbach, D. Parallel versus serial processing in visual search: Further evidence from subadditive effects of visual quality. J. Exp. Psychol. Hum. Percept. Perform. 1991, 17, 551.
27. Townsend, J.T.; Ashby, G.F. Methods of modeling capacity in simple processing systems. In Cognitive Theory; Psychology Press: Hove, East Sussex, UK, 1978; Volume III, pp. 200–239.
28. Townsend, J.T. Truth and consequences of ordinal differences in statistical distributions: Toward a theory of hierarchical inference. Psychol. Bull. 1990, 108, 551.
29. Townsend, J.T.; Schweickert, R. Toward the trichotomy method of reaction times: Laying the foundation of stochastic mental networks. J. Math. Psychol. 1989, 33, 309–327.
30. Schweickert, R.; Townsend, J.T. A trichotomy: Interactions of factors prolonging sequential and concurrent mental processes in stochastic discrete mental (PERT) networks. J. Math. Psychol. 1989, 33, 328–347.
31. Townsend, J.T.; Nozawa, G. Strong evidence for parallel processing with simple dot stimuli. Bull. Psychon. Soc. 1988, 26.
32. Kujala, J.V.; Dzhafarov, E.N. Testing for selectivity in the dependence of random variables on external factors. J. Math. Psychol. 2008, 52, 128–144.
33. Dzhafarov, E.N.; Kujala, J.V. The joint distribution criterion and the distance tests for selective probabilistic causality. Front. Psychol. 2010, 1, 151.
34. Zhang, R.; Liu, Y.; Townsend, J.T. A theoretical study of process dependence for critical statistics in standard serial models and standard parallel models. J. Math. Psychol. 2019, 92, 102277.
35. Houpt, J.W.; Townsend, J.T.; Jefferson, B. Stochastic foundations of elementary mental architectures. In New Handbook of Mathematical Psychology; Cambridge University Press: Cambridge, UK, 2018; Volume 2, pp. 104–128.
36. Townsend, J.T.; Liu, Y.; Zhang, R. Selective influence and classificatory separability (perceptual separability) in perception and cognition: Similarities, distinctions, and synthesis. In Systems Factorial Technology; Elsevier: Amsterdam, The Netherlands, 2017; pp. 93–114.
37. Townsend, J.T.; Thomas, R.D. Stochastic dependencies in parallel and serial models: Effects on systems factorial interactions. J. Math. Psychol. 1994, 38, 1–34.
38. Dzhafarov, E.N. Conditionally selective dependence of random variables on external factors. J. Math. Psychol. 1999, 43, 123–152.
39. Dzhafarov, E.N. Selective influence through conditional independence. Psychometrika 2003, 68, 7–25.
40. Dzhafarov, E.N.; Schweickert, R. Decompositions of response times: An almost general theory. J. Math. Psychol. 1995, 39, 285–314.
41. Howard, Z.L.; Garrett, P.; Little, D.R.; Townsend, J.T.; Eidels, A. A show about nothing: No-signal processes in systems factorial technology. Psychol. Rev. 2021, 128, 187.
42. Luce, R.D.; Edwards, W. The derivation of subjective scales from just noticeable differences. Psychol. Rev. 1958, 65, 222.
43. Townsend, J.T. The mind-body equation revisited. Philos. Res. Arch. 1975, 4, 196–246.
44. Dzhafarov, E.N. Unconditionally selective dependence of random variables on external factors. J. Math. Psychol. 2001, 45, 421–451.
45. Dzhafarov, E.N.; Schweickert, R.; Sung, K. Mental architectures with selectively influenced but stochastically interdependent components. J. Math. Psychol. 2004, 48, 51–64.
46. Colonius, H.; Diederich, A. Dependency in multisensory integration: A copula-based analysis. Philos. Trans. R. Soc. A 2019, 377, 20180364.
47. Colonius, H. An invitation to coupling and copulas: With applications to multisensory modeling. J. Math. Psychol. 2016, 74, 2–10.
48. Colonius, H. Possibly dependent probability summation of reaction time. J. Math. Psychol. 1990, 34, 253–275.
49. Townsend, J.T.; Liu, Y.; Zhang, R.; Wenger, M.J. Interactive parallel models: No Virginia, violation of Miller's race inequality does not imply coactivation and yes Virginia, context invariance is testable. Quant. Methods Psychol. 2020, 16, 192–212.
50. Nelsen, R.B. An Introduction to Copulas; Springer: New York, NY, USA, 1999.
51. Parzen, E. Modern Probability Theory and Its Applications; Wiley: New York, NY, USA, 1962.
52. Dzhafarov, E.N.; Kujala, J.V.; Cervantes, V.H.; Zhang, R.; Jones, M. On contextuality in behavioural data. Philos. Trans. R. Soc. Math. Phys. Eng. Sci. 2016, 374, 20150234.
53. Zhang, R.; Dzhafarov, E.N. Noncontextuality with marginal selectivity in reconstructing mental architectures. Front. Psychol. 2015, 6, 735.
54. Zhang, R.; Dzhafarov, E.N. Testing contextuality in cyclic psychophysical systems of high ranks. In International Symposium on Quantum Interaction; Springer: New York, NY, USA, 2016; pp. 151–162.
55. Feller, W. An Introduction to Probability Theory and Its Applications, Volume 2; John Wiley & Sons: Hoboken, NJ, USA, 2008.
56. Yang, H.; Fific, M.; Townsend, J.T. Reprint of survivor interaction contrast wiggle predictions of parallel and serial models for an arbitrary number of processes. J. Math. Psychol. 2014, 59, 82–94.
57. Schweickert, R.; Fisher, D.L.; Sung, K. Discovering Cognitive Architecture by Selectively Influencing Mental Processes; World Scientific: Singapore, 2012; Volume 4.
58. Kujala, J.V.; Dzhafarov, E.N. Regular Minimality and Thurstonian-type modeling. J. Math. Psychol. 2009, 53, 486–501.
Figure 1. (a) The survivor function for $T_b$ conditioned on $t_a$ in example 1. $S_b(t_b = 1; B \mid t_a)$ decreases with increases in $t_a$ and $B$. (b) The marginal survivor function for $T_b$ in example 1. $S_b(t_b = 1; B)$ increases with $A$ and decreases with $B$.
Figure 2. (a) The survivor function for $T_b$ in Proposition 1(c). $S_b(t_b = 1; B = 2, A \mid t_a)$ decreases with increases in $t_a$ and $A$. (b) The marginal survivor function for $T_b$ in Proposition 1(c). $S_b(t_b = 1; B)$ decreases with $B$.
Figure 3. $MIC$ for serial systems, parallel exhaustive systems and parallel self-terminating systems.
Figure 4. S I C ( t ) s simulated from a serial exhaustive system and a parallel exhaustive system with the same set of parameters for the density functions of the sub-channel processing time.
Figure 5. The D I C ( t ) estimated from a parallel exhaustive system that satisfies weak marginal selectivity with A L = 3 , A H = 12 , B L = 1 , B H = 4 .
Figure 6. Resulting $MIC$ from the counterexample in the proof of Corollary 2(b).
Figure 7. $DIC(t)$ of a 1-direction serial exhaustive system that obeys Dzhafarov's definition.
Figure 8. Estimated D I C ( t ) from the example 1-direction serial exhaustive system in Proposition 1A(c) with A L = 3 , A H = 8 , B L = 4 , B H = 6 .
Figure 9. Estimated $DIC(t)$s from a 1-direction serial exhaustive system having Gumbel's bivariate exponential distribution, where $q(A, B) = B/A^n$. $A_L = 3$, $A_H = 12$, $B_L = 1$, $B_H = 4$.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
