Analytics · Article · Open Access

29 December 2025

Can Length Limit for App Titles Benefit Consumers?

1 Faculty of Economics, Kyoto Sangyo University, Kyoto 603-8555, Japan
2 Department of Communications Management, Shih Hsin University, Taipei 116, Taiwan
3 Department of Business Management, National Sun Yat-Sen University, Kaohsiung 804, Taiwan
4 Farwind Industrial Corporation, Taichung 420, Taiwan

Abstract

The App Store introduced a title-length limit for mobile apps in 2016, and similar policies were later adopted across the industry. This issue drew considerable attention from industry practitioners in the 2010s. Using both empirical and theoretical approaches, this paper examines the effectiveness of this policy and its welfare implications. Title length became an issue because some sellers assemble meaningful keywords in the app title to convey information to consumers, while others combine irrelevant yet popular keywords in an attempt to increase their app’s downloads. We hypothesize that when titles are short, title length is positively associated with an app’s performance because both honest and opportunistic sellers coexist in the market. However, due to the presence of opportunistic sellers, once titles become too long, this positive relationship disappears. We examine this hypothesis using a random sample of 1998 apps from the App Store in 2015. Our results show that for apps with titles longer than 30 characters, title length remains positively associated with app performance. However, for titles exceeding 50 characters, we do not have sufficient evidence to conclude that further increases in length continue to generate additional downloads. To interpret our empirical findings, we construct communication games between an app seller and a consumer, in which the equilibrium is characterized by a threshold. Based on our model and empirical observations, the 30-character limit might hurt consumers.
JEL Classification:
D83; L86; M37

1. Introduction

In September 2016, the App Store introduced a 50-character limit for the names of mobile apps (App Store Review Guidelines History n.d. [1]). This forced more than 25% of the top apps to change their names, because the previous limit on app names had been 255 characters (McCabe 2016 [2]). After June 2017, Apple further reduced the limit on app titles to 30 characters. This change is believed to have come in response to 'keyword stuffing' in app titles, a strategy in which app publishers assembled keywords into long titles with the general goal of increasing total downloads of their apps. Keyword stuffing originated in the 1990s as a means of obtaining top search engine rankings by including targeted keywords numerous times on a page; although modern search engines have invalidated such techniques, the idea was grafted onto the naming of apps, and it was a popular method for improving the visibility of apps before 2016. The effectiveness and welfare implications of length-limit policies for mobile app titles were thus an important issue among industry practitioners in the 2010s.
The purpose of this study is to analyze the effect of keyword stuffing and to discuss whether this 30-character limit, or a return to the 50-character limit, benefits or hurts consumers and app sellers. To this end, we adopt propensity score methods. Specifically, we apply the semiparametric estimation approach for multivalued treatment effects developed by Cattaneo (2010) [3] and Cattaneo et al. (2013) [4] to a random sample of 1998 apps from 2015. To interpret our empirical results, we also extend the cheap talk models of game theory, formalized by Crawford and Sobel (1982) [5], to describe the relationship between consumers and sellers in the app market before 2016. To the best of our knowledge, although many papers have scrutinized the effects of various features of app titles on consumer behavior, no paper has directly examined the effectiveness and welfare implications of length-limit policies for mobile app titles, or combined empirical and game-theoretic approaches as we do.
The App Store is a crowded market. In May 2016, the App Store offered a total of 2,309,309 apps for download, with quite diverse attributes; by January 2022, 4,796,774 apps were available (Pocket Gamer n.d. [6]). It is challenging for app developers to get consumers to discover their apps, and it is equally difficult for consumers to find a specific app that exactly matches their needs. Practitioners refer to the process of improving the visibility of apps in app stores (such as the App Store or Google Play) as app store optimization (ASO). Many ASO marketers believe that creating a proper app title is crucial in ASO (e.g., App Radar n.d. [7]). One common strategy was to bring together numerous keywords as the app title (Perez 2015 [8]; McCabe 2016 [2]). For example, the title of the Google navigation app was 'Google Maps—Real-time navigation, traffic, transit, and nearby places.' This practice makes app titles prolix but informative and increases the visibility of apps. Because we expect consumers to incur a cost in examining the attributes of each app, this practice provides a shortcut for consumers to find an app that matches their needs, whether through search engines or by browsing a list. As such, an informative title can drive more downloads than a title with little information.
However, some developers used too many popular, but irrelevant, keywords in their titles, so that their app titles read as ‘a comma separated list’ (McCabe 2016 [2]). Moreover, once the market is full of apps with long and meaningless titles, misled consumers may disregard titles as a channel to identify apps that match their needs. Senior ASO marketers, such as Patel (2014) [9], convinced app publishers that lengthy titles make the apps look unprofessional, whilst also creating unpleasant user experiences that can ultimately destroy the willingness of consumers to download apps. He recommended that the optimal title length not exceed 25 characters. In 2016, Apple decided to reduce the number of characters allowed in app names to crack down on the misuse of keyword stuffing, even though lengthy app names composed of relevant keywords may benefit consumers.
ASO through keywords is an important practice for promoting apps (Karagkiozidou et al. 2019 [10]; Padilla-Piernas et al. 2020 [11]). An increasing number of studies have discussed this issue, but knowledge about the implications and effectiveness of the method is still limited. The current literature has examined, for example, how consumers' decisions to download an app are affected by seller-dependent factors (such as brand image, keywords in the description, price, system requirements, and the last update) and how an app's popularity is affected by consumer-dependent factors such as reviews, the average rating, and the number of downloads (Martin et al. 2017 [12]; Karagkiozidou et al. 2019 [10]; Strzelecki 2020 [13]; Stocchi et al. 2022 [14]). (Martin et al. (2017) [12], which surveyed the literature on App Store Analysis for Software Engineering between 2000 and 2015 based on 127 articles, complements the survey articles on ASO. Karagkiozidou et al. (2019) [10] provided the first systematic review of the ASO literature, based on nine published papers. Strzelecki (2020) [13] gave a brief overview of the literature appearing after the survey by Karagkiozidou et al. (2019) [10], focusing on the correlation between multiple factors, including selected keywords in the app name, and the number of downloads. Stocchi et al. (2022) [14] presented an integrative review of existing marketing research on mobile apps, based on 471 studies.) However, to our knowledge, no research directly examines the relationship between the length of an app name and consumers' decisions to download the app in the way we do. Therefore, we do not know whether the length-limit policies introduced by the App Store benefit consumers and app sellers. The present study is a first attempt to provide implications for this problem.
We make the hypothesis that there is a threshold for the length of app titles. When titles are short, title length is positively associated with an app’s performance. Once the length of an app title exceeds the threshold, it is difficult to get more downloads by increasing its length.
Our empirical results indicate that before the App Store restricted title length in 2016, for apps whose titles were below the threshold, apps with longer titles had better market performance than those with shorter titles. Specifically, we randomly sampled 1998 apps in 2015 from an official directory listing all available apps and divided them into three groups: Group 0 contains apps with titles of at most 5 words, Group 1 contains apps with titles of 6 to 8 words, and Group 2 contains apps with titles longer than 8 words. Our results reveal that the apps in Group 2 had better market performance than those in the other groups, and the apps in Group 1 performed better than those in Group 0. However, we did not find enough evidence to claim that apps with titles longer than 11 words performed better than apps with titles of 9 to 11 words. Since the average word in the titles in our sample is approximately 6 characters long, we can conclude that the threshold was above 5 words (roughly 30 characters).
To provide a possible explanation for why the threshold can exist, we propose communication games between a representative app seller and a representative consumer. The seller app has certain specific attributes. The consumer downloads the app if the total benefit associated with the attributes exceeds the cost. The seller advertises the attribute information (assembling keywords as his app title). However, the seller can be honest or rebellious. (It would be more appropriate in game theory if the honest (rebellious) seller is called a non-strategic good (strategic bad) type. However, for the purpose of our study, we use the words “honest” and “rebellious” to describe the types of seller.) The honest seller tells the truth, but the rebellious seller may lie to increase downloads. We show that the equilibrium is characterized by a threshold. For advertisements longer than the threshold, the informativeness is limited because such advertisements include those of the rebellious seller who lies. Hence, increasing its length does not bring more downloads because the consumer has recognized the situation in which the rebellious seller stuffs irrelevant keywords in the app title, and the consumer has thus adjusted her behavior. However, when the length is below the threshold, the seller can transmit more beneficial information to the consumer and induce more downloads by increasing the length. Consequently, a policy of limiting the length of the title may hurt the consumer. If the new length limit is below the original threshold mentioned above, then the loss created by limiting the information transmission of an honest seller exceeds the gain of preventing noise caused by rebellious sellers because the consumer has already adjusted her prior behavior. If the new length limit is above the original threshold, the policy does not affect the quality of information transmission.
For the theoretical model, we use the cheap talk communication setting formalized by Crawford and Sobel (1982) [5]. There are a number of applications of cheap talk models to market transactions. Gardete and Bart (2018) [15] examined how transparency in seller motives helps sellers 'tailor' communications with customers. Chakraborty and Harbaugh (2014) [16] investigated the effectiveness of 'puffery' (the marketing strategy of emphasizing selected attributes). The idea of introducing non-strategic players (an honest seller) into cheap talk models was studied by Chen, Kartik, and Sobel (2008) [17] and Chen (2011) [18]. Our setup with a limited number of messages (the length-limit policy) is related to the game-theoretic models of optimal organizational languages by Crémer et al. (2007) [19].
The remainder of this paper is organized as follows. Section 2 discusses the hypothesis and then explains the materials and the empirical method used in this study. Subsequently, Section 3 reports the results. Section 4 discusses communication games that may support our empirical results. Finally, Section 5 concludes this paper.

2. Data and Empirical Strategy

2.1. Hypotheses

We hypothesize that there is a threshold for the length of app titles such that an app title longer than the threshold does not get more downloads. If the title of the app is shorter than the threshold, increasing the length helps the app seller to get more downloads. We will later propose a game theoretic model to explain why such a threshold occurs and how the threshold affects welfare.
To examine our hypotheses, we empirically investigate whether apps with longer titles exhibited better market performance than those with shorter titles (before the 2016 App Store’s title length restriction), and whether further increases in title length continued to affect performance once titles exceeded a certain threshold. Unfortunately, after the implementation of the title-length restriction, apps with titles exceeding 30 characters essentially no longer existed; consequently, it becomes impossible to analyze whether increasing title length beyond 30 characters would continue to improve app performance today. We then assess whether this threshold lies above 30 or 50 characters.

2.2. Data

We randomly sample 1998 apps in 2015 from an official directory that lists all available apps. (The official list providing the information on available apps in the App Store could be found at: https://itunes.apple.com/us/genre/ios/id36?mt=8 (accessed on 6 June 2018).) According to Dillman, Smyth, and Christian (2009) [20] and Champ (2017) [21], a sample size of around 380 is enough to obtain a ±5% sampling error whether the study population is 10,000 or 100 million. In contrast to previous studies, which discussed only the apps in the charts (e.g., Jung et al. 2012 [22]; Lee and Raghu 2014 [23]), we consider all available apps in the App Store. Google Play imposed similar title-length regulations; however, since only Apple provides a public list of all apps, our analysis focuses on the App Store. Because the app market is very competitive, most apps have never been in the top-100, top-200, or top-300 charts. A study of the apps on the charts is therefore an analysis of extremely successful apps only; if the features of successful apps differ substantially from those of general apps, the conclusions of such an analysis would apply to only a small part of the available apps in the market.
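The ±5% figure follows from the standard formula for the sampling error of a proportion: for large populations, the required sample size is essentially independent of the population size. A minimal sketch (the function name is ours, not from the cited references):

```python
import math

def required_sample_size(margin=0.05, z=1.96, p=0.5):
    """Sample size needed to estimate a proportion p within +/- margin
    at the confidence level implied by z. With the conservative choice
    p = 0.5 and a 95% level (z = 1.96), the answer is ~385 regardless
    of whether the population is 10,000 or 100 million."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(required_sample_size())  # 385
```

A sample of 1998 apps is therefore comfortably above this bound, which matters here because the sample is further split into three or four title-length groups.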
Since it is costly to observe all the available apps, we make a random sample from the official directory in September 2015, where all the available apps were classified into 23 categories and arranged by title in each category. We take a random sample from each category. Table 1 shows the proportion of the number of apps in each category relative to the number of all apps, and the number of apps we select from each category.
Table 1. Ratio of the number of apps for each category in the App Store.
In this sample, we record both time-variant and time-invariant features of the selected apps. For time-variant features, the observation date is 15 December 2015. Our dependent variable, $Y_i$, is an index indicating whether a selected app was in the Global Rank chart on the observation date. The Global Rank chart is created by the mobile attribution and analytics company Adjust Inc. (https://www.adjust.com/ (accessed 15 June 2017)) and provides the global ranking of a selected app if the app is ranked in the top 300,000. Since Apple does not release download or sales data for individual apps, practitioners usually monitor App Store rankings as an index of an app's market performance, and Garg and Telang (2013) [24] showed that researchers can reasonably infer downloads from these ranking data. However, even though the Global Rank chart reports rankings down to the top 300,000, more than three-fourths of available apps remain outside the chart. We therefore use whether a selected app was in the chart (top 300,000) as a proxy measure of app performance in the market. Yin et al. (2014) [25] also treated appearance in a chart as a measure of success; the difference is that their chart was a top-300 ranking and therefore focused on extremely successful apps.
Our key explanatory variable is the title length of each app, measured as the number of words. (If we measure title length using the number of characters and group the apps based on the 30-character and 50-character thresholds, we obtain the same results. However, to better align with our theoretical model, which is based on information conveyed per word, we define groups over words.) Because the number of characters per word in a logographic language, such as Chinese, differs from that in alphabetic languages, title lengths in the two writing systems are counted quite differently. In our sample, the average word in alphabetic-language titles is around 6 characters long, whereas in Chinese and other logographic languages a single word may use 1–3 characters, depending on the encoding system. Apps whose titles contain Chinese have an average title length of 9.0 words and 22.0 characters, while apps with titles in alphabetic languages average 3.8 words and 21.4 characters; the two types of apps thus differ substantially in how many words their titles use. We therefore drop observations whose titles are written only in logographic writing systems.
In order to examine whether long titles could help and compare the 30-character (5-word) limit with the threshold, we divide our sample into three groups: Group 0 contains apps having titles of shorter than or equal to 5 words, Group 1 includes apps for which the title length is longer than 5 words but shorter than or equal to 8 words, and Group 2 includes apps having titles longer than 8 words. Our goal is to examine whether apps in different groups have different probabilities of appearing in the chart, and especially to examine whether apps in Group 2 performed better than those in Group 1.
Similarly, to compare the 50-character (around 8-word) limit with the threshold, we divide our sample into four groups: Group 0 contains apps with titles of at most 5 words; Group 1, apps with titles longer than 5 but at most 8 (5 + 3) words; Group 2, apps with titles longer than 8 but at most 11 (8 + 3) words; and Group 3, apps with titles longer than 11 words. Our goal here is to examine whether an app in Group 3 has a higher probability of appearing in the chart than one in Group 2.
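As a concrete illustration, the two grouping rules can be written as simple functions (hypothetical helper names; word counts are taken over whitespace-separated tokens):

```python
def group3(n_words):
    """Three-way grouping for Analysis 1 (30-character limit)."""
    if n_words <= 5:
        return 0   # Group 0: at most 5 words
    if n_words <= 8:
        return 1   # Group 1: 6-8 words
    return 2       # Group 2: more than 8 words

def group4(n_words):
    """Four-way grouping for Analysis 2 (50-character limit)."""
    if n_words <= 5:
        return 0
    if n_words <= 8:
        return 1
    if n_words <= 11:
        return 2   # Group 2: 9-11 words
    return 3       # Group 3: more than 11 words

title = "Google Maps Real-time navigation traffic transit and nearby places"
n_words = len(title.split())   # 9 words
print(group3(n_words), group4(n_words))  # 2 2
```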
Table 2 summarizes the explanations of the dependent variables and lists all of the covariates that we use in the analysis.
Table 2. Lists of variables and descriptive statistics.

2.3. Empirical Strategy

In the past decade, statistical matching techniques have become increasingly popular among researchers due to several appealing features. First, this approach mimics experimental design, making causal inference more intuitive. Second, statistical matching often enables researchers to obtain higher-quality estimates compared with traditional regression techniques. In many cases, balancing the observed characteristics also helps balance unobserved characteristics that are correlated with them (Caliendo and Kopeinig 2008 [26]).
In addition, we adopt propensity score weighting rather than traditional propensity score matching within the framework of propensity score analysis. In the weighting approach, we obtain a balanced sample by weighting observations from different groups according to their propensity scores. Compared with selecting observations with similar propensity scores to construct a balanced sample (i.e., traditional propensity score matching), this method has properties that are easier to characterize and produces balanced samples more effectively (Hainmueller 2012 [27]). We also apply a method that combines regression adjustment with propensity score weighting, which is doubly robust: even if one of the two models, the model generating the propensity scores or the model estimating the conditional mean, is misspecified, the results remain valid (Wooldridge 2010 [28]).
To employ statistical matching techniques, we cast our estimation problem in the multi-valued potential outcome framework introduced by Cattaneo (2010) [3], which extends the traditional Rubin causal model of binary treatment so that multi-valued treatment effects can be jointly estimated. In the analysis where we divide our sample into three groups, we suppose that for an app $i$ there are three potential outcomes: $Z_{i0}$, $Z_{i1}$, and $Z_{i2}$. Here, $Z_{i0}$ is the value of the outcome if $i$ has a title of at most 5 words; we say that $i$ receives no treatment, or that $i$ is at treatment level 0. $Z_{i1}$ is the value of the outcome if the title of $i$ is between 6 and 8 words long; we call this receiving level 1 treatment. If $i$ has a title longer than 8 words, then $Z_{i2}$ is the value of the outcome; we call this receiving level 2 treatment. For app $i$, the effect of a long title relative to a 'common' (or brief) title, called a treatment effect, is the difference between $Z_{i1}$ and $Z_{i0}$ or between $Z_{i2}$ and $Z_{i0}$.
We observe $Y_i$, $G_i$, and $X_i$, where $Y_i$ is the observed dependent variable, $G_i$ denotes the group to which app $i$ belongs (the treatment level $i$ actually received), and $X_i$ is a vector of covariates. We also define three indicators $W_{ij}$, which take the value 1 if $i$ is in Group $j$ (i.e., $G_i = j$) and 0 otherwise. In this framework, the value of $Y_i$ is given by
$$Y_i = W_{i0} Z_{i0} + W_{i1} Z_{i1} + W_{i2} Z_{i2}.$$
Since app $i$ can be in only one group (that is, it can carry only one title), only one of the three potential outcomes is observed. Although the two counterfactual potential outcomes cannot be observed for a specific app, under the assumptions of unconfoundedness and overlap we can still estimate the means of $Z_0$, $Z_1$, and $Z_2$ among the apps, or their differences (e.g., the difference between the mean of $Z_1$ and the mean of $Z_0$) (Wooldridge 2010 [28]; Cattaneo 2010 [3]; Cattaneo et al. 2013 [4]). The latter are known as average treatment effects. Cattaneo (2010) [3] combines the assumptions of unconfoundedness and overlap into a single assumption of "selection-on-observables." The assumption of unconfoundedness is also called ignorability or conditional independence (Wooldridge 2010 [28]). The two assumptions we use in this analysis are as follows:
(i) Unconfoundedness: For $j = 0, 1, 2$, $Z_j \perp W_j \mid X$. In other words, conditional on $X$, $W$ and $(Z_0, Z_1, Z_2)$ are independent.
(ii) Overlap: For $j = 0, 1, 2$ and for all $x \in \chi$, where $\chi$ is the support of the covariates, $0 < \Pr(G = j \mid X = x) < 1$. In other words, apps of every covariate type have a strictly positive probability of being in each group; the treatment that $i$ receives is not a deterministic function of the covariates.
Based on the assumption of unconfoundedness, we can obtain the following moment condition (Wooldridge 2010 [28]; Cattaneo et al. 2013 [4]): for $j = 0, 1, 2$,
$$E\left[\frac{W_j (Z_j - \mu_j)}{\Pr(G = j \mid X)}\right] = E\left[E\left[\frac{W_j (Z_j - \mu_j)}{\Pr(G = j \mid X)} \,\middle|\, X\right]\right] = E\left[\frac{E[W_j \mid X]\; E[Z_j - \mu_j \mid X]}{\Pr(G = j \mid X)}\right] = 0,$$
where $\mu_j = E[Z_j]$ (the mean of $Z_j$) and $E[W_j \mid X] = \Pr(G = j \mid X)$ (the probability of receiving level $j$ treatment). Motivated by this moment condition, Cattaneo (2010) [3] and Cattaneo et al. (2013) [4] proposed the inverse probability weighting (IPW) estimator, which extends the work of Hirano et al. (2003) [29] to a multi-valued treatment context. For the mean of $Z_j$ among apps, the IPW estimator $\hat{\mu}_{IPW,j}$ solves
$$\frac{1}{n} \sum_{i=1}^{n} \frac{W_{ij}}{\widehat{\Pr}(G_i = j \mid X_i)} \left(Y_i - \hat{\mu}_{IPW,j}\right) = 0,$$
where $\widehat{\Pr}(G = j \mid X)$ is the estimated propensity score, i.e., the estimated conditional probability that app $i$ receives level $j$ treatment. In our analysis, $\widehat{\Pr}(G = j \mid X)$ is specified as a multinomial logit model. Hence, the IPW estimator is a weighted mean of the observed outcomes, weighted by the inverse of the estimated conditional probability of the received treatment.
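To make the estimator concrete, the following is a minimal Python sketch on synthetic data (not the paper's dataset): a multinomial logit propensity model followed by the inverse-probability-weighted mean for each treatment level.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for the app data: X covariates, g the treatment
# level (title-length group 0/1/2), y the binary outcome (in the
# top-300k chart or not). Outcome probability rises with the group.
n = 2000
X = rng.normal(size=(n, 3))
g = rng.integers(0, 3, size=n)
y = (rng.random(n) < 0.2 + 0.1 * g).astype(float)

# Step 1: propensity scores Pr(G = j | X) from a multinomial logit.
ps = LogisticRegression(max_iter=1000).fit(X, g).predict_proba(X)

# Step 2: the IPW estimate of mu_j = E[Z_j] solves
#   (1/n) sum_i [W_ij / p_j(X_i)] (Y_i - mu_j) = 0,
# i.e. it is a mean of Y weighted by W_ij / p_j(X_i).
def ipw_mean(y, g, ps, j):
    w = (g == j) / ps[:, j]
    return float(np.sum(w * y) / np.sum(w))

mu = [ipw_mean(y, g, ps, j) for j in range(3)]
print("IPW means:", np.round(mu, 3))
print("ATE, level 2 vs 0:", round(mu[2] - mu[0], 3))
```

With this design the true group means are 0.2, 0.3, and 0.4, so the estimated average treatment effect of level 2 versus level 0 should be close to 0.2.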
Moreover, based on Cattaneo (2010) [3], Cattaneo et al. (2013) [4] also introduced the following moment condition:
$$E\left[\frac{W_j (Z_j - \mu_j)}{\Pr(G = j \mid X)} - \frac{E[Z_j - \mu_j \mid X, G = j]}{\Pr(G = j \mid X)} \left(W_j - \Pr(G = j \mid X)\right)\right] = 0.$$
Motivated by this moment condition, Cattaneo et al. (2013) [4] proposed the efficient influence function (EIF) estimator $\hat{\mu}_{EIF,j}$, which solves
$$\frac{1}{n} \sum_{i=1}^{n} \left[\frac{W_{ij} \left(Y_i - \hat{\mu}_{EIF,j}\right)}{\widehat{\Pr}(G_i = j \mid X_i)} - \frac{\hat{e}_{ij}(X_i) - \hat{\mu}_{EIF,j}}{\widehat{\Pr}(G_i = j \mid X_i)} \left(W_{ij} - \widehat{\Pr}(G_i = j \mid X_i)\right)\right] = 0,$$
where $\hat{e}_{ij}(X_i)$ is the predicted value of the conditional mean $E[Z_j \mid X_i, G = j]$ for observations in Group $j$. In our analysis, we use the user-written Stata command poparms proposed by Cattaneo et al. (2013) [4] to obtain the IPW and EIF estimators.
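A corresponding sketch of the doubly robust (EIF, or augmented-IPW) computation, again on synthetic data rather than the paper's sample; solving the sample moment condition for the mean yields the augmented-IPW average in the function below.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Synthetic stand-in (not the paper's data): covariates X, treatment
# level g, binary outcome y with success probability 0.2 + 0.1 * g.
n = 2000
X = rng.normal(size=(n, 3))
g = rng.integers(0, 3, size=n)
y = (rng.random(n) < 0.2 + 0.1 * g).astype(float)

# Propensity model Pr(G = j | X): multinomial logit.
ps = LogisticRegression(max_iter=1000).fit(X, g).predict_proba(X)

def eif_mean(y, g, X, ps, j):
    """Doubly robust (EIF) estimate of E[Z_j]. The outcome model
    E[Z_j | X, G = j] is a logit fit on Group-j observations only.
    Solving the sample analogue of the EIF moment condition
      mean( W_j (Y - mu)/p_j - (e_j - mu)/p_j * (W_j - p_j) ) = 0
    for mu gives the augmented-IPW average returned below."""
    w = (g == j).astype(float)
    e = LogisticRegression(max_iter=1000).fit(X[g == j], y[g == j]).predict_proba(X)[:, 1]
    return float(np.mean(w * (y - e) / ps[:, j] + e))

mu = [eif_mean(y, g, X, ps, j) for j in range(3)]
print("EIF means:", np.round(mu, 3))
```

Either the propensity model or the outcome model may be misspecified here without destroying consistency, which is the double-robustness property discussed above.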

3. Empirical Results

3.1. Analysis 1: On the 30-Character Limit

Table 2 provides descriptive statistics for all variables. The right-hand half of the table shows that, on average, the apps in Group 2 had a higher probability of appearing in the Global Rank chart (top 300,000) than apps in Group 1, and the apps in Group 1 in turn had a higher probability than the apps in Group 0. The length of an app's title (the number of messages or keywords the app sends to consumers) thus appears positively correlated with the probability that the app is in the chart, even for apps whose titles were longer than 5 words (around 30 characters). Does this remain true after we control for other determinants? We need further analysis.
We first run the user-written Stata command bfit proposed by Cattaneo et al. (2013) [4] to find the best-fitting model for $\Pr(G = j \mid X)$, i.e., the best propensity model. This conditional probability that an app received level $j$ treatment is specified as a multinomial logit model. To obtain the IPW and EIF estimators, we only need the predicted probabilities for all observations; the functional form and estimated coefficients of the multinomial logit model are not our concern. To find the best-fitting model, bfit automatically combines the available covariates into different functional forms, runs these candidate multinomial logit models, and selects the model with the minimal Akaike information criterion (AIC) value. Table 3 presents the covariates of the multinomial logit model chosen by bfit.
Table 3. Covariates used in the treatment model and outcome model.
We select the covariates for the estimation of the conditional means $E[Z_j \mid X, G = j]$, which we use to calculate $\hat{e}_{ij}$, in a similar way. The conditional mean is specified as a logit model, and the covariates chosen by bfit for this model are also shown in Table 3.
Table 4 presents our estimation of the mean of Z j (the expectation of the potential outcome if an app received level j treatment). Since the result is whether an app was on the Global Rank chart on the observation date, the mean of Z j is the probability that it is on the chart if an app receives level j treatment. The estimation results shown in Table 4 indicate that the probability of being in the chart is always significantly greater than 0 at all treatment levels and that the probabilities are increasing in the treatment level.
Table 4. Estimation of the means of the potential outcomes: Analysis 1 a.
Table 5 reports the estimated average treatment effects, i.e., the differences between the probabilities at two treatment levels. According to the EIF estimates, if an app receives level 2 treatment (a title longer than 8 words) rather than level 1 treatment (a title of 6 to 8 words), its probability of being in the chart increases by 0.158. The 95% confidence interval of this estimate does not cover 0, so we can reject the null hypothesis that the effect is zero. The EIF estimates also indicate that the average treatment effect of moving from level 0 to level 1 is 0.069 and that of moving from level 0 to level 2 is 0.227; both estimates are statistically different from 0.
Table 5. Estimation of average treatment effects: Analysis 1 a.
Table 5 also reports the IPW estimation results. These likewise show that the average treatment effects are all positive and statistically significant. The treatment effects estimated by the IPW estimator are greater than those estimated by the EIF estimator; however, the EIF results are generally more reliable. First, the EIF estimator is more efficient: it uses both the treatment probability model and the conditional mean model, whereas the IPW estimator uses only the former. Second, the EIF estimator is doubly robust (Cattaneo 2010 [3]; Cattaneo et al. 2013 [4]): to consistently estimate the treatment effects, it requires only that either the treatment probability model ($\Pr(G = j \mid X)$) or the conditional mean model ($E[Z_j \mid X, G = j]$) be correctly specified.
These results suggest that before the App Store restricted title length, a longer title helped an app achieve better market performance. In particular, even when the title length exceeded 5 words, increasing it could still improve the app's market performance, which means that the threshold was above 30 characters.

3.2. Analysis 2: On the 50-Character Limit

In Analysis 2, we use the same empirical strategy as that used in Analysis 1, but we divide our sample into four groups. Table 6 provides descriptive statistics for the dependent variable in each group. From this table, we can see that, on average, apps in Group 3 had a higher probability of appearing in the chart (top-300k) of Global Rank than apps in Group 2. It seems that the title length of an app was still positively correlated to whether the app was in the chart even though the length of the app title was longer than 8 words.
Table 6. Descriptive statistics for the dependent variable in Analysis 2.
Since we divide our sample into four groups instead of three, we need a new propensity model. We again run the STATA command bfit to find the best-fitting multinomial logit model for Pr(G = j | X). The covariates chosen by bfit for this model are shown in the last row of Table 3. We then estimate the expectation of the potential outcome (that is, the probability of being in the chart) if an app received level j treatment; the results are presented in Table 7. The estimation results indicate that the probability of being in the chart is always increasing in the treatment level. Table 8 reports the estimated differences between the probabilities at two treatment levels. According to the result from the EIF estimator in this table, if an app receives level 3 treatment, its probability of being in the chart is 0.093 higher than that of an app at treatment level 2. However, the 95% confidence interval of this estimate contains 0, so the 0.093 increase is not statistically significant.
Table 7. Estimation of the means of the potential outcomes: Analysis 2 a.
Table 8. Estimation of average treatment effects: Analysis 2 a.
The results in this section still suggest that, in general, a longer title helped an app achieve better market performance. However, we do not find sufficient evidence that increasing the title length continued to affect market performance once titles exceeded 8 words. In other words, we do not know whether the threshold was greater than 50 characters.
In addition, we initially estimated standard logistic regressions for this study. The results showed that both the number of words and the number of characters in the title had a significantly positive association with the dependent variable, and that this association gradually weakened as the number of words (or characters) increased. In other words, when titles are short, title length is positively associated with the performance of the app, but when titles become too long, length no longer shows a clear relationship with performance. These results are consistent with the findings above.
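A minimal, self-contained sketch of this kind of check, assuming synthetic data rather than our sample: the simulated "in the chart" indicator responds positively to title length, with the effect flattening beyond a hypothetical 8-word threshold, and a logistic fit (via Newton's method) recovers a positive length coefficient.

```python
# Synthetic robustness-check sketch (hypothetical data-generating values, not
# our sample): the chart indicator responds positively to title length, the
# effect flattens past 8 words, and a logistic fit finds a positive slope.
import math, random

random.seed(42)
lengths, charted = [], []
for _ in range(1500):
    length = random.randint(1, 12)                    # title length in words
    effective = min(length, 8)                        # effect flattens past 8 words
    p_chart = 1 / (1 + math.exp(-(-2.0 + 0.35 * effective)))
    lengths.append(length)
    charted.append(1 if random.random() < p_chart else 0)

# fit Pr(charted) = sigmoid(b0 + b1 * length) by Newton's method (IRLS)
b0 = b1 = 0.0
for _ in range(15):
    g0 = g1 = h00 = h01 = h11 = 0.0
    for x, y in zip(lengths, charted):
        q = 1 / (1 + math.exp(-(b0 + b1 * x)))        # fitted probability
        w = q * (1 - q)                               # IRLS weight
        g0 += y - q                                   # score for intercept
        g1 += (y - q) * x                             # score for slope
        h00 += w; h01 += w * x; h11 += w * x * x      # observed information
    det = h00 * h11 - h01 * h01
    b0 += (h11 * g0 - h01 * g1) / det                 # Newton step: H^{-1} g
    b1 += (h00 * g1 - h01 * g0) / det

print(b1 > 0)   # longer titles are associated with being in the chart
```

The hand-rolled Newton update keeps the example free of external dependencies; in practice one would use a standard logit routine.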

4. Discussions of Theoretical Models

We propose a simple advertisement game to provide a possible theoretical explanation for our empirical results presented in Section 2 and Section 3.
There are two players, a representative consumer (she) and a representative seller (he). The seller's app is endowed with attributes θ, drawn from a uniform distribution with support [0, 1]. A larger θ indicates that the app includes more useful features. This distribution is common knowledge, but the realized θ is known only to the seller. The seller launches an advertisement for the consumer. The message set of the seller is M = [0, 1].
There are two types of seller: rebellious and honest. Hereafter, the honest (rebellious) type is called the honest (rebellious) seller. The probability of type realization is p ∈ (0, 1) for the rebellious seller and 1 − p for the honest seller. The honest seller is not strategic: his message is m = θ for any θ; that is, the honest seller always tells the truth. The rebellious seller, by contrast, is strategic and can select any m in the message set M regardless of the attributes θ of his app. The payoff of the rebellious seller is a; that is, the rebellious seller prefers the consumer to download his app regardless of θ.
After the advertisement from the seller, the consumer selects an action a ∈ {1, 0}, where 1 indicates downloading the app and 0 indicates no action. The payoff of the consumer is a(θ − c), where the cost to the consumer of downloading the app, c, is drawn from a uniform distribution with support [0, 1]. The cost is the private information of the consumer and is realized when the consumer makes her decision.
The first best outcome (for the consumer) is: a = 1 if c ≤ θ, and a = 0 otherwise. If the first best outcome is not achieved, we say that the download by the consumer is insufficient or excessive.
The timeline is summarized as follows:
Step 1.
Nature decides the type of the seller and the attributes ( θ ) of the seller’s app. Then, the seller privately and perfectly observes both pieces of information.
Step 2.
The seller places an advertisement (m) for the consumer.
Step 3.
The consumer decides whether to download the app ( a = 1 ) or not ( a = 0 ) given her private cost (c).
Step 4.
Finally, the payoffs are realized for the players.
Our solution concept is perfect Bayesian equilibrium (PBE). As we assumed above, the honest seller is not strategic and selects m = θ for any θ . Hence, PBE consists of the message strategy of the rebellious seller, the download strategy of the consumer, and the belief of the consumer. The strategy of the rebellious seller, q ( m | θ ) , associates the attributes θ with the advertisement m. This strategy is optimal for the rebellious seller given the strategy and belief of the consumer. The strategy of the consumer, y ( a | m , c ) , associates the advertisement m and her own cost c with her action a. This strategy is optimal for the consumer given the strategy of the rebellious seller and the behavior of the honest seller. The consumer updates her belief using Bayes’ rule.
The following is a summary of our statements.
1.
Among advertisements shorter than some threshold, increasing the length (that is, a larger m) results in more downloads (that is, a = 1 is realized with a higher probability). Among advertisements longer than the threshold, changing the length does not affect the number of downloads. (Statement 1)
2.
A policy of limiting the length of advertisements will not benefit the consumer. The policy sometimes hurts the consumer and, at other times, does not affect the consumer. (Statement 2)
We present the results in a more formal manner. The updated belief of the consumer given m is denoted by E [ θ | m ] .
Statement 1.
There exist informative PBEs characterized by a threshold L ∈ (0, 1/(1 + √p)] as follows.
(1)
The message strategy of the rebellious seller, q ( m | θ ) , is
q(m | θ) ≥ 0 for m ∈ [L, 1], and q(m | θ) = 0 otherwise,
for all θ. This strategy is consistent with the belief of the consumer, expression (7), mentioned below.
(2)
The belief of the consumer is
E[θ | m] = [(1 − p)(1 − L)(1 + L)/2 + p/2] / [(1 − p)(1 − L) + p] for m ∈ [L, 1], and E[θ | m] = m otherwise
(3)
The action strategy of the consumer is
y(a = 1 | m, c) = 1 for c ≤ E[θ | m], and y(a = 1 | m, c) = 0 otherwise,
for all m.
In addition, the ex-ante expected payoff is maximized for the consumer and the rebellious seller when L = 1/(1 + √p). We let
L̄ := 1/(1 + √p)
According to Statement 1, a babbling PBE (in the case of L = 0) exists, as standard cheap talk models predict, and multiple informative PBEs (in the cases of L ∈ (0, L̄]) also exist. (We should also mention that there can be other PBEs. Let X be the set of messages such that every message in X leads to the same belief by the consumer. For example, the message set X can be disconnected, such that X = [L, a] ∪ [b, 1] or X = [L, a) ∪ (a, b) ∪ (b, 1], where 0 < L < a < b < 1, as long as E[θ | m] ≥ θ for m ∈ X and for θ ∈ M ∖ X. However, the set X should include the neighborhood of 1, and hence we simply focus on the cases of connected X in Statement 1.)
Among these PBEs, however, the PBE with the threshold L = L̄ is the most efficient for the consumer and the rebellious seller, as follows. In order to examine the welfare of the consumer, we introduce additional notation. Let Var(θ | m ≥ L) denote the variance of θ when m ≥ L, and let V(L) denote the ex-ante expected payoff of the consumer, given L ∈ [0, L̄]. Let V* denote the ex-ante expected payoff of the consumer under the first best outcome (i.e., both types of seller tell the truth). We can show that
V* − V(L) = (1/2) Pr(m ≥ L) Var(θ | m ≥ L) = [(1 − L)³ + pL³] / 6
where V* − V(L) measures the loss of the consumer compared to the first best outcome and decreases with L (i.e., V(L) increases with L) for L ∈ [0, L̄].
For the rebellious seller, the ex-ante expected payoff is E[θ | m] for m ≥ L, which increases with L on [0, L̄]. Therefore, the PBE given L = L̄ is the best for the consumer and the rebellious seller.
Hence, we will focus on the PBE described in Statement 1 given L = L ¯ from now on. Before analyzing the effect of the length limit policy, we explain other implications of Statement 1.
We can show that E[θ | m] = L̄ for m ∈ [L̄, 1]. Hence, the probability of downloading (a = 1) given each m ∈ [0, 1] is
Pr(a = 1 | m) = L̄ if m ∈ [L̄, 1], and Pr(a = 1 | m) = m otherwise
That is, a larger m (a longer title) results in a = 1 (download) with a higher probability for m < L̄, while it does not affect the probability for m ≥ L̄.
In the PBE, the rebellious seller exaggerates his message and pretends that his app has attributes θ ≥ L̄. Hence, for any long advertisement m ≥ L̄, the consumer does not know whether the seller is an honest seller who tells the truth or a rebellious seller who exaggerates the features of his app. (The rebellious seller selects m to induce the highest belief by the consumer, and hence the most frequent downloads. Since the honest seller honestly reveals his information, the consumer trusts any length of advertisement to some extent. Thus, the rebellious seller would select the longest advertisement. However, this reduces the consumer's belief in the longest advertisement, and a shorter advertisement may then lead to more frequent downloads.) For any other, short advertisement m < L̄, the consumer knows that the seller is an honest seller who tells the truth. As a result, for a long advertisement m ≥ L̄, increasing the length affects neither the consumer's belief nor the number of downloads (on average). On the other hand, for m < L̄, increasing the length results in more downloads (on average). Moreover, there are insufficient downloads when the seller is honest with θ ≥ L̄ and excessive downloads (on average) when the seller is rebellious.
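The equilibrium logic above can be simulated. In this sketch, the value of p and the rebellious seller's uniform mixing over [L̄, 1] are illustrative assumptions (any mixing supported on [L̄, 1] consistent with the pooled belief would serve); the point of interest is the download-rate pattern of Statement 1: rising in m below the threshold, flat at L̄ above it.

```python
# Monte Carlo sketch of the PBE in Statement 1 (illustrative parameters).
import random

random.seed(1)
p = 0.25                        # probability of the rebellious type
Lbar = 1 / (1 + p ** 0.5)       # threshold Lbar = 1/(1 + sqrt(p)) = 2/3 here

def belief(m):                  # consumer's equilibrium belief E[theta | m]
    return Lbar if m >= Lbar else m

low_n = low_dl = high_n = high_dl = 0
for _ in range(200000):
    theta = random.random()                      # app attributes
    rebellious = random.random() < p             # seller type
    m = Lbar + (1 - Lbar) * random.random() if rebellious else theta
    c = random.random()                          # consumer's private cost
    a = 1 if c <= belief(m) else 0               # download iff c <= E[theta | m]
    if 0.3 <= m < 0.4:                           # a short-message bin (m < Lbar)
        low_n += 1; low_dl += a
    elif m >= Lbar:                              # long messages pool above Lbar
        high_n += 1; high_dl += a

print(round(low_dl / low_n, 2))    # close to 0.35: rises with m below the threshold
print(round(high_dl / high_n, 2))  # close to Lbar: flat above the threshold
```

Short messages are sent only by honest sellers, so the download rate tracks m itself; long messages pool honest and rebellious sellers, so the rate is pinned at L̄.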
Finally, we introduce the policy of limiting advertisement lengths. Under this policy, the message set of the seller is reduced to M_B = [0, B], where B ∈ (0, 1).
We assume that the honest seller tells the truth as much as possible in the following sense: the honest seller chooses m = B if θ ≥ B, and m = θ otherwise.
This policy affects the equilibrium outcome and hurts the consumer if the length limit, B, is below the original threshold, L ¯ . Statement 2 is derived from Statement 1.
Statement 2.
Introduce a policy to limit the length of an advertisement to B (i.e., the message set of the seller changes from M to M_B). Then, if B < L̄, the policy reduces the ex-ante expected payoff of the consumer. Otherwise, the policy does not affect the ex-ante expected payoff of the consumer.
If B < L ¯ , the policy affects the outcome. The message strategy of the rebellious seller is
q(m = B | θ) = 1 for all θ
The rebellious seller always selects m = B . The ex-ante expected payoff of the consumer is V ( B ) ( < V ( L ¯ ) ) . The policy helps to avoid the exaggeration of the rebellious seller, but this policy prevents the revelation of the truth of the honest seller. The negative effects of the latter dominate the positive effects of the former. The consumer is worse off.
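This welfare comparison can be checked in a few lines. Using the closed-form loss V* − V(L) = [(1 − L)³ + pL³]/6 from Section 4 with an illustrative value of p, any binding limit B < L̄ strictly raises the consumer's loss:

```python
# Statement 2 in numbers (illustrative p; B values are hypothetical binding limits).
p = 0.25
Lbar = 1 / (1 + p ** 0.5)            # Lbar = 1/(1 + sqrt(p)) = 2/3 here

def loss(L):
    # consumer's ex-ante loss relative to the first best outcome, V* - V(L)
    return ((1 - L) ** 3 + p * L ** 3) / 6

print(all(loss(B) > loss(Lbar) for B in (0.1, 0.3, 0.5)))  # True: binding limits hurt
```

Since the loss is strictly decreasing on [0, L̄], the tighter the binding limit, the larger the consumer's loss; a non-binding limit (B ≥ L̄) leaves the equilibrium, and hence the loss, at its L̄ value.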
On the other hand, if B ≥ L̄, the policy does not actually affect the outcome. The rebellious seller selects m ∈ [L̄, B], so the belief of the consumer is:
E[θ | m] = L̄ for m ∈ [L̄, B], and E[θ | m] = m otherwise
With or without the policy, the rebellious seller pretends that his app has θ ≥ L̄, and the ex-ante expected payoff of the consumer is V(L̄).
We have proposed a simple theoretical model. We believe these results continue to hold if we consider (1) partially rebellious types and (2) more general distributions of θ and c (including discrete distributions). Suppose that there is a continuum of types, where type q is drawn from a continuous distribution with support [0, 1]. Type q behaves rebelliously (i.e., strategically) with probability q and honestly (i.e., non-strategically) with the remaining probability. Then, after letting p = ∫_0^1 q dG(q), where G(q) is the cumulative distribution function of the type distribution, the same analysis applies (in aggregate, the seller behaves rebelliously with probability p and honestly with the remaining probability).
See the Appendix A for the mathematical details in this section.

5. Conclusions

In this study, we analyze a random sample of 1998 apps available from the App Store in 2015, and our estimation results reveal that, for an app in 2015, its probability of being in the chart of Global Rank was related to the length of its title. This implies that before the App Store introduced title-length regulations in 2016, although some developers used too many popular, but irrelevant, keywords in their titles, creating a longer title remained a useful strategy to communicate with consumers. (It is important to emphasize that although more advanced statistical matching techniques yield higher-quality results than traditional logistic regression and help alleviate potential causal inference concerns, strictly speaking, statistical matching cannot completely eliminate endogeneity problems, just like other existing methods for addressing endogeneity (Bittmann et al. 2021 [30]). This constitutes a limitation of this study.) To some extent, consumers know that many developers remain honest and view titles as a channel to identify apps matching their needs.
Moreover, our estimation analyses found that the threshold above which advertising becomes noisy was over 30 characters, but we do not have enough evidence to claim that this threshold was over 8 words. Although the evidence regarding the 8-word (50-character) threshold is ambiguous, our evidence regarding the 5-word (30-character) threshold is clearer. Based on our theoretical model, supported by the pre-policy correlations, even when app titles were longer than 5 words, consumers still believed that the messages sent by sellers could communicate the features of apps, and hence the 30-character limit may be too stringent.
The App Store Review Guidelines [31] published by the App Store in 2017 claim that
Customers should know what they’re getting when they download or buy your app, so make sure your app description, screenshots, and previews accurately reflect the app’s core experience.
(App Store n.d.)
Our analysis implies that the App Review process conducted by the App Store achieves these goals to some extent. Hence, if the App Review process can reduce the ratio of rebellious sellers who combine irrelevant yet popular keywords in their app titles, there might be little need for a stringent character limit for app titles. At the same time, the messages that sellers send to consumers through their app titles can help consumers make accurate decisions. Therefore, our theoretical model, supported by the pre-policy correlations, predicts that cutting the number of messages or keywords that sellers can send may force consumers to face more uncertainty and subsequently damage their welfare.
By contrast, the 50-character limit might have been harmless. Nevertheless, since consumers had already adjusted their behavior to the noise, the 50-character limit did not benefit consumers either. Of course, the 50-character limit might have improved the aesthetic enjoyment of users, which our model does not cover.
This study suggests various avenues for future research. While we have focused on the length of app titles, many other factors can affect consumers' behavior, such as the syntax of the app title and the pictures used. Including these factors as independent variables would make our model more complex but more realistic, leading to further insights and implications.
Lastly, this paper uses a simple cheap talk model for our theoretical argument and examines the effect of the length-limit policy on consumer welfare. Our current model does not consider an inherent penalty for excessively long names. One possible way to incorporate such a penalty is to add noise to the buyer's understanding or reception of the message that increases with its length. For example, the buyer understands the message with a probability between 0 and 1, and this probability decreases with the length. In addition, consumers may rely on visual information (such as screenshots of the app) and recognizable elements (such as famous brand names) as well as titles. Investigating richer models incorporating such additional factors can answer new questions.

Author Contributions

Conceptualization, S.C., Y.-H.L. and C.-Y.S.; methodology, S.C., Y.-H.L. and C.-Y.S.; software, C.-Y.S. and M.-H.T.; formal analysis, S.C., Y.-H.L., C.-Y.S. and M.-H.T.; investigation, S.C., Y.-H.L., C.-Y.S. and M.-H.T.; data curation, C.-Y.S. and M.-H.T.; writing—original draft preparation, S.C. and C.-Y.S.; writing—review and editing, S.C., Y.-H.L. and C.-Y.S.; project administration, S.C., Y.-H.L. and C.-Y.S.; funding acquisition, S.C. and C.-Y.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Japan Society for the Promotion of Science (no. 16K03549, 20K01544, 24K04799), Kyoto Sangyo University Publication Grants, the Joint Research Program of KIER (Kyoto University) (this grant was given twice, in 2023 and 2024, without a specific grant number), and the Kyoto University Foundation (this grant was given in 2019 without a specific grant number).

Data Availability Statement

The official list providing the information of available apps in the App Store can be found at the following: https://itunes.apple.com/us/genre/ios/id36?mt=8. Further details are available on request.

Acknowledgments

This paper has benefited from the advice provided by Ming-Jen Lin and the seminar participants at the 2nd Joint Economics Symposium of 4 Leading Universities in Taiwan and Japan at Osaka University, Japan. We are also grateful to the participants at the conference “Business and Management: Framing Compliance and Dynamic” at the National Economics University, Vietnam, for helpful comments. We appreciate the editorial support from Rahel O’More and Jose de Jesus Herrera Velasquez. Sincere gratitude is also extended to the anonymous referees for helpful suggestions. The usual disclaimer applies.

Conflicts of Interest

Author Min-Hsueh Tsai was employed by the Farwind Industrial Corporation. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Appendix A. Theoretical Models

Since this is a one-shot game, the consumer chooses a = 1 if and only if c ≤ E[θ | m]. Thus, it suffices to show that the message strategy is optimal for the rebellious seller and is consistent with the belief of the consumer.
Next, the rebellious seller’s optimality requires that E[θ | m] be constant and that E[θ | m] ≥ L hold for m ∈ [L, 1]. Expression (7) in Section 4 is derived as follows. For m ∈ [L, 1],
E[θ | m] = [(1 − p) ∫_L^1 θ dθ + p ∫_0^1 θ dθ] / [(1 − p)(1 − L) + p] = [1 / ((1 − p)(1 − L) + p)] · [(1 − p)(1 − L)(1 + L)/2 + p/2]
Hence, the equilibrium L can be any value in the set [0, 1/(1 + √p)], because we have the following result:
E[θ | m ≥ L] ≥ L ⟺ (L − 1/(1 + √p)) (L − 1/(1 − √p)) ≥ 0
In addition, the download probability of the rebellious seller’s app is maximized at L = L̄ (where L̄ = 1/(1 + √p)) among all values in the set [0, L̄], because E[θ | m ≥ L] is increasing in L over [0, L̄]:
∂ ln E[θ | m ≥ L] / ∂L ≥ 0 ⟺ (L − 1/(1 + √p)) (L − 1/(1 − √p)) ≥ 0
We now show the details of the welfare analysis. The ex-ante expected payoff of the consumer under the first best outcome is:
V* = ∫_0^1 ∫_0^θ (θ − c) dc dθ = (1/2) E[θ²] = 1/6
In any PBE described in Statement 1, the ex-ante expected payoff of the consumer is:
V(L) = p ∫_0^1 ∫_0^L (θ − c) dc dθ + (1 − p) [ ∫_L^1 ∫_0^L (θ − c) dc dθ + ∫_0^L ∫_0^θ (θ − c) dc dθ ]
The first term is for the rebellious seller, and the bracketed terms are for the honest seller.
The loss compared to the first best outcome is:
V* − V(L) = p ∫_0^1 [ ∫_0^θ (θ − c) dc − ∫_0^L (θ − c) dc ] dθ + (1 − p) ∫_L^1 [ ∫_0^θ (θ − c) dc − ∫_0^L (θ − c) dc ] dθ = (p/2) ∫_0^1 (θ − L)² dθ + ((1 − p)/2) ∫_L^1 (θ − L)² dθ = (1/2) Pr(m ≥ L) Var(θ | m ≥ L) = [(1 − L)³ + pL³] / 6
where Pr(m ≥ L) denotes the probability that the consumer receives m ≥ L. We can show that V* − V(L) decreases in L (i.e., V(L) increases in L) for L ∈ [0, L̄].
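As a numeric sanity check of the two derivations above (with an illustrative value of p), the following sketch verifies that the pooled belief hits the threshold exactly at L = L̄ = 1/(1 + √p), that it exceeds L on [0, L̄], and that the closed form [(1 − L)³ + pL³]/6 matches the integral expression for V* − V(L).

```python
# Numeric sanity check of the appendix derivations (illustrative p only).
import math

p = 0.3
Lbar = 1 / (1 + math.sqrt(p))

def pooled_belief(L):
    # expression (7): honest sellers contribute weight (1 - p)(1 - L) on [L, 1],
    # rebellious sellers contribute weight p on [0, 1]
    return ((1 - p) * (1 - L) * (1 + L) / 2 + p / 2) / ((1 - p) * (1 - L) + p)

def loss_closed(L):
    # closed form of V* - V(L)
    return ((1 - L) ** 3 + p * L ** 3) / 6

def loss_integral(L, steps=50000):
    # (p/2) int_0^1 (t - L)^2 dt + ((1 - p)/2) int_L^1 (t - L)^2 dt, midpoint rule
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) / steps
        total += (p / 2) * (t - L) ** 2 / steps
        if t >= L:
            total += ((1 - p) / 2) * (t - L) ** 2 / steps
    return total

print(abs(pooled_belief(Lbar) - Lbar) < 1e-12)            # belief = threshold at Lbar
print(abs(loss_closed(0.5) - loss_integral(0.5)) < 1e-6)  # closed form = integral
print(all(pooled_belief(L) >= L for L in (0.0, 0.2, 0.4, 0.6)))  # E[theta|m>=L] >= L
```

The first check confirms the fixed-point property that pins down L̄; the second confirms that the polynomial loss formula agrees with the double-integral definition of V* − V(L).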

References

  1. App Store Review Guidelines History. 1 September 2016. Updated Subscription Rules, Sirikit, Stickers and More. Available online: http://www.appstorereviewguidelineshistory.com/ (accessed on 20 January 2017).
  2. McCabe, W. More Than 25% of Top iOS Apps Will Soon Need to Change Their Names. Sensor Tower. 1 September 2016. Available online: https://sensortower.com/blog/new-app-name-length-rules (accessed on 10 February 2024).
  3. Cattaneo, M.D. Efficient semiparametric estimation of multi-valued treatment effects under ignorability. J. Econom. 2010, 155, 138–154. [Google Scholar] [CrossRef]
  4. Cattaneo, M.D.; Drukker, D.M.; Holland, A.D. Estimation of multivalued treatment effects under conditional independence. Stata J. 2013, 13, 407–450. [Google Scholar] [CrossRef]
  5. Crawford, V.; Sobel, J. Strategic information transmission. Econometrica 1982, 50, 1431–1451. [Google Scholar] [CrossRef]
  6. Pocket Gamer. Count of Active Applications in the APP Store. Available online: https://www.pocketgamer.biz/data/ (accessed on 20 February 2022).
  7. App Radar. App Title: Writing Android App Titles That Drive Downloads. Available online: https://appradar.com/academy/app-store-optimization-guide/app-title/ (accessed on 15 July 2021).
  8. Perez, S. Report: Recent App Store Algorithm Change Points to Crackdown on “Keyword Stuffing”. Techcrunch. 29 July 2015. Available online: https://techcrunch.com/2015/07/28/report-recent-app-store-algorithm-change-points-to-crackdown-on-keyword-stuffing/ (accessed on 10 February 2024).
  9. Patel, N. 5 Myths About App Store Optimization. KISSmetrics. 2014. Available online: https://neilpatel.com/blog/5-myths-about-aso/ (accessed on 20 January 2017).
  10. Karagkiozidou, M.; Ziakis, C.; Vlachopoulou, M.; Kyrkoudis, T. App store optimization factors for effective mobile app ranking. In Strategic Innovative Marketing and Tourism: 7th ICSIMAT, Athenian Riviera, Greece, 2018. Springer Proceedings in Business and Economics; Kavoura, A., Kefallonitis, E., Giovanis, A., Eds.; Springer: Cham, Switzerland, 2019; pp. 479–486. [Google Scholar]
  11. Padilla-Piernas, J.M.; Parra-Meroño, M.C.; Beltrán-Bueno, M.Á. The Importance of App Store Optimization (ASO) for Hospitality Applications. In Digital and Social Media Marketing: Advances in Theory and Practice of Emerging Markets; Rana, N.P., Slade, E.L., Sahu, G.P., Kizgin, H., Singh, N., Dey, B., Gutierrez, A., Dwivedi, Y.K., Eds.; Springer: Cham, Switzerland, 2020; pp. 151–161. [Google Scholar]
  12. Martin, W.; Sarro, F.; Jia, Y.; Zhang, Y.; Harman, M. A Survey of App Store Analysis for Software Engineering. IEEE Trans. Softw. Eng. 2017, 43, 817–847. [Google Scholar] [CrossRef]
  13. Strzelecki, A. Application of Developers’ and Users’ Dependent Factors in App Store Optimization. International Association of Online Engineering. Int. J. Interact. Mob. Technol. 2020, 14, 91–106. [Google Scholar] [CrossRef]
  14. Stocchi, L.; Pourazad, N.; Michaelidou, N.; Tanusondjaja, A.; Harrigan, P. Marketing research on mobile apps: Past, present and future. J. Acad. Mark. Sci. 2022, 50, 195–225. [Google Scholar] [CrossRef] [PubMed]
  15. Gardete, P.M.; Bart, Y. Tailored cheap talk: The effects of privacy policy on ad content and market outcomes. Mark. Sci. 2018, 37, 685–853. [Google Scholar] [CrossRef]
  16. Chakraborty, A.; Harbaugh, R. Persuasive puffery. Mark. Sci. 2014, 33, 315–461. [Google Scholar] [CrossRef]
  17. Chen, Y.; Kartik, N.; Sobel, J. Selecting Cheap-Talk Equilibria. Econometrica 2008, 76, 117–136. [Google Scholar] [CrossRef]
  18. Chen, Y. Perturbed communication games with honest senders and naive receivers. J. Econ. Theory 2011, 146, 401–424. [Google Scholar] [CrossRef]
  19. Crémer, J.; Garicano, L.; Prat, A. Language and the theory of the firm. Q. J. Econ. 2007, 122, 373–407. [Google Scholar] [CrossRef]
  20. Dillman, D.A.; Smyth, J.D.; Christian, L.M. Internet, Mail and Mixed-Mode Surveys: The Tailored Design Method, 3rd ed.; John Wiley & Sons: New York, NY, USA, 2009. [Google Scholar]
  21. Champ, P.A. Collecting nonmarket valuation data. In A Primer on Nonmarket Valuation, 2nd ed.; Champ, P.A., Boyle, K.J., Brown, T.C., Eds.; Springer Science+Business Media: Berlin, Germany, 2017; pp. 55–82. [Google Scholar]
  22. Jung, E.Y.; Baek, C.; Lee, J.D. Product survival analysis for the App Store. Mark. Lett. 2012, 23, 929–941. [Google Scholar] [CrossRef]
  23. Lee, G.; Raghu, T.S. Determinants of mobile apps’ success: Evidence from the APP Store market. J. Manag. Inf. Syst. 2014, 31, 133–170. [Google Scholar] [CrossRef]
  24. Garg, R.; Telang, R. Inferring app demand from publicly available data. MIS Q. 2013, 37, 1253–1264. [Google Scholar] [CrossRef]
  25. Yin, P.L.; Davis, J.P.; Muzyrya, Y. Entrepreneurial innovation: Killer apps in the iPhone ecosystem. Am. Econ. Rev. 2014, 104, 255–259. [Google Scholar] [CrossRef]
  26. Caliendo, M.; Kopeinig, S. Some practical guidance for the implementation of propensity score matching. J. Econ. Surv. 2008, 22, 31–72. [Google Scholar] [CrossRef]
  27. Hainmueller, J. Entropy balancing for causal effects: A multivariate reweighting method to produce balanced samples in observational studies. Political Anal. 2012, 20, 25–46. [Google Scholar] [CrossRef]
  28. Wooldridge, J.M. Econometric Analysis of Cross Section and Panel Data, 2nd ed.; MIT Press: Cambridge, MA, USA, 2010. [Google Scholar]
  29. Hirano, K.; Imbens, G.W.; Ridder, G. Efficient estimation of average treatment effects using the estimated propensity score. Econometrica 2003, 71, 1161–1189. [Google Scholar] [CrossRef]
  30. Bittmann, F.; Tekles, A.; Bornmann, L. Applied usage and performance of statistical matching in bibliometrics: The comparison of milestone and regular papers with multiple measurements of disruptiveness as an empirical example. Quant. Sci. Stud. 2021, 2, 1246–1270. [Google Scholar] [CrossRef]
  31. App Store. App Store Review Guidelines. Available online: https://developer.apple.com/app-store/review/guidelines/ (accessed on 15 September 2017).