When using mobile apps, money is certainly not the only conceivable currency to pay for them: Mobile apps might be advertised as free to download and use; however, providers seek access to private information behind the scenes, which they might a) use themselves to enhance their offer (e.g. through personalization) and/or b) sell, and thereby finance their app and increase their profits (Kummer and Schulte 2019). Although mobile app markets have grown enormously since their introduction about a decade ago (Wottrich et al. 2018) and their use has become an integral part of customers' everyday life, mobile app providers' distressing intrusiveness and even exploitation can be demonstrated by the following instances: First, more than 70 percent of the medical apps examined by Andrews (2018) not only share highly sensitive, personal, health-related information with third parties, but also actively capitalize on said data by selling it to marketing agencies, insurance companies and analytics companies, among others. According to Andrews (2018), this information was then used to discriminate against certain consumers (e.g. to deny access to insurance). Out of nearly 1,000 health apps, approx. 30 percent in fact request more access (and therefore more personal data) than necessary to execute their primary functions (Andrews 2018). This is supported by Huckvale et al. (2019), who confirm that well-known health apps specializing in highly sensitive issues such as depression share private yet linkable data predominantly with Google and Facebook analytics services, as well as Google's marketing services. In a recent article published by The Wall Street Journal (2019), the period and fertility tracker Flo was specifically named for sharing and marketing private information about women's bodies and their fertility issues.
Second, according to an article by the German newspaper Zeit (2018) about the congressional hearing regarding Facebook's information disclosure scandal with Cambridge Analytica, Mark Zuckerberg (Facebook's CEO) did not rule out an additional, paid premium version of Facebook in the future, which could then feature higher levels of information privacy. The article (2018) states that Facebook was sued for marketing its social media platform as free although active users apparently pay with their private data: Facebook earns approx. five euros per user every month (profit divided by the number of users).
In general, the business model of capitalizing on user data is based on the exchange of consumers' private information for personalized marketing measures (Xu et al. 2011). Companies have an interest in presenting their (potential) customers with tailor-made solutions that could stimulate the purchase of their products or services. Marketing agencies support companies with advertising campaigns that target their exact customer base. The more specific the user data that marketing or analytics companies provide, the more valuable, and therefore the more expensive, it is for their clients. For instance, as shown in FIGURE 1, compared to a gender-neutral campaign, it is five percent more expensive to address men specifically and nine percent more to reach women with advertising (in this specific case).
FIGURE 1: Prices in a marketing campaign of a temporary employment agency
Gender    Campaign           Clicks   Cost per click (EUR)   Total cost (EUR)
Unknown   Arbeit-in-Bayern       58                   0.42              24.31
Female    Arbeit-in-Bayern      117                   0.46              53.58
Male      Arbeit-in-Bayern       69                   0.44              30.29
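The percentage premiums quoted in the text can be recomputed from the cost-per-click figures in FIGURE 1. The following Python snippet is a minimal illustration; only the numbers come from the table, while the dictionary layout and helper function are hypothetical:

```python
# Cost-per-click figures (EUR) taken from FIGURE 1.
cpc = {"unknown": 0.42, "female": 0.46, "male": 0.44}

def premium_pct(gender: str, baseline: str = "unknown") -> float:
    """Percentage premium of a gender-targeted CPC over the gender-neutral baseline."""
    return (cpc[gender] / cpc[baseline] - 1) * 100

male_premium = premium_pct("male")      # ~4.8 %, "five percent" in the text
female_premium = premium_pct("female")  # ~9.5 %, roughly "nine percent" in the text
```

Note that the exact values are about 4.8 and 9.5 percent, which the text rounds to five and nine percent respectively.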
The key that allows app providers to invade end users' privacy and to retrieve and (mis)use sensitive data is permission requests (Gu et al. 2017). Although developers in popular app marketplaces, such as Apple's iOS and Google's Android, brief users about the information the respective app is about to access, and allow them to decide whether to accept or reject the permission request, few customers actively evaluate their privacy preferences and decide accordingly (Pentina et al. 2016). The privacy paradox describes consumers' contradictory behavior of perceiving privacy concerns while neither voluntarily reducing information disclosure nor taking precautions against involuntary information disclosure (Debatin et al. 2009). Current research examines antecedents of consumers' decisions to disclose their private information (in mobile apps). Predominantly, consumers are neither aware of nor willing to share their data with intrusive app providers (Almuhimedi et al. 2015). However, research on data protection measures, and especially on their perceived value to customers, does not reflect consumers' aversion to potential misuse of their data (Grossklags and Acquisti 2007). Furthermore, several apps exist in more than one version as a result of the freemium strategy: A basic version is freely available, while a premium version offers either unlimited access or additional features/services. The premium versions of both Spotify and YouTube charge monthly fees in exchange for offline use and, interestingly, the elimination of advertisements. Information privacy measures could be one such service that differentiates the free and premium versions from each other.
Revisiting the study on information privacy in health apps (Andrews 2018): as not all mobile app providers request the same amount of sensitive information, and as they also have different privacy policies, it is apparent that mobile apps could be categorized according to their level of data security. Previous research has focused exclusively on minimum and maximum information privacy. However, a third, in-between dimension, limited data security, could be considered common ground for both supply (app providers) and demand (end users): The amount of information accessed would be high enough for app providers to profitably conduct their business with analytics companies, yet low enough for end users to continue using the app without feeling invaded, while providers would also gain new customers who might have felt deterred by the perceived privacy risks of app versions with minimum information privacy.
The aim of this research is to calculate and analyze consumers' willingness to pay for data security (measured by the willingness to pay for mobile apps) and, consequently, to examine the monetization of consumers' personal data provided in mobile apps from the end user perspective, while also comparing hedonic and utilitarian apps. Against this background I address the following research questions:
How does consumers' willingness to pay for a well-established app differ depending on the level of data security provided in the respective app version?
In this context, how do hedonic vs. utilitarian apps affect consumers' assessment of privacy?
The following chapter provides an overview of theories and concepts in the field of information privacy, with emphasis on privacy calculus theory. I then present prior research findings and related work before introducing the research model and associated considerations. The conceptual model specifies the research intentions with the aid of three hypotheses concerning end users' perception of data security risks and their willingness to pay for data protection measures in mobile apps. Subsequently, the empirical approach describes the quantitative research in detail, including participants, methodology and data evaluation in the statistical software SPSS. I then present the results of the experiment, specifically focusing on their informative value regarding the predefined hypotheses. In the last of seven main chapters I critically discuss and integrate the findings in both a theoretical and a managerial context. A conclusion regarding the research objectives will be drawn and, based on this paper's limitations, I present an outlook and possible avenues for future research.
2 Theoretical and Conceptual Foundations
In this chapter, concepts and theories related to the field of data security in mobile apps will be presented. To begin with, as suggested by Bélanger and Crossler (2011), this research defines the construct of information privacy as the combination of two privacy dimensions introduced by Clarke (1999): privacy of personal communication and data privacy. Additionally, information privacy should be viewed as a state, suggesting that it can be present or (partially) absent in a certain situation (Dinev and Hart 2004). That is, complete information privacy is given if individuals retain full control over the collection, processing and use of their personal information (Westin 1967).
For this research, I mainly focus on privacy calculus (Culnan and Bies 2003), as it is most closely related to information disclosure in mobile apps, and as most other frameworks, while relevant in a broader context (e.g. the theory of planned behavior (Ajzen 1991)), are too far removed from my particular research question. Privacy calculus theory (Culnan and Bies 2003) frames the perceived risks and benefits of disclosure as two sides of a scale: If risks outweigh benefits, consumers are less willing to disclose their information. Consistent with Phelps et al. (2000), this research equates increasing information disclosure with decreasing information privacy, suggesting that frameworks and theories regarding information disclosure can also be applied to the field of information privacy (Mothersbaugh et al. 2012).
Importantly, privacy calculus implies that consumers rationally weigh their options. However, as various research on the privacy paradox suggests, consumers' behavior in mobile apps is by no means completely rational (e.g. Pentina et al. (2016)). Since purchase decisions are usually made in consideration of gains and losses, and information disclosure in apps involves a considerable level of uncertainty, prospect theory (Kahneman and Tversky 1979) may be applied in this research. Prospect theory states that end users evaluate their options according to the outcome they are likely to face (Kahneman and Tversky 1979). Most importantly for this research, consumers have a tendency to overweight highly certain outcomes compared to less probable scenarios, resulting in risk aversion (Kahneman and Tversky 1979). If end users were presented with the option to disclose information, they might hesitate, as the loss of control would be certain in the very moment of sharing data, while the related benefits might have yet to materialize. Therefore, prospect theory suggests revisiting the dynamics of privacy calculus: Even though both dimensions have been confirmed to significantly influence consumers' willingness to disclose information, it could primarily be perceived risk that tips the scales in the context of this research (as confirmed by Sun et al. (2015)).
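To make this asymmetry concrete, the value function commonly used to formalize prospect theory can be sketched as follows; this is an illustrative formalization in the parameterization of Kahneman and Tversky's later cumulative version, not a model drawn from the privacy literature cited here:

```latex
v(x) =
\begin{cases}
  x^{\alpha} & \text{if } x \ge 0 \quad (\text{gains, concave for } 0 < \alpha < 1) \\
  -\lambda (-x)^{\beta} & \text{if } x < 0 \quad (\text{losses, amplified by } \lambda > 1)
\end{cases}
```

With frequently cited estimates of roughly α ≈ β ≈ 0.88 and λ ≈ 2.25, a certain loss of control at the moment of disclosure (x < 0) weighs more than an equally large but uncertain personalization benefit, which is exactly the risk-averse hesitation described above.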