
Interoperability plays an important role in the digital economy and can be a key driver of innovation, competition, choice and economic growth. However, effective interoperability can be hindered by certain barriers, and is often not implemented to its full potential, including in mobile ecosystems.

This report develops a general framework to promote the adoption of hardware interoperability and makes specific recommendations for the effective implementation of Article 6(7) of the Digital Markets Act (DMA).

 

Wi-Fi-based Positioning Systems (WPSes) are used by modern mobile devices to learn their position using nearby Wi-Fi access points as landmarks. In this work, we show that Apple’s WPS can be abused to create a privacy threat on a global scale. We present an attack that allows an unprivileged attacker to amass a worldwide snapshot of Wi-Fi BSSID geolocations in only a matter of days. Our attack makes few assumptions, merely exploiting the fact that there are relatively few dense regions of allocated MAC address space. Applying this technique over the course of a year, we learned the precise locations of over 2 billion BSSIDs around the world.

The privacy implications of such massive datasets become more stark when taken longitudinally, allowing the attacker to track devices’ movements. While most Wi-Fi access points do not move for long periods of time, many devices, like compact travel routers, are specifically designed to be mobile. We present several case studies that demonstrate the types of attacks on privacy that Apple’s WPS enables: we track devices moving in and out of war zones (specifically Ukraine and Gaza), the effects of natural disasters (specifically the fires in Maui), and the possibility of targeted individual tracking by proxy, all by remotely geolocating wireless access points.

We provide recommendations to WPS operators and Wi-Fi access point manufacturers to enhance the privacy of hundreds of millions of users worldwide. Finally, we detail our efforts at responsibly disclosing this privacy vulnerability, and outline some mitigations that Apple and Wi-Fi access point manufacturers have implemented both independently and as a result of our work.
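The attack’s key enabler, that allocated BSSIDs cluster in a small number of dense OUI (manufacturer prefix) regions, can be illustrated with a short sketch. The OUI below and the `query_wps` stub are hypothetical placeholders, not Apple’s actual API; this is a minimal illustration of the enumeration idea, assuming the attacker has some way to submit candidate BSSIDs to a WPS and receive locations for the ones it knows.

```python
# Minimal sketch of BSSID-space enumeration, as described in the abstract.
# The OUI and the query stub are hypothetical; a real attacker would seed
# the search with manufacturer prefixes that actually return hits.

def bssids_in_oui(oui: str, limit: int):
    """Yield candidate BSSIDs within one 24-bit OUI (manufacturer prefix)."""
    for suffix in range(limit):  # the full space is 2**24 addresses per OUI
        yield f"{oui}:{(suffix >> 16) & 0xFF:02x}:{(suffix >> 8) & 0xFF:02x}:{suffix & 0xFF:02x}"

def query_wps(bssid: str):
    """Hypothetical stand-in for a WPS lookup; returns (lat, lon) or None."""
    return None  # a real WPS would return locations for BSSIDs it knows

candidates = list(bssids_in_oui("00:11:22", limit=4))
located = {b: loc for b in candidates if (loc := query_wps(b)) is not None}
```

Because allocated addresses concentrate in few OUIs, sweeping those dense regions covers most real-world access points without enumerating the full 48-bit MAC space, which is what makes a days-long global sweep plausible.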

 

This report aims to go behind the headlines and beyond the simplified facial recognition (FR) debate that we see in so much of the media today. The report highlights the types of FR in use by policing and others around the world, and explores the big debates around that use. We interviewed 34 stakeholders – including policing practitioners, academics, regulators and suppliers – to share their views and expertise on this complex issue. This analysis includes an insightful foreword from National Police Chiefs' Council (NPCC) Chair Chief Constable Gavin Stephens.

 

Digital platforms have disrupted many sectors but have not yet visibly transformed highly regulated industries. This study of Big Tech entry into healthcare and education explores how platforms have begun to enter highly regulated industries systematically and effectively. It presents a four-stage process model of platform entry, which we term “digital colonization”: (1) provision of data infrastructure services to regulated incumbents; (2) data capture in the highly regulated industry; (3) provision of data-driven insights; and (4) design and commercialization of new products and services. The article clarifies platforms’ sources of competitive advantage in highly regulated industries and concludes with managerial and policy recommendations.

 

This Article challenges the common view that more stringent regulation of the digital economy inevitably compromises innovation and undermines technological progress. This view, vigorously advocated by the tech industry, has shaped the public discourse in the United States, where the country’s thriving tech economy is often associated with a staunch commitment to free markets. U.S. lawmakers have also traditionally embraced this perspective, which explains their hesitancy to regulate the tech industry to date. The European Union has chosen another path, regulating the digital economy with stringent data privacy, antitrust, content moderation, and other digital regulations designed to shape the evolution of the tech economy toward European values around digital rights and fairness. According to the EU’s critics, this far-reaching tech regulation has come at the cost of innovation, explaining the EU’s inability to nurture tech companies and compete with the United States and China in the tech race. However, this Article argues that the association between digital regulation and technological progress is considerably more complex than what the public conversation, U.S. lawmakers, tech companies, and several scholars have suggested to date. For this reason, the existing technological gap between the United States and the EU should not be attributed to the laxity of American laws and the stringency of European digital regulation. Instead, this Article shows there are more foundational features of the American legal and technological ecosystem that have paved the way for U.S. tech companies’ rise to global prominence—features that the EU has not been able to replicate to date. By severing tech regulation from its allegedly adverse effect on innovation, this Article seeks to advance a more productive scholarly conversation on the costs and benefits of digital regulation. 
It also directs governments deliberating tech policy away from a false choice between regulation and innovation while drawing their attention to a broader set of legal and institutional reforms that are necessary for tech companies to innovate and for digital economies and societies to thrive.

 

Our current policy and research focus on artificial intelligence (AI) needs a paradigmatic shift in order to regulate technology effectively. It is evident that AI-based systems and services grab the attention of policymakers and researchers in light of recent regulatory efforts, like the EU AI Act and subsequent public interest technology initiatives focusing on AI (Züger & Asghari, 2023). While these initiatives have their merits, they end up narrowly focusing efforts on the latest trends in digitalisation. We argue that this approach leaves untouched the engineered environments in which digital services are produced, thereby undermining efforts to regulate AI and to ensure that AI-based services serve the public interest. Even when policymakers consider how digital services are produced, they assume that AI is captured by a few cloud companies (Cobbe, 2024; Vipra & West, 2023), when, in fact, AI is a product of these environments. If policymakers fail to recognise and tackle issues stemming from AI’s production environments, their focus on AI may be misguided. Accordingly, we argue that policy and research should shift their regulatory focus from AI to its production environments.

 

There is a growing trend for high-status original equipment manufacturers (OEMs) such as premium electronics manufacturers and premium carmakers to create and capture value through digital extensions of their products. However, these incumbents face disruptive threats from platforms offering substitutes for these digital extensions. The literature suggests that coopetition—the interplay of cooperation and competition—is a viable strategic response to this threat. However, we have a limited understanding of how high-status OEMs coopete with platforms to maintain their digital extensions' edge over time. We address this gap through a longitudinal case study of InnoCar, a premium European carmaker whose digital extensions—car-specific digital services that enhance the driving experience, such as real-time navigation and infotainment—were challenged by Google and Apple. In response, InnoCar pursued what we call the slipstream strategy, which consists of two phases with varying intensities of cooperation and competition. A high-status OEM first increases its cooperation with platforms at the expense of competition in order to establish shared demand-related complementary assets. Second, it focuses on competing with platforms on the quality of its digital extensions while keeping cooperation to a minimum. We develop a conceptual framework that specifies the slipstream strategy and provide boundary conditions for its application. Our paper contributes to research on coopetition with platforms.

 

Research on platform owners’ entry into complementary markets points in divergent directions. One strand of the literature reports a squeeze on post-entry complementor profits due to increased competition, while another observes positive effects as increased customer attention and innovation benefit the complementary market as a whole. In this research note, we seek to transcend these conflicting views by comparing the effects of early versus late platform-owner entry. We apply a difference-in-differences design to explore the drivers and effects of entry timing using data from three entries that Amazon made into its Alexa voice assistant’s complementary markets. Our findings suggest that early entry is driven by the motivation to boost the overall value creation of the complementary market, whereas late entry is driven by the motivation to capture value already created in a key complementary market. Importantly, our findings suggest that early entry, in contrast to late entry, creates substantial consumer attention that benefits complementors offering specialized functionality. The findings also suggest that complementors with more experience are more likely to benefit from this increased attention. We contribute to platform research by showing that the timing of the platform owner’s entry matters in a way that can potentially reconcile conflicting findings regarding the consequences of platform owners’ entry into complementary markets.
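The difference-in-differences logic behind such a design can be sketched in a few lines. The numbers below are made up for illustration and are not from the Alexa dataset; the estimator compares the pre/post change in an outcome (say, average complementor performance) for complementary markets the platform owner entered against the change in markets it left untouched.

```python
# Difference-in-differences on toy data: the estimated entry effect is the
# change in the treated group's mean outcome minus the change in the
# control group's. All figures are illustrative.

def mean(xs):
    return sum(xs) / len(xs)

# Average complementor outcomes before/after platform-owner entry (hypothetical).
treated_pre, treated_post = [10.0, 12.0, 11.0], [16.0, 18.0, 17.0]  # entered markets
control_pre, control_post = [9.0, 11.0, 10.0], [11.0, 13.0, 12.0]   # untouched markets

did = (mean(treated_post) - mean(treated_pre)) - (mean(control_post) - mean(control_pre))
print(did)  # treated gained 6, control gained 2, so the estimated effect is 4.0
```

Subtracting the control group’s change nets out trends, such as rising overall customer attention, that would have affected the treated markets even without entry.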

 

There has been an explosion in uses of educational technology (EdTech) to support schools’ teaching, learning, assessment and administration. This article asks whether UK EdTech and data protection policies protect children's rights at school. It adopts a children's rights framework to explore how EdTech impacts children's rights to education, privacy and freedom from economic exploitation, taking Google Classroom as a case study. The research methods integrate legal research, interviews with UK data protection experts and education professionals working at various levels from national to local, and a socio-technical investigation of the flow of children's data through Google Classroom. The findings show that Google Classroom undermines children's privacy and data protection, potentially infringing children's other rights. However, they also show that regulation has had an impact on Google's policy and practice. Specifically, we trace how various governments’ deployment of a range of legal arguments has enabled them to regulate Google's relationship with schools to improve its treatment of children's data. Although the UK government has not brought such actions, the data flow investigation shows that Google has also improved its protection of children's data in UK schools as a result of these international actions. Nonetheless, multiple problems remain, due both to Google's non-compliance with data protection regulations and schools’ practices of using Google Classroom. We conclude with a blueprint for the rights-respecting treatment of children's education data that identifies needed actions for the UK Department for Education, data protection authority, and industry, to mitigate harmful practices and better support schools.

 

In recent years, Amazon, Microsoft, and Google have become three of the dominant developers of AI infrastructures and services. The increasing economic and political power of these companies over the data, computing infrastructures, and AI expertise that play a central role in the development of contemporary AI technologies has led to major concerns among academic researchers, critical commentators, and policymakers addressing their market and monopoly power. Picking up on such macro-level political-economic analyses, this paper more specifically investigates the micro-material ways infrastructural power in AI is exercised through the respective cloud AI infrastructures and services developed by their cloud platforms: AWS, Microsoft Azure, and Google Cloud. Through an empirical analysis of their evolutionary trajectories in the context of AI between January 2017 and April 2021, this paper argues that these cloud platforms attempt to exercise infrastructural power in three significant ways: through vertical integration, complementary innovation, and the power of abstraction. Each dynamic is strategically mobilised to strengthen these platforms’ dominant position at the forefront of AI development and implementation. This complicates the critical evaluation and regulation of AI technologies by public authorities. At the same time, these forms of infrastructural power in the cloud provide Amazon, Microsoft, and Google with leverage to set the conditions of possibility for future AI production and deployment.
