Post-9/11 fears of terrorism have radically changed information
gathering and intelligence structures. Massive surveillance systems have become
a site for daily navigation. Everyday interactions require digitised
information for going to the movies, getting insurance, paying bills, and
accessing government services. This information is increasingly stored in the
cloud in perpetuity, with little control over how it is used and deployed.
This immense data-gathering milieu has stimulated increasing public
concern with privacy and security. The Cambridge Analytica scandal has focused
attention on social media networks, while the Gorgon Stare project has raised
concerns about the extent to which safety, risk, crime and harm can be
responsibly managed by states as they increasingly outsource policing to
private companies.
The motivation behind the gathering of this data is the power that it
holds and its potential to shape and redefine human knowledge and
practices. Data-sets reveal patterns of human behaviour and allow the tracking
of outcomes and the prediction of potentialities. Despite Google's growing
reputation as a massive database of our personal search histories and a 'pioneer
of surveillance capitalism', it is also a site where this data dragnetting
delivers tremendous social benefit.
Viktor Mayer-Schönberger and Kenneth Cukier recount
the role Google played in containing a potential H1N1 outbreak:

Google could “predict” the spread of the winter flu in the United
States, not just nationally, but down to specific regions and even states. The
company could achieve this by looking at what people were searching for on the
Internet. Since Google receives more than three billion search queries every
day and saves them all, it had plenty of data to work with.
Google took the 50 million most common search terms that Americans type
and compared the list with CDC data on the spread of seasonal flu between 2003
and 2008. The idea was to identify areas infected by the flu virus by what
people searched for on the Internet. Others had tried to do this with Internet
search terms, but no one else had as much data, processing power, and
statistical know-how as Google.
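At its core, the approach described above is a correlation exercise: search-term frequencies are compared against official case counts, and the terms that track the CDC's figures most closely become a proxy signal for flu activity. The sketch below is purely illustrative and is not Google's actual model; the weekly counts, candidate terms, and correlation threshold are all invented for the example.

```python
# Illustrative sketch only: correlate search-term volumes with CDC flu counts
# and keep the terms that best track the official data. All data here is
# synthetic; the real Flu Trends model was far larger and more sophisticated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
weeks = pd.date_range("2003-01-05", "2008-12-28", freq="W")

# Hypothetical weekly CDC flu case counts: a winter-peaking seasonal curve plus noise.
season = 1000 + 800 * np.cos(2 * np.pi * weeks.dayofyear / 365.25)
cdc_cases = pd.Series(season + rng.normal(0, 50, len(weeks)), index=weeks)

# Hypothetical weekly query volumes for a handful of candidate search terms.
queries = pd.DataFrame(
    {
        "flu symptoms": cdc_cases * 0.9 + rng.normal(0, 40, len(weeks)),   # tracks flu closely
        "thermometer":  cdc_cases * 0.3 + rng.normal(0, 400, len(weeks)),  # only loosely related
        "basketball":   rng.normal(500, 60, len(weeks)),                   # unrelated
    },
    index=weeks,
)

# Rank each term by how closely its volume correlates with CDC case counts.
correlations = queries.corrwith(cdc_cases).sort_values(ascending=False)
print(correlations)

# Keep the strongly correlated terms and use their scaled combined volume
# as a rough proxy for current flu activity in a given region or week.
signal_terms = correlations[correlations > 0.8].index
proxy = queries[signal_terms].mean(axis=1)
proxy_scaled = proxy * (cdc_cases.mean() / proxy.mean())
print(proxy_scaled.tail())
```

In this toy version, only the terms whose historical volumes closely mirror the CDC's counts survive the threshold, and their aggregate volume then stands in for official reporting in near real time; the scale of the real exercise lay in doing this across tens of millions of terms and many regions at once.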
We are seeking chapters that interrogate these awkward, in-between
spaces of the surveillance society – between the oppressive and terrifying and
the socially just and beneficial. How can this context of coerced digital
participation serve social justice rather than harm it? Instead of the continued
oppression of disempowered and unpopular individuals and groups, how might
big-data surveillance assist resistance and rebellion, social justice and
mobility?
Potential Topics:
- Social Media and Security
- Privacy and Empowerment
- Big Data for Health
- Governance, Sovereignty and Human Rights
- CCTV, Cities and Movement through Urban Spaces
- Big Data and the Environment
- The GDPR and European Identity
- Terms of Service and Data Use Policies
- Facial Recognition Masks and Other Surveillance-Obscuring Fashions
- AI and the Right to be Forgotten
- Big Brother in the 21st Century
- Popular Culture, Big Data and the Representation of Surveillance
- Parenting, Technology and Surveillance
- Message Encryption and Resistance
- Open Source/Open Society?
- Corporate Surveillance
Submission Procedure
Potential authors are invited to submit chapter abstracts of no more
than 500 words, including a title, 4 to 6 keywords, and a brief bio, by email
to m.kent@curtin.edu.au and Leanne.mcrae@curtin.edu.au by 1 January 2020.
(Please indicate in your proposal if you wish to use any visual material, and
how you have gained or will gain copyright clearance for it.) Authors
will receive a response by 15 February 2020, with provisionally accepted
chapters of approximately 6,000 words (including references) due by 15 May
2020 for review. If you would like any further information, please contact Mike
or Leanne.
Edited by Dr Leanne McRae (Curtin University) and Associate Professor Mike Kent (Curtin University)