To see the other types of publications on this topic, follow the link: UN. Information and Communication Technologies Task Force.
Author: Grafiati
Published: 27 July 2024
Last updated: 30 July 2024
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 47 journal articles for your research on the topic 'UN. Information and Communication Technologies Task Force.'
Next to every source in the list of references there is an 'Add to bibliography' button. Press it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse journal articles in a wide variety of disciplines and organise your bibliography correctly.
1
Pokryshen, Dmytro, and Sofiia Nesterenko. "INFORMATION AND COMMUNICATION TECHNOLOGIES AS A TEACHER'S SELF-DEVELOPMENT TOOL." OPEN EDUCATIONAL E-ENVIRONMENT OF MODERN UNIVERSITY, no. 13 (2022): 114–21. http://dx.doi.org/10.28925/2414-0325.2022.139.
Abstract:
The article is devoted to the problem of the self-development of a pedagogical employee of a general secondary education institution. An overview of the teacher's self-development model in the context of informal education, using information and communication technologies, was made. The analysis of various studies showed that ICT has a significant impact on educational activities and is a driving lever in providing quality educational content. Mastery of the basic principles, skills and abilities of digital tools is currently a priority task for all institutions of general secondary education. ICT competence is currently the basis for the self-development and self-organization of a teacher in the process of the sustainable development of a specialist's personality. For a teacher, having modern, practical ICT skills is more than a simple desire for improvement: it is a necessity. The approach to teaching with ICT tools by teachers of different cycles can be implemented taking into account the following: new approaches to teaching and increased flexibility; user orientation and greater autonomy for students; support and use of new technologies; strengthened network connections between institutions and partnerships between interested parties in the field of education (stakeholders); and improved access that allows those with fewer resources to advance in knowledge. The components of the teacher's self-development are separated into internal and external. The first group concerns work on oneself, and the acceptance and manifestation of oneself in the world; the second is balanced, deep work with one's own personality. The external components of a teacher's self-development include the following characteristics: environment, relationships, priorities, activities. Behind the external aspects that need development, the teacher should look for the internal ones that are the driving force in this process: self-knowledge, understanding, planning, implementation.
An analysis of survey results was made regarding the use by secondary school teachers of Internet platforms and resources before and after the introduction of quarantine restrictions.
2
Thompson, Kay, and Melissa Lapsa. "COMMUNICATION ACROSS THE BLACK SEA VIA INTERNET TECHNOLOGY." International Oil Spill Conference Proceedings 2001, no. 2 (March 1, 2001): 1119–20. http://dx.doi.org/10.7901/2169-3358-2001-2-1119.
Abstract:
The U.S. Department of Energy's (DOE's) Office of International Affairs has been joined by an interagency task force to undertake a program in the Black Sea region called the “Black Sea Environmental Initiative.” The objectives of the task force are to support the countries of the region in addressing significant Black Sea environmental issues, including oil spill response and prevention. Working with delegates from Bulgaria, Georgia, Romania, Russia, Turkey, and Ukraine, DOE and Oak Ridge National Laboratory (ORNL) coordinated a workshop on a regional oil spill emergency response system for the Black Sea on September 14–17, 1999 in Odessa, Ukraine; DOE and the National Academy of Science, Ukraine cosponsored the workshop. The “Black Sea Environmental Information Center” Web site was unveiled at the Odessa workshop. Created by ORNL, the Web site ( http://pims.ed.ornl.gov/blacksea) facilitates information flow and dialog between the countries of the region. The Web site is intended to provide a comprehensive source for information on: oil spill cleanup, monitoring, and related commercial technologies; scientists' requests for research partners; various countries' laws, regulations, and standards relating to the environmental condition of the Black Sea; publication of scientific papers and on-line discussions of these issues; and individuals and companies working on Black Sea environmental issues. The Web site also provides a real-time chat capability where meetings are organized. Several meetings among regional officials have been conducted, and planning is underway for the first real-time training session, which will be held in the next few months. The Web site also hosts a growing database of historical pollution testing data from research institutes around the Black Sea.
3
Shumilov, V. M., and L. M. Krajnyukova. "The role of the UN in normative counteraction to the transnational crimes of terroristic character committed in the information sphere." Moscow Journal of International Law, no. 4 (December 31, 2020): 23–37. http://dx.doi.org/10.24833/0869-0049-2020-4-23-37.
Abstract:
INTRODUCTION. In today’s world, threats to international information security are increasing. One of them is the use of information and communication technologies for criminal purposes. The United Nations has become the centre for the development of measures to counter such practices. The article discusses the role of the United Nations in the formation of a new international legal institution. MATERIALS AND METHODS. The study was based on resolutions of the United Nations General Assembly and the United Nations Security Council, the texts of relevant international treaties and draft treaties, and academic writings. The methodological basis of the study was the general scientific and private scientific methods of knowledge which are traditional for legal works. RESEARCH RESULT. As a result of the research, the authors corrected the view of the term "information terrorism" that is becoming established in legal science, and highlighted the provisions of UN General Assembly and UN Security Council resolutions that form the normative basis for states’ countering of crimes in the information space and, more broadly, of the use of information and communication technologies for criminal purposes. DISCUSSION AND CONCLUSIONS. The authors note that the formation of a new international legal institution takes place within the framework and under the auspices of the United Nations, mainly on the basis of soft-law norms. But now a new stage of "switching" is beginning: the stage in which the method of developing international recommendatory norms turns to the method of developing international treaty norms that have a more stringent legal force.
4
Norris, Wendy, Amy Voida, and Stephen Voida. "People Talk in Stories. Responders Talk in Data: A Framework for Temporal Sensemaking in Time- and Safety-critical Work." Proceedings of the ACM on Human-Computer Interaction 6, CSCW1 (March 30, 2022): 1–23. http://dx.doi.org/10.1145/3512955.
Abstract:
Global crowdsourcing teams who conduct humanitarian response use temporal narratives as a sensemaking device when time is a critical element of the data story. In dynamic situations in which the flow of online information is rapid, fluid, and disordered, the process of how distributed teams construct a temporal narrative is not well understood nor well supported by information and communication technologies (ICTs). Here, we examine an intense need for temporal sensemaking: time- and safety-critical information work during the 2017 Hurricane Maria crisis response in Puerto Rico. Our analysis of semi-structured interviews reveals how members of a global digital humanitarian group, The Standby Task Force (SBTF), use a process of triage, evaluation, negotiation, and synchronization to construct collective temporal narratives in their high-tempo, distributed information work. Informed by these empirical insights, we reflect on the design implications for cloud-based, collaborative ICTs used in time- and safety-critical remote work.
5
Soldatenko, Iryna. "The Communication technologies to encourage innovative activities engagement in students." Technium: Romanian Journal of Applied Sciences and Technology 2, no. 7 (November 9, 2020): 201–8. http://dx.doi.org/10.47577/technium.v2i7.1987.
Abstract:
The challenges of the new century have expanded the range of tasks universities are facing. In addition to traditional research and cultural tasks, universities now face the task of mastering the role of innovators, not only in educational and research technologies, but also in cultivating socially responsible individuals and active citizens who are aware of global threats, able to anticipate risks, address social issues and develop the economy. The article presents the results of the author's sociological research on the following issues: how students of V. N. Karazin Kharkiv National University respond to global challenges, whether students are prepared to become part of a solution to the global development challenges identified by the Government of Ukraine (based on the UN Sustainable Development Goals) as a priority for the near future, and what forms of innovative practice the students choose for this purpose. Social entrepreneurship is attractive for students and young people as an innovative practice enabling them to combine social activity with a startup, be independent of the state in the labor market, and commence entrepreneurial activity in an exciting sector. Young people now feel congenial to the philosophy of social entrepreneurship, including many members of Generation Z, people aged 18 to 24, who care about the future of their country. According to the Diffusion of Innovation Theory by Everett Rogers, students belong to the Early Majority of innovation adopters. For this group of adopters, it is important that information and communication programs contain success stories and other evidence of the effectiveness of innovation, especially from opinion leaders of youth audiences. The article looks into the new communicative technologies of influence which encourage students to innovate.
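The Rogers adopter categories mentioned in the abstract above are conventionally defined by standard-deviation cutoffs on the distribution of adoption times (innovators first ~2.5%, early adopters ~13.5%, early majority ~34%, late majority ~34%, laggards ~16%). A minimal sketch of that classification rule, with purely hypothetical data and function names not drawn from the study:

```python
import statistics

def rogers_category(t, mean, sd):
    """Classify one adoption time t against the cohort mean and standard deviation."""
    if t < mean - 2 * sd:
        return "innovator"        # earliest ~2.5% of adopters
    if t < mean - sd:
        return "early adopter"    # next ~13.5%
    if t < mean:
        return "early majority"   # next ~34%
    if t < mean + sd:
        return "late majority"    # next ~34%
    return "laggard"              # last ~16%

def classify_cohort(adoption_times):
    """Classify every adoption time in a cohort by Rogers' cutoffs."""
    mean = statistics.mean(adoption_times)
    sd = statistics.pstdev(adoption_times)
    return [rogers_category(t, mean, sd) for t in adoption_times]

# Hypothetical adoption times (e.g. months until a student adopts a practice)
print(classify_cohort([1, 4, 5, 6, 7, 8, 9, 12]))
```

In real diffusion data the adoption-time distribution is assumed roughly normal, which is what makes the fixed standard-deviation cutoffs line up with Rogers' percentages.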
6
Romashkina, N. "Problem of International Information Security in the UN." World Economy and International Relations 64, no. 12 (2020): 25–32. http://dx.doi.org/10.20542/0131-2227-2020-64-12-25-32.
Abstract:
Information and communication technologies (ICTs) have transformed society and the economy and expanded opportunities for international cooperation. However, the unique capabilities of ICTs are not used only for the common good of humanity. Today, we are talking about an accelerated increase in threats in the information space and cyberspace. Many states consider these spaces additional, and often even the main, arenas for military-political confrontation. Thus, the issues of the inadmissibility of applying ICTs for military and political purposes, for the realization of hostile actions and acts of aggression, are critically urgent. For this reason, the task of ensuring international information security has become part of the agenda of one of the six main committees of the UN General Assembly (UNGA), the First Committee, where states investigate threats to peace and seek ways to promote global security and disarmament. Discussion of international information security issues within this Committee plays a crucial role and requires deep analysis by the scientific and expert community. The article presents the results of this analysis, starting from 1998, when the problem of international information security was first discussed at the UN on the initiative of Russia. The study identifies and investigates the possibilities and prospects of implementing the main goal in the process of ensuring international information security: namely, the creation of an international legal regime to ban the development, production and use of information and cyber weapons. In addition, this paper includes the results of a study on the UN's work, and the prospects for its success, in maintaining international information security in a new format: within the framework of the Open-Ended Working Group (OEWG) organized in 2019 at the suggestion of Russia, and the UN Group of Governmental Experts (GGE) proposed by the United States.
The output of both groups is going to have a significant impact on information security trends and policies on a global scale. Therefore, a timely scientific analysis of the issue, of current challenges and threats, and a forecast of trends and prospects for enhancing international information security, as presented in the article, are currently among the most relevant tasks.
7
Karmazina, Liliya. "HISTORICAL EVOLUTION OF IT TERMINOLOGY AND ITS FURTHER DEVELOPMENT." Scientific Journal of Polonia University 59, no. 4 (November 15, 2023): 30–35. http://dx.doi.org/10.23856/5904.
Abstract:
The article examines the peculiarities of the evolution of the IT (Information Technology) language and its interaction with the environment, which is manifested by the constant appearance of new words and expressions that arise when describing technological phenomena. The study employs historical, linguistic, and cultural analyses to provide insights into the evolution of IT terminology. The historical evolution of IT terminology traces the dynamic progress of technology. Beginning with borrowed mathematical terms like "algorithm" from the work of Persian mathematician al-Khwārizmī, IT terminology has continually adapted to embrace new concepts. Charles Babbage's analytical engine introduced "punch cards" and "mechanical levers" as precursors to modern IT vocabulary. The ENIAC era expanded it to include "circuit," "transistor," and "byte." Software development contributed "bug," and the rise of personal computers brought "desktop" and "mouse." The internet era ushered in terms like "email" and "browser," while the mobile age introduced "apps" and "WiFi." The 21st century witnessed the emergence of "tweet," "neural networks," "deep learning," and "machine learning," reshaping technology and industries. Immersive technologies brought "virtual reality" and "augmented reality," while decentralized systems introduced "blockchain" and "cryptocurrency," revolutionizing finance. Standardization of IT terminology has become crucial for clear cross-border communication, led by organizations like the Internet Engineering Task Force (IETF). Looking forward, the IT lexicon will expand with terms related to quantum computing, biotechnology integration, and emerging technologies. In conclusion, IT terminology reflects adaptability in the digital age, ensuring precise communication in an ever-changing technological landscape.
8
Sun, Jing. "Practice and Exploration of the Construction of New Engineering Skills Studios under the "Three navigation and Five Integration" Model from the Perspective of Industrial Demands." Journal of Computing and Electronic Information Management 10, no. 3 (May 24, 2023): 147–49. http://dx.doi.org/10.54097/jceim.v10i3.8761.
Abstract:
According to talent demand statistics for the IoT industry from the China Academy of Information and Communication Technology, the total talent demand gap in the intelligent hardware industry in the next few years will exceed 16 million people. It is urgent for vocational colleges to cultivate a large number of technically integrated, skilled talents through artificial intelligence professional groups. Starting from the needs of the positions, various mentors participate deeply in the development of studio training courses, "1+X" certificate training, order project guidance, innovation and entrepreneurship guidance, and other routine work. Using the studio management system as a driving force, the division of labor among mentors, the specific requirements for mentoring and passing on skills, and the performance evaluation indicators are clarified, to ensure that mentors are not "absent" and to safeguard students' growth. We precisely meet the needs of talent training for new positions in the information technology industry. Driven by application research and development tasks, guided by students' interests, and driven by enterprise product development and the development of intelligent hardware and consumer electronics, we complete the whole process from professional training to precision employment and improve employment competitiveness. Students participate in real projects or real enterprise orders, master the latest industry developments, technical information, and mainstream work methods, and achieve precise education.
9
Feerrar, Julia. "Development of a framework for digital literacy." Reference Services Review 47, no. 2 (June 10, 2019): 91–105. http://dx.doi.org/10.1108/rsr-01-2019-0002.
Abstract:
Purpose Institutions seeking to develop or expand digital literacy programs face the challenge of navigating varied definitions for digital literacy itself. In answer to this challenge, this paper aims to share a process for developing a shared framework for digital literacy at one institution, including drawing on themes in existing frameworks, soliciting campus feedback and making revisions. Design/methodology/approach A draft digital literacy framework was created following the work of an initial library task force. Focus groups were conducted to gather feedback on the framework and to identify areas for future development. Findings Focus groups yielded 38 written responses. Feedback themes related to gaps in the framework, structural suggestions and common challenges for learners. Themes in focus group feedback led to several framework revisions, including the addition of Curation as a competency area, the removal of information communication technologies as its own competency area, and the inclusion of Learner rather than Student at the center of the framework. Practical implications The approaches described in this case study can be adapted by those looking to create a shared framework or definition for digital literacy on their campuses, as well as to create or revise definitions for other related literacies. Originality/value This case study presents an adaptable process for getting started with broad digital literacy initiatives, within the context of existing digital literacy frameworks worldwide.
10
Ionov, M. V., N. E. Zvartau, and A. O. Konradi. "Telemedicine and out-of-office blood pressure monitoring: up-to-date view of ESC/ESH." "Arterial’naya Gipertenziya" ("Arterial Hypertension") 24, no. 6 (January 26, 2019): 631–36. http://dx.doi.org/10.18705/1607-419x-2018-24-6-631-636.
Abstract:
The 2018 Joint Guidelines of the European Society of Cardiology and the European Society of Hypertension present a successful attempt to revise the approach to one of the most prevalent health problems worldwide. For more than two years, a Task Force of experts from the two Societies assessed and investigated the most recent scientific advances in the field of hypertension (HTN) in order to provide doctors with adequate diagnostic tools, evaluation of cardiovascular risk, and optimal drug treatment. Undoubtedly, among a number of crucial changes to the target blood pressure (BP) range, along with the new sections dedicated to HTN in different circumstances, one can notice equally valuable, albeit subtle, remarks about out-of-office BP and closely related telehealth. Extensive use of ambulatory and self-measured BP monitoring has forced a comparison with office BP. Booming information and communication technologies, applied successfully in various therapeutic areas, have taken their place in the Guidelines. From now on, digital health becomes a part of follow-up and adherence control. This brief report highlights the current position of European experts on telemedicine and out-of-office methods of blood pressure monitoring.
11
Ambrosino, Nicolino, Guido Vagheggini, Stefano Mazzoleni, and Michele Vitacca. "Telemedicine in chronic obstructive pulmonary disease." Breathe 12, no. 4 (November 30, 2016): 350–56. http://dx.doi.org/10.1183/20734735.014616.
Abstract:
Telemedicine is a medical application of advanced technology to disease management. This modality may also provide benefits to patients with chronic obstructive pulmonary disease (COPD). Different devices and systems are used. The legal problems associated with telemedicine are still controversial. Economic advantages for healthcare systems, though potentially high, are still poorly investigated. A European Respiratory Society Task Force has defined indications, follow-up, equipment, facilities, and legal and economic issues of tele-monitoring of COPD patients, including those undergoing home mechanical ventilation.
Key points:
- The costs of care assistance for chronic disease patients are dramatically increasing.
- Telemedicine may be a very useful application of information and communication technologies in high-quality healthcare services.
- Many remote health monitoring systems are available, ensuring safety, feasibility, effectiveness, sustainability and flexibility to face different patients’ needs.
- The legal problems associated with telemedicine are still controversial.
- National and European Union governments should develop guidelines and ethical, legal, regulatory, technical and administrative standards for remote medicine.
- The economic advantages, if any, of this new approach must be compared to a “gold standard” of homecare that is very variable among different European countries and within each European country.
- The efficacy of respiratory disease telemedicine projects is promising (i.e. to tailor therapeutic intervention; to avoid useless hospital and emergency department admissions, and reduce general practitioner and specialist visits; and to involve the patients and their families).
- Different programmes based on specific and local situations, on specific diseases and on levels of severity, with a high level of flexibility, should be utilised.
- A European Respiratory Society Task Force produced a statement on commonly accepted clinical criteria for indications, follow-up, equipment, facilities, and legal and economic issues, also covering telemonitoring of ventilator-dependent chronic obstructive pulmonary disease patients.
- Much more research is needed before considering telemonitoring a real improvement in the management of these patients.
Educational aims:
- To clarify definitions of aspects of telemedicine.
- To describe different tools of telemedicine.
- To provide information on the main clinical results.
- To define recommendations and limitations.
12
Bratsuk, Ivan, and Sviatoslav Kavyn. "INTERNATIONAL LEGAL REGULATION OF ENSURING INFORMATION SECURITY WITHIN THE FRAMEWORK OF THE UN." Bulletin of Taras Shevchenko National University of Kyiv. Legal Studies, no. 125 (2023): 21–26. http://dx.doi.org/10.17721/1728-2195/2023/1.125-4.
Abstract:
As a result of the active implementation of digital technologies in all spheres of social life, both international and national legal mechanisms aimed at ensuring the security of the information space come to the foreground. The existing legal mechanisms provide for the improvement and harmonization of the legal framework in the field of information security at the national and international levels. In this context, the idea of digital sovereignty determines the use of legal mechanisms that ensure the protection of information security. For this reason, a comprehensive study of the general patterns of the functioning and development of international legal mechanisms for ensuring information security within the framework of the UN is particularly appropriate and relevant, and requires a detailed analysis. The article addresses the analysis and study of UN legal mechanisms in the field of ensuring information security. The purpose of the scientific work is a comprehensive study of the general patterns of the functioning and development of international legal mechanisms for ensuring information security within the framework of the UN, and the development of scientifically based proposals and recommendations regarding the effective operation of these mechanisms in international and national legal orders (legal systems). The methodological basis of the research consists of general scientific and special legal methods. In particular, a systematic approach, a generalization method, and systematic analysis were used in the process of the research. In the course of the study, the authors analyzed the peculiarities of the functioning of the institutional and legal mechanism of information protection within the framework of UN coordination, in the context of the multi-vector system of international security and the legal regulation of international cooperation.
The article substantiates the expediency of developing an integrated, coordinated information policy of international organizations and institutions with the aim of unifying approaches to ensuring information security. The work also summarizes the main problems arising in the international legal regulation of ensuring information security, and the main threats to international peace and security in the information space, and suggests ways to solve them. In this context, the work summarizes the principles of international information security, highlights the main trends in the development of cyber threats in the modern information space, and identifies the measures necessary for their neutralization. The article analyzes the peculiarities of the functioning of the institutional and legal mechanism of cyber protection in the context of the legislative regulation of international cooperation between international organizations and institutions. In particular, the main mechanisms of legal support for cyber protection of the information space were analyzed with the aim of their integration into a unified international system of the legal information field. As a result of the study, recommendations were formed: in the field of ensuring information security at the national and international levels, it is necessary to continue and expand activities to create conditions for the formation of an international information security system based on generally recognized principles and norms of international law. In particular, at the UN level, it is necessary to prepare and adopt international legal acts regulating the application of the principles and norms of international law in the field of the use of information and communication technologies.
Since there is no single global act that regulates the procedure for combating information threats, the task of developing a UN Convention on International Information Security is very important. The document should: identify the main threats to international peace and security in the information space; determine the main principles of ensuring international information security; prescribe in detail the principles of international cooperation in the fight against crimes in the information sphere; and determine effective and efficient mechanisms of legal responsibility in the information space, up to the creation of a special international body for the investigation of crimes in the information sphere.
13
Ovchinnikova, Oksana V. "Activation of the Victim’s Role in Pre-trial Proceedings: Remote Forms of Participation." Victimology 10, no. 4 (February 28, 2024): 442–52. http://dx.doi.org/10.47475/2411-0590-2023-10-4-442-452.
Abstract:
The article discusses the issues of activating the role of the victim in pre-trial proceedings as part of the implementation of the “minimum standard rules for the treatment of victims of crimes and abuse of power”. Since active participation in the investigation of a criminal case increases the victim’s level of satisfaction with the justice system, it is proposed to expand the relevant procedural possibilities while not shifting the burden of proof to the victim. The victim’s interaction with law enforcement agencies should begin with participation in the inspection of the scene of the incident. The author proposes to provide the victim with the opportunity to participate in this investigative action remotely, using video conferencing. This will not only increase the victim’s level of confidence in law enforcement agencies, but also provide instant feedback between the investigative task force and the victim of the crime, making it possible to correctly determine the direction of the ongoing investigation. The article notes that the victim acquires the full range of procedural rights from the moment the relevant decision is made. The author believes that the victim should be able to immediately familiarize himself with this document, as well as receive other information on the criminal case both in person and remotely, using information and communication technologies. It is noted that pre-trial proceedings must be organized in accordance with the needs of the victim, giving him the opportunity to choose the form of participation (in person or remotely), as well as to use a personal digital device to participate in investigative actions through video conferencing. It is proposed to amend the legislation to allow the use of common consumer messengers in remote investigative actions, or to create an electronic service on the mobile public services platform for video conferencing with government agencies.
14
Canuto, Claudia, and Ilaria Ciavattin. "L’USO DEL TASK SUPPORTED TEACHING AND LEARNING IN DIDATTICA A DISTANZA. LO STUDIO DI UN CASO NEL PROGETTO “ITALIANO L2 A SCUOLA”." Italiano LinguaDue 15, no. 1 (June 26, 2023): 929–49. http://dx.doi.org/10.54103/2037-3597/20445.
Abstract:
[The use of Task Supported Teaching and Learning (TSTL) in Italian L2 distance learning: a case study in the project “Italiano L2 a scuola”.] This paper examines the language education experience with a group of foreign students, held as an online workshop focused on Project-based learning (PBL).
The workshop was held in 2020/21 as part of the project "Italiano L2 a scuola", born from the collaboration between the Educational Services of the Municipality of Turin and the Department of Humanities of the University of the same city. The project deals with supporting the linguistic integration of Newly Arrived in Italy (NAI) students, enrolled in primary and lower secondary schools in the city of Turin. Every year a group of university students conducts L2 Italian workshops in city schools to enhance the use of Italian for real communicative purposes, in order to help NAI students integrate into the school community. The aim of the article is to analyze some good language teaching practices based on TSTL (Task Supported Teaching and Learning), which have been able to stimulate the class group by promoting active learning of the language, also through the use of ICT (information and communication technologies), which proved to be essential for the delivery of the workshop during the pandemic period. The intention is therefore to contribute food for thought to guide language teaching choices in certain contexts of learning Italian as a second language.
APA, Harvard, Vancouver, ISO, and other styles
15
Kravchuk, Iryna, Olena Popadiuk, and Inna Lopashchuk. "EUROPEAN EXPERIENCE THE CONSTRUCTION OF NETWORK ECONOMY AND PRIORITY AREAS OF DEVELOPMENT IN UKRAINE." Ukrainian Journal of Applied Economics 4, no. 3 (August 30, 2019): 149–60. http://dx.doi.org/10.36887/2415-8453-2019-3-17.
Full text
Abstract:
The modern economy is based on information and communication technologies and innovations, where information and knowledge are considered the main keys to achieving high-quality economic growth. For Ukraine this process is complicated by the need for fundamental modernization of the economy on the basis of information and network system formation and continuous innovation. The difficult economic situation and the loss of part of the territory to occupation force the search for new approaches to qualitative growth and economic stability. Modern tendencies of world development, driven by the transition to the post-industrial stage of the information society, facilitate the emergence of various economic models of the network economy. The relevance of the paper lies in examining how to achieve high technological growth of the information society using the experience of countries that have successfully implemented and used it. To date, little attention has been paid to the transformation of the Ukrainian economy in light of modern conceptions of network economy growth, and no common approach has yet been offered for determining a network economy model suitable for Ukraine. The aim of the article is to analyze the main components of the network economy and to study the Estonian model of network economy development, Estonia being a country with successful experience of realizing such a strategy. The key tendencies of network economy development are determined and argued, with consideration of the possibilities of implementing them in Ukraine. The problems of the network economy and the information society are much disputed among scholars.
The most fundamental works on the network economy and the information society belong to Manuel Castells, who analyzes the tendencies that formed the network society and an emerging new economy he calls informational and global. D. Bell, in his work «The Coming of Post-Industrial Society», denotes the place and role of post-industrial society in the overall view of social progress. The Ukrainian scholar A. Chukhno considers the problems of the correlation of industrial and post-industrial growth, the emergence of a new economy, and the transition to a qualitatively new level of social and economic growth; he sees the expansion and use of information and telecommunication technologies as one of «the fundamental processes of the post-industrial growth». According to S. Sokolenko, the formation of world networks and the network economy has become the leading tendency in the evolution of global output; he regards the transition to the new stage of world economic development as an industrial and information network economy. The method of analysis and synthesis is used to reveal the basic constituents of the network economy and the tendencies of its development in the modern economy; the methods of historical and logical analysis are used to study the Estonian model of network economy growth and to define the possibilities of implementing this model in Ukraine. Network economy technologies cannot by themselves solve the problem of national economic development, but movement in this direction strengthens Ukraine's strategy of trade integration into the world economy, facilitates scientific, innovative and network economy development, and enables the country to take a worthy place in the global information society. Considering foreign experience, the priority task for Ukraine is to develop the global information infrastructure and to constitute an open and flexible foundation for e-Government services and «Open Big Data» for all authorities and government institutions.
Key words: network economy, e-Government, information services, Internet, information society.
APA, Harvard, Vancouver, ISO, and other styles
16
Korchak, Nataliia. "ANTI-CORRUPTION DIGITAL SOLUTIONS: THE UKRAINIAN EXPERIENCE AND THE PECULIARITIES OF THEIR IMPLEMENTATION IN A STATE OF WAR." Bulletin of Taras Shevchenko National University of Kyiv. Public Administration 16, no. 2 (2022): 13–16. http://dx.doi.org/10.17721/2616-9193.2022/16-2/7.
Full text
Abstract:
The purpose of the article is to highlight the innovative experience of implementing digital transformation in Ukraine, using the example of the National Agency for the Prevention of Corruption (hereinafter, the NAPC). The content of the publication reflects the specifics of the research subject and takes an interdisciplinary approach to the topic. The article is a comprehensive study of the problems of digital transformation (digitalization) in terms of quantitative and qualitative changes in public administration and management. Using the organization of management in the NAPC as an example, attention is focused on the directions of digital transformation, and the components of the digitalization processes of a public administration body are highlighted. It is noted that an effective fight against corruption requires not only adopting high-quality anti-corruption legislation and creating strong anti-corruption bodies, but also developing and applying digital tools. The NAPC became one of the first state agencies to appoint an official responsible for digital development, digital transformation and digitalization, and it is Ukraine's leader in implementing anti-corruption digital solutions. It is argued that digitalization in public and governmental activity forms a qualitative characteristic of the public administration system through the use of modern technologies, and that the digitalization of the National Agency is key to developing its institutional autonomy as a service organization. It is established that since the beginning of the war the NAPC has completely reformatted its work towards providing interdepartmental communication among specialized bodies in joint projects. This is due to the high level of new digital skills and knowledge acquired in the pre-war period and to a powerful team of analysts involved in collecting and processing the data needed to form proposals for the sanctions lists.
The NAPC's digital competence is applied in identifying individuals involved in the aggression against Ukraine. Thanks to the Task-force portal, the assets of sanctioned persons are identified for seizure in order to support the restoration of Ukraine. The War and Sanctions portal provides information on sanctioned individuals and data on the assets of persons involved in Russia's military aggression. The list of domestic collateral officials is formed through maintenance of the Register of State Assignees. The implementation of the international IT tool RuAssets increases the efficiency of work to identify hidden Russian and Belarusian assets. It is noted and substantiated that the digitalization of the NAPC covers both the automation of internal management processes and the introduction of modern information and communication technologies in the fight against corruption, both in the country's peacetime development and in the Ukrainian people's fierce struggle against full-scale Russian aggression. The scientific novelty of the article lies in the fact that, for the first time, various aspects of implementing digital solutions within digital transformation are shown through the example of a state body. The practical significance of the article lies in the possibility of further using its materials in the educational process, in interdisciplinary research into the problems of digital state development and anti-corruption digital solutions, and in forming proposals for using innovative IT technologies in the activities of public administrations.
APA, Harvard, Vancouver, ISO, and other styles
17
Acón-Matamoros, Ariana Gabriela, Aurora Trujillo-Cotera, and Heiner Guido-Cambronero. "IMPLEMENTACIÓN DE UN SERVICIO WEB EN LA UNED, HERRAMIENTA PARA LOGRAR EXCELENCIA ACADÉMICA. IMPLEMENTING A WEB SERVICE IN THE UNED, AS TOOL TO ACHIEVE ACADEMIC EXCELLENCE." Revista Electrónica Calidad en la Educación Superior 2, no. 2 (September 5, 2011): 193–211. http://dx.doi.org/10.22458/caes.v2i2.429.
Full text
Abstract:
Over the last two decades, universities have introduced information and communication technologies into their administrative and educational dynamics with varying speed and success. The proper use of technologies, as a complement to educational administration and to learning processes in higher education, can indeed help improve processes and outcomes in the academic task. The technology available today offers many facilities that higher education in Costa Rica can use for the continuous improvement of the services it offers, and a web service can help ensure quality and the pursuit of excellence focused on its students. A web application environment, independent of the platforms on which the applications were developed, providing data-query services to external users and unifying those applications through universal protocols, can generate benefits such as: reuse of work products, integration of applications developed with different tools, agile data queries, immediate access for external users to the services provided, savings in the human resources devoted to these queries, process automation, and quality of service, among others. The University of Alicante in Spain provides a web service for its community, which is used here as an example to illustrate the benefit its implementation would bring to the UNED.
The article considers the benefits the UNED would obtain with a web service, among which are: streamlining services without requiring the student or institution to travel to the University's offices, opening the door to e-commerce, improved service quality, and application automation, among others.
Keywords: web service, web applications, protocols, collaboration, efficiency, usability.
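The abstract above describes exposing data from applications built on different platforms through a single query service that speaks a universal protocol. A minimal sketch of that idea, assuming a JSON-over-HTTP interface; the names (`STUDENT_RECORDS`, `handle_query`) and sample data are illustrative, not taken from the article or from the UNED's actual systems:

```python
import json

# Hypothetical in-memory store standing in for the university's
# heterogeneous back-end systems.
STUDENT_RECORDS = {
    "A1001": {"name": "Ana Mora", "program": "Education", "credits": 42},
    "A1002": {"name": "Luis Soto", "program": "Engineering", "credits": 18},
}

def handle_query(student_id: str) -> str:
    """Answer one data-query request as a JSON string.

    Because the payload is plain JSON, clients written for any
    platform can consume it through the same universal protocol.
    """
    record = STUDENT_RECORDS.get(student_id)
    if record is None:
        return json.dumps({"error": "not found", "id": student_id})
    return json.dumps({"id": student_id, **record})
```

In a real deployment the function would sit behind an HTTP handler (for example Python's `http.server` or a WSGI framework), so that external users reach the same endpoint regardless of which tools each back-end application was built with.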
APA, Harvard, Vancouver, ISO, and other styles
18
Aréniz-Arévalo, Yesenia. "Desarrollo de la comunicación oral y escrita como competencia genérica en la formación profesional de estudiantes de Ingeniería Civil." Revista Perspectivas 2, no. 2 (July 1, 2017): 84. http://dx.doi.org/10.22463/25909215.1314.
Full text
Abstract:
Development of oral and written communication as a generic competence in the professional training of Civil Engineering students. The contemporary world is characterized by the dramatic changes brought on by globalization, the impact of information and communication technologies, the knowledge society and the need to manage diversity effectively. In this context, a significantly different type of higher education is needed, one that forces the University to reevaluate its traditional functions and responsibilities regarding the type of professional it is educating. Accordingly, the study aimed to evaluate the development of oral and written communication as a generic competence among Civil Engineering students at the Universidad Francisco de Paula Santander. A predominantly qualitative, mixed-methods design was used, complemented by questionnaires. The statistical analysis allowed the author to describe and evaluate a set of inductive categories associated with oral and written communication, derived from a previously theorized deductive category. Enriched painting and the analysis of written work were used as qualitative instruments; for the quantitative analysis, a Likert-scale questionnaire was used, supported and complemented by statistical calculations from various control templates.
The results establish the inductive categories that explain the students' shortcomings in appropriating this generic competence, prompting reflection on the responsibility of educational institutions, including at the higher level, for the acquisition and development of reading and writing skills.
Keywords: generic skills, oral communication, written communication, academic literacy, higher education.
APA, Harvard, Vancouver, ISO, and other styles
19
Olivares-García, María Ángeles, Sonia García-Segura, Elba Gutiérrez-Santiuste, and Rosario Mérida-Serrano. "El e-portafolio profesional: Una herramienta facilitadora en la transición al empleo de estudiantes de Grado en Educación Social en la Universidad de Córdoba." REOP - Revista Española de Orientación y Psicopedagogía 31, no. 3 (December 28, 2020): 129. http://dx.doi.org/10.5944/reop.vol.31.num.3.2020.29265.
Full text
Abstract:
This paper presents a qualitative research study analysing the strengths and weaknesses of the professional e-portfolio as a tool for active job search in future university graduates' transition to the labour market.
In this process, the University plays a key role through the training and career guidance of these young people. In this task of advice and support, information and communication technologies and, particularly, the e-portfolio are becoming tools which facilitate, on the one hand, self-knowledge and the design of a professional project and, on the other, an understanding of the work environment. This study originated from an educational innovation project within the compulsory course Guidance, training and socio-labour insertion, taught during the third year of the Degree in Social Education (University of Cordoba, Spain). Throughout the theoretical and practical sessions of this course, students designed and shaped their professional projects, which were finally reflected in their professional e-portfolios. The results show how designing this tool contributed to students' self-knowledge and to the definition of their professional interests and goals. Likewise, the e-portfolio favours lifelong learning and fosters digital skills acquisition among participating students. However, this same strength may become a threat if students lack the necessary digital competence or have no access to the technological resources needed to develop it independently.
APA, Harvard, Vancouver, ISO, and other styles
20
Lara, Antonia. "MIGRACIÓN, UNA EMPRESA SUSTENTABLE." Revista de Gestão Social e Ambiental 2, no. 3 (January 29, 2009): 39–58. http://dx.doi.org/10.24857/rgsa.v2i3.93.
Full text
Abstract:
Globalization undoubtedly offers opportunities for development, but critical voices from social and environmental quarters draw attention to the negative impact it is having on the sustainability of natural resources, local economies and communities. In this scenario, the displacement of people as labour has characteristics distinct from those of other historical contexts and moments. Migration, as the social dimension of globalization, stands in tension with the benefits this new scenario offers. Within the disparities inherent in this dizzying process, the question arises: how can corporate social responsibility contribute to producing benefits with and for transnational migrants? This article aims to contribute to the dissemination of ideas, through the presentation of cases of social enterprises, in order to encourage and motivate strategies of action, within the social role that falls to the private sector, favouring the social and economic inclusion of immigrant workers and their families. Considering transnational migrant communities as a labour force and a consumer force, three areas are identified where a company can capitalize on what migration produces in a socially responsible way. The first relates to the movement of people between countries as migrant workers; the second to the flow of capital produced by migrant workers and their investments; and the third to the use of information and communication technologies for connecting transnational communities. In a fourth section, drawing on the notion of sustainable development, a perspective for action in the field of migration is proposed in terms of its sustainability.
A working model is presented that regards migration as a productive force which, rather than constituting a social problem, poses challenges and opens opportunities for development and social transformation. The model sets out the dimensions from which the field of migration is to be addressed and how they relate to one another.
Key words: Transnationalism, Identities, Entrepreneurship, Remittances, Biculturalism.
APA, Harvard, Vancouver, ISO, and other styles
21
Vedernikov, Mykhailo, Lesia Volianska-Savchuk, Oksana Chernushkina, and Natalia Bazaliyska. "DIGITAL TRANSFORMATION IN THE FIELD OF HR PROCESSES: DIRECTIONS, PROBLEMS AND OPPORTUNITIES." Proceedings of Scientific Works of Cherkasy State Technological University Series Economic Sciences, no. 66 (October 20, 2022): 39–48. http://dx.doi.org/10.24025/2306-4420.66.2022.268584.
Full text
Abstract:
The purpose of the article is to develop provisions on the formation, use, directions, problems and opportunities of digital transformation in the digitalization of HR processes under modern business conditions. The article examines the peculiarities of digitalization in managing the development of personnel potential at domestic enterprises, which requires drawing on the experience of foreign countries focused on business optimization, effective IT solutions, and personnel quality. The main directions for developing management systems under the digitalization of management are identified: promoting the acceleration of innovative initiatives, predictive monitoring of the market environment, assessment of factors affecting the company's competitiveness, and development of road maps based on industry priorities and customer experience. Alongside this, the formation of personnel potential, complex synchronization of all types of activities, development of a culture and competencies of information exchange, modernization of IT systems, and application of analytics and Big Data are considered. The article identifies the organic combination of digital HR with mobile applications, social networks, cloud technologies, virtual reality and artificial intelligence to create favourable conditions for improving employees' work, recruiting and dismissing personnel, etc. Methodology. Digital transformation of HR is a change in how HR functions through the use of data in all areas: payroll, performance management, learning and development, benefits, compensation, motivation and recruiting. Data are presented on HR specialists' investment priorities in recruitment activities.
As the statistics show, corporate websites are considered the most important element of recruiting, while applicant tracking systems are the next priority. Results. Implementation of a human resources management strategy is an important stage of the strategic management process. For it to succeed, the organization's management must adhere to the following rules. First, the goals, strategies and tasks of personnel management must be communicated carefully and in good time to all employees of the organization, in order to secure not only their understanding of the organization's personnel management, but also their informal involvement in implementing the strategies, in particular employees' commitment to the organization in carrying out the strategy. Second, the general management of the organization and the heads of the personnel management service must not only ensure the timely provision of all necessary resources (materials, equipment, office equipment, finance, etc.), but also have a plan for implementing the strategy in the form of targets for the state and development of labour potential, and record the achievement of each goal. Digitalization is a necessary process in the development of modern enterprises in the conditions of the neo-economy. It is designed to simplify and speed up work with large databases; to automate all types of activities (core and auxiliary operational, investment, financial); to improve communication with customers, suppliers, partners and all institutions of the external environment; and to form new principles of interaction within the enterprise, between divisions, employees and management, with a transition to new organizational forms of management. Practical implications. Today, the development of ICT (digitalization) is a factor that changes the pace of enterprise development.
ICT helps increase employee motivation and develop creative thinking; it also saves working time, while multimedia tools and interactivity improve both the presentation and the assimilation of information. Modern ICT profoundly changes management methods, workplaces, types of activity, interests, and circles of partners. The following means of mass introduction of fundamentally new ICT have caused qualitative changes in enterprise management: mainframes; personal computers; the Internet; specialized global networks; cloud computing; Internet sensors, etc. Using all the opportunities of ICT and turning them into a real competitive force becomes the main task for managers. Value/originality. Updated practices, supported by a new type of manager with a new way of thinking, help to strengthen and develop innovative teams. In terms of capabilities, HR enables digital transformation by offering technologies capable of monitoring workforce performance in real time, implementing innovations, and "using feedback to make informed decisions by managers". Digitalization of society has radically changed people's lives and opened up new opportunities in the field of HR. Whatever stage of digital development an individual organization has reached, the people-management strategy and IT personnel occupy a central place in its strategic priorities, which determines the conditions for long-term development. The digital transformation of HR affects all types of businesses, from the largest corporations to the smallest micro-firms. It includes the transition from long-standing, traditionally used resources, tools and processes (such as filing cabinets and contact lists) to digital means of information storage.
22
Roessingh, Hetty. "Teachers’ roles in designing meaningful tasks for mediating language learning through the use of ICT: A reflection on authentic learning for young ELLs / Le rôle des enseignants dans la conception de tâches pertinentes en apprentissage des langues." Canadian Journal of Learning and Technology / La revue canadienne de l’apprentissage et de la technologie 40, no. 1 (May 9, 2014). http://dx.doi.org/10.21432/t2pp4m.
Full text
Abstract:
Task based learning (TBL) continues to evolve as information and communication technology (ICT) inspired tools and teaching approaches afford the possibilities of transforming students’ learning experiences by heightening their motivation and sense of autonomy, and in turn, their vocabulary development. To capture this synergy, teachers will need to reimagine authentic learning and task design. This paper describes and reflects on the shifting demands and roles of the teacher in the elementary school setting. An illustrative sample of a series of linked tasks provides a model for pre-service teachers as they take on the work of preparing meaningful work for ELLs who are increasingly present in today’s mainstream class settings. Le rôle des enseignants dans la conception de tâches pertinentes en apprentissage des langues au moyen des TIC: Une réflexion sur l'apprentissage authentique pour les jeunes apprenants d’ALS. Task-based learning continues to evolve as tools and teaching approaches inspired by information and communication technologies (ICT) make it possible to transform students’ learning experiences by stimulating their motivation, their sense of autonomy and, ultimately, the enrichment of their vocabulary. To achieve this synergy, teachers will need to reinvent authentic learning and task design. This article describes and reflects on the changing demands and roles of the teacher in the primary school. A representative sample of a series of linked tasks provides a model for pre-service teachers as they embark on preparing meaningful work for ESL learners, who are today increasingly numerous in mainstream classrooms.
23
Featherstone, Clairmont. "An Evaluation of the Correlation between Criminal Intelligence Management and Security Operations in Benue State, Nigeria." Indonesian Journal of Counter Terrorism and National Security 2, no. 2 (July 31, 2023). http://dx.doi.org/10.15294/ijctns.v2i2.74628.
Full text
Abstract:
This study examined the correlation between criminal intelligence management and security operations in Benue State, North Central Nigeria. The study adopted a qualitative method, using the semi-structured key informant interview (KII) technique and a review of relevant literature. A sample size of 21 (n = 21) was determined for the study. Findings showed that the way the key tools and agents used to collect intelligence information for the production of criminal intelligence are managed, including informants, surveillance, technologies (ICTs), and community policing, correlates significantly with the outcome of security operations by the Joint Task Force and other outfits involved in the fight against crime in Benue State. Second, the study found that how the criminal intelligence analysis phase of the intelligence cycle is managed has a significant relationship with the outcome of security operations by the Joint Task Force and other outfits fighting the rising wave of crime in Benue State. Furthermore, the study found that the way the dissemination and sharing of criminal intelligence products among sister anti-crime outfits is managed correlates significantly with the outcome of security operations against crime in Benue State. The study recommended the adequate provision of operational logistics to the intelligence units of the various law enforcement agencies, including patrol vehicles, telephones and other communication equipment, arms and ammunition, and cameras, among others.
The study also recommended greater efforts to enhance the capacity of the community policing approach to fighting crime through the adequate provision of operational logistics, enhanced remuneration, and regular training of volunteers in the various vigilante groups operating across local and urban neighbourhoods in Nigeria, including the Livestock Guards in Benue State.
24
"Problem Analysis of RPL Overhead in 6LOWPAN using 5w1h Model." International Journal of Innovative Technology and Exploring Engineering 8, no. 12 (October 10, 2019): 5300–5305. http://dx.doi.org/10.35940/ijitee.l3732.1081219.
Full text
Abstract:
The Smart Home (SH) is one of the Internet of Things (IoT) ecosystems experiencing rapid growth, especially in communication and application technologies. However, most SH applications run on embedded devices characterized by low power, little memory, and limited cost. The IPv6 over Low-power Wireless Personal Area Networks (6LoWPAN) standard was therefore introduced by the Internet Engineering Task Force (IETF) to meet the connectivity requirements of embedded devices. However, 6LoWPAN links are restricted to 250 kbps and a frame length of 127 bytes, whereas IPv6 requires links to support packets of at least 1280 bytes. Because of this glaring discrepancy, routing becomes the main issue for 6LoWPAN network capability. A number of routing protocols exist for 6LoWPAN; among them, RPL is effective in terms of latency and throughput, but its overhead is considerably high when implemented in a large-scale network. This study therefore focuses on analysing the causes of RPL overhead in the 6LoWPAN network. To that end, this document analysis employed the 5W1H (What, Where, When, Why, Who and How) model to investigate and describe the causes of RPL overhead in 6LoWPAN. The model identifies four critical parameters that must be addressed to solve the RPL overhead problem: i) network topology change, ii) the limitations of 6LoWPAN, iii) node failure in large networks, and iv) additional transmission information. A future goal of this study is to produce a novel 6LoWPAN routing protocol algorithm to serve as a high-level technical recommendation for IoT SH ecosystem communication.
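The mismatch the abstract describes (127-byte IEEE 802.15.4 frames versus 1280-byte IPv6 packets, forcing fragmentation) can be sketched numerically. Only the 127-byte frame limit and the 1280-byte IPv6 figure come from the abstract; the MAC overhead below is an assumption (real overhead depends on addressing modes, security and header compression), and the FRAG1/FRAGN header sizes follow RFC 4944, whose 8-byte fragment-offset alignment is ignored here for simplicity:

```python
# Illustrative sketch (not from the paper): estimating how many IEEE 802.15.4
# frames a single IPv6 packet occupies under 6LoWPAN fragmentation.
import math

FRAME_SIZE = 127    # max IEEE 802.15.4 frame length in bytes (from the abstract)
MAC_OVERHEAD = 23   # assumed MAC header + FCS; varies with addressing/security
FRAG1_HDR = 4       # 6LoWPAN first-fragment header (RFC 4944 FRAG1)
FRAGN_HDR = 5       # 6LoWPAN subsequent-fragment header (RFC 4944 FRAGN)

def fragments_needed(ipv6_packet_len: int) -> int:
    """Return how many link-layer frames one IPv6 packet occupies
    (simplified: ignores the 8-byte offset alignment of RFC 4944)."""
    first_payload = FRAME_SIZE - MAC_OVERHEAD - FRAG1_HDR   # 100 bytes
    next_payload = FRAME_SIZE - MAC_OVERHEAD - FRAGN_HDR    # 99 bytes
    if ipv6_packet_len <= first_payload:
        return 1
    remaining = ipv6_packet_len - first_payload
    return 1 + math.ceil(remaining / next_payload)
```

Under these assumptions a full 1280-byte IPv6 packet would occupy 13 frames, which illustrates why per-packet routing overhead multiplies so quickly in 6LoWPAN.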
25
Nevelichko, Lyubov, Irina Vorotilkina, Vasily Sinyukov, and Natalʹya Belkina. "Distance learning is a social risk factor." World of Science. Series: Sociology, Philology, Cultural Studies 12, no. 1 (March 2021). http://dx.doi.org/10.15862/24scsk121.
Full text
Abstract:
The article is devoted to a topical problem: distance learning. Distance learning is a qualitatively new, progressive type of training based on modern information technologies and means of communication, whose rapid surge was triggered by the pandemic that engulfed most of the planet. Interest in studying the realities of post-industrial society is dictated by the need to examine the transformations that occur in a person under the influence of the technological changes that have penetrated all spheres of human existence. On the one hand, a new environment is being formed, opening up unlimited opportunities for accumulating and transmitting information and for communication in electronic network communities; on the other, these gains carry potential threats and risks in both the distant and the foreseeable future. In our view, the expected consequences of global modernization pose the greatest threat to such a specific sphere as education. The subject that provoked the most heated discussion was the impact of distance learning on the quality of training of future specialists and on the development of their communication skills. The main task of the article is to substantiate distance learning as a new form of learning and to identify its problems and positive aspects. The article presents an analysis of the theoretical approaches to distance learning of Russian and foreign researchers. Based on the results of a sociological study, students' attitudes to distance learning are revealed, along with their assessment of how well their own communication skills develop in remote learning. The authors conclude that distance learning becomes, to a certain extent, a risk factor, depriving students of the opportunity to develop the qualities that modern conditions require of them: self-organization, self-control, and communication skills.
Although distance learning has become an integral part of the educational process, we are convinced that this form should be only a part, a component, of the educational process, a "safety cushion" in situations of force majeure. The leading form of instruction at the university should remain the traditional classroom format, as the only form capable of preparing a person for life and successful functioning in a risk society.
26
Alsuradi, Haneen, Wanjoo Park, and Mohamad Eid. "Assessment of EEG-based functional connectivity in response to haptic delay." Frontiers in Neuroscience 16 (October 18, 2022). http://dx.doi.org/10.3389/fnins.2022.961101.
Full text
Abstract:
Haptic technologies enable users to physically interact with remote or virtual environments by applying force, vibration, or motion via haptic interfaces. However, the delivery of timely haptic feedback remains a challenge due to the stringent computation and communication requirements associated with haptic data transfer. Haptic delay disrupts the realism of the user experience and interferes with the quality of interaction. Research efforts have been devoted to studying the neural correlates of delayed sensory stimulation to better understand and thus mitigate the impact of delay. However, little is known about the functional neural networks that process haptic delay. This paper investigates the underlying neural networks associated with processing haptic delay in passive and active haptic interactions. Nineteen participants completed a visuo-haptic task using a computer screen and a haptic device while electroencephalography (EEG) data were being recorded. A combined approach based on phase locking value (PLV) functional connectivity and graph theory was used. To assay the effects of haptic delay on functional connectivity, we evaluate a global connectivity property through the small-worldness index and a local connectivity property through the nodal strength index. Results suggest that the brain exhibits significantly different network characteristics when a haptic delay is introduced. Haptic delay caused an increased manifestation of the small-worldness index in the delta and theta bands as well as an increased nodal strength index in the middle central region. Inter-regional connectivity analysis showed that the middle central region was significantly connected to the parietal and occipital regions as a result of haptic delay. These results are expected to indicate the detection of conflicting visuo-haptic information at the middle central region and their respective resolution and integration at the parietal and occipital regions.
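The phase locking value (PLV) connectivity measure named in the abstract has a standard definition that can be sketched in a few lines: the magnitude of the time-averaged phase difference between two signals, with instantaneous phase taken from the Hilbert transform. This is a generic illustration with synthetic signals, not the authors' EEG pipeline:

```python
# Hedged sketch of the phase locking value (PLV): |mean_t exp(i*(phi1 - phi2))|,
# where phases come from the analytic (Hilbert-transformed) signals.
# PLV = 1 means perfectly locked phases; values near 0 mean no locking.
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x: np.ndarray, y: np.ndarray) -> float:
    """PLV between two equal-length 1-D signals via the Hilbert transform."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return float(np.abs(np.mean(np.exp(1j * (phase_x - phase_y)))))

# Example: two sinusoids with a constant phase lag are fully phase-locked.
t = np.linspace(0, 1, 1000, endpoint=False)
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 10 * t + 0.5)   # fixed lag -> PLV close to 1
```

In an EEG setting this would be computed per frequency band and per electrode pair, and the resulting PLV matrix then feeds the graph-theoretic indices (small-worldness, nodal strength) the study evaluates.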
27
Simpson, Catherine. "Communicating Uncertainty about Climate Change: The Scientists’ Dilemma." M/C Journal 14, no. 1 (January 26, 2011). http://dx.doi.org/10.5204/mcj.348.
Full text
Abstract:
Photograph by Gonzalo Echeverria (2010)
We need to get some broad-based support, to capture the public’s imagination … so we have to offer up scary scenarios, make simplified, dramatic statements and make little mention of any doubts … each of us has to decide what the right balance is between being effective and being honest (Hulme 347). Acclaimed climate scientist, the late Stephen Schneider, made this comment in 1988. Later he regretted it and said that there are ways of using metaphors that can “convey both urgency and uncertainty” (Hulme 347). What Schneider encapsulates here is the great conundrum for those attempting to communicate climate change to the everyday public. How do scientists capture the public’s imagination and convey the desperation they feel about climate change, but do it ethically? If scientific findings are presented carefully, in boring technical jargon that few can understand, then they are unlikely to attract audiences or provide an impetus for behavioural change. “What can move someone to act?” ask communication theorists Susan Moser and Lisa Dilling (37). “If a red light blinks on in a cockpit”, asks Donella Meadows, “should the pilot ignore it until it speaks in an unexcited tone? … Is there any way to say [it] sweetly? Patiently? If one did, would anyone pay attention?” (Moser and Dilling 37). In 2010 Tim Flannery was appointed Panasonic Chair in Environmental Sustainability at Macquarie University. His main teaching role remains within the new science communication programme. One of the first things Flannery was emphatic about was acquainting students with Karl Popper and the origin of the scientific method. “There is no truth in science”, he proclaimed in his first lecture to students, “only theories, hypotheses and falsifiabilities”.
In other words, science’s epistemological limits are framed such that, as Michael Lemonick argues, “a statement that cannot be proven false is generally not considered to be scientific” (n.p., my emphasis). The impetus for the following paper emanates precisely from this issue of scientific uncertainty — more specifically from teaching a course with Tim Flannery called Communicating climate change to a highly motivated group of undergraduate science communication students. I attempt to illuminate how uncertainty is constructed differently by different groups and that the “public” does not necessarily interpret uncertainty in the same way the sciences do. This paper also analyses how doubt has been politicised and operates polemically in media coverage of climate change. As Andrew Gorman-Murray and Gordon Waitt highlight in an earlier issue of M/C Journal that focused on the climate-culture nexus, an understanding of the science alone is not adequate to deal with the cultural change necessary to address the challenges climate change brings (n.p.). Far from being redundant in debates around climate change, the humanities have much to offer.
Erosion of Trust in Science
The objectives of Macquarie’s science communication program are far more ambitious than it can ever hope to achieve. But this is not necessarily a bad thing. The initiative is a response to declining student numbers in maths and science programmes around the country and is designed to address the perceived lack of communication skills in science graduates that the Australian Council of Deans of Science identified in their 2001 report. According to Macquarie Vice-Chancellor Steven Schwartz’s blog, a broader, and much more ambitious, aim of the program is to “restore public trust in science and scientists in the face of widespread cynicism” (n.p.).
In recent times the erosion of public trust in science was exacerbated through the theft of e-mails from East Anglia University’s Climate Research Unit and the so-called “climategate scandal” which ensued. With the illegal publication of the e-mails came claims against the Research Unit that climate experts had been manipulating scientific data to suit a pro-global-warming agenda. Three inquiries later, all the scientists involved were cleared of any wrongdoing; however, the damage had already been done. To the public, what this scandal revealed was a certain level of scientific hubris around the uncertainties of the science and an unwillingness to explain the nature of these uncertainties. The prevailing notion remained that the experts were keeping information from public scrutiny and not being totally honest with them, which, at least in the short term, damaged the scientists’ credibility. Many argued that this signalled a shift in public opinion and media portrayal of the issue of climate change in late 2009. University of Sydney academic Rod Tiffen claimed in the Sydney Morning Herald that the climategate scandal was “one of the pivotal moments in changing the politics of climate change” (n.p.). In Australia this had profound implications and meant that the bipartisan agreement on an emissions trading scheme (ETS) that had almost been reached subsequently collapsed with (climate sceptic) Tony Abbott’s defeat of (ETS advocate) Malcolm Turnbull to become opposition leader (Tiffen). Not long after the reputation of science received this almighty blow, albeit unfairly, the federal government released a report in February 2010, Inspiring Australia – A national strategy for engagement with the sciences, as part of the country’s innovation agenda.
The report outlines a commitment from the Australian government and universities around the country to address the challenges of not only communicating science to the broader community but, in the process, renewing public trust and engagement in science. The report states that: in order to achieve a scientifically engaged Australia, it will be necessary to develop a culture where the sciences are recognized as relevant to everyday life … Our science institutions will be expected to share their knowledge and to help realize full social, economic, health and environmental benefits of scientific research and in return win ongoing public support. (xiv-xv) After launching the report, Innovation Minister Kim Carr went so far as to conflate “hope” with “science” and in the process elevate a discourse of technological determinism: “it’s time for all true friends of science to step up and defend its values and achievements” adding that, "when you denigrate science, you destroy hope” (n.p.). Forever gone is our naïve post-war world when scientists were held in such high esteem that they could virtually use humans as guinea pigs to test out new wonder chemicals; such as organochlorines, of which DDT is the most widely known (Carson). Thanks to government-sponsored nuclear testing programs, if you were born in the 1950s, 1960s or early 1970s, your brain carries a permanent nuclear legacy (Flannery, Here On Earth 158). So surely, for the most part, questioning the authority and hubristic tendencies of science is a good thing. And I might add, it’s not just scientists who bear this critical burden, the same scepticism is directed towards journalists, politicians and academics alike – something that many cultural theorists have noted is characteristic of our contemporary postmodern world (Lyotard). 
So, far from destroying hope, as the former Innovation Minister Kim Carr (now Minister for Innovation, Industry, Science and Research) suggests, surely we need to use the criticisms of science as a vehicle upon which to initiate hope and humility.
Different Ways of Knowing: Bayesian Beliefs and Matters of Concern
At best, [science] produces a robust consensus based on a process of inquiry that allows for continued scrutiny, re-examination, and revision. (Oreskes 370)
In an attempt to capitalise on the Macquarie Science Faculty’s expertise in climate science, I convened a course in second semester 2010 called SCOM201 Science, Media, Community: Communicating Climate Change, with invaluable assistance from Penny Wilson, Elaine Kelly and Liz Morgan. Mike Hulme’s provocative text, Why we disagree about climate change: Understanding controversy, inaction and opportunity, provided an invaluable framework for the course. Hulme’s book brings other types of knowledge, beyond the scientific, to bear on our attitudes towards climate change. Climate change, he claims, has moved from being just a physical, scientific, and measurable phenomenon to becoming a social and cultural phenomenon. In order to understand the contested nature of climate change we need to acknowledge the dynamic and varied meanings climate has played in different cultures throughout history, as well as the role that our own subjective attitudes and judgements play. Climate change has become a battleground between different ways of knowing, alternative visions of the future, competing ideas about what’s ethical and what’s not. Hulme makes the point that one of the reasons we disagree about climate change is because we disagree about the role of science in today’s society. He encourages readers to use climate change as a tool to rigorously question the basis of our beliefs, assumptions and prejudices.
Since uncertainty was the course’s raison d’etre, I was fortunate to have an extraordinary cohort of students who readily engaged with a course that forced them to confront their own epistemological limits — both personally and in a disciplinary sense. (See their blog: https://scom201.wordpress.com/). Science is often associated with objective realities. It thus tends to distinguish itself from the post-structuralist vein of critique that dominates much of the contemporary humanities. At the core of post-structuralism is scepticism about everyday, commonly accepted “truths” or what some call “meta-narratives” as well as an acknowledgement of the role that subjectivity plays in the pursuit of knowledge (Lyotard). However if we can’t rely on objective truths or impartial facts then where does this leave us when it comes to generating policy or encouraging behavioural change around the issue of climate change? Controversial philosophy of science scholar Bruno Latour sits squarely in the post-structuralist camp. In his 2004 article, “Why has critique run out of steam? From matters of fact to matters of concern”, he laments the way the right wing has managed to gain ground in the climate change debate through arguing that uncertainty and lack of proof is reason enough to deny demands for action. Or to use his turn-of-phrase, “dangerous extremists are using the very same argument of social construction to destroy hard-won evidence that could save our lives” (Latour n.p). Through co-opting (the Left’s dearly held notion of) scepticism and even calling themselves “climate sceptics”, they exploited doubt as a rationale for why we should do nothing about climate change. Uncertainty is not only an important part of science, but also of the human condition. 
However, as sociologist Sheila Jasanoff explains in her Nature article, “Technologies of Humility”, uncertainty has become like a disease: Uncertainty has become a threat to collective action, the disease that knowledge must cure. It is the condition that poses cruel dilemmas for decision makers; that must be reduced at all costs; that is tamed with scenarios and assessments; and that feeds the frenzy for new knowledge, much of it scientific. (Jasanoff 33) If we move from talking about climate change as “a matter of fact” to “a matter of concern”, argues Bruno Latour, then we can start talking about useful ways to combat it, rather than talking about whether the science is “in” or not. Facts certainly matter, claims Latour, but they can’t give us the whole story, rather “they assemble with other ingredients to produce a matter of concern” (Potter and Oster 123). Emily Potter and Candice Oster suggest that climate change can’t be understood through either natural or cultural frames alone and, “unlike a matter of fact, matters of concern cannot be explained through a single point of view or discursive frame” (123). This makes a lot of what Hulme argues far more useful because it enables the debate to be taken to another level. Those of us with non-scientific expertise can centre debates around the kinds of societies we want, rather than being caught up in the scientific (un)certainties. If we translate Latour’s concept of climate change being “a matter of concern” into the discourse of environmental management then what we come up with, I think, is the “precautionary principle”. In the YouTube clip, “Stephen Schneider vs Skeptics”, Schneider argues that when in doubt about the potential environmental impacts of climate change, we should always apply the precautionary principle. This principle emerged from the UN conference on Environment and Development in Rio de Janeiro in 1992 and concerns the management of scientific risk. 
However its origins are evident much earlier, in documents such as the “Use of Pesticides” from the US President’s Science Advisory Committee in 1962. Unlike in criminal and other types of law, where the burden of proof is on the prosecutor to show that the person charged is guilty of a particular offence, in environmental law the onus of proof is on manufacturers to demonstrate the safety of their product. For instance, a pesticide should be restricted or disapproved for use if there is “reasonable doubt” about its safety (Oreskes 374). Principle 15 of the Rio Declaration on Environment and Development in 1992 has its foundations in the precautionary principle: “Where there are threats of serious or irreversible environmental damage, lack of full scientific certainty should not be used as a reason for postponing measures to prevent environmental degradation” (n.p.). According to Environmental Law Online, the Rio declaration suggests that “The precautionary principle applies where there is a ‘lack of full scientific certainty’ – that is, when science cannot say what consequences to expect, how grave they are, or how likely they are to occur” (n.p.). In order to make predictions about the likelihood of an event occurring, scientists employ a level of subjectivity, or need to “reveal their degree of belief that a prediction will turn out to be correct … [S]omething has to substitute for this lack of certainty”, otherwise “the only alternative is to admit that absolutely nothing is known” (Hulme 85). These statements of “subjective probabilities or beliefs” are called Bayesian, after the eighteenth-century English mathematician Thomas Bayes, who developed the theory of evidential probability. These “probabilities” are estimates, or in other words subjective, informed judgements that draw upon evidence and experience, about the likelihood of an event occurring.
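The evidential probability attributed to Bayes here reduces to a one-line update rule: a prior degree of belief is revised in the light of how probable the evidence is under the hypothesis versus without it. The sketch below uses invented numbers purely to illustrate that mechanism; nothing in it is drawn from the article:

```python
# Illustrative sketch of Bayesian updating (hypothetical numbers only):
# posterior P(H|E) = P(E|H) * P(H) / [P(E|H) * P(H) + P(E|not H) * P(not H)]

def bayes_update(prior: float, likelihood: float, likelihood_alt: float) -> float:
    """Posterior belief in H given prior P(H), P(E|H), and P(E|not H)."""
    evidence = likelihood * prior + likelihood_alt * (1.0 - prior)
    return likelihood * prior / evidence

# A 50/50 prior, updated by evidence three times as probable under the
# hypothesis (0.6 vs 0.2), yields a 75% posterior degree of belief.
posterior = bayes_update(prior=0.5, likelihood=0.6, likelihood_alt=0.2)
```

The point for the essay is that the output is still a degree of belief, not a proof: stronger evidence sharpens the judgement without ever eliminating the subjective prior.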
The Intergovernmental Panel on Climate Change (IPCC) uses Bayesian beliefs to determine the risk or likelihood of an event occurring. The IPCC provides the largest international scientific assessment of climate change and often adopts a consensus model, where the viewpoint reached by the majority of scientists is used to establish knowledge amongst an interdisciplinary community of scientists and then communicate it to the public (Hulme 88). According to the IPCC, this consensus is reached amongst more than 450 lead authors, more than 800 contributing authors, and 2500 scientific reviewers. While it is an advisory body and is not policy-prescriptive, the IPCC adopts particular linguistic conventions to indicate the probability of a statement being correct. Stephen Schneider convinced the IPCC to use this approach to systemise uncertainty (Lemonick). So, for instance, in the IPCC reports the term “likely” denotes a 66%–90% chance of the statement being correct, while “very likely” denotes more than a 90% chance. Note the change from the Third Assessment Report (2001), indicating that “most of the observed warming over the last fifty years is likely to have been due to the increase in greenhouse gas emissions”, to the Fourth Assessment (February 2007), which more strongly states: “Most of the observed increase in global average temperatures since the mid-twentieth century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations” (Hulme 51, my italics). A fiery attack on Tim Flannery by Andrew Bolt on Steve Price’s talkback radio show in June 2010 illustrates just how misunderstood scientific uncertainty is in the broader community. When Price introduces Flannery as former Australian of the Year, Bolt intercedes, claiming Flannery is “Alarmist of the Year”, then goes on to chastise Flannery for making various forecasts which didn’t eventuate, such as that Perth and Brisbane might run out of water by 2009.
“How much are you to blame for the swing in sentiment, the retreat from global warming policy and rise of scepticism?” demands Bolt. In the context of the events of late 2009 and early 2010, the fact that these events didn’t materialise made Flannery, and others, seem unreliable. And what Bolt had to say on talkback radio, I suspect, resonated with a good proportion of its audience. What Bolt was trying to do was discredit Flannery’s scientific credentials and, in the process, erode trust in the expert. Flannery’s response was to point out that what he had said was that these events might eventuate. In much the same way that the climate sceptics have managed to co-opt scepticism and use it as a rationale for inaction on climate change, Andrew Bolt here either misunderstands basic scientific method or quite consciously misleads and manipulates the public. As Naomi Oreskes argues, “proof does not play the role in science that most people think it does (or should), and therefore it cannot play the role in policy that skeptics demand it should” (Oreskes 370).

Doubt and “Situated” Hope

Uncertainty and ambiguity then emerge here as resources because they force us to confront those things we really want – not safety in some distant, contested future but justice and self-understanding now. (Sheila Jasanoff, cited in Hulme, back cover)

In his last published book before his death in mid-2010, Science as a Contact Sport, Stephen Schneider’s advice to aspiring science communicators is that they should engage with the media “not at all, or a lot”. Climate scientist Ann Henderson-Sellers adds that there are very few scientists “who have the natural ability, and learn or cultivate the talents, of effective communication with and through the media” (430). In order to attract the public’s attention, it was once commonplace for scientists to write editorials and exploit fear-provoking measures by including a “useful catastrophe or two” (Moser and Dilling 37). But are these tactics effective?
Susanne Moser thinks not. She argues that “numerous studies show that … fear may change attitudes … but not necessarily increase active engagement or behaviour change” (Moser 70). Furthermore, risk psychologists argue that danger is always context specific (Hulme 196). If the risk or danger is “situated” and “tangible” (such as lead toxicity levels in children in Mt Isa from the Xstrata mine) then the public will engage with it. However, if it is “un-situated” (distant, intangible and diffuse), like climate change, the audience is less likely to. In my SCOM201 class we examined the impact of two climate change-related campaigns. The first was a short film used to promote the 2009 Copenhagen Climate Change Summit (“Scary”) and the second was the State Government of Victoria’s “You have the power: Save Energy” public awareness campaign (“You”). Using Moser’s article to guide them, students evaluated each campaign’s effectiveness. Their conclusion was that the “You have the power” campaign had far more impact because it: a) had very clear objectives (to cut domestic power consumption); b) provided a very clear visualisation of carbon dioxide through the metaphor of black balloons wafting up into the atmosphere; c) gave viewers a sense of empowerment and hope by describing simple measures to cut power consumption; and d) used simple but effective metaphors to convey a world that had progressed beyond human control, such as household appliances robotically operating themselves in the absence of humans. Despite its high production values, the Copenhagen Summit promotion, in comparison, was worse than ineffective and bordered on propaganda. It actually turned viewers off with its whining, righteous appeal to “please help the world”. Its message and objectives were ambiguous; it conveyed environmental catastrophe through hackneyed images, exploited children through a narrative based on fear, and gave no real sense of hope or empowerment.
In contrast, the Victorian Government’s campaign focused on just one aspect of climate change that was made both tangible and situated. Doubt and uncertainty are productive tools in the pursuit of knowledge. Whether it is scientific or otherwise, uncertainty will always be the motivation that “feeds the frenzy for new knowledge” (Jasanoff 33). Articulating the importance of Hulme’s book, Sheila Jasanoff indicates we should make doubt our friend: “Without downplaying its seriousness, Hulme demotes climate change from ultimate threat to constant companion, whose murmurs unlock in us the instinct for justice and equality” (Hulme, back cover). The “murmurs” that Jasanoff gestures to here, I think, can also be articulated as hope. And it is in this discussion of climate change that doubt and hope sit side by side as bedfellows, mutually entangled. Since the “failed” Copenhagen Summit, there has been a distinct shift in climate change discourse from “experts”. We have moved away from doom-and-gloom discourses and into the realm of what I shall call “situated” hope. “Situated” hope is not based on blind faith alone, but rather is hope grounded in evidence, informed judgements and experience. For instance, in distinct contrast to his cautionary tale The Weather Makers: The History & Future Impact of Climate Change, Tim Flannery’s latest book, Here on Earth, is a biography of our Earth: a planet that throughout its history has oscillated between Gaian and Medean impulses. However, Flannery’s wonder about the natural world and our potential to mitigate the impacts of climate change is not founded on empty rhetoric but rather tempered by evidence; he presents a series of case studies where humanity has managed to come together for a global good.
Whether it’s the 1987 Montreal ban on CFCs (chlorofluorocarbons) or the lesser-known 2001 Stockholm Convention on POPs (persistent organic pollutants), what Flannery envisions is an emerging global civilisation, a giant, intelligent super-organism glued together through social bonds. He says:

If that is ever achieved, the greatest transformation in the history of our planet would have occurred, for Earth would then be able to act as if it were, as Francis Bacon put it all those centuries ago, ‘one entire, perfect living creature’. (Here on Earth, 279)

While science might give us “our most reliable understanding of the natural world” (Oreskes 370), “situated” hope is the only productive and ethical currency we have.

References

Australian Council of Deans of Science. What Did You Do with Your Science Degree? A National Study of Employment Outcomes for Science Degree Holders 1990-2000. Melbourne: Centre for the Study of Higher Education, University of Melbourne, 2001. Australian Government Department of Innovation, Industry, Science and Research. Inspiring Australia – A National Strategy for Engagement with the Sciences. Executive summary. Canberra: DIISR, 2010. 24 May 2010 ‹http://www.innovation.gov.au/SCIENCE/INSPIRINGAUSTRALIA/Documents/InspiringAustraliaSummary.pdf›. “Andrew Bolt with Tim Flannery.” Steve Price. Hosted by Steve Price. Melbourne: Melbourne Talkback Radio, 2010. 9 June 2010 ‹http://www.mtr1377.com.au/index2.php?option=com_newsmanager&task=view&id=6209›. Carson, Rachel. Silent Spring. London: Penguin, 1962 (2000). Carr, Kim. “Celebrating Nobel Laureate Professor Elizabeth Blackburn.” Canberra: DIISR, 2010. 19 Feb. 2010 ‹http://minister.innovation.gov.au/Carr/Pages/CELEBRATINGNOBELLAUREATEPROFESSORELIZABETHBLACKBURN.aspx›. Environmental Law Online. “The Precautionary Principle.” N.d. 19 Jan. 2011 ‹http://www.envirolaw.org.au/articles/precautionary_principle›. Flannery, Tim. The Weather Makers: The History & Future Impact of Climate Change.
Melbourne: Text Publishing, 2005. ———. Here on Earth: An Argument for Hope. Melbourne: Text Publishing, 2010. Gorman-Murray, Andrew, and Gordon Waitt. “Climate and Culture.” M/C Journal 12.4 (2009). 9 Mar. 2011 ‹http://journal.media-culture.org.au/index.php/mcjournal/article/viewArticle/184/0›. Harrison, Karey. “How ‘Inconvenient’ Is Al Gore’s Climate Change Message?” M/C Journal 12.4 (2009). 9 Mar. 2011 ‹http://journal.media-culture.org.au/index.php/mcjournal/article/viewArticle/175›. Henderson-Sellers, Ann. “Climate Whispers: Media Communication about Climate Change.” Climatic Change 40 (1998): 421–456. Hulme, Mike. Why We Disagree about Climate Change: Understanding, Controversy, Inaction and Opportunity. Cambridge: Cambridge UP, 2009. Intergovernmental Panel on Climate Change. A Picture of Climate Change: The Current State of Understanding. 2007. 11 Jan. 2011 ‹http://www.ipcc.ch/pdf/press-ar4/ipcc-flyer-low.pdf›. Jasanoff, Sheila. “Technologies of Humility.” Nature 450 (2007): 33. Latour, Bruno. “Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern.” Critical Inquiry 30.2 (2004). 19 Jan. 2011 ‹http://criticalinquiry.uchicago.edu/issues/v30/30n2.Latour.html›. Lemonick, Michael D. “Climate Heretic: Judith Curry Turns on Her Colleagues.” Nature News 1 Nov. 2010. 9 Mar. 2011 ‹http://www.nature.com/news/2010/101101/full/news.2010.577.html›. Lyotard, Jean-Francois. The Postmodern Condition: A Report on Knowledge. Minneapolis: U of Minnesota P, 1984. Moser, Susanne, and Lisa Dilling. “Making Climate Hot: Communicating the Urgency and Challenge of Global Climate Change.” Environment 46.10 (2004): 32-46. Moser, Susanne. “More Bad News: The Risk of Neglecting Emotional Responses to Climate Change Information.” In Susanne Moser and Lisa Dilling (eds.), Creating a Climate for Change: Communicating Climate Change and Facilitating Social Change. Cambridge: Cambridge UP, 2007. 64-81. Oreskes, Naomi.
“Science and Public Policy: What’s Proof Got to Do with It?” Environmental Science and Policy 7 (2004): 369-383. Potter, Emily, and Candice Oster. “Communicating Climate Change: Public Responsiveness and Matters of Concern.” Media International Australia 127 (2008): 116-126. President’s Science Advisory Committee. “Use of Pesticides”. Washington, D.C.: The White House, 1963. United Nations Declaration on Environment and Development. Rio de Janeiro, 1992. 19 Jan. 2011 ‹http://www.unep.org/Documents.Multilingual/Default.asp?DocumentID=78&ArticleID=1163›. “Scary Global Warming Propaganda Video Shown at the Copenhagen Climate Meeting – 7 Dec. 2009.” YouTube. 21 Mar. 2011 ‹http://www.youtube.com/watch?v=jzSuP_TMFtk&feature=related›. Schneider, Stephen. Science as a Contact Sport: Inside the Battle to Save Earth’s Climate. National Geographic Society, 2010. ———. “Stephen Schneider vs. the Sceptics”. YouTube. 21 Mar. 2011 ‹http://www.youtube.com/watch?v=7rj1QcdEqU0›. Schwartz, Steven. “Science in Search of a New Formula.” 2010. 20 May 2010 ‹http://www.vc.mq.edu.au/blog/2010/03/11/science-in-search-of-a-new-formula/›. Tiffen, Rodney. “You Wouldn’t Read about It: Climate Scientists Right.” Sydney Morning Herald 26 July 2010. 19 Jan. 2011 ‹http://www.smh.com.au/environment/climate-change/you-wouldnt-read-about-it-climate-scientists-right-20100727-10t5i.html›. “You Have the Power: Save Energy.” YouTube. 21 Mar. 2011 ‹http://www.youtube.com/watch?v=SCiS5k_uPbQ›.
28
Burgueño, Rafael, Alberto Bonet-Medina, Álvaro Cerván-Cantón, Rubén Espejo, Francisco Borja Fernández-Berguillo, Felipe Gordo-Ruiz, Hugo Linares-Martínez, et al. "Educación Física en Casa de Calidad. Propuesta de aplicación curricular en Educación Secundaria Obligatoria (Quality Physical Education at Home. Curricular Implementation Proposal in Middle Secondary School)." Retos, no. 39 (June 20, 2020). http://dx.doi.org/10.47197/retos.v0i39.78792.
Abstract:
Ante situaciones de emergencia que obligan al conjunto de la población – en especial a los adolescentes – a permanecer en casa, la Educación Física (EF) representa una buena estrategia para contribuir a mantener los niveles de actividad física diarios desde casa. Por tanto, el objetivo de este trabajo fue mostrar una propuesta didáctica que, basada en el currículum de Educación Física de educación secundaria obligatoria, contribuya a promocionar la actividad física en casa. Para ello, esta propuesta se fundamenta en un enfoque competencial, incluyendo tanto tecnologías de la información y la comunicación como el establecimiento de retos, con la finalidad de abordar los diferentes contenidos curriculares relacionados con la calidad de vida y salud, condición física y motriz, juegos y deportes, expresión corporal y actividades físicas en el medio natural. La evaluación se plantea mediante una serie de instrumentos (rúbrica, diario, portafolio, hoja de observación y cuestionario) que permitan conocer el grado de consecución de los criterios de evaluación. Después de todo, esta propuesta abre nuevas vías para que el profesorado de EF desarrolle otras propuestas didácticas que faciliten no sólo seguir con las clases de educación física, sino la realización de actividad física en casa. Abstract. In emergency situations that force the whole population – particularly adolescents – to stay at home, Physical Education represents an optimal strategy to contribute to adolescents’ physical activity at their home. Therefore, this study aims at showing a didactic proposal that, based on Middle Secondary School Physical Education curriculum, promotes adolescents’ physical activity at home. 
To this end, this proposal relies on a competence-based approach, including both information and communication technologies and the establishment of challenging activities, in order to tackle the different curricular contents related to health and quality of life, physical and motor fitness, games and sports, body expression and physical activities in the natural environment. Assessment focuses on a series of instruments (rubric, diary, portfolio, observation sheets, and questionnaire) that allow the degree of accomplishment of each assessment criterion to be evaluated. Finally, this proposal offers new avenues for Physical Education teachers to develop other didactic proposals facilitating not only Physical Education classes, but also physical activity at home.
29
González-Rodríguez, César, and Santos Urbina-Ramírez. "Análisis de instrumentos para el diagnóstico de la competencia digital." Revista Interuniversitaria de Investigación en Tecnología Educativa, December 1, 2020, 1–12. http://dx.doi.org/10.6018/riite.411101.
Abstract:
La importancia que han cobrado las Tecnologías de la Información y Comunicación en la sociedad durante los últimos años ha provocado que la competencia digital sea considerada como clave en el diseño de las políticas educativas y, en consecuencia, que desde diversos ámbitos se hayan desarrollado múltiples instrumentos destinados a la evaluación de las habilidades y destrezas digitales de docentes, discentes y población en general. Es por ello que se ha considerado pertinente analizar diversos tipos de herramientas usadas en la última década en España para el diagnóstico de la competencia digital del alumnado de distintas etapas educativas prestando atención, entre otros aspectos, a los ítems utilizados, la estructura de las herramientas o la metodología empleada. Este trabajo profundiza en el análisis de una serie de investigaciones que, pese a compartir, en muchos casos, aspectos metodológicos, difieren en su visión y concepción de la competencia digital, algo que dificulta el establecimiento de pautas comunes de evaluación, ya que resulta complicado acordar cómo medir una variable cuando la definición de la misma se presta a múltiples interpretaciones. Precisamente la definición de un marco común de referencia en el ámbito educativo que sirva para abordar la evaluación de las habilidades digitales es uno de los retos de investigadores e instituciones, si bien no se trata de una tarea sencilla cuando las tecnologías digitales se caracterizan por los continuos y vertiginosos cambios. The impact of Information and Communication Technologies on society in recent years has caused digital competence to be considered key in the design of educational policies and, consequently, the development of numerous instruments for the evaluation of the digital skills and abilities of teachers, students and the general population in several fields.
Therefore, it has been considered relevant to analyze various types of tools used in the past decade in Spain for the diagnosis of the digital competence of students from different educational stages, paying attention, among other aspects, to the items used, the structure of the tools and the methodology employed. This work goes in depth into the analysis of a series of investigations that, despite sharing many methodological aspects, differ in their vision and conception of digital competence. This makes the establishment of common evaluation guidelines more difficult, since it is complicated to agree on how to measure a variable when its very definition could be interpreted in several ways. The definition of a common frame of reference in the educational field that serves to address the evaluation of digital skills is precisely one of the challenges for researchers and institutions. However, it is not an easy task when digital technologies are characterized by continuous and vertiginous changes.
30
Ballard, Su. "Information, Noise and et al." M/C Journal 10, no. 5 (October 1, 2007). http://dx.doi.org/10.5204/mcj.2704.
Abstract:
The two companions scurry off when they hear a noise at the door. It was only a noise, but it was also a message, a bit of information producing panic: an interruption, a corruption, a rupture of communication. Was the noise really a message? Wasn’t it, rather, static, a parasite? (Michel Serres, 1982)

Since, ordinarily, channels have a certain amount of noise, and therefore a finite capacity, exact transmission is impossible. (Claude Shannon, 1948)

Reading Information

At their most simplistic, there are two means for shifting information around – analogue and digital. Analogue movement depends on analogy to perform computations; it is continuous, and the relationships between numbers are keyed as a continuous ordinal set. The digital set is discrete; moving one finger at a time results in a one-to-one correspondence. Nevertheless, analogue and digital are like the two companions in Serres’ tale. Each suffers the relationship of noise to information as internal rupture and external interference. In their examination of historical constructions of information, Hobart and Schiffman locate the noise of the analogue within its physical materials; they write, “All analogue machines harbour a certain amount of vagueness, known technically as ‘noise’, which describes the disturbing influences of the machine’s physical materials on its calculations” (208). These “certain amounts of vagueness” are essential to Claude Shannon’s articulation of a theory for information transfer that forms the basis for this paper. In transforming the structures and materials through which it travels, information has left its traces in digital art installation. These traces are located in installation’s systems, structures and materials.
The usefulness of information theory as a tool to understand these relationships has until recently been overlooked by a tradition of media art history that has grouped artworks according to the properties of the artwork and/or tied them into the histories of representation and perception in art theory. Throughout this essay I use the productive dual positioning of noise and information to address the errors and impurity inherent within the viewing experiences of digital installation.

Information and Noise

It is not hard to see why the fractured spaces of digital installation are haunted by histories of information science. In his 1948 essay “A Mathematical Theory of Communication” Claude Shannon developed a new model for communications technologies that articulated informational feedback processes. Discussions of information transmission through phone lines were occurring alongside the development of technology capable of computing multiple discrete and variable packets of information: that is, the digital computer. And, like art, information science remains concerned with the material spaces of transmission – whether conceptual, social or critical. In the context of art, something is made to be seen, understood, viewed, or presented as a series of relationships that might be established between individuals, groups, environments, and sensations. Understood this way, art is an aesthetic relationship between differing material bodies, images, representations, and spaces. It is an event. Shannon was adamant that information must not be confused with meaning. To increase efficiency he insisted that the message be separated from its components; in particular, those aspects that were predictable were not to be considered information (Hansen 79). The problem that Shannon had to contend with was noise. Unwanted and disruptive, noise became symbolic of the struggle to control the growth of systems. The more complex the system, the more noise needed to be addressed.
Noise is both the material from which information is constructed and the matter which information resists. Weaver (Shannon’s first commentator) writes:

In the process of being transmitted, it is unfortunately characteristic that certain things are added to the signal which were not intended by the information source. These unwanted additions may be distortions of sound (in telephony, for example) or static (in radio), or distortions in shape or shading of picture (television), or errors in transmission (telegraphy or facsimile), etc. All of these changes in the transmitted signal are called noise. (4)

To enable more efficient message transmission, Shannon designed systems that repressed as much noise as possible, while also acknowledging that without some noise information could not be transmitted. Shannon’s conception of information meant that information would not change if the context changed. This was crucial if a general theory of information transmission was to be plausible, and it meant that a methodology for noise management could be foregrounded (Pask 123). Without meaning, information became a quantity, a yes or no decision, that Shannon called a “bit” (1). Shannon’s emphasis on separating signal or message from both predictability and external noise appeared to give information an identity where it could float free of a material substance and be treated independently of context. However, for this to occur information would have to become fixed and understood as an entity. Shannon went to pains to demonstrate that the separation of meaning and information was actually to enable the reverse. A fluidity of information and the possibilities for encoding it would mean that information, although measurable, did not have a finite form. Tied into the paradox of this equation is the crucial role of noise or error. In Shannon’s communication model information is not only complicit with noise; it is totally dependent upon it for understanding.
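Shannon’s “bit” makes this meaning-free quantity computable: the entropy H = −Σ p·log2(p) of a source measures, in bits, how unpredictable its messages are, so a fair yes/no decision carries exactly one bit while a near-certain one carries almost none. A minimal sketch in Python (my own illustration, not Shannon’s notation):

```python
from math import log2

def entropy(probabilities):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over nonzero p."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))              # a fair yes/no choice: 1.0 bit
print(round(entropy([0.99, 0.01]), 3))  # a highly predictable source: 0.081 bits
```

The predictable source carries almost no information in Shannon’s sense, which is exactly why he excluded the predictable from the category of information.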
Without noise, either encoded within the original message or present from sources outside the channel, information cannot get through. The model of sender-encoder-channel-signal (message)-decoder-receiver that Shannon constructed has an arrow inserting noise. Visually and schematically this noise is a disruption pointing up and inserting itself in the nice clean lines of the message. This does not mean that noise was a last-minute consideration; rather, noise was the very thing Shannon was working with (and against). It is present in every image we have of information. A source, message, transmitter, receiver and their attendant noises are all material infrastructures that serve to contextualise the information they transmit, receive, and disrupt.

Figure 1. Claude Shannon, “A Mathematical Theory of Communication”, 1948.

In his analytical discussion of the diagram, Shannon actually locates noise in two crucial places. The first position accorded noise is external, marked by the arrow that demonstrates how noise is introduced to the message channel whilst in transit. External noise confuses the purity of the message whilst equivocally adding new information. External noise has a particular materiality and enters the equation as unexplained variation and random error. This is disruptive presence rather than entropic coded pattern. Shannon offers this equivocal definition of noise as everything that is outside the linear model of sender-channel-receiver; hence, anything can be noise if it enters a channel where it is unwelcome. Secondly, noise was defined as unpredictability or entropy found and encoded within the message itself. This for Shannon was an essential and, in some ways, positive role. Entropic forces invited continual reorganisation and (when engaging the laws of redundancy) assisted with the removal of repetition, enabling faster message transmission (Shannon 48). Weaver calls this shifting relationship between entropy and message “equivocation” (11).
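The external-noise arrow in Shannon’s diagram can be mimicked in a few lines of Python: each bit of a message passes through a channel that flips it independently with some probability, the textbook “binary symmetric channel” (the message, flip probability and seed below are arbitrary illustrations, not anything in Shannon’s paper):

```python
import random

def transmit(bits, flip_probability, rng):
    """Pass bits through a memoryless noisy channel that flips each bit
    independently with the given probability."""
    return [bit ^ 1 if rng.random() < flip_probability else bit
            for bit in bits]

rng = random.Random(0)  # seeded so the corruption is reproducible
message = [0, 1, 1, 0, 1, 0, 0, 1]
received = transmit(message, 0.35, rng)
errors = sum(m != r for m, r in zip(message, received))
print(received, "-", errors, "bit(s) corrupted in transit")
```

The receiver sees only the corrupted stream; Weaver’s “equivocation” is precisely the receiver’s uncertainty about which bits the noise has touched.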
Weaver identified equivocation as central to the manner in which noise and information operated. A process of equivocation identified the receiver’s knowledge. For Shannon, a process of equivocation mediated between useful information and noise, as both were “measured in the same units” (Hayles, Chaos 55). To eliminate noise completely is to sacrifice information. Information understood in this way is also about relationships between differing material bodies, representations, and spaces, connected together for the purposes of transmission. It, like the artwork, is an event. This would appear to suggest a correlation between information transmission and viewing in galleries. Far from it. Although the contemporary information channel is essentially a tube with fixed walls (it is still constrained by physical properties, bandwidth and so on), and despite the implicit spatialisation of information models, I am not proposing a direct correlation between information channels and installation spaces. This is because I am not interested in ‘reading’ the information of either environment. What I am suggesting is that both environments share this material of noise. Noise is present in four places. Firstly, noise is within the media errors of transmission; secondly, it is within the media of the installation (neither of which are one-way flows); thirdly, the viewer or listener introduces noise as interference; and lastly, it is present in the very materials through which it travels. Noise layered on noise.

Redundancy and Modulation

So far in this paper I have discussed the relationship of information to noise. For the remainder, I want to address some particular processes or manifestations of noise in New Zealand artists’ collective et al.’s maintenance of social solidarity–instance 5 (2006, exhibited as part of the SCAPE Biennial of Art in Public Space, Christchurch Art Gallery).
The installation occupies a small alcove that is partially blocked by a military-style portable table stacked with newspapers. Inside the space are three grey wooden chairs, some headphones, and a modified data projection of Google Earth. It is not immediately clear if the viewer is allowed within the spaces of the alcove to listen to the headphones, as monotonous voices fill the whole space intoning political, social, and religious platitudes. The headphones might be a tool to block out the noise. In the installation it is as if multiple messages have been sent but their source, channel, and transmitter are unintelligible to the receiver. All that is left is information divorced from meaning. As other works by et al. have demonstrated, social solidarity is not a fundamentalism with directed positions and singular leaders. For example, in rapture (2004) noise disrupts all presence as a portable shed quivers in response to underground nuclear explosions 40,000km away. In the fundamental practice (2005) the viewer is left attempting to decode the un-encoded, as again sound and large steel barriers control and determine only certain movements (see http://www.etal.name/ for some documentation of these projects). maintenance of social solidarity–instance 5 is a development of the fundamental practice. To enter its spaces viewers slip around the table and find themselves extremely close to the projection screen. Despite the provision of copious media, the viewer cannot control any aspect of the environment. On screen, and apparently integral to the Google Earth imagery, are five animated and imposing dark grey monolith forms. Because of their connection to the monotonous voices in the headphones, the monoliths seem to map the imposition of narrative, power, and force in various disputed territories. Like their sudden arrival in Kubrick’s 2001: A Space Odyssey (1968), it is the contradiction of the visibility and improbability of the monoliths that renders them believable.
On the video landscape the five monoliths apparently house the dispassionate voices of many different media and political authorities. Their presence is both redundant and essential as they modulate the layering of media forces – and in between, error slips in. In a broad discussion of information, Gilles Deleuze and Felix Guattari highlight the necessary role of redundancy, commenting that:

redundancy has two forms, frequency and resonance; the first concerns the significance of information, the second (I=I) concerns the subjectivity of communication. It becomes apparent that information and communication, and even significance and subjectification, are subordinate to redundancy. (79)

In maintenance of social solidarity–instance 5 patterns of frequency highlight the necessary role of entropy, where it is coded into gaps in the vocal transmission. Frequency is a structuring of information tied to meaningful communication. Resonance, like the stack of un-decodable newspapers on the portable table, is the carrier of redundancy. It is in the gaps between the recorded voices that connections between the monoliths and the texts are made, and these two forms of redundancy emerge. As Shannon says, redundancy is a problem of language. This is because redundancy and modulation do not equate with the relationship of signal to noise. Signal to noise is a representational relationship; frequency and resonance are not representational but relational. This means that an image that might be “real-time” interrupts our understanding that the real comes first, with representation always trailing second (Virilio 65). In maintenance of social solidarity–instance 5 the monoliths occupy a fixed spatial ground, imposed over the shifting navigation of Google Earth (this is not to mistake Google Earth for the ‘real’ earth). Together they form a visual counterpoint to the texts reciting in the viewer’s ears, which themselves might present as real but, again, they aren’t.
As Shannon contended, information cannot be tied to meaning. Instead, in the race for authority and thus authenticity we find interlopers, noisy digital images that suggest the presence of real-time perception. The spaces of maintenance of social solidarity–instance 5 meld representation and information together through the materiality of noise. And across all the different modalities employed, the appearance of noise is not through formation, but through error, accident, or surprise. This is the last step in a movement away from the mimetic obedience of information and its adherence to meaning-making or representational systems. In maintenance of social solidarity–instance 5 we are forced to align real time with virtual spaces and suspend our disbelief in the temporal truths that we see on the screen before us. This brief introduction to the work has returned us to the relationship between analogue and digital materials. Signal to noise is an analogue relationship of presence and absence. No signal equals a break in transmission. On the other hand, a digital system, due to its basis in discrete bits, transmits through probability (that is, the transmission occurs through pattern and randomness, rather than presence and absence) (Hayles, How We Became 25). In his use of Shannon’s theory for the study of information transmission, Schwartz comments that the shift in information theory from analogue to digital is a shift from an analogue relationship of signal to noise to one of the probability of error (318). As I have argued in this paper, if it is measured as a quantity, noise is productive; it adds information. In both digital and analogue systems it is predictability and repetition that do not contribute information. 
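Shannon's claim that predictability and repetition contribute no information can be made concrete with a small calculation. The following Python sketch (an illustration added here for clarity, not part of et al.'s installation or the original article) measures information as a quantity, computing entropy per symbol for a highly redundant message and for a varied one:

```python
from collections import Counter
import math

def entropy_per_symbol(message: str) -> float:
    """Shannon entropy H = -sum(p * log2(p)) over symbol frequencies,
    in bits per symbol: information measured purely as a quantity."""
    counts = Counter(message)
    total = len(message)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A highly repetitive (redundant) message carries little information per symbol;
# a varied, unpredictable message carries much more.
print(entropy_per_symbol("aaaaaaaaab"))  # ≈ 0.47 bits/symbol: repetition adds nothing
print(entropy_per_symbol("abcdefghij"))  # ≈ 3.32 bits/symbol: every symbol surprises
```

In this quantitative sense a perfectly predictable transmission carries zero information, which is the intuition behind the claim above that noise, as unpredictability, is productive.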
Von Neumann makes the distinction clear, saying that to some extent the “precision” of the digital machine “is absolute.” Even so, error, “as a matter of normal operation and not solely … as an accident attributable to some definite breakdown”, nevertheless creeps in (294). Error creeps in. In maintenance of social solidarity–instance 5, et al. disrupts signal transmission by layering ambiguities into the installation. Gaps are left for viewers to introduce misreadings of scale, space, and apprehension. Rather than selecting meaning out of information within nontechnical contexts, a viewer finds herself in the same sphere as information. Noise imbricates both information and viewer within a larger open system. When asked about the relationship with the viewer in her work, et al. collaborator p.mule writes: To answer the 1st question, communication is important, clarity of concept. To answer the 2nd question, we are all receivers of information, how we process is individual. To answer the 3rd question, the work is accessible if you receive the information. But the question remains: how do we receive the information? In maintenance of social solidarity–instance 5 the system dominates. Despite the use of sound engineering and sophisticated Google Earth mapping technologies, the work appears to be constructed from discarded technologies, both analogue and digital. The ominous hovering monoliths suggest answers: that somewhere within this work are methodologies to confront the materialising forces of digital error. To don the headphones is to invite a position that operates as a filtering of power. The parameters for this power are in a constant state of flux. This means that whilst mapping these forces the work does not locate them. Sound is encountered and constructed. 
Furthermore, the work does not oppose digital and analogue, for as von Neumann comments “the real importance of the digital procedure lies in its ability to reduce the computational noise level to an extent which is completely unobtainable by any other (analogy) procedure” (295). maintenance of social solidarity–instance 5 shows how digital and analogue come together through the productive errors of modulation and redundancy. et al.’s research constantly turns to representational and meaning-making systems. As one instance, maintenance of social solidarity–instance 5 demonstrates how the digital has challenged the logics of the binary in the traditions of information theory. Digital logics are modulated by redundancies and accidents. In maintenance of social solidarity–instance 5 it is not possible to have information without noise. If, as I have argued here, digital installation operates between noise and information, then, in a constant disruption of the legacies of representation, immersion, and interaction, it is possible to open up material languages for the digital. Furthermore, an engagement with noise and error results in a blurring of the structures of information, generating a position from which we can discuss the viewer as immersed within the system – not as receiver or meaning-making actant, but as an essential material within the open system of the artwork. References Barr, Jim, and Mary Barr. “L. Budd et al.” Toi Toi Toi: Three Generations of Artists from New Zealand. Ed. Rene Block. Kassel: Museum Fridericianum, 1999. 123. Burke, Gregory, and Natasha Conland, eds. et al. the fundamental practice. Wellington: Creative New Zealand, 2005. Burke, Gregory, and Natasha Conland, eds. Venice Document. et al. the fundamental practice. Wellington: Creative New Zealand, 2006. Daly-Peoples, John. Urban Myths and the et al. Legend. 21 Aug. 2004. The Big Idea (reprint) <http://www.thebigidea.co.nz/print.php?sid=2234>. Deleuze, Gilles, and Felix Guattari. 
A Thousand Plateaus: Capitalism and Schizophrenia. Trans. Brian Massumi. London: The Athlone Press, 1996. Hansen, Mark. New Philosophy for New Media. Cambridge, MA and London: MIT Press, 2004. Hayles, N. Katherine. How We Became Posthuman: Virtual Bodies in Cybernetics, Literature and Informatics. Chicago and London: U of Chicago P, 1999. Hayles, N. Katherine. Chaos Bound: Orderly Disorder in Contemporary Literature and Science. Ithaca and London: Cornell UP, 1990. Hobart, Michael, and Zachary Schiffman. Information Ages: Literacy, Numeracy, and the Computer Revolution. Baltimore: Johns Hopkins UP, 1998. p.mule, et al. 2007. 2 Jul. 2007 <http://www.etal.name/index.htm>. Pask, Gordon. An Approach to Cybernetics. London: Hutchinson, 1961. Paulson, William. The Noise of Culture: Literary Texts in a World of Information. Ithaca and London: Cornell UP, 1988. Schwartz, Mischa. Information Transmission, Modulation, and Noise: A Unified Approach to Communication Systems. 3rd ed. New York: McGraw-Hill, 1980. Serres, Michel. The Parasite. Trans. Lawrence R. Schehr. Baltimore: Johns Hopkins UP, 1982. Shannon, Claude. “A Mathematical Theory of Communication.” July, October 1948. Online PDF. 27: 379-423, 623-656 (reprinted with corrections). 13 Jul. 2004 <http://cm.bell-labs.com/cm/ms/what/shannonday/paper.html>. Virilio, Paul. The Vision Machine. Trans. Julie Rose. Bloomington and Indianapolis: Indiana UP, British Film Institute, 1994. Von Neumann, John. “The General and Logical Theory of Automata.” Collected Works. Ed. A. H. Taub. Vol. 5. Oxford: Pergamon Press, 1963. Weaver, Warren. “Recent Contributions to the Mathematical Theory of Communication.” The Mathematical Theory of Communication. Eds. Claude Shannon and Warren Weaver. Paperback 1963 ed. Urbana and Chicago: U of Illinois P, 1949. 1-16. Work Discussed et al. maintenance of social solidarity–instance 5 2006. Installation, Google Earth feed, newspapers, sound. 
Exhibited in SCAPE 2006 Biennial of Art in Public Space, Christchurch Art Gallery, Christchurch, September 30-November 12. Images reproduced with the permission of et al. Photographs by Lee Cunliffe. Acknowledgments Research for this paper was conducted with the support of an Otago Polytechnic Research Grant. Photographs of et al. maintenance of social solidarity–instance 5 by Lee Cunliffe. Citation reference for this article MLA Style Ballard, Su. "Information, Noise and et al." M/C Journal 10.5 (2007). <http://journal.media-culture.org.au/0710/02-ballard.php>. APA Style Ballard, S. (Oct. 2007) "Information, Noise and et al.," M/C Journal, 10(5). Retrieved from <http://journal.media-culture.org.au/0710/02-ballard.php>.
31
Cesarini, Paul. "‘Opening’ the Xbox." M/C Journal 7, no.3 (July 1, 2004). http://dx.doi.org/10.5204/mcj.2371.
Abstract:
“As the old technologies become automatic and invisible, we find ourselves more concerned with fighting or embracing what’s new”—Dennis Baron, From Pencils to Pixels: The Stages of Literacy Technologies What constitutes a computer, as we have come to expect it? Are they necessarily monolithic “beige boxes”, connected to computer monitors, sitting on computer desks, located in computer rooms or computer labs? In order for a device to be considered a true computer, does it need to have a keyboard and mouse? If this were 1991 or earlier, our collective perception of what computers are and are not would largely be framed by this “beige box” model: computers are stationary, slab-like, and heavy, and their natural habitats must be in rooms specifically designated for that purpose. In 1992, when Apple introduced the first PowerBook, our perception began to change. Certainly there had been other portable computers prior to that, such as the Osborne 1, but these were more luggable than portable, weighing just slightly less than a typical sewing machine. The PowerBook and subsequent waves of laptops, personal digital assistants (PDAs), and so-called smart phones from numerous other companies have steadily forced us to rethink and redefine what a computer is and is not, how we interact with them, and the manner in which these tools might be used in the classroom. However, this reconceptualization of computers is far from over, and is in fact steadily evolving as new devices are introduced, adopted, and subsequently adapted for uses beyond their original purpose. Pat Crowe’s Book Reader project, for example, has morphed Nintendo’s GameBoy and GameBoy Advance into a viable electronic book platform, complete with images, sound, and multi-language support. 
(Crowe, 2003) His goal was to take this existing technology, previously framed only within the context of proprietary adolescent entertainment, and repurpose it for open, flexible uses typically associated with learning and literacy. Similar efforts are underway to repurpose Microsoft’s Xbox, perhaps the ultimate symbol of “closed” technology given Microsoft’s propensity for proprietary code, in order to make it a viable platform for Open Source Software (OSS). However, these efforts are not foregone conclusions, and are in fact typical of the ongoing battle over who controls the technology we own in our homes, and how open source solutions are often at odds with a largely proprietary world. In late 2001, Microsoft launched the Xbox with a multimillion dollar publicity drive featuring events, commercials, live models, and statements claiming this new console gaming platform would “change video games the way MTV changed music”. (Chan, 2001) The Xbox launched with the following technical specifications: 733MHz Pentium III; 64MB RAM; 8 or 10GB internal hard disk drive; CD/DVD-ROM drive (speed unknown); Nvidia graphics processor with HDTV support; 4 USB 1.1 ports (adapter required); AC3 audio; 10/100 Ethernet port; optional 56K modem. (TechTV, 2001) While current computers dwarf these specifications in virtually all areas now, for 2001 these were roughly on par with many desktop systems. The retail price at the time was $299, but steadily dropped to nearly half that with additional price cuts anticipated. Based on these features, the preponderance of “off the shelf” parts and components used, and the relatively reasonable price, numerous programmers quickly became interested in seeing if it was possible to run Linux and additional OSS on the Xbox. In each case, the goal has been similar: exceed the original purpose of the Xbox, to determine if and how well it might be used for basic computing tasks. 
If these attempts prove to be successful, the Xbox could allow institutions to dramatically increase the student-to-computer ratio in select environments, or allow individuals who could not otherwise afford a computer to instead buy an Xbox, download and install Linux, and use this new device to write, create, and innovate. This drive to literally and metaphorically “open” the Xbox comes from many directions. Such efforts include Andrew Huang’s self-published “Hacking the Xbox” book in which, under the auspices of reverse engineering, Huang analyzes the architecture of the Xbox, detailing step-by-step instructions for flashing the ROM, upgrading the hard drive and/or RAM, and generally prepping the device for use as an information appliance. Additional initiatives include Lindows CEO Michael Robertson’s $200,000 prize to encourage Linux development on the Xbox, and the Xbox Linux Project at SourceForge. What is Linux? Linux is an alternative operating system initially developed in 1991 by Linus Benedict Torvalds. Linux was based off a derivative of the MINIX operating system, which in turn was a derivative of UNIX. (Hasan 2003) Linux is currently available for Intel-based systems that would normally run versions of Windows, PowerPC-based systems that would normally run Apple’s Mac OS, and a host of other handheld, cell phone, or so-called “embedded” systems. Linux distributions are based almost exclusively on open source software, graphic user interfaces, and middleware components. While there are commercial Linux distributions available, these mainly just package the freely available operating system with bundled technical support, manuals, some exclusive or proprietary commercial applications, and related services. Anyone can still download and install numerous Linux distributions at no cost, provided they do not need technical support beyond the community / enthusiast level. 
Typical Linux distributions come with open source web browsers, word processors and related productivity applications (such as those found in OpenOffice.org), and related tools for accessing email, organizing schedules and contacts, etc. Certain Linux distributions are more or less designed for network administrators, system engineers, and similar “power users” somewhat distanced from that of our students. However, several distributions, including Lycoris, Mandrake, LindowsOS, and others, are specifically tailored as regular, desktop operating systems, with regular, everyday computer users in mind. As Linux has no draconian “product activation key” method of authentication, or digital rights management-laden features associated with installation and implementation on typical desktop and laptop systems, Linux is becoming an ideal choice both individually and institutionally. It still faces an uphill battle in terms of achieving widespread acceptance as a desktop operating system. As Finnie points out in Desktop Linux Edges Into The Mainstream: “to attract users, you need ease of installation, ease of device configuration, and intuitive, full-featured desktop user controls. It’s all coming, but slowly. With each new version, desktop Linux comes closer to entering the mainstream. It’s anyone’s guess as to when critical mass will be reached, but you can feel the inevitability: There’s pent-up demand for something different.” (Finnie 2003) Linux is already spreading rapidly in numerous capacities, in numerous countries. Linux has “taken hold wherever computer users desire freedom, and wherever there is demand for inexpensive software.” Reports from technology research company IDG indicate that roughly a third of computers in Central and South America run Linux. 
Several countries, including Mexico, Brazil, and Argentina, have all but mandated that state-owned institutions adopt open source software whenever possible to “give their people the tools and education to compete with the rest of the world.” (Hills 2001) The Goal Less than a year after Microsoft introduced the Xbox, the Xbox Linux Project formed. The Xbox Linux Project has a goal of developing and distributing Linux for the Xbox gaming console, “so that it can be used for many tasks that Microsoft don’t want you to be able to do. ...as a desktop computer, for email and browsing the web from your TV, as a (web) server” (Xbox Linux Project 2002). Since the Linux operating system is open source, meaning it can freely be tinkered with and distributed, those who opt to download and install Linux on their Xbox can do so with relatively little overhead in terms of cost or time. Additionally, Linux itself looks very “windows-like”, making for a fairly low learning curve. To help increase overall awareness of this project and assist in diffusing it, the Xbox Linux Project offers step-by-step installation instructions, with the end result being a system capable of using common peripherals such as a keyboard and mouse, scanner, printer, a “webcam and a DVD burner, connected to a VGA monitor; 100% compatible with a standard Linux PC, all PC (USB) hardware and PC software that works with Linux.” (Xbox Linux Project 2002) Such a system could have tremendous potential for technology literacy. Pairing an Xbox with Linux and OpenOffice.org, for example, would provide our students essentially the same capability any of them would expect from a regular desktop computer. They could send and receive email, communicate using instant messaging, IRC, or newsgroup clients, and browse Internet sites just as they normally would. In fact, the overall browsing experience for Linux users is substantially better than that for most Windows users. 
Internet Explorer, the default browser on all systems running Windows-based operating systems, lacks basic features standard in virtually all competing browsers. Native blocking of “pop-up” advertisements is still not yet possible in Internet Explorer without the aid of a third-party utility. Tabbed browsing, which involves the ability to easily open and sort through multiple Web pages in the same window, often with a single mouse click, is also missing from Internet Explorer. The same can be said for a robust download manager, “find as you type”, and a variety of additional features. Mozilla, Netscape, Firefox, Konqueror, and essentially all other OSS browsers for Linux have these features. Of course, most of these browsers are also available for Windows, but Internet Explorer is still considered the standard browser for the platform. If the Xbox Linux Project becomes widely diffused, our students could edit and save Microsoft Word files in OpenOffice.org’s Writer program, and do the same with PowerPoint and Excel files in similar OpenOffice.org components. They could access instructor comments originally created in Microsoft Word documents, and in turn could add their own comments and send the documents back to their instructors. They could even perform many functions not yet possible in Microsoft Office, including saving files in PDF or Flash format without needing Adobe’s Acrobat product or Macromedia’s Flash Studio MX. Additionally, by way of this project, the Xbox can also serve as “a Linux server for HTTP/FTP/SMB/NFS, serving data such as MP3/MPEG4/DivX, or a router, or both; without a monitor or keyboard or mouse connected.” (Xbox Linux Project 2003) In a very real sense, our students could use these inexpensive systems, previously framed only within the context of entertainment, for educational purposes typically associated with computer-mediated learning. 
Problems: Control and Access The existing rhetoric of technological control surrounding current and emerging technologies appears to be stifling many of these efforts before they can even be brought to the public. This rhetoric of control is largely typified by overly-restrictive digital rights management (DRM) schemes antithetical to education, and the Digital Millennium Copyright Act (DMCA). Combined, both are currently being used as technical and legal clubs against these efforts. Microsoft, for example, has taken a dim view of any efforts to adapt the Xbox to Linux. Microsoft CEO Steve Ballmer, who has repeatedly referred to Linux as a cancer and has equated OSS with being un-American, stated, “Given the way the economic model works - and that is a subsidy followed, essentially, by fees for every piece of software sold - our license framework has to do that.” (Becker 2003) Since the Xbox is based on a subsidy model, meaning that Microsoft actually sells the hardware at a loss and instead generates revenue off software sales, Ballmer launched a series of concerted legal attacks against the Xbox Linux Project and similar efforts. In 2002, Nintendo, Sony, and Microsoft simultaneously sued Lik Sang, Inc., a Hong Kong-based company that produces programmable cartridges and “mod chips” for the PlayStation II, Xbox, and Game Cube. Nintendo states that its company alone loses over $650 million each year due to piracy of their console gaming titles, which typically originate in China, Paraguay, and Mexico. (GameIndustry.biz) Currently, many attempts to “mod” the Xbox require the use of such chips. As Lik Sang is one of the only suppliers, initial efforts to adapt the Xbox to Linux slowed considerably. Despite the fact that such chips can still be ordered and shipped here by less conventional means, it does not change the fact that the chips themselves would be illegal in the U.S. 
due to the anticircumvention clause in the DMCA itself, which is designed specifically to protect any DRM-wrapped content, regardless of context. The Xbox Linux Project then attempted to get Microsoft to officially sanction their efforts. They were not only rebuffed, but Microsoft then opted to hire programmers specifically to create technological countermeasures for the Xbox, to defeat additional attempts at installing OSS on it. Undeterred, the Xbox Linux Project eventually arrived at a method of installing and booting Linux without the use of mod chips, and have taken a more defiant tone now with Microsoft regarding their circumvention efforts. (Lettice 2002) They state that “Microsoft does not want you to use the Xbox as a Linux computer, therefore it has some anti-Linux-protection built in, but it can be circumvented easily, so that an Xbox can be used as what it is: an IBM PC.” (Xbox Linux Project 2003) Problems: Learning Curves and Usability In spite of the difficulties imposed by the combined technological and legal attacks on this project, it has succeeded at infiltrating this closed system with OSS. It has done so beyond the mere prototype level, too, as evidenced by the Xbox Linux Project now having both complete, step-by-step instructions available for users to modify their own Xbox systems, and an alternate plan catering to those who have the interest in modifying their systems, but not the time or technical inclinations. Specifically, this option involves users mailing their Xbox systems to community volunteers within the Xbox Linux Project, and basically having these volunteers perform the necessary software preparation or actually do the full Linux installation for them, free of charge (presumably not including shipping). This particular aspect of the project, dubbed “Users Help Users”, appears to be fairly new. 
Yet, it already lists over sixty volunteers capable and willing to perform this service, since “Many users don’t have the possibility, expertise or hardware” to perform these modifications. Amazingly enough, in some cases these volunteers are barely out of junior high school. One such volunteer stipulates that those seeking his assistance keep in mind that he is “just 14” and that when performing these modifications he “...will not always be finished by the next day”. (Steil 2003) In addition to this interesting if somewhat unusual level of community-driven support, there are currently several Linux-based options available for the Xbox. The two that are perhaps the most developed are GentooX, which is based off the popular Gentoo Linux distribution, and Ed’s Debian, based off the Debian GNU / Linux distribution. Both Gentoo and Debian are “seasoned” distributions that have been available for some time now, though Daniel Robbins, Chief Architect of Gentoo, refers to the product as actually being a “metadistribution” of Linux, due to its high degree of adaptability and configurability. (Gentoo 2004) Specifically, Robbins asserts that Gentoo is capable of being “customized for just about any application or need. ...an ideal secure server, development workstation, professional desktop, gaming system, embedded solution or something else—whatever you need it to be.” (Robbins 2004) He further states that the whole point of Gentoo is to provide a better, more usable Linux experience than that found in many other distributions. Robbins states that: “The goal of Gentoo is to design tools and systems that allow a user to do their work as pleasantly and efficiently as possible, as they see fit. Our tools should be a joy to use, and should help the user to appreciate the richness of the Linux and free software community, and the flexibility of free software. ...Put another way, the Gentoo philosophy is to create better tools. 
When a tool is doing its job perfectly, you might not even be very aware of its presence, because it does not interfere and make its presence known, nor does it force you to interact with it when you don’t want it to. The tool serves the user rather than the user serving the tool.” (Robbins 2004) There is also a so-called “live CD” Linux distribution suitable for the Xbox, called dyne:bolic, and an in-progress release of Slackware Linux, as well. According to the Xbox Linux Project, the only difference between the standard releases of these distributions and their Xbox counterparts is that “...the install process – and naturally the bootloader, the kernel and the kernel modules – are all customized for the Xbox.” (Xbox Linux Project, 2003) Of course, even if Gentoo is as user-friendly as Robbins purports, even if the Linux kernel itself has become significantly more robust and efficient, and even if Microsoft again drops the retail price of the Xbox, is this really a feasible solution in the classroom? Does the Xbox Linux Project have an army of 14 year olds willing to modify dozens, perhaps hundreds of these systems for use in secondary schools and higher education? Of course not. If such an institutional rollout were to be undertaken, it would require significant support from not only faculty, but Department Chairs, Deans, IT staff, and quite possibly Chief Information Officers. Disk images would need to be customized for each institution to reflect their respective needs, ranging from setting specific home pages on web browsers, to bookmarks, to custom back-up and / or disk re-imaging scripts, to network authentication. This would be no small task. Yet, the steps mentioned above are essentially no different than what would be required of any IT staff when creating a new disk image for a computer lab, be it one for a Windows-based system or a Mac OS X-based one. The primary difference would be Linux itself—nothing more, nothing less. 
The institutional difficulties in undertaking such an effort would likely be encountered prior to even purchasing a single Xbox, in that they would involve the same difficulties associated with any new hardware or software initiative: staffing, budget, and support. If the institution in question is either unwilling or unable to address these three factors, it would not matter if the Xbox itself was as free as Linux. An Open Future, or a Closed One? It is unclear how far the Xbox Linux Project will be allowed to go in their efforts to invade an essentially proprietary system with OSS. Unlike Sony, which has made deliberate steps to commercialize similar efforts for their PlayStation 2 console, Microsoft appears resolute in fighting OSS on the Xbox by any means necessary. They will continue to crack down on any companies selling so-called mod chips, and will continue to employ technological protections to keep the Xbox “closed”. Despite clear evidence to the contrary, in all likelihood Microsoft will continue to equate any OSS efforts directed at the Xbox with piracy-related motivations. Additionally, Microsoft’s successor to the Xbox would likely include additional anticircumvention technologies incorporated into it that could set the Xbox Linux Project back by months, years, or could stop it cold. Of course, it is difficult to say with any degree of certainty how this “Xbox 2” (perhaps a more appropriate name might be “Nextbox”) will impact this project. Regardless of how this device evolves, there can be little doubt of the value of Linux, OpenOffice.org, and other OSS to teaching and learning with technology. This value exists not only in terms of price, but in increased freedom from policies and technologies of control. 
New Linux distributions from Gentoo, Mandrake, Lycoris, Lindows, and other companies are just now starting to focus their efforts on Linux as user-friendly, easy to use desktop operating systems, rather than just server or “techno-geek” environments suitable for advanced programmers and computer operators. While metaphorically opening the Xbox may not be for everyone, and may not be a suitable computing solution for all, I believe we as educators must promote and encourage such efforts whenever possible. I suggest this because I believe we need to exercise our professional influence and ultimately shape the future of technology literacy, either individually as faculty and collectively as departments, colleges, or institutions. Moran and Fitzsimmons-Hunter argue this very point in Writing Teachers, Schools, Access, and Change. One of their fundamental provisions they use to define “access” asserts that there must be a willingness for teachers and students to “fight for the technologies that they need to pursue their goals for their own teaching and learning.” (Taylor / Ward 160) Regardless of whether or not this debate is grounded in the “beige boxes” of the past, or the Xboxes of the present, much is at stake. Private corporations should not be in a position to control the manner in which we use legally-purchased technologies, regardless of whether or not these technologies are then repurposed for literacy uses. I believe the exigency associated with this control, and the ongoing evolution of what is and is not a computer, dictates that we assert ourselves more actively into this discussion. We must take steps to provide our students with the best possible computer-mediated learning experience, however seemingly unorthodox the technological means might be, so that they may think critically, communicate effectively, and participate actively in society and in their future careers. 
About the Author Paul Cesarini is an Assistant Professor in the Department of Visual Communication & Technology Education, Bowling Green State University, Ohio Email: pcesari@bgnet.bgsu.edu Works Cited <http://xbox-linux.sourceforge.net/docs/debian.php>. Baron, Dennis. “From Pencils to Pixels: The Stages of Literacy Technologies.” Passions, Pedagogies and 21st Century Technologies. Hawisher, Gail E., and Cynthia L. Selfe, Eds. Utah: Utah State University Press, 1999. 15 – 33. Becker, David. “Ballmer: Mod Chips Threaten Xbox”. News.com. 21 Oct 2002. <http://news.com.com/2100-1040-962797.php>. <http://news.com.com/2100-1040-978957.html?tag=nl>. <http://archive.infoworld.com/articles/hn/xml/02/08/13/020813hnchina.xml>. <http://www.neoseeker.com/news/story/1062/>. <http://www.bookreader.co.uk>. Finnie, Scott. “Desktop Linux Edges Into The Mainstream”. TechWeb. 8 Apr 2003. http://www.techweb.com/tech/software/20030408_software. http://www.theregister.co.uk/content/archive/29439.html http://gentoox.shallax.com/. http://ragib.hypermart.net/linux/. http://www.itworld.com/Comp/2362/LWD010424latinlinux/pfindex.html. http://www.xbox-linux.sourceforge.net. http://www.theregister.co.uk/content/archive/27487.html. http://www.theregister.co.uk/content/archive/26078.html. http://www.us.playstation.com/peripherals.aspx?id=SCPH-97047. http://www.techtv.com/extendedplay/reviews/story/0,24330,3356862,00.html. http://www.wired.com/news/business/0,1367,61984,00.html. http://www.gentoo.org/main/en/about.xml http://www.gentoo.org/main/en/philosophy.xml http://techupdate.zdnet.com/techupdate/stories/main/0,14179,2869075,00.html. http://xbox-linux.sourceforge.net/docs/usershelpusers.html http://www.cnn.com/2002/TECH/fun.games/12/16/gamers.liksang/. Citation reference for this article MLA Style Cesarini, Paul. "“Opening” the Xbox" M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0406/08_Cesarini.php>. APA Style Cesarini, P. (2004, Jul 1). “Opening” the Xbox. 
M/C: A Journal of Media and Culture, 7, <http://www.media-culture.org.au/0406/08_Cesarini.php>
APA, Harvard, Vancouver, ISO, and other styles
32
Dieter, Michael. "Amazon Noir." M/C Journal 10, no. 5 (October 1, 2007). http://dx.doi.org/10.5204/mcj.2709.
Full text
Abstract:
There is no diagram that does not also include, besides the points it connects up, certain relatively free or unbounded points, points of creativity, change and resistance, and it is perhaps with these that we ought to begin in order to understand the whole picture. (Deleuze, “Foucault” 37) Monty Cantsin: Why do we use a pervert software robot to exploit our collective consensual mind? Letitia: Because we want the thief to be a digital entity. Monty Cantsin: But isn’t this really blasphemic? Letitia: Yes, but god – in our case a meta-cocktail of authorship and copyright – can not be trusted anymore. (Amazon Noir, “Dialogue”) In 2006, some 3,000 digital copies of books were silently “stolen” from online retailer Amazon.com by targeting vulnerabilities in the “Search inside the Book” feature from the company’s website. Over several weeks, between July and October, a specially designed software program bombarded the Search Inside!™ interface with multiple requests, assembling full versions of texts and distributing them across peer-to-peer networks (P2P). Rather than a purely malicious and anonymous hack, however, the “heist” was publicised as a tactical media performance, Amazon Noir, produced by self-proclaimed super-villains Paolo Cirio, Alessandro Ludovico, and Ubermorgen.com. While controversially directed at highlighting the infrastructures that materially enforce property rights and access to knowledge online, the exploit additionally interrogated its own interventionist status as theoretically and politically ambiguous. That the “thief” was represented as a digital entity or machinic process (operating on the very terrain where exchange is differentiated) and the emergent act of “piracy” was fictionalised through the genre of noir conveys something of the indeterminacy or immensurability of the event. 
In this short article, I discuss some political aspects of intellectual property in relation to the complexities of Amazon Noir, particularly in the context of control, technological action, and discourses of freedom. Software, Piracy As a force of distribution, the Internet is continually subject to controversies concerning flows and permutations of agency. While often directed by discourses cast in terms of either radical autonomy or control, the technical constitution of these digital systems is more regularly a case of establishing structures of operation, codified rules, or conditions of possibility; that is, of guiding social processes and relations (McKenzie, “Cutting Code” 1-19). Software, as a medium through which such communication unfolds and becomes organised, is difficult to conceptualise as a result of being so event-orientated. There lies a complicated logic of contingency and calculation at its centre, a dimension exacerbated by the global scale of informational networks, where the inability to comprehend an environment that exceeds the limits of individual experience is frequently expressed through desires, anxieties, paranoia. Unsurprisingly, cautionary accounts and moral panics on identity theft, email fraud, pornography, surveillance, hackers, and computer viruses are as commonplace as those narratives advocating user interactivity. When analysing digital systems, cultural theory often struggles to describe forces that dictate movement and relations between disparate entities composed by code, an aspect heightened by the intensive movement of informational networks where differences are worked out through the constant exposure to unpredictability and chance (Terranova, “Communication beyond Meaning”). 
Such volatility partially explains the recent turn to distribution in media theory, as once durable networks for constructing economic difference – organising information in space and time (“at a distance”), accelerating or delaying its delivery – appear contingent, unstable, or consistently irregular (Cubitt 194). Attributing actions to users, programmers, or the software itself is a difficult task when faced with these states of co-emergence, especially in the context of sharing knowledge and distributing media content. Exchanges between corporate entities, mainstream media, popular cultural producers, and legal institutions over P2P networks represent an ongoing controversy in this respect, with numerous stakeholders competing between investments in property, innovation, piracy, and publics. Beginning to understand this problematic landscape is an urgent task, especially in relation to the technological dynamics that organise and propel such antagonisms. In the influential fragment, “Postscript on the Societies of Control,” Gilles Deleuze describes the historical passage from modern forms of organised enclosure (the prison, clinic, factory) to the contemporary arrangement of relational apparatuses and open systems as being materially provoked by – but not limited to – the mass deployment of networked digital technologies. In his analysis, the disciplinary mode most famously described by Foucault is spatially extended to informational systems based on code and flexibility. According to Deleuze, these cybernetic machines are connected into apparatuses that aim for intrusive monitoring: “in a control-based system nothing’s left alone for long” (“Control and Becoming” 175). Such a constant networking of behaviour is described as a shift from “molds” to “modulation,” where controls become “a self-transmuting molding changing from one moment to the next, or like a sieve whose mesh varies from one point to another” (“Postscript” 179). 
Accordingly, the crisis underpinning civil institutions is consistent with the generalisation of disciplinary logics across social space, forming an intensive modulation of everyday life, but one ambiguously associated with socio-technical ensembles. The precise dynamics of this epistemic shift are significant in terms of political agency: while control implies an arrangement capable of absorbing massive contingency, a series of complex instabilities actually mark its operation. Noise, viral contamination, and piracy are identified as key points of discontinuity; they appear as divisions or “errors” that force change by promoting indeterminacies in a system that would otherwise appear infinitely calculable, programmable, and predictable. The rendering of piracy as a tactic of resistance, a technique capable of levelling out the uneven economic field of global capitalism, has become a predictable catch-cry for political activists. In their analysis of multitude, for instance, Antonio Negri and Michael Hardt describe the contradictions of post-Fordist production as conjuring forth a tendency for labour to “become common.” That is, the more productivity depends on flexibility, communication, and cognitive skills, directed by the cultivation of an ideal entrepreneurial or flexible subject, the greater the possibilities for self-organised forms of living that significantly challenge its operation. In this case, intellectual property exemplifies such a spiralling paradoxical logic, since “the infinite reproducibility central to these immaterial forms of property directly undermines any such construction of scarcity” (Hardt and Negri 180). The implications of the filesharing program Napster, accordingly, are read as not merely directed toward theft, but in relation to the private character of the property itself; a kind of social piracy is perpetuated that is viewed as radically recomposing social resources and relations. 
Ravi Sundaram, a co-founder of the Sarai new media initiative in Delhi, has meanwhile drawn attention to the existence of “pirate modernities” capable of being actualised when individuals or local groups gain illegitimate access to distributive media technologies; these are worlds of “innovation and non-legality,” of electronic survival strategies that partake in cultures of dispersal and escape simple classification (94). Meanwhile, pirate entrepreneurs Magnus Eriksson and Rasmus Fleische – associated with the notorious Piratbyrån – have promoted the bleeding away of Hollywood profits through fully deployed P2P networks, with the intention of pushing filesharing dynamics to an extreme in order to radicalise the potential for social change (“Copies and Context”). From an aesthetic perspective, such activist theories are complemented by the affective register of appropriation art, a movement broadly conceived in terms of antagonistically liberating knowledge from the confines of intellectual property: “those who pirate and hijack owned material, attempting to free information, art, film, and music – the rhetoric of our cultural life – from what they see as the prison of private ownership” (Harold 114). These “unruly” escape attempts are pursued through various modes of engagement, from experimental performances with legislative infrastructures (e.g. Kembrew McLeod’s trademarking of the phrase “freedom of expression”) to musical remix projects, such as the work of Negativland, John Oswald, RTMark, Detritus, Illegal Art, and the Evolution Control Committee. Amazon Noir, while similarly engaging with questions of ownership, is distinguished by specifically targeting information communication systems and finding “niches” or gaps between overlapping networks of control and economic governance. 
Hans Bernhard and Lizvlx from Ubermorgen.com (meaning ‘Day after Tomorrow,’ or ‘Super-Tomorrow’) actually describe their work as “research-based”: “we not are opportunistic, money-driven or success-driven, our central motivation is to gain as much information as possible as fast as possible as chaotic as possible and to redistribute this information via digital channels” (“Interview with Ubermorgen”). This has led to experiments like Google Will Eat Itself (2005) and the construction of the automated software thief against Amazon.com, as process-based explorations of technological action. Agency, Distribution Deleuze’s “postscript” on control has proven massively influential for new media art by introducing a series of key questions on power (or desire) and digital networks. As a social diagram, however, control should be understood as a partial rather than totalising map of relations, referring to the augmentation of disciplinary power in specific technological settings. While control is a conceptual regime that refers to open-ended terrains beyond the architectural locales of enclosure, implying a move toward informational networks, data solicitation, and cybernetic feedback, there remains a peculiar contingent dimension to its limits. For example, software code is typically designed to remain cycling until user input is provided. There is a specifically immanent and localised quality to its actions that might be taken as exemplary of control as a continuously modulating affective materialism. The outcome is a heightened sense of bounded emergencies that are either flattened out or absorbed through reconstitution; however, these are never linear gestures of containment. 
As Tiziana Terranova observes, control operates through multilayered mechanisms of order and organisation: “messy local assemblages and compositions, subjective and machinic, characterised by different types of psychic investments, that cannot be the subject of normative, pre-made political judgments, but which need to be thought anew again and again, each time, in specific dynamic compositions” (“Of Sense and Sensibility” 34). This event-orientated vitality accounts for the political ambitions of tactical media as opening out communication channels through selective “transversal” targeting. Amazon Noir, for that reason, is pitched specifically against the material processes of communication. The system used to harvest the content from “Search inside the Book” is described as “robot-perversion-technology,” based on a network of four servers around the globe, each with a specific function: one located in the United States that retrieved (or “sucked”) the books from the site, one in Russia that injected the assembled documents onto P2P networks, and two in Europe that coordinated the action via intelligent automated programs (see “The Diagram”). According to the “villains,” the main goal was to steal all 150,000 books from Search Inside!™, then use the same technology to steal books from the “Google Print Service” (the exploit was limited only by the technological resources financially available, but there are apparent plans to improve the technique by reinvesting the money received through the settlement with Amazon.com, reached in exchange for not publicising the hack). In terms of informational culture, this system resembles a machinic process directed at redistributing copyright content; “The Diagram” visualises key processes that define digital piracy as an emergent phenomenon within an open-ended and responsive milieu. 
That is, the static image foregrounds something of the activity of copying being a technological action that complicates any analysis focusing purely on copyright as content. In this respect, intellectual property rights are revealed as being entangled within information architectures as communication management and cultural recombination – dissipated and enforced by a measured interplay between openness and obstruction, resonance and emergence (Terranova, “Communication beyond Meaning” 52). To understand data distribution requires an acknowledgement of these underlying nonhuman relations that allow for such informational exchanges. It requires an understanding of the permutations of agency carried along by digital entities. According to Lawrence Lessig’s influential argument, code is not merely an object of governance, but has an overt legislative function itself. Within the informational environments of software, “a law is defined, not through a statute, but through the code that governs the space” (20). These points of symmetry are understood as concretised social values: they are material standards that regulate flow. Similarly, Alexander Galloway describes computer protocols as non-institutional “etiquette for autonomous agents,” or “conventional rules that govern the set of possible behavior patterns within a heterogeneous system” (7). In his analysis, these agreed-upon standardised actions operate as a style of management fostered by contradiction: progressive though reactionary, encouraging diversity by striving for the universal, synonymous with possibility but completely predetermined, and so on (243-244). Needless to say, political uncertainties arise from a paradigm that generates internal material obscurities through a constant twinning of freedom and control. 
For Wendy Hui Kyong Chun, these Cold War systems subvert the possibilities for any actual experience of autonomy by generalising paranoia through constant intrusion and reducing social problems to questions of technological optimisation (1-30). In confrontation with these seemingly ubiquitous regulatory structures, cultural theory requires a critical vocabulary differentiated from computer engineering to account for the sociality that permeates through and concatenates technological realities. In his recent work on “mundane” devices, software and code, Adrian McKenzie introduces a relevant analytic approach in the concept of technological action as something that both abstracts and concretises relations in a diffusion of collective-individual forces. Drawing on the thought of French philosopher Gilbert Simondon, he uses the term “transduction” to identify a key characteristic of technology in the relational process of becoming, or ontogenesis. This is described as bringing together disparate things into composites of relations that evolve and propagate a structure throughout a domain, or “overflow existing modalities of perception and movement on many scales” (“Impersonal and Personal Forces in Technological Action” 201). Most importantly, these innovative diffusions or contagions occur by bridging states of difference or incompatibilities. Technological action, therefore, arises from a particular type of disjunctive relation between an entity and something external to itself: “in making this relation, technical action changes not only the ensemble, but also the form of life of its agent. Abstraction comes into being and begins to subsume or reconfigure existing relations between the inside and outside” (203). Here, reciprocal interactions between two states or dimensions actualise disparate potentials through metastability: an equilibrium that proliferates, unfolds, and drives individuation. 
While drawing on cybernetics and dealing with specific technological platforms, McKenzie’s work can be extended to describe the significance of informational devices throughout control societies as a whole, particularly as a predictive and future-orientated force that thrives on staged conflicts. Moreover, being a non-deterministic technical theory, it additionally speaks to new tendencies in regimes of production that harness cognition and cooperation through specially designed infrastructures to enact persistent innovation without any end-point, final goal or natural target (Thrift 283-295). Here, the interface between intellectual property and reproduction can be seen as a site of variation that weaves together disparate objects and entities by imbrication in social life itself. These are specific acts of interference that propel relations toward unforeseen conclusions by drawing on memories, attention spans, material-technical traits, and so on. The focus lies on performance, context, and design “as a continual process of tuning arrived at by distributed aspiration” (Thrift 295). This latter point is demonstrated in recent scholarly treatments of filesharing networks as media ecologies. Kate Crawford, for instance, describes the movement of P2P as processual or adaptive, comparable to technological action, marked by key transitions from partially decentralised architectures such as Napster, to the fully distributed systems of Gnutella and seeded swarm-based networks like BitTorrent (30-39). Each of these technologies can be understood as a response to various legal incursions, producing radically dissimilar socio-technological dynamics and emergent trends for how agency is modulated by informational exchanges. Indeed, even these aberrant formations are characterised by modes of commodification that continually spill over and feed back on themselves, repositioning markets and commodities in doing so, from MP3s to iPods, P2P to broadband subscription rates. 
However, one key limitation of this ontological approach is apparent when dealing with the sheer scale of activity involved, where mass participation elicits certain degrees of obscurity and relative safety in numbers. This represents an obvious problem for analysis, as dynamics can easily be identified in the broadest conceptual sense, without any understanding of the specific contexts of usage, political impacts, and economic effects for participants in their everyday consumptive habits. Large-scale distributed ensembles are “problematic” in their technological constitution, as a result. They are sites of expansive overflow that provoke an equivalent individuation of thought, as the Recording Industry Association of America observes on their educational website: “because of the nature of the theft, the damage is not always easy to calculate but not hard to envision” (“Piracy”). The politics of the filesharing debate, in this sense, depends on the command of imaginaries; that is, being able to conceptualise an overarching structural consistency to a persistent and adaptive ecology. As a mode of tactical intervention, Amazon Noir dramatises these ambiguities by framing technological action through the fictional sensibilities of narrative genre. Ambiguity, Control The extensive use of imagery and iconography from “noir” can be understood as an explicit reference to the increasing criminalisation of copyright violation through digital technologies. However, the term also refers to the indistinct or uncertain effects produced by this tactical intervention: who are the “bad guys” or the “good guys”? Are positions like ‘good’ and ‘evil’ (something like freedom or tyranny) so easily identified and distinguished? As Paolo Cirio explains, this political disposition is deliberately kept obscure in the project: “it’s a representation of the actual ambiguity about copyright issues, where every case seems to lack a moral or ethical basis” (“Amazon Noir Interview”). 
While user communications made available on the site clearly identify culprits (describing the project as jeopardising arts funding, as both irresponsible and arrogant), the self-description of the artists as political “failures” highlights the uncertainty regarding the project’s qualities as a force of long-term social renewal: Lizvlx from Ubermorgen.com had daily shootouts with the global mass-media, Cirio continuously pushed the boundaries of copyright (books are just pixels on a screen or just ink on paper), Ludovico and Bernhard resisted kickback-bribes from powerful Amazon.com until they finally gave in and sold the technology for an undisclosed sum to Amazon. Betrayal, blasphemy and pessimism finally split the gang of bad guys. (“Press Release”) Here, the adaptive and flexible qualities of informatic commodities and computational systems of distribution are knowingly posited as critical limits; in a certain sense, the project fails technologically in order to succeed conceptually. From a cynical perspective, this might be interpreted as guaranteeing authenticity by insisting on the useless or non-instrumental quality of art. However, through this process, Amazon Noir illustrates how forces confined as exterior to control (virality, piracy, noncommunication) regularly operate as points of distinction to generate change and innovation. Just as hackers are legitimately employed to challenge the durability of network exchanges, malfunctions are relied upon as potential sources of future information. Indeed, the notion of demonstrating ‘autonomy’ by illustrating the shortcomings of software is entirely consistent with the logic of control as a modulating organisational diagram. These so-called “circuit breakers” are positioned as points of bifurcation that open up new systems and encompass a more general “abstract machine” or tendency governing contemporary capitalism (Parikka 300). 
As a consequence, the ambiguities of Amazon Noir emerge not just from the contrary articulation of intellectual property and digital technology, but additionally through the concept of thinking “resistance” simultaneously with regimes of control. This tension is apparent in Galloway’s analysis of the cybernetic machines that are synonymous with the operation of Deleuzian control societies – i.e. “computerised information management” – where tactical media are posited as potential modes of contestation against the tyranny of code, “able to exploit flaws in protocological and proprietary command and control, not to destroy technology, but to sculpt protocol and make it better suited to people’s real desires” (176). While pushing a system into a state of hypertrophy to reform digital architectures might represent a possible technique that produces a space through which to imagine something like “our” freedom, it still leaves unexamined the desire for reformation itself as nurtured by and produced through the coupling of cybernetics, information theory, and distributed networking. This draws into focus the significance of McKenzie’s Simondon-inspired cybernetic perspective on socio-technological ensembles as being always-already predetermined by and driven through asymmetries or difference. As Chun observes, consequently, there is no paradox between resistance and capture since “control and freedom are not opposites, but different sides of the same coin: just as discipline served as a grid on which liberty was established, control is the matrix that enables freedom as openness” (71). Why “openness” should be so readily equated with a state of being free represents a major unexamined presumption of digital culture, and leads to the associated predicament of attempting to think of how this freedom has become something one cannot not desire. 
If Amazon Noir has political currency in this context, however, it emerges from a capacity to recognise how informational networks channel desire, memories, and imaginative visions rather than just cultivated antagonisms and counterintuitive economics. As a final point, it is worth observing that the project was initiated without publicity until the settlement with Amazon.com. There is, as a consequence, nothing to suggest that this subversive “event” might have actually occurred, a feeling heightened by the abstractions of software entities. The extent to which we believe in “the big book heist,” that such an act is even possible, is a gauge through which the paranoia of control societies is illuminated as a longing or desire for autonomy. As Hakim Bey observes in his conceptualisation of “pirate utopias,” such fleeting encounters with the imaginaries of freedom flow back into the experience of the everyday as political instantiations of utopian hope. Amazon Noir, with all its underlying ethical ambiguities, presents us with a challenge to rethink these affective investments by considering our profound weaknesses in mastering the complexities and constant intrusions of control. It provides an opportunity to conceive of a future that begins with limits and limitations as immanently central, even foundational, to our deep interconnection with socio-technological ensembles. References “Amazon Noir – The Big Book Crime.” <http://www.amazon-noir.com/>. Bey, Hakim. T.A.Z.: The Temporary Autonomous Zone, Ontological Anarchy, Poetic Terrorism. New York: Autonomedia, 1991. Chun, Wendy Hui Kyong. Control and Freedom: Power and Paranoia in the Age of Fibre Optics. Cambridge, MA: MIT Press, 2006. Crawford, Kate. “Adaptation: Tracking the Ecologies of Music and Peer-to-Peer Networks.” Media International Australia 114 (2005): 30-39. Cubitt, Sean. “Distribution and Media Flows.” Cultural Politics 1.2 (2005): 193-214. Deleuze, Gilles. Foucault. Trans. Seán Hand. 
Minneapolis: U of Minnesota P, 1986. ———. “Control and Becoming.” Negotiations 1972-1990. Trans. Martin Joughin. New York: Columbia UP, 1995. 169-176. ———. “Postscript on the Societies of Control.” Negotiations 1972-1990. Trans. Martin Joughin. New York: Columbia UP, 1995. 177-182. Eriksson, Magnus, and Rasmus Fleische. “Copies and Context in the Age of Cultural Abundance.” Online posting. 5 June 2007. Nettime. 25 Aug 2007. Galloway, Alexander. Protocol: How Control Exists after Decentralization. Cambridge, MA: MIT Press, 2004. Hardt, Michael, and Antonio Negri. Multitude: War and Democracy in the Age of Empire. New York: Penguin Press, 2004. Harold, Christine. OurSpace: Resisting the Corporate Control of Culture. Minneapolis: U of Minnesota P, 2007. Lessig, Lawrence. Code and Other Laws of Cyberspace. New York: Basic Books, 1999. McKenzie, Adrian. Cutting Code: Software and Sociality. New York: Peter Lang, 2006. ———. “The Strange Meshing of Impersonal and Personal Forces in Technological Action.” Culture, Theory and Critique 47.2 (2006): 197-212. Parikka, Jussi. “Contagion and Repetition: On the Viral Logic of Network Culture.” Ephemera: Theory & Politics in Organization 7.2 (2007): 287-308. “Piracy Online.” Recording Industry Association of America. 28 Aug 2007. <http://www.riaa.com/physicalpiracy.php>. Sundaram, Ravi. “Recycling Modernity: Pirate Electronic Cultures in India.” Sarai Reader 2001: The Public Domain. Delhi: Sarai Media Lab, 2001. 93-99. <http://www.sarai.net>. Terranova, Tiziana. “Communication beyond Meaning: On the Cultural Politics of Information.” Social Text 22.3 (2004): 51-73. ———. “Of Sense and Sensibility: Immaterial Labour in Open Systems.” DATA Browser 03 – Curating Immateriality: The Work of the Curator in the Age of Network Systems. Ed. Joasia Krysa. New York: Autonomedia, 2006. 27-38. Thrift, Nigel. “Re-inventing Invention: New Tendencies in Capitalist Commodification.” Economy and Society 35.2 (2006): 279-306. 
Citation reference for this article MLA Style Dieter, Michael. "Amazon Noir: Piracy, Distribution, Control." M/C Journal 10.5 (2007). <http://journal.media-culture.org.au/0710/07-dieter.php>. APA Style Dieter, M. (Oct. 2007). "Amazon Noir: Piracy, Distribution, Control," M/C Journal, 10(5). Retrieved from <http://journal.media-culture.org.au/0710/07-dieter.php>.
APA, Harvard, Vancouver, ISO, and other styles
33
Rossiter, Ned. "Creative Industries and the Limits of Critique from." M/C Journal 6, no. 3 (June 1, 2003). http://dx.doi.org/10.5204/mcj.2208.
Full text
Abstract:
‘Every space has become ad space’. Steve Hayden, Wired Magazine, May 2003. Marshall McLuhan’s (1964) dictum that media technologies constitute a sensory extension of the body shares a conceptual affinity with Ernst Jünger’s notion of ‘“organic construction” [which] indicates [a] synergy between man and machine’ and Walter Benjamin’s exploration of the mimetic correspondence between the organic and the inorganic, between human and non-human forms (Bolz, 2002: 19). The logo or brand is co-extensive with various media of communication – billboards, TV advertisements, fashion labels, book spines, mobile phones, etc. Often the logo is interchangeable with the product itself or a way of life. Since all social relations are mediated, whether by communications technologies or architectonic forms ranging from corporate buildings to sporting grounds to family living rooms, it follows that there can be no outside for sociality. The social is and always has been in a mutually determining relationship with mediating forms. It is in this sense that there is no outside. Such an idea has become a refrain amongst various contemporary media theorists. Here’s a sample: There is no outside position anymore, nor is this perceived as something desirable. (Lovink, 2002a: 4) Both “us” and “them” (whoever we are, whoever they are) are all always situated in this same virtual geography. There’s no outside …. There is nothing outside the vector. (Wark, 2002: 316) There is no more outside. The critique of information is in the information itself. (Lash, 2002: 220) In declaring a universality for media culture and information flows, all of the above statements acknowledge the political and conceptual failure of assuming a critical position outside socio-technically constituted relations. 
Similarly, they recognise the problems inherent in the “ideology critique” of the Frankfurt School who, in their distinction between “truth” and “false-consciousness”, claimed a sort of absolute knowledge for the critic that transcended the field of ideology as it is produced by the culture industry. Althusser’s more complex conception of ideology, material practices and subject formation nevertheless also fell prey to the pretence of historical materialism as an autonomous “science” that is able to determine the totality, albeit fragmented, of lived social relations. One of the key failings of ideology critique, then, is its incapacity to account for the ways in which the critic, theorist or intellectual is implicated in the operations of ideology. That is, such approaches displace the reflexivity and power relationships between epistemology, ontology and their constitution as material practices within socio-political institutions and historical constellations, which in turn are the settings for the formation of ideology. Scott Lash abandons the term ideology altogether due to its conceptual legacies within German dialectics and French post-structuralist aporetics, both of which ‘are based in a fundamental dualism, a fundamental binary, of the two types of reason. One speaks of grounding and reconciliation, the other of unbridgeability …. Both presume a sphere of transcendence’ (Lash, 2002: 8). Such assertions can be made at a general level concerning these diverse and often conflicting approaches when they are reduced to categories for the purpose of a polemic. However, the work of “post-structuralists” such as Foucault, Deleuze and Guattari and the work of German systems theorist Niklas Luhmann is clearly amenable to the task of critique within information societies (see Rossiter, 2003). Indeed, Lash draws on such theorists in assembling his critical dispositif for the information age. 
More concretely, Lash (2002: 9) advances his case for a new mode of critique by noting the socio-technical and historical shift from ‘constitutive dualisms of the era of the national manufacturing society’ to global information cultures, whose constitutive form is immanent to informational networks and flows. Such a shift, according to Lash, needs to be met with a corresponding mode of critique: Ideologycritique [ideologiekritik] had to be somehow outside of ideology. With the disappearance of a constitutive outside, informationcritique must be inside of information. There is no outside any more. (2002: 10) Lash goes on to note, quite rightly, that ‘Informationcritique itself is branded, another object of intellectual property, machinically mediated’ (2002: 10). It is the political and conceptual tensions between information critique and its regulation via intellectual property regimes which condition critique as yet another brand or logo that I wish to explore in the rest of this essay. Further, I will question the supposed erasure of a “constitutive outside” to the field of socio-technical relations within network societies and informational economies. Lash is far too totalising in supposing a break between industrial modes of production and informational flows. Moreover, the assertion that there is no more outside to information too readily and simplistically assumes informational relations as universal and horizontally organised, and hence overlooks the significant structural, cultural and economic obstacles to participation within media vectors. That is, there certainly is an outside to information! Indeed, there is a plurality of outsides. These outsides are intertwined with the flows of capital and the imperial biopower of Empire, as Hardt and Negri (2000) have argued. As difficult as it may be to ascertain the boundaries of life in all its complexity, borders, however defined, nonetheless exist. Just ask the so-called “illegal immigrant”! 
This essay identifies three key modalities comprising a constitutive outside: material (uneven geographies of labour-power and the digital divide), symbolic (cultural capital), and strategic (figures of critique). My point of reference in developing this inquiry will pivot around an analysis of the importation in Australia of the British “Creative Industries” project and the problematic foundation such a project presents to the branding and commercialisation of intellectual labour. The creative industries movement – or Queensland Ideology, as I’ve discussed elsewhere with Danny Butt (2002) – holds further implications for the political and economic position of the university vis-à-vis the arts and humanities. Creative industries constructs itself as inside the culture of informationalism and its concomitant economies by the very fact that it is an exercise in branding. Such branding is evidenced in the discourses, rhetoric and policies of creative industries as adopted by university faculties, government departments and the cultural industries and service sectors seeking to reposition themselves in an institutional environment that is adjusting to ongoing structural reforms attributed to the demands by the “New Economy” for increased labour flexibility and specialisation, institutional and economic deregulation, product customisation and capital accumulation. Within the creative industries the content produced by labour-power is branded as copyrights and trademarks within the system of Intellectual Property Regimes (IPRs). However, as I will go on to show, a constitutive outside figures in material, symbolic and strategic ways that condition the possibility of creative industries. 
The creative industries project, as envisioned by the Blair government’s Department of Culture, Media and Sport (DCMS) responsible for the Creative Industry Task Force Mapping Documents of 1998 and 2001, is interested in enhancing the “creative” potential of cultural labour in order to extract a commercial value from cultural objects and services. Just as there is no outside for informationcritique, for proponents of the creative industries there is no culture that is worth its name if it is outside a market economy. That is, the commercialisation of “creativity” – or indeed commerce as a creative undertaking – performs a legitimising function and hence plays a delimiting role for “culture” and, by association, sociality. And let us not forget, the institutional life of career academics is also at stake in this legitimating process. The DCMS cast its net wide when defining creative sectors and deploys a lexicon that is as vague and unquantifiable as the next mission statement by government and corporate bodies enmeshed within a neo-liberal paradigm. At least one of the key proponents of the creative industries in Australia is ready to acknowledge this (see Cunningham, 2003). The list of sectors identified as holding creative capacities in the CITF Mapping Document includes: film, music, television and radio, publishing, software, interactive leisure software, design, designer fashion, architecture, performing arts, crafts, arts and antique markets, and advertising. The Mapping Document seeks to demonstrate how these sectors consist of ‘... activities which have their origin in individual creativity, skill and talent and which have the potential for wealth and job creation through generation and exploitation of intellectual property’ (CITF: 1998/2001). The CITF’s identification of intellectual property as central to the creation of jobs and wealth firmly places the creative industries within informational and knowledge economies. 
Unlike material property, intellectual property such as artistic creations (films, music, books) and innovative technical processes (software, biotechnologies) consists of forms of knowledge that do not diminish when they are distributed. This is especially the case when information has been encoded in a digital form and distributed through technologies such as the internet. In such instances, information is often attributed an “immaterial” and nonrivalrous quality, although this can be highly misleading for both the conceptualisation of information and the politics of knowledge production. Intellectual property, as distinct from material property, operates as a scaling device in which the unit cost of labour is offset by the potential for substantial profit margins realised by distribution techniques afforded by new information and communication technologies (ICTs) and their capacity to infinitely reproduce the digital commodity object as a property relation. Within the logic of intellectual property regimes, the use of content is based on the capacity of individuals and institutions to pay. The syndication of media content ensures that market saturation is optimal and competition is kept to a minimum. However, such a legal architecture and hegemonic media industry has run into conflict with other net cultures such as open source movements and peer-to-peer networks (Lovink, 2002b; Meikle, 2002), which is to say nothing of the digital piracy of software and digitally encoded cinematic forms. To this end, IPRs are an unstable architecture for extracting profit. The operation of Intellectual Property Regimes constitutes an outside within creative industries by alienating labour from its mode of information or form of expression. Lash is apposite on this point: ‘Intellectual property carries with it the right to exclude’ (Lash, 2002: 24). 
This principle of exclusion applies not only to those outside the informational economy and culture of networks as a result of geographic, economic, infrastructural, and cultural constraints. The very practitioners within the creative industries are excluded from control over their creations. It is in this sense that a legal and material outside is established within an informational society. At the same time, this internal outside – to put it rather clumsily – operates in a constitutive manner in as much as the creative industries, by definition, depend upon the capacity to exploit the IP produced by their primary source of labour. For all the emphasis the Mapping Document places on exploiting intellectual property, it’s really quite remarkable how absent any elaboration or considered development of IP is from creative industries rhetoric. It’s even more astonishing that media and cultural studies academics have given at best passing attention to the issues of IPRs. Terry Flew (2002: 154-159) is one of the rare exceptions, though even here there is no attempt to identify the implications IPRs hold for those working in the creative industries sectors. Perhaps such oversights by academics associated with the creative industries can be accounted for by the fact that their own jobs rest within the modern, industrial institution of the university, which continues to offer the security of a salary award system and continuing if not tenured employment despite the onslaught of neo-liberal reforms since the 1980s. Such an industrial system of traditional and organised labour, however, does not define the labour conditions for those working in the so-called creative industries. 
Within those sectors engaged more intensively in commercialising culture, labour practices closely resemble work characterised by the dotcom boom, which saw young people working excessively long hours without any of the sort of employment security and protection vis-à-vis salary, health benefits and pension schemes peculiar to traditional and organised labour (see McRobbie, 2002; Ross, 2003). During the dotcom mania of the mid to late 90s, stock options were frequently offered to people as an incentive for offsetting the often minimum or even deferred payment of wages (see Frank, 2000). It is understandable that the creative industries project holds an appeal for managerial intellectuals operating in arts and humanities disciplines in Australia, most particularly at Queensland University of Technology (QUT), which claims to have established the ‘world’s first’ Creative Industries faculty (http://www.creativeindustries.qut.com/). The creative industries provide a validating discourse for those suffering anxiety disorders over what Ruth Barcan (2003) has called the ‘usefulness’ of ‘idle’ intellectual pastimes. As a project that endeavours to articulate graduate skills with labour markets, the creative industries is a natural extension of the neo-liberal agenda within education as advocated by successive governments in Australia since the Dawkins reforms in the mid 1980s (see Marginson and Considine, 2000). Certainly there’s a constructive dimension to this: graduates, after all, need jobs and universities should display an awareness of market conditions; they also have a responsibility to do so. And on this count, I find it remarkable that so many university departments in my own field of communications and media studies are so bold and, let’s face it, stupid, as to make unwavering assertions about market demands and student needs on the basis of doing little more than sniffing the wind! Time for a bit of a reality check, I’d say. 
And this means becoming a little more serious about allocating funds and resources towards market research and analysis based on the combination of needs between students, staff, disciplinary values, university expectations, and the political economy of markets. However, the extent to which there should be a wholesale shift of the arts and humanities into a creative industries model is open to debate. The arts and humanities, after all, are a set of disciplinary practices and values that operate as a constitutive outside for creative industries. Indeed, in their creative industries manifesto, Stuart Cunningham and John Hartley (2002) loathe the arts and humanities in such confused, paradoxical and hypocritical ways in order to establish the arts and humanities as a cultural and ideological outside. To this end, to subsume the arts and humanities into the creative industries, if not eradicate them altogether, is to spell the end of creative industries as it’s currently conceived at the institutional level within academe. Too much specialisation in one post-industrial sector, broad as it may be, ensures a situation of labour reserves that exceed market needs. One only needs to consider all those now unemployed web-designers that graduated from multi-media programs in the mid to late 90s. Further, it does not augur well for the inevitable shift from or collapse of a creative industries economy. Where is the standing reserve of labour shaped by university education and training in a post-creative industries economy? Diehard neo-liberals and true-believers in the capacity for perpetual institutional flexibility would say that this isn’t a problem. The university will just “organically” adapt to prevailing market conditions and shape its curriculum and staff composition accordingly. Perhaps. 
Arguably if the university is to maintain a modality of time that is distinct from the just-in-time mode of production characteristic of informational economies – and indeed, such a difference is a quality that defines the market value of the educational commodity – then limits have to be established between institutions of education and the corporate organisation or creative industry entity. The creative industries project is a reactionary model insofar as it reinforces the status quo of labour relations within a neo-liberal paradigm in which bids for industry contracts are based on a combination of rich technological infrastructures that have often been subsidised by the state (i.e. paid for by the public), high labour skills, a low currency exchange rate and the lowest possible labour costs. In this respect it is no wonder that literature on the creative industries omits discussion of the importance of unions within informational, networked economies. What is the place of unions in a labour force constituted as individualised units? The conditions of possibility for creative industries within Australia are at once its frailties. In many respects, the success of the creative industries sector depends upon the ongoing combination of cheap labour enabled by a low currency exchange rate and the capacity of students to access the skills and training offered by universities. Certainly in relation to matters such as these there is no outside for the creative industries. There’s a great need to explore alternative economic models to the content production one if wealth is to be successfully extracted and distributed from activities in the new media sectors. The suggestion that the creative industries project initiates a strategic response to the conditions of cultural production within network societies and informational economies is highly debatable. 
The now well-documented history of digital piracy in the film and software industries and the difficulties associated with regulating violations to proprietors of IP in the form of copyright and trademarks is enough of a reason to look for alternative models of wealth extraction. And you can be sure this will occur irrespective of the endeavours of the creative industries. To conclude, I am suggesting that those working in the creative industries, be they content producers or educators, need to intervene in IPRs in such a way that: 1) the alienation of their labour is minimised; 2) “creative” labour is collectivised in the form of unions or what Wark (2001) has termed the “hacker class”, as distinct from the “vectoralist class”, which may be one way of achieving the first aim; and 3) the advocates of creative industries within the higher education sector in particular are made aware of the implications IPRs have for graduates entering the workforce and adjust their rhetoric, curriculum, and policy engagements accordingly. Works Cited Barcan, Ruth. ‘The Idleness of Academics: Reflections on the Usefulness of Cultural Studies’. Continuum: Journal of Media & Cultural Studies (forthcoming, 2003). Bolz, Norbert. ‘Rethinking Media Aesthetics’, in Geert Lovink, Uncanny Networks: Dialogues with the Virtual Intelligentsia. Cambridge, Mass.: MIT Press, 2002, 18-27. Butt, Danny and Rossiter, Ned. ‘Blowing Bubbles: Post-Crash Creative Industries and the Withering of Political Critique in Cultural Studies’. Paper presented at Ute Culture: The Utility of Culture and the Uses of Cultural Studies, Cultural Studies Association of Australia Conference, Melbourne, 5-7 December, 2002. Posted to fibreculture mailing list, 10 December, 2002, http://www.fibreculture.org/archives/index.html Creative Industry Task Force: Mapping Document, DCMS (Department of Culture, Media and Sport), London, 1998/2001. http://www.culture.gov.uk/creative/mapping.html Cunningham, Stuart. 
‘The Evolving Creative Industries: From Original Assumptions to Contemporary Interpretations’. Seminar Paper, QUT, Brisbane, 9 May, 2003, http://www.creativeindustries.qut.com/research/cirac/documents/THE_EVOLVING_CREATIVE_INDUSTRIES.pdf Cunningham, Stuart; Hearn, Gregory; Cox, Stephen; Ninan, Abraham and Keane, Michael. Brisbane’s Creative Industries 2003. Report delivered to Brisbane City Council, Community and Economic Development, Brisbane: CIRAC, 2003. http://www.creativeindustries.qut.com/research/cirac/documents/bccreportonly.pdf Flew, Terry. New Media: An Introduction. Oxford: Oxford University Press, 2002. Frank, Thomas. One Market under God: Extreme Capitalism, Market Populism, and the End of Economic Democracy. New York: Anchor Books, 2000. Hartley, John and Cunningham, Stuart. ‘Creative Industries: from Blue Poles to fat pipes’, in Malcolm Gillies (ed.) The National Humanities and Social Sciences Summit: Position Papers. Canberra: DEST, 2002. Hayden, Steve. ‘Tastes Great, Less Filling: Ad Space – Will Advertisers Learn the Hard Lesson of Over-Development?’. Wired Magazine 11.06 (June, 2003), http://www.wired.com/wired/archive/11.06/ad_spc.html Hardt, Michael and Negri, Antonio. Empire. Cambridge, Mass.: Harvard University Press, 2000. Lash, Scott. Critique of Information. London: Sage, 2002. Lovink, Geert. Uncanny Networks: Dialogues with the Virtual Intelligentsia. Cambridge, Mass.: MIT Press, 2002a. Lovink, Geert. Dark Fiber: Tracking Critical Internet Culture. Cambridge, Mass.: MIT Press, 2002b. McLuhan, Marshall. Understanding Media: The Extensions of Man. London: Routledge and Kegan Paul, 1964. McRobbie, Angela. ‘Clubs to Companies: Notes on the Decline of Political Culture in Speeded up Creative Worlds’, Cultural Studies 16.4 (2002): 516-31. Marginson, Simon and Considine, Mark. The Enterprise University: Power, Governance and Reinvention in Australia. Cambridge: Cambridge University Press, 2000. Meikle, Graham. 
Future Active: Media Activism and the Internet. Sydney: Pluto Press, 2002. Ross, Andrew. No-Collar: The Humane Workplace and Its Hidden Costs. New York: Basic Books, 2003. Rossiter, Ned. ‘Processual Media Theory’, in Adrian Miles (ed.) Streaming Worlds: 5th International Digital Arts & Culture (DAC) Conference. 19-23 May. Melbourne: RMIT University, 2003, 173-184. http://hypertext.rmit.edu.au/dac/papers/Rossiter.pdf Sassen, Saskia. Losing Control? Sovereignty in an Age of Globalization. New York: Columbia University Press, 1996. Wark, McKenzie. ‘Abstraction’ and ‘Hack’, in Hugh Brown, Geert Lovink, Helen Merrick, Ned Rossiter, David Teh, Michele Willson (eds). Politics of a Digital Present: An Inventory of Australian Net Culture, Criticism and Theory. Melbourne: Fibreculture Publications, 2001, 3-7, 99-102. Wark, McKenzie. ‘The Power of Multiplicity and the Multiplicity of Power’, in Geert Lovink, Uncanny Networks: Dialogues with the Virtual Intelligentsia. Cambridge, Mass.: MIT Press, 2002, 314-325. Citation reference for this article: Rossiter, Ned. ‘Creative Industries and the Limits of Critique from …’. M/C: A Journal of Media and Culture 6 (19 June 2003). <http://www.media-culture.org.au/0306/11-creativeindustries.php>.
34
Leishman, Kirsty. "Flesh." M/C Journal 2, no. 3 (May 1, 1999). http://dx.doi.org/10.5204/mcj.1748.
Full text
Abstract:
When I think of 'flesh' at this moment in human history, it's difficult not to think of the images on television and in print news reports in recent weeks. The first pictures of Kosovo's Albanian population being purged from their homes and amassed on the borders of neighbouring countries have left an impression. While cameras have stood witness from afar, we have been confronted by images of people being shot at point-blank range. Rows of corpses have been lined up after ill-executed bombing raids by NATO forces and after the more systematic slaughter of pro-independence activists by wayward Indonesian military groups in East Timor. Perhaps less expected, the deaths of school students in Columbine, Colorado, and the patrons of a gay club in the Soho district of London have added force to daily reminders of the insistence of the flesh to being. The significance of flesh to being extends beyond a simple matter of physical survival. It is often our reaction to the flesh of the other upon which we stake our very sense of self. Subjectivity becomes a constant process of negotiating the borders of the self, where an individual may either identify with the other they encounter, or attempt to deny or repress the other's existence. The recent events around the world are illustrative of the attempt by some individuals to repress others, who (inadvertently) threaten their stable sense of self, by way of the very final solution of death. It is often a response to an irrational fear of the inescapable physical flesh of others in various manifestations, such as ethnicity, gender and sexuality, that provokes such violent actions. In view of the on-going perpetration of violence towards others, it is little wonder that fantasies of transcending the flesh abound in many narratives. 
The successes of William Gibson's Neuromancer in 1984, and now the currently-released film The Matrix, are contemporary testaments to the ongoing popularity of this fantasy of human self-creation without the limits imposed by 'meat'. While it might be argued that the leap into cyberspace is a denial of the flesh, it might also be argued that emergent technologies are almost certainly embraced because they seem to offer the possibility of allowing us to be more human than we are currently. New technologies seem to offer new ways of being in the world that will refine human communication and defuse prejudices, where you will be loved for your personality and not hated for the colour of your skin. In this issue of M/C we consider a variety of ways of thinking about the flesh amidst the effects of media and new technologies. The feature article by Sean Aylward Smith asks "Where Does the Body End?", and thus also, where does technology begin? Smith deliberates on the difficulty of working with and about technology, where common-sense suggests that one should be able to define the distinction between technology and the body before embarking on the study of "any given socio-technical imbroglio". Working through the philosophical writings of Felix Guattari, Bruno Latour and then Guattari again, this time in conjunction with Gilles Deleuze, Smith progressively confuses the apparently neat distinction between technology and the body to argue for a subjectivity, or rather "a mode of individuation" as a haecceity, where humans are "collective assemblages" of agencies and affects. Peter Chen undertakes his deliberation on flesh in terms of its existence on the Internet in the form of pornography. In "Community Without Flesh: First Thoughts on the New Broadcasting Services Amendment (Online Services) Bill 1999", Chen argues that in adapting existing regulatory paradigms to the Internet, the Australian government has overlooked the informal communities of the virtual world. 
He suggests that in attempting to meet a perceived social need for regulation with political and administrative expedience, the government has ignored the potentially cohesive role it might play in the development of self-regulating communities who require little government intervention to produce socially beneficial outcomes. Chen predicts the formation of a new type of community, "whose desire for a feast of flesh" will ensure they are vigilant in their evasion of the cast of the regulators' net. Alan Macdougall's article might offer some practical solutions for those members of the virtual communities of cyberspace discussed by Chen. In "'And the Word Was Made Flesh, and Dwelt amongst Us': Towards Pseudonymous Life on the Internet", Macdougall engages in a critical discussion of pseudonymity on the Internet, where users construct untraceable identities for use online. While Chen identified the concerns private citizens have when governments implement modes of surveillance for online activities, Macdougall acknowledges the threat individuals also experience from commercial interests gathering demographic information. He iterates the emergent technologies that are being developed to counter such intrusions, and considers the ways in which this "new flesh" will dwell amongst us. Axel Bruns continues the discussion of the Internet, and concurs with Macdougall's assessment that while online activity may present itself as ephemeral, in fact it has a presence that produces very real effects. In a defence of the significance of online publishing and communication, Bruns asks "How Solid Is the Flesh?"; he wonders whether the solidity and therefore the esteem that is generally attributed to publications in the 'real world' is a convincing argument when many books and print journals languish unread on obscure library shelves. On the contrary, Bruns argues, the 'flesh' is not left behind when the leap is taken into cyberspace. 
The ongoing explosion in available storage space on the Internet has the effect that cyberspace is becoming increasingly less ephemeral. In "How Funny?: Spectacular Ani in Animated Television Cartoons", Simon-Astley Scholfield shifts our focus to a small screen of another kind in his consideration of the popular animated American 'kidult' cartoon series Ren and Stimpy and South Park. He notes that amid the uproar about the excessive depictions of violence and viscera in these comedy cartoons, an analysis of their representations of anal flesh has been conspicuously avoided. Scholfield's article addresses this oversight in a comparison between the two programmes. He concludes that while South Park explores subversive themes, "they have been twisted into misogynist and homophobic contexts". In contrast, the narrative outcomes in Ren and Stimpy posit a challenge to the "dominant homophobic culture". The 1997 spate of dead celebrities provides the flesh in Rebecca Farley's article, "The Word Made Flesh: Media Coverage of Dead Celebrities". Noticing the absence of pictures of dead celebrities' bodies in the coverage of their deaths, Farley wonders: if, when alive, celebrities fill a particular function, what do they do in death? Choosing to focus specifically on the deaths of Gianni Versace, Michael Hutchence and Mother Teresa, Farley argues that the sexually transgressive personae of Versace and Hutchence in life are replaced with "a pro-social narrative" that returns them to the bosom of family in death, while Mother Teresa, whose body "caused no trouble when it was alive, and conveniently wasn't mangled to death", is allowed to be present and photographed in death. Tseen Khoo's article, "Fetishising Flesh: Asian-Australian and Asian-Canadian Representation, Porno-Culinary Genres, and the Racially Marked Body", takes full advantage of the Internet to introduce the topic of flesh. 
Via a tour of various Websites, Khoo introduces the Web-surfer to the issues involved in representing the Asian body in diaspora, and the politically fraught issues for racial minority populations in majority 'white' nations. Khoo considers examples from Japanese-Canadian literature, metaphors of ingestion, and racial minority identity politics in the United States. The final submission to this issue is a work of creative writing by Hamish Kaden. "The Interminable Son" is the story of a man reconciling the death of his well-known feminist mother. The un-named character resurrects the memory of his mother through a Buddhist ceremony for the dead, and by conducting library research into her life as a prominent campaigner for women's right to have safe abortions. Kaden imparts the emotions of his character, while providing insight into an important health issue that affects many lives. This issue of M/C conceives of flesh in many forms and relationships. The cover image, designed by Damian Frost, should not go without mention, as it provides a fresh vision from which to embark onto the smorgasbord of 'flesh'. Enjoy! Kirsty Leishman, 'Flesh' Issue Editor. Citation reference for this article: Leishman, Kirsty. "Editorial: 'Flesh'." M/C: A Journal of Media and Culture 2.3 (1999). <http://www.uq.edu.au/mc/9905/edit.php>.
35
King, Ben. "Invasion." M/C Journal 2, no. 2 (March 1, 1999). http://dx.doi.org/10.5204/mcj.1741.
Full text
Abstract:
The pop cultural moment that most typifies the social psychology of invasion for many of us is Orson Welles's 1938 coast-to-coast CBS radio broadcast of The War of the Worlds, a narration based on H.G. Wells's novel. News bulletins and scene broadcasts followed Welles's introduction, featuring, in contemporary journalistic style, reports of a "meteor" landing near Princeton, N.J., which "killed" 1500 people, and the discovery that it was in fact a "metal cylinder" containing strange creatures from Mars armed with "death rays" which would reduce all the inhabitants of the earth to space dust. Welles's broadcast caused thousands to believe that Martians were wreaking widespread havoc in New York and New Jersey. New York streets were filled with families rushing to open spaces protecting their faces from the "gas raids", clutching sacred possessions and each other. Lines of communication were clogged, massive traffic jams ensued, and people evacuated their homes in a state of abject terror while armouries in neighbouring districts prepared to join in the "battle". Some felt it was a very cruel prank, especially after the recent war scare in Europe that featured constant interruption of regular radio programming. Many of the thousands of questions directed at police in the hours following the broadcast reflected the concerns of the residents of London and Paris during the tense days before the Munich agreement. The media had undergone that strange metamorphosis that occurs when people depend on it for information that affects them directly. But it was not a prank. Three separate announcements made during the broadcast stressed its fictional nature. The introduction to the program stated "the Columbia Broadcasting System and its affiliated stations present Orson Welles and the Mercury Theatre on the Air in The War of the Worlds by H.G. Wells", as did the newspaper listing of the program "Today: 8:00-9:00 -- Play: H.G. Wells's 'War of the Worlds' -- WABC". 
Welles, rather innocently, wanted to play with the conventions of broadcasting and grant his audience a bit of legitimately unsettling, though obviously fictitious, verisimilitude. There are not too many instances in modern history where we can look objectively at such incredible reactions to media sound bites. That evening is a prototype for the impact media culture can have on an audience whose minds are prepped for impending disaster. The interruption of scheduled radio invoked in the audience a knee-jerk response that dramatically illustrated the susceptibility of people to the discourse of invasion, as well as the depth of the relationship between the audience and media during tense times. These days, the media themselves are often regarded as the invaders. The endless procession of information that grows alongside technology's ability to present it is feared as much as it is loved. In the current climate of information and technological overload, invasion has swum from the depths of our unconscious paranoia and lurks impatiently in the shallows. There is so much invasion and so much to feel invaded about: the war in Kosovo (one of over sixty being fought today) is getting worse with the benevolence and force of the UN dwindling in a cloud of bureaucracy and failed talks, Ethiopia and Eritrea are going at it again, the ideology of the Olympic Games in Sydney has gone from a positive celebration of the millennium to a revenue-generating boys club of back scratchers, Internet smut is still everywhere, and most horrifically, Baywatch came dangerously close to being shot on location on the East Coast of Australia. In this issue of M/C we take a look at literal and allegorical invasions from a variety of cleverly examined aspects of our culture. Firstly, Axel Bruns takes a look at a subtle invasion that is occurring on the Web in "Invading the Ivory Tower: Hypertext and the New Dilettante Scholars".
He points to the way the Internet's function as a research tool is changing the nature of academic writing due to its interactivity and potential to be manipulated in a way that conventional written material cannot. Axel investigates the web browser's ability to invade the text and the elite world of academic publishing via the format of hypertext itself rather than merely through ideas. Felicity Meakins's article "Shooting Baywatch: Resisting Cultural Invasion" examines media and community reactions to the threat of having the television series Baywatch shot on Australian beaches. Felicity looks at the cultural cringe that has surrounded the relationship between Australia and America over the years and is manifested by our response to American accents in the media. American cultural imperialism has come to signify a great deal in the dwindling face of Aussie institutions like mateship and egalitarianism. In a similarly driven piece called "A Decolonising Doctor? British SF Invasion Narratives", Nick Caldwell investigates some of the implications of the "Britishness" of the cult television series Doctor Who, where insularity and cultural authority are taken to extremes during the ubiquitous intergalactic invasions. Paul McCormack's article "Screen II: The Invasion of the Attention Snatchers" turns from technologically superior invaders to an invasion by technology itself -- he considers how the television has irreversibly invaded our lives and claimed a dominant place in the domestic sphere. Recently, the (Internet-connected) personal computer has begun a similar invasion: what space will it eventually claim? Sandra Brunet's "Is Sustainable Tourism Really Sustainable? Protecting the Icon in the Commodity at Sites of Invasion" explores the often forgotten Kangaroo Island off the coast of South Australia.
She looks at ways in which the image of the island is constructed by the government and media for eco-tourism and how faithful this representation is to the farmers, fishermen and other inhabitants of the island. Paul Starr's article "Special Effects and the Invasive Camera: Enemy of the State and The Conversation" rounds off the issue with a look at the troubled relationship between cutting-edge special effects in Hollywood action movies and the surveillance technologies that recent movies such as Enemy of the State show as tools in government conspiracies. The depiction of high-tech gadgetry as 'cool' and 'evil' at the same time, he writes, leads to a collapse of meaning. This issue of M/C succeeds in pointing out sites of invasion in unusual places, continuing the journal's tradition of perception in the face of new media culture. I hope you enjoy this second issue of the second volume: 'invasion'.
Ben King
'Invasion' Issue Editor
Citation reference for this article
MLA style: Ben King. "Editorial: 'Invasion'." M/C: A Journal of Media and Culture 2.2 (1999). [your date of access] <http://www.uq.edu.au/mc/9903/edit.php>.
Chicago style: Ben King, "Editorial: 'Invasion'," M/C: A Journal of Media and Culture 2, no. 2 (1999), <http://www.uq.edu.au/mc/9903/edit.php> ([your date of access]).
APA style: Ben King. (1999) Editorial: 'invasion'. M/C: A Journal of Media and Culture 2(2). <http://www.uq.edu.au/mc/9903/edit.php> ([your date of access]).
36
Livingstone, Randall M. "Let's Leave the Bias to the Mainstream Media: A Wikipedia Community Fighting for Information Neutrality." M/C Journal 13, no. 6 (November 23, 2010). http://dx.doi.org/10.5204/mcj.315.
Abstract:
Although I'm a rich white guy, I'm also a feminist anti-racism activist who fights for the rights of the poor and oppressed. (Carl Kenner)
Systemic bias is a scourge to the pillar of neutrality. (Cerejota)
Count me in. Let's leave the bias to the mainstream media. (Orcar967)
Because this is so important. (CuttingEdge)
These are a handful of comments posted by online editors who have banded together in a virtual coalition to combat Western bias on the world's largest digital encyclopedia, Wikipedia. This collective action by Wikipedians both acknowledges the inherent inequalities of a user-controlled information project like Wikipedia and highlights the potential for progressive change within that same project. These community members are taking the responsibility of social change into their own hands (or more aptly, their own keyboards).
In recent years much research has emerged on Wikipedia from varying fields, ranging from computer science, to business and information systems, to the social sciences. While critical at times of Wikipedia's growth, governance, and influence, most of this work observes with optimism that barriers to improvement are not firmly structural, but rather they are socially constructed, leaving open the possibility of important and lasting change for the better.
WikiProject: Countering Systemic Bias (WP:CSB) is one such collective effort. Close to 350 editors have signed on to the project, which began in 2004 and itself emerged from a similar project named CROSSBOW, or the "Committee Regarding Overcoming Serious Systemic Bias on Wikipedia." As a WikiProject, the term used for a loose group of editors who collaborate around a particular topic, these editors work within the Wikipedia site and collectively create a social network that is unified around one central aim—representing the un- and underrepresented—and yet they are bound by no particular unified set of interests.
The first stage of a multi-method study, this paper looks at a snapshot of WP:CSB's activity from both content analysis and social network perspectives to discover "who" geographically this coalition of the unrepresented is inserting into the digital annals of Wikipedia.
Wikipedia and Wikipedians
Developed in 2001 by Internet entrepreneur Jimmy Wales and academic Larry Sanger, Wikipedia is an online collaborative encyclopedia hosting articles in nearly 250 languages (Cohen). The English-language Wikipedia contains over 3.2 million articles, each of which is created, edited, and updated solely by users (Wikipedia "Welcome"). At the time of this study, Alexa, a website tracking organisation, ranked Wikipedia as the 6th most accessed site on the Internet. Unlike the five sites ahead of it though—Google, Facebook, Yahoo, YouTube (owned by Google), and live.com (owned by Microsoft)—all of which are multibillion-dollar businesses that deal more with information aggregation than information production, Wikipedia is a non-profit that operates on less than $500,000 a year and staffs only a dozen paid employees (Lih). Wikipedia is financed and supported by the Wikimedia Foundation, a charitable umbrella organisation with an annual budget of $4.6 million, mainly funded by donations (Middleton).
Wikipedia editors and contributors have the option of creating a user profile and participating via a username, or they may participate anonymously, with only an IP address representing their actions. Despite the option for total anonymity, many Wikipedians have chosen to visibly engage in this online community (Ayers, Matthews, and Yates; Bruns; Lih), and researchers across disciplines are studying the motivations of these new online collectives (Kane, Majchrzak, Johnson, and Chenisern; Oreg and Nov).
The motivations of open source software contributors, such as UNIX programmers and programming groups, have been shown to be complex and tied to both extrinsic and intrinsic rewards, including online reputation, self-satisfaction and enjoyment, and obligation to a greater common good (Hertel, Niedner, and Herrmann; Osterloh and Rota). Investigation into why Wikipedians edit has indicated multiple motivations as well, with community engagement, task enjoyment, and information sharing among the most significant (Schroer and Hertel). Additionally, Wikipedians seem to be taking up the cause of generativity (a concern for the ongoing health and openness of the Internet's infrastructures) that Jonathan Zittrain notably called for in The Future of the Internet and How to Stop It.
Governance and Control
Although the technical infrastructure of Wikipedia is built to support and perhaps encourage an equal distribution of power on the site, Wikipedia is not a land of "anything goes." The popular press has covered recent efforts by the site to reduce vandalism through a layer of editorial review (Cohen), a tightening of control cited as a possible reason for the recent dip in the number of active editors (Edwards). A number of regulations are already in place that prevent the open editing of certain articles and pages, such as the site's disclaimers and pages that have suffered large amounts of vandalism. Editing wars can also cause temporary restrictions to editing, and Ayers, Matthews, and Yates point out that these wars can happen anywhere, even to Burt Reynolds's page.
Academic studies have begun to explore the governance and control that has developed in the Wikipedia community, generally highlighting how order is maintained not through particular actors, but through established procedures and norms.
Konieczny tested whether Wikipedia's evolution can be defined by Michels' Iron Law of Oligarchy, which predicts that the everyday operations of any organisation cannot be run by a mass of members, and ultimately control falls into the hands of the few. Through exploring a particular WikiProject on information validation, he concludes:
There are few indicators of an oligarchy having power on Wikipedia, and few trends of a change in this situation. The high level of empowerment of individual Wikipedia editors with regard to policy making, the ease of communication, and the high dedication to ideals of contributors succeed in making Wikipedia an atypical organization, quite resilient to the Iron Law. (189)
Butler, Joyce, and Pike support this assertion, though they emphasise that instead of oligarchy, control becomes encapsulated in a wide variety of structures, policies, and procedures that guide involvement with the site. A virtual "bureaucracy" emerges, but one that should not be viewed with the negative connotation often associated with the term.
Other work considers control on Wikipedia through the framework of commons governance, where "peer production depends on individual action that is self-selected and decentralized rather than hierarchically assigned. Individuals make their own choices with regard to resources managed as a commons" (Viegas, Wattenberg and McKeon). The need for quality standards and quality control largely dictates this commons governance, though interviewing Wikipedians with various levels of responsibility revealed that policies and procedures are only as good as those who maintain them. Forte, Larco, and Bruckman argue "the Wikipedia community has remained healthy in large part due to the continued presence of 'old-timers' who carry a set of social norms and organizational ideals with them into every WikiProject, committee, and local process in which they take part" (71).
Thus governance on Wikipedia is a strong representation of a democratic ideal, where actors and policies are closely tied in their evolution.
Transparency, Content, and Bias
The issue of transparency has proved to be a double-edged sword for Wikipedia and Wikipedians. The goal of a collective body of knowledge created by all—the "expert" and the "amateur"—can only be upheld if equal access to page creation and development is allotted to everyone, including those who prefer anonymity. And yet this very option for anonymity, or even worse, false identities, has been a sore subject for some in the Wikipedia community as well as a source of concern for some scholars (Santana and Wood). The case of a 24-year-old college dropout who represented himself as a multiple Ph.D.-holding theology scholar and edited over 16,000 articles brought these issues into the public spotlight in 2007 (Doran; Elsworth). Wikipedia itself has set up standards for content that include expectations of a neutral point of view, verifiability of information, and the publishing of no original research, but Santana and Wood argue that self-policing of these policies is not adequate:
The principle of managerial discretion requires that every actor act from a sense of duty to exercise moral autonomy and choice in responsible ways. When Wikipedia's editors and administrators remain anonymous, this criterion is simply not met. It is assumed that everyone is behaving responsibly within the Wikipedia system, but there are no monitoring or control mechanisms to make sure that this is so, and there is ample evidence that it is not so.
(141)
At the theoretical level, some downplay these concerns of transparency and autonomy as logistical issues in lieu of the potential for information systems to support rational discourse and emancipatory forms of communication (Hansen, Berente, and Lyytinen), but others worry that the questionable "realities" created on Wikipedia will become truths once circulated to all areas of the Web (Langlois and Elmer). With the number of articles on the English-language version of Wikipedia reaching well into the millions, the task of mapping and assessing content has become a tremendous endeavour, one mostly taken on by information systems experts. Kittur, Chi, and Suh have used Wikipedia's existing hierarchical categorisation structure to map change in the site's content over the past few years. Their work revealed that in early 2008 "Culture and the arts" was the most dominant category of content on Wikipedia, representing nearly 30% of total content. People (15%) and geographical locations (14%) represent the next largest categories, while the natural and physical sciences showed the greatest increase in volume between 2006 and 2008 (+213%, with "Culture and the arts" close behind at +210%). This data may indicate that contributing to Wikipedia, and thus spreading knowledge, is growing amongst the academic community while maintaining its importance to the greater popular culture-minded community. Further work by Kittur and Kraut has explored the collaborative process of content creation, finding that too many editors on a particular page can reduce the quality of content, even when a project is well coordinated.
Bias in Wikipedia content is a generally acknowledged and somewhat conflicted subject (Giles; Johnson; McHenry). The Wikipedia community has created numerous articles and pages within the site to define and discuss the problem.
Citing a survey conducted by the University of Würzburg, Germany, the "Wikipedia:Systemic bias" page describes the average Wikipedian as:
- Male
- Technically inclined
- Formally educated
- An English speaker
- White
- Aged 15-49
- From a majority Christian country
- From a developed nation
- From the Northern Hemisphere
- Likely a white-collar worker or student
Bias in content is thought to be perpetuated by this demographic of contributor, and the "founder effect," a concept from genetics linking the original contributors to this same demographic, has been used to explain the origins of certain biases. Wikipedia's "About" page discusses the issue as well, in the context of the open platform's strengths and weaknesses:
in practice editing will be performed by a certain demographic (younger rather than older, male rather than female, rich enough to afford a computer rather than poor, etc.) and may, therefore, show some bias. Some topics may not be covered well, while others may be covered in great depth. No educated arguments against this inherent bias have been advanced.
Royal and Kapila's study of Wikipedia content tested some of these assertions, finding identifiable bias in both their purposive and random sampling. They conclude that bias favoring larger countries is positively correlated with the size of the country's Internet population, and corporations with larger revenues work in much the same way, garnering more coverage on the site. The researchers remind us that Wikipedia is "more a socially produced document than a value-free information source" (Royal and Kapila).
WikiProject: Countering Systemic Bias
As a coalition of current Wikipedia editors, the WikiProject: Countering Systemic Bias (WP:CSB) attempts to counter trends in content production and points of view deemed harmful to the democratic ideals of a valueless, open online encyclopedia.
WP:CSB's mission is not one of policing the site, but rather deepening it:
Generally, this project concentrates upon remedying omissions (entire topics, or particular sub-topics in extant articles) rather than on either (1) protesting inappropriate inclusions, or (2) trying to remedy issues of how material is presented. Thus, the first question is "What haven't we covered yet?", rather than "how should we change the existing coverage?" (Wikipedia, "Countering")
The project lays out a number of content areas lacking adequate representation, geographically highlighting the dearth in coverage of Africa, Latin America, Asia, and parts of Eastern Europe. WP:CSB also includes a "members" page that editors can sign to show their support, along with space to voice their opinions on the problem of bias on Wikipedia (the quotations at the beginning of this paper are taken from this "members" page). At the time of this study, 329 editors had self-selected and self-identified as members of WP:CSB, and this group constitutes the population sample for the current study.
To explore the extent to which WP:CSB addressed these self-identified areas for improvement, each editor's last 50 edits were coded for their primary geographical country of interest, as well as the conceptual category of the page itself ("P" for person/people, "L" for location, "I" for idea/concept, "T" for object/thing, or "NA" for indeterminate). For example, edits to the Wikipedia page for a single person like Tony Abbott (Australian federal opposition leader) were coded "Australia, P", while an edit for a group of people like the Manchester United football team would be coded "England, P". Coding was based on information obtained from the header paragraphs of each article's Wikipedia page. After coding was completed, corresponding information on each country's associated continent was added to the dataset, based on the United Nations Statistics Division listing. A total of 15,616 edits were coded for the study.
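The coding scheme described above amounts to tallying (country, category) pairs. As a rough sketch of that tallying step (the sample data here is hypothetical; only the category codes and the Tony Abbott / Manchester United examples come from the paper):

```python
from collections import Counter

# Each edit is coded as (country, category), where category is one of
# "P" person/people, "L" location, "I" idea/concept, "T" object/thing,
# or "NA" indeterminate.
coded_edits = [
    ("Australia", "P"),  # e.g. an edit to Tony Abbott's page
    ("England", "P"),    # e.g. an edit to Manchester United's page
    ("Australia", "L"),
    ("India", "T"),
]

# Tally edits by conceptual category and by country of interest.
by_category = Counter(category for _, category in coded_edits)
by_country = Counter(country for country, _ in coded_edits)

# Percentage share of each category, as reported in the coding results.
category_share = {cat: n / len(coded_edits) for cat, n in by_category.items()}
```

Applied to the study's full dataset of 15,616 coded edits, counts like these yield the percentages reported in the coding results below.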
Nearly 32% (n = 4962) of these edits were on articles for persons or people (see Table A for complete coding results). From within this sub-sample of edits, a majority of the people (68.67%) represented are associated with North America and Europe (Figure A). If we break these statistics down further, nearly half of WP:CSB's edits concerning people were associated with the United States (36.11%) and England (10.16%), with India (3.65%) and Australia (3.35%) following at a distance. These figures make sense for the English-language Wikipedia; over 95% of the population in the three Westernised countries speak English, and while India is still often regarded as a developing nation, its colonial British roots and the emergence of a market economy with large, technology-driven cities are logical explanations for its representation here (and some estimates make India the largest English-speaking nation by population on the globe today).
Table A: Coding Results
Total Edits: 15616
(I) Ideas: 2881 (18.45%)
(L) Location: 2240 (14.34%)
(NA) Indeterminate: 333 (2.13%)
(T) Thing: 5200 (33.30%)
(P) People: 4962 (31.78%)
People by Continent
Africa: 315 (6.35%)
Asia: 827 (16.67%)
Australia: 175 (3.53%)
Europe: 1411 (28.44%)
NA: 110 (2.22%)
North America: 1996 (40.23%)
South America: 128 (2.58%)
The areas of the globe of main concern to WP:CSB proved to be much less represented by the coalition itself. Asia, far and away the most populous continent with more than 60% of the globe's people (GeoHive), was represented in only 16.67% of edits. Africa (6.35%) and South America (2.58%) were equally underrepresented compared to both their real-world populations (15% and 9% of the globe's population respectively) and the aforementioned dominance of the advanced Westernised areas.
However, while these percentages may seem low, in aggregate they do meet the quota set on the WP:CSB Project Page calling for one out of every twenty edits to be "a subject that is systematically biased against the pages of your natural interests." By this standard, the coalition is indeed making headway in adding content that strategically counterbalances the natural biases of Wikipedia's average editor.
Figure A
Social network analysis allows us to visualise multifaceted data in order to identify relationships between actors and content (Vega-Redondo; Watts). Similar to Davis's well-known sociological study of Southern American socialites in the 1930s (Scott), our Wikipedia coalition can be conceptualised as individual actors united by common interests, and a network of relations can be constructed with software such as UCINET. A mapping algorithm that considers both the relationship between all sets of actors and each actor to the overall collective structure produces an image of our network. This initial network is bimodal, as both our Wikipedia editors and their edits (again, coded for country of interest) are displayed as nodes (Figure B). Edge-lines between nodes represent a relationship, and here that relationship is the act of editing a Wikipedia article. We see from our network that the "U.S." and "England" hold central positions in the network, with a mass of editors crowding around them. A perimeter of nations is then held in place by their ties to editors through the U.S. and England, with a second layer of editors and poorly represented nations (Gabon, Laos, Uzbekistan, etc.) around the boundaries of the network.
Figure B
We are reminded from this visualisation both of the centrality of the two Western powers even among WP:CSB editors, and of the peripheral nature of most other nations in the world. But we also learn which editors in the project are contributing most to underrepresented areas, and which are less "tied" to the Western core.
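The bimodal editor–country network described above can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual UCINET analysis: the editor names come from the article, but the country ties are invented sample data standing in for the 15,616-edit dataset.

```python
import networkx as nx

# Hypothetical (editor, country-of-interest) pairs derived from coded edits.
edits = [
    ("Wizzy", "U.S."), ("Wizzy", "Gabon"),
    ("Warofdreams", "England"), ("Warofdreams", "Laos"),
    ("Gallador", "Uzbekistan"), ("Gallador", "Laos"),
]

# Bimodal (bipartite) network: editors and countries are both nodes;
# an edge means the editor edited an article coded for that country.
G = nx.Graph()
editors = {editor for editor, _ in edits}
countries = {country for _, country in edits}
G.add_nodes_from(editors, bipartite=0)
G.add_nodes_from(countries, bipartite=1)
G.add_edges_from(edits)

core = {"U.S.", "England"}
# "Bridge" editors tie the Western core to the periphery: they edit
# articles coded for core nations AND for underrepresented ones.
bridges = sorted(
    e for e in editors
    if any(c in core for c in G[e]) and any(c not in core for c in G[e])
)
```

Under this toy data, the bridge test picks out the editors connected to both the core and the periphery, mirroring the "second layer" the authors identify in the figures.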
Here we see "Wizzy" and "Warofdreams" among the second layer of editors who act as a bridge between the core and the periphery; these are editors with interests in both the Western and marginalised nations. Located along the outer edge, "Gallador" and "Gerrit" have no direct ties to the U.S. or England, concentrating all of their edits on less represented areas of the globe. Identifying editors at these key positions in the network will help with future research, informing interview questions that will investigate their interests further, but more significantly, probing motives for participation and action within the coalition.
Additionally, we can break the network down further to discover editors who appear to have similar interests in underrepresented areas. Figure C strips down the network to only editors and edits dealing with Africa and South America, the least represented continents. From this we can easily find three types of editors again: those who have singular interests in particular nations (the outermost layer of editors), those who have interests in a particular region (the second layer moving inward), and those who have interests in both of these underrepresented regions (the center layer in the figure). This last group of editors may prove to be the most crucial to understand, as they are carrying the full load of WP:CSB's mission.
Figure C
The End of Geography, or the Reclamation?
In The Internet Galaxy, Manuel Castells writes that "the Internet Age has been hailed as the end of geography," a bold suggestion, but one that has gained traction over the last 15 years as the excitement for the possibilities offered by information communication technologies has often overshadowed structural barriers to participation like the Digital Divide (207).
Castells goes on to amend the "end of geography" thesis by showing how global information flows and regional Internet access rates, while creating a new "map" of the world in many ways, are still closely tied to power structures in the analog world. The Internet Age "redefines distance but does not cancel geography" (207). The work of WikiProject: Countering Systemic Bias emphasises the importance of place and representation in the information environment that continues to be constructed in the online world. This study looked at only a small portion of this coalition's efforts (~16,000 edits)—a snapshot of their labor frozen in time—which itself is only a minute portion of the information being dispatched through Wikipedia on a daily basis (~125,000 edits). Further analysis of WP:CSB's work over time, as well as qualitative research into the identities, interests and motivations of this collective, is needed to understand more fully how information bias is understood and challenged in the Internet galaxy. The data here indicates this is a fight worth fighting for at least a growing few.
References
Alexa. "Top Sites." Alexa.com, n.d. 10 Mar. 2010 ‹http://www.alexa.com/topsites>.
Ayers, Phoebe, Charles Matthews, and Ben Yates. How Wikipedia Works: And How You Can Be a Part of It. San Francisco, CA: No Starch, 2008.
Bruns, Axel. Blogs, Wikipedia, Second Life, and Beyond: From Production to Produsage. New York: Peter Lang, 2008.
Butler, Brian, Elisabeth Joyce, and Jacqueline Pike. Don't Look Now, But We've Created a Bureaucracy: The Nature and Roles of Policies and Rules in Wikipedia. Paper presented at 2008 CHI Annual Conference, Florence.
Castells, Manuel. The Internet Galaxy: Reflections on the Internet, Business, and Society. Oxford: Oxford UP, 2001.
Cohen, Noam. "Wikipedia." New York Times, n.d. 12 Mar. 2010 ‹http://www.nytimes.com/info/wikipedia/>.
Doran, James. "Wikipedia Chief Promises Change after 'Expert' Exposed as Fraud." The Times, 6 Mar. 2007 ‹http://technology.timesonline.co.uk/tol/news/tech_and_web/article1480012.ece>.
Edwards, Lin. "Report Claims Wikipedia Losing Editors in Droves." Physorg.com, 30 Nov. 2009. 12 Feb. 2010 ‹http://www.physorg.com/news178787309.html>.
Elsworth, Catherine. "Fake Wikipedia Prof Altered 20,000 Entries." London Telegraph, 6 Mar. 2007 ‹http://www.telegraph.co.uk/news/1544737/Fake-Wikipedia-prof-altered-20000-entries.html>.
Forte, Andrea, Vanessa Larco, and Amy Bruckman. "Decentralization in Wikipedia Governance." Journal of Management Information Systems 26 (2009): 49-72.
Giles, Jim. "Internet Encyclopedias Go Head to Head." Nature 438 (2005): 900-901.
Hansen, Sean, Nicholas Berente, and Kalle Lyytinen. "Wikipedia, Critical Social Theory, and the Possibility of Rational Discourse." The Information Society 25 (2009): 38-59.
Hertel, Guido, Sven Niedner, and Stefanie Herrmann. "Motivation of Software Developers in Open Source Projects: An Internet-Based Survey of Contributors to the Linux Kernel." Research Policy 32 (2003): 1159-1177.
Johnson, Bobbie. "Rightwing Website Challenges 'Liberal Bias' of Wikipedia." The Guardian, 1 Mar. 2007. 8 Mar. 2010 ‹http://www.guardian.co.uk/technology/2007/mar/01/wikipedia.news>.
Kane, Gerald C., Ann Majchrzak, Jeremiah Johnson, and Lily Chenisern. A Longitudinal Model of Perspective Making and Perspective Taking within Fluid Online Collectives. Paper presented at the 2009 International Conference on Information Systems, Phoenix, AZ, 2009.
Kittur, Aniket, Ed H. Chi, and Bongwon Suh. What's in Wikipedia? Mapping Topics and Conflict Using Socially Annotated Category Structure. Paper presented at the 2009 CHI Annual Conference, Boston, MA.
———, and Robert E. Kraut. Harnessing the Wisdom of Crowds in Wikipedia: Quality through Collaboration. Paper presented at the 2008 Association for Computing Machinery's Computer Supported Cooperative Work Annual Conference, San Diego, CA.
Konieczny, Piotr. "Governance, Organization, and Democracy on the Internet: The Iron Law and the Evolution of Wikipedia." Sociological Forum 24 (2009): 162-191.
———. "Wikipedia: Community or Social Movement?" Interface: A Journal for and about Social Movements 1 (2009): 212-232.
Langlois, Ganaele, and Greg Elmer. "Wikipedia Leeches? The Promotion of Traffic through a Collaborative Web Format." New Media & Society 11 (2009): 773-794.
Lih, Andrew. The Wikipedia Revolution. New York, NY: Hyperion, 2009.
McHenry, Robert. "The Real Bias in Wikipedia: A Response to David Shariatmadari." OpenDemocracy.com 2006. 8 Mar. 2010 ‹http://www.opendemocracy.net/media-edemocracy/wikipedia_bias_3621.jsp>.
Middleton, Chris. "The World of Wikinomics." Computer Weekly, 20 Jan. 2009: 22-26.
Oreg, Shaul, and Oded Nov. "Exploring Motivations for Contributing to Open Source Initiatives: The Roles of Contribution, Context and Personal Values." Computers in Human Behavior 24 (2008): 2055-2073.
Osterloh, Margit, and Sandra Rota. "Trust and Community in Open Source Software Production." Analyse & Kritik 26 (2004): 279-301.
Royal, Cindy, and Deepina Kapila. "What's on Wikipedia, and What's Not…?: Assessing Completeness of Information." Social Science Computer Review 27 (2008): 138-148.
Santana, Adele, and Donna J. Wood. "Transparency and Social Responsibility Issues for Wikipedia." Ethics of Information Technology 11 (2009): 133-144.
Schroer, Joachim, and Guido Hertel. "Voluntary Engagement in an Open Web-Based Encyclopedia: Wikipedians and Why They Do It." Media Psychology 12 (2009): 96-120.
Scott, John. Social Network Analysis. London: Sage, 1991.
Vega-Redondo, Fernando. Complex Social Networks. Cambridge: Cambridge UP, 2007.
Viegas, Fernanda B., Martin Wattenberg, and Matthew M. McKeon. "The Hidden Order of Wikipedia." Online Communities and Social Computing (2007): 445-454.
Watts, Duncan. Six Degrees: The Science of a Connected Age. New York, NY: W. W. Norton & Company, 2003.
Wikipedia. "About." n.d. 8 Mar. 2010 ‹http://en.wikipedia.org/wiki/Wikipedia:About>.
———. "Welcome to Wikipedia." n.d. 8 Mar. 2010 ‹http://en.wikipedia.org/wiki/Main_Page>.
———. "WikiProject:Countering Systemic Bias." n.d. 12 Feb. 2010 ‹http://en.wikipedia.org/wiki/Wikipedia:WikiProject_Countering_systemic_bias#Members>.
Zittrain, Jonathan. The Future of the Internet and How to Stop It. New Haven, CT: Yale UP, 2008.
37
Howarth, Anita. "A Hunger Strike - The Ecology of a Protest: The Case of Bahraini Activist Abdulhadi al-Khawaja." M/C Journal 15, no. 3 (June 26, 2012). http://dx.doi.org/10.5204/mcj.509.
Full text
Abstract:
Introduction Since December 2010 the dramatic spectacle of the spread of mass uprisings, civil unrest, and protest across North Africa and the Middle East has been chronicled daily on mainstream media and new media. Broadly speaking, the Arab Spring—as it came to be known—is challenging repressive, corrupt governments and calling for democracy and human rights. The convulsive events linked with these debates have been striking not only because of the rapid spread of historically momentous mass protests but also because of the ways in which the media “have become inextricably infused inside them” enabling the global media ecology to perform “an integral part in building and mobilizing support, co-ordinating and defining the protests within different Arab societies as well as trans-nationalizing them” (Cottle 295). Images of mass protests have been juxtaposed against those of individuals prepared to self-destruct for political ends. Video clips and photographs of the individual suffering of Tunisian Mohamed Bouazizi’s self-immolation and the Bahraini Abdulhadi al-Khawaja’s emaciated body foreground, in very graphic ways, political struggles that larger events would mask or render invisible. Highlighting broad commonalities does not assume uniformity in patterns of protest and media coverage across the region. There has been considerable variation in the global media coverage and nature of the protests in North Africa and the Middle East (Cottle). In Tunisia, Egypt, Libya, and Yemen, uprisings overthrew regimes and leaders. In Syria, the uprising has led the country to the brink of civil war. In Bahrain, the regime and its militia violently suppressed peaceful protests. As a wave of protests spread across the Middle East and one government after another toppled in front of 24/7 global media coverage, Bahrain became the “Arab revolution that was abandoned by the Arabs, forsaken by the West … forgotten by the world,” and largely ignored by the global media (Al-Jazeera English). 
Per capita, the protests have been among the largest of the Arab Spring (Human Rights First) and the crackdown as brutal as elsewhere. International organizations have condemned the use of military courts to try protesters, the detention of medical staff who had treated the injured, and the use of torture, including the torture of children (Fisher). Bahraini and international human rights organizations have been systematically chronicling these violations of human rights, and posting on Websites distressing images of tortured bodies, often with warnings about the graphic depictions viewers are about to see. It was in this context of brutal suppression, global media silence, and the reluctance of the international community to intervene that the Bahraini-Danish human rights activist Abdulhadi al-Khawaja launched his “death or freedom” hunger strike. Even this radical action initially failed to interest international editors, who were more focused on Egypt, Libya, and Syria, but media attention rose in response to the Bahrain Formula 1 race in April 2012. Pro-democracy activists pledged “days of rage” to coincide with the race in order to highlight continuing human rights abuses in the kingdom (Turner). As Al Khawaja’s health deteriorated, the Bahraini government resisted calls for his release (Article 19) as well as a request from the Danish government that Al Khawaja be extradited there on “humanitarian grounds” for hospital treatment (Fisk). This article does not explore the geo-politics of the Bahraini struggle or the possible reasons why the international community—in contrast to Syria and Egypt—has been largely silent and reluctant to debate the issues. Important as they are, those remain questions for Middle Eastern specialists to address. In this article I am concerned with the overlapping and interpenetration of two ecologies. 
The first ecology is the ethical framing of a prison hunger strike as a corporeal-environmental act of (self) destruction intended to achieve political ends. The second ecology is the operation of global media where international inaction inadvertently foregrounds the political struggles that larger events and discourses surrounding Egypt, Libya, and Syria overshadow. What connects these two ecologies is the body of the hunger striker, turned into a spectacle and mediated via a politics of affect that invites a global public to empathise and so enter into his suffering. The connection between the two lies in the emaciated body of the hunger striker. An Ecological Humanities Approach This exploration of two ecologies draws on the ecological humanities and its central premise of connectivity. The ecological humanities critique the traditional binaries in Western thinking between nature and culture; the political and social; them and us; the collective and the individual; mind, body and emotion (Rose & Robin, Rieber). Such binaries create artificial hierarchies, divisions, and conflicts that ultimately impede the ability to respond to crises. Crises are major changes that are “out of control” driven—primarily but not exclusively—by social, political, and cultural forces that unleash “runaway systems with their own dynamics” (Rose & Robin 1). The ecological humanities response to crises is premised on the recognition of the all-inclusive connectivity of organisms, systems, and environments and an ethical commitment to action from within this entanglement. A founding premise of connectivity, first articulated by anthropologist and philosopher Gregory Bateson, is that the “unit of survival is not the individual or the species, but the organism-and-its-environment” (Rose & Robin 2). This highlights a dialectic in which an organism is shaped by and shapes the context in which it finds itself. 
Or, as Harries-Jones puts it, relations are recursive as “events continually enter into, become entangled with, and then re-enter the universe they describe” (3). This ensures constantly evolving ecosystems but it also means any organism that “deteriorates its environment commits suicide” (Rose & Robin 2) with implications for the others in the eco-system. Bateson’s central premise is that organisms are simultaneously independent, as separate beings, but also interdependent. Interactions are not seen purely as exchanges but as dynamic, dialectical, dialogical, and mutually constitutive. Thus, it is presumed that the destruction or protection of others has consequences for oneself. Another dimension of interactions is multi-modality, which implies that human communication cannot be reduced to a single mode such as words, actions, or images but needs to be understood in the complexity of inter-relations between these (see Rieber 16). Nor can dissemination be reduced to a single technological platform whether this is print, television, Internet, or other media (see Cottle). The final point is that interactions are “biologically grounded but not determined” in that the “cognitive, emotional and volitional processes” underpinning face-to-face or mediated communication are “essentially indivisible” and any attempt to separate them by privileging emotion at the expense of thought, or vice versa, is likely to be unhealthy (Rieber 17). This is most graphically demonstrated in a politically-motivated hunger strike where emotion and volition over-rides the survivalist instinct. The Ecology of a Prison Hunger Strike The radical nature of a hunger strike inevitably gives rise to medico-ethical debates. 
Hunger strikes entail the voluntary refusal of sustenance by an individual and, when prolonged, such deprivation sets off a chain reaction as the less important components in the internal body systems shut down to protect the brain until even that can no longer be protected (see Basoglu et al). This extreme form of protest—essentially an act of self-destruction—raises ethical issues over whether or not doctors or the state should intervene to save a life for humanitarian or political reasons. In 1975 and 1991, the World Medical Association (WMA) sought to negotiate this by distinguishing between, on the one hand, the mentally/psychological impaired individual who chooses a “voluntary fast” and, on the other hand, the hunger striker who chooses a form of protest action to secure an explicit political goal fully aware of fatal consequences of prolonged action (see Annas, Reyes). This binary enables the WMA to label the action of the mentally impaired suicide while claiming that to do so for political protesters would be a “misconception” because the “striker … does not want to die” but to “live better” by obtaining certain political goals for himself, his group or his country. “If necessary he is willing to sacrifice his life for his case, but the aim is certainly not suicide” (Reyes 11). In practice, the boundaries between suicide and political protest are likely to be much more blurred than this but the medico-ethical binary is important because it informs discourses about what form of intervention is ethically appropriate. In the case of the “suicidal” the WMA legitimises force-feeding by a doctor as a life-saving act. In the case of the political protestor, it is de-legitimised in discourses of an infringement of freedom of expression and an act of torture because of the pain involved (see Annas, Reyes). 
Philosopher Michel Foucault argued that prison is a key site where the embodied subject is explicitly governed and where the exercising of state power in the act of incarceration means the body of the imprisoned no longer solely belongs to the individual. It is also where the “body’s range of significations” is curtailed, “shaped and invested by the very forces that detain and imprison it” (Pugliese 2). Thus, prison creates the circumstances in which the incarcerated is denied the “usual forms of protest and judicial safeguards” available outside its confines. The consequence is that when presented with conditions that violate core beliefs he/she may view acts of self-destruction—such as hunger strikes or lip sewing—as one of the few “means of protesting against, or demanding attention” or achieving political ends still available to them (Reyes 11; Pugliese). The hunger strike implicates the state, which, in the act of imprisoning, has assumed a measure of power and responsibility for the body of the individual. If a protest action is labelled suicidal by medical professionals—for instance at Guantanamo—then the force-feeding of prisoners can be legitimised within the WMA guidelines (Annas). There is considerable political temptation to do so particularly when the hunger striker has become an icon of resistance to the state, the knowledge of his/her action has transcended prison confines, and the alienating conditions that prompted the action are being widely debated in the media. This poses a two-fold danger for the state. On the one hand, there is the possibility that the slow emaciation and death while imprisoned, if covered by the media, may become a spectacle able to mobilise further resistance that can destabilise the polity. On the other hand, there is the fear that in the act of dying, and the spectacle surrounding death, the hunger striker would have secured the public attention to the very cause they are championing. 
Central to this is whether or not the act of self-destruction is mediated. It is far from inevitable that the media will cover a hunger strike or do so in ways that enable the hunger striker’s appeal to the emotions of others. However, when it does, the international scrutiny and condemnation that follows may undermine the credibility of the state—as happened with the death of the IRA member Bobby Sands in Northern Ireland (Russell). The Media Ecology and the Bahrain Arab Spring The IRA’s use of an “ancient tactic ... to make a blunt appeal to sympathy and emotion” in the form of the Sands hunger strike was seen as “spectacularly successful in gaining worldwide publicity” (Willis 1). Media ecology has evolved dramatically since then. Over the past 20 years communication flows between the local and the global, traditional media formations (broadcast and print), and new communication media (Internet and mobile phones) have escalated. The interactions of the traditional media have historically shaped and been shaped by more “top-down” “politics of representation” in which the primary relationship is between journalists and competing public relations professionals servicing rival politicians, business or NGOs desire for media attention and framing issues in a way that is favourable or sympathetic to their cause. However, rapidly evolving new media platforms offer bottom up, user-generated content, a politics of connectivity, and mobilization of ordinary people (Cottle 31). However, this distinction has increasingly been seen as offering too rigid a binary to capture the complexity of the interactions between traditional and new media as well as the events they capture. The evolution of both meant their content increasingly overlaps and interpenetrates (see Bennett). 
New media technologies “add new communicative ingredients into the media ecology mix” (Cottle 31) as well as new forms of political protests and new ways of mobilizing dispersed networks of activists (Juris). Despite their pervasiveness, new media technologies are “unlikely to displace the necessity for coverage in mainstream media”, a feature noted by activist groups who have evolved their own “carnivalesque” tactics (Cottle 32) capable of creating the spectacle that meets television demands for action-driven visuals (Juris). New media provide these groups with the tools to publicise their actions pre- and post-event, thereby increasing the possibility that mainstream media might cover their protests. However, there is no guarantee that traditional and new media content will overlap and interpenetrate, as initial coverage of the Bahrain Arab Spring highlights. Peaceful protests began in February 2011 but were violently quelled, often by Saudi, Qatari and UAE militia on behalf of the Bahraini government. Mass arrests were made, including those of children and of medical personnel who had treated those wounded during the suppression of the protests. What followed was a long series of detentions without trial, military court rulings on civilians, and frequent use of torture in prisons (Human Rights Watch 2012). By the end of 2011, the country had the highest number of political prisoners per capita of any country in the world (Amiri) but received little coverage in the US. The Libyan uprising was afforded the most broadcast time (700 minutes), followed by Egypt (500 minutes), Syria (143), and Bahrain (34) (Lobe). Year-end round-ups of the Arab Spring on the American Broadcasting Corporation ignored Bahrain altogether or mentioned it once in a 21-page feature (Cavell). 
This was not due to a lack of information, because a steady stream has flowed from mobile phones, Internet sites, and Twitter as NGOs—Bahraini and international—chronicled the abuses in images and first-hand accounts. However, little of this coverage was picked up by the US-dominated global media. It was in this context that the Bahraini-Danish human rights activist Abdulhadi Al Khawaja launched his “freedom or death” hunger strike in protest against the violent suppression of peaceful demonstrations, the treatment of prisoners, and the conduct of the trials. Even this radical action failed to persuade international editors to cover the Bahrain Arab Spring or Al Khawaja’s deteriorating health, despite being “one of the most important stories to emerge over the Arab Spring” (Nallu). This began to change in April 2012 as a number of things converged. Formula 1 pressed ahead with the Bahrain Grand Prix, and pro-democracy activists pledged “days of rage” over human rights abuses. As these were violently suppressed, editors on global news desks increasingly questioned the government and Formula 1 “spin” that all was well in the kingdom (see BBC; Turner). Claims by the drivers—many of whom were sponsored by the Bahraini government—that this was a sports event, not a political one, were met with derision, and journalists more familiar with interviewing superstars were diverted into covering protests because their political counterparts had been denied entry to the country (Fisk). This combination of media events and responses created the attention, interest, and space in which Al Khawaja’s deteriorating condition could become a media spectacle. The Mediated Spectacle of Al Khawaja’s Hunger Strike Journalists who had previously struggled to interest editors in Bahrain and Al Khawaja’s plight found that in the weeks leading up to the Grand Prix and since, “his condition rapidly deteriorated” and there were “daily updates with stories from CNN to the Hindustan Times” (Nallu). 
Much of this mainstream news was derived from interviews and tweets from Al Khawaja’s family after each visit or phone call. What emerged was an unprecedented composite—a diary of witnesses to a hunger strike interspersed with the family’s struggles with the authorities to get access to him and their almost tangible fear that the Bahraini government would not relent and he would die. As these fears intensified, 48 human rights NGOs called for his release from prison (Article 19) and the Danish government formally requested his extradition for hospital treatment on “humanitarian grounds”. Both were rejected. As if to provide evidence of Al Khawaja’s tenuous hold on life, his family released an image of his emaciated body onto Twitter. This graphic depiction of the corporeal-environmental act of (self) destruction was re-tweeted and posted on countless NGO and news Websites (see Al-Jazeera). It was also juxtaposed against images of multi-million dollar cars circling a race-track, funded by similarly large advertising deals and watched by millions of people around the world on satellite channels. Spectator sport had become a grotesque parody of one man’s struggle to speak of what was going on in Bahrain. In an attempt to silence the criticism, the Bahraini government imposed a de facto news blackout, denying all access to Al Khawaja in hospital, where he had been sent after collapsing. The family’s tweets while he was held incommunicado speak of their raw pain, their desperation to find out if he was still alive, and their grief. They also provided a new source of information, and the refrain “where is alkhawaja,” reverberated on Twitter and in global news outlets (see for instance Der Spiegel, Al-Jazeera). In the days immediately after the race, the Danish prime minister called for the release of Al Khawaja, saying he was in a “very critical condition” (Guardian), as did the UN’s Ban Ki-moon (UN News and Media). 
The silencing of Al Khawaja had become a discourse of callousness and as global media pressure built Bahraini ministers felt compelled to challenge this on non-Arabic media, claiming Al Khawaja was “eating” and “well”. The Bahraini Prime Minister gave one of his first interviews to the Western media in years in which he denied “AlKhawaja’s health is ‘as bad’ as you say. According to the doctors attending to him on a daily basis, he takes liquids” (Der Spiegel Online). Then, after six days of silence, the family was allowed to visit. They tweeted that while incommunicado he had been restrained and force-fed against his will (Almousawi), a statement almost immediately denied by the military hospital (Lebanon Now). The discourses of silence and callousness were replaced with discourses of “torture” through force-feeding. A month later Al Khawaja’s wife announced he was ending his hunger strike because he was being force-fed by two doctors at the prison, family and friends had urged him to eat again, and he felt the strike had achieved its goal of drawing the world’s attention to Bahrain government’s response to pro-democracy protests (Ahlul Bayt News Agency). Conclusion This article has sought to explore two ecologies. The first is of medico-ethical discourses which construct a prison hunger strike as a corporeal-environmental act of (self) destruction to achieve particular political ends. The second is of shifting engagement within media ecology and the struggle to facilitate interpenetration of content and discourses between mainstream news formations and new media flows of information. I have argued that what connects the two is the body of the hunger striker turned into a spectacle, mediated via a politics of affect which invites empathy and anger to mobilise behind the cause of the hunger striker. The body of the hunger striker is thereby (re)produced as a feature of the twin ecologies of the media environment and the self-environment relationship. 
References Ahlul Bayt News Agency. “Bahrain: Abdulhadi Alkhawaja’s Statement about Ending his Hunger Strike.” (29 May 2012). 1 June 2012 ‹http://abna.ir/data.asp?lang=3&id=318439›. Al-Akhbar. “Family Concerned Al-Khawaja May Be Being Force Fed.” Al-Akhbar English. (27 April 2012). 1 June 2012 ‹http://english.al-akhbar.com/content/family-concerned-al-khawaja-may-be-being-force-fed›. Al-Jazeera. “Shouting in the Dark.” Al-Jazeera English. (3 April 2012). 1 June 2012 ‹http://www.aljazeera.com/programmes/2011/08/201184144547798162.html›. ———. “Bahrain Says Hunger Striker in Good Health.” Al-Jazeera English. (27 April 2012). 1 June 2012 ‹http://www.aljazeera.com/news/middleeast/2012/04/2012425182261808.html›. Almousawi, Khadija. (@Tublani 2010). “Sad cus I had to listen to dear Hadi telling me how he was drugged, restrained, force fed and kept incommunicado for five days.” (30 April 2012). Tweet. 1 June 2012. Amiri, Ranni. “Bahrain by the Numbers.” CounterPunch. (December 30-31). 1 June 2012 ‹http://www.counterpunch.org/2011/12/30/bahrain-by-the-numbers›. Annas, George. “Prison Hunger Strikes—Why the Motive Matters.” Hastings Centre Report 12.6 (1982): 21-22. ———. “Hunger Strikes at Guantanamo—Medical Ethics and Human Rights in a ‘Legal Black Hole.’” The New England Journal of Medicine 355 (2006): 1377-92. Article 19. “Bahrain: Forty-Eight Rights Groups Call on King to Free Abdulhadi Al-Khawaja, Whose Life is at Risk in Prison.” Article 19. (17 March 2012). 1 June 2012 ‹http://www.article19.org/resources.php/resource/2982/en/bahrain:-forty-eight-rights-groups-call-on-king-to-free-abdulhadi-al-khawaja,-whose-life-is-at-risk-in-prison›. Arsenault, Chris. “Starving for a Cause.” Al-Jazeera English. (11 April 2012). 1 June 2012 ‹http://www.aljazeera.com/indepth/features/2012/04/2012410123154923754.html›. British Broadcasting Corporation. “Bahrain Activist Khawaja Ends Hunger Strike.” (29 May 2012). 1 June 2012 ‹http://www.bbc.co.uk/news/world-18239695›. 
Basoglu, Mustafa, Yesim Yetimalar, Nevin Gurgor, Secim Buyukcatalbas, and Yaprak Secil. “Neurological Complications of Prolonged Hunger Strike.” European Journal of Neurology 13 (2006): 1089-97. Bateson, Gregory. Steps to an Ecology of Mind. London: Granada Publishing, 1973 [1972]. Beresford, David. Ten Men Dead. New York: Atlantic Press, 1987. Bennett, W. Lance. News: The Politics of Illusion. New York: Longman, 2003. Blight, Gary, Sheila Pulham, and Paul Torpey. “Arab Spring: An Interactive Timeline of Middle East Protests.” Guardian. (5 January 2012). 1 June 2012 ‹http://www.guardian.co.uk/world/interactive/2011/mar/22/middle-east-protest-interactive-timeline›. Cavell, Colin. “Bahrain: How the US Mainstream Media Turn a Blind Eye to Washington’s Despotic Arab Ally.” Global Researcher. (8 April 2012). 1 June 2012 ‹http://www.globalresearch.ca/index.php?context=va&aid=30176›. Cockburn, Patrick. “Fears Grow for Bahraini Activist on Hunger Strike.” The Independent. (28 April 2012). 1 June 2012 ‹http://www.independent.co.uk/news/world/middle-east/fears-grow-for-bahraini-activist-on-hunger-strike-7685168.html›. Cottle, Simon, and Libby Lester, eds. Transnational Protests and the Media. New York: Peter Lang, 2011. Der Spiegel Online. “Interview with Bahrain’s Prime Minister: The Opposition are ‘Terrorizing the Rest of the Country.’” (27 April 2012). 1 June 2012 ‹http://www.spiegel.de/international/world/0,1518,830045,00.html›. Fairclough, Norman. Discourse and Social Change. Cambridge: Cambridge University Press, 1992. Fisher, Marc. “Arab Spring Yields Different Outcomes in Bahrain, Egypt and Libya.” Washington Post and Foreign Policy. (21 December 2011). 1 June 2012 ‹http://www.washingtonpost.com/world/arab-spring-yields-different-outcomes-in-bahrain-egypt-and-libya/2011/12/15/gIQAY6h57O_story.html›. Fisk, Robert. “Bahrain Grand Prix: This is Politics, Not Sport. If the Drivers Can’t See This They are the Pits.” Belfast Telegraph. (21 April 2012). 
1 June 2012 ‹http://www.belfasttelegraph.co.uk/opinion/columnists/robert-fisk/bahrain-grand-prix-this-is-politics-not-sport-if-drivers-cant-see-that-they-are-the-pits-16148159.html›. Foucault, Michel. Discipline and Punish. Trans. Alan Sheridan. Harmondsworth: Penguin, 1982. Front Line Defenders. “Bahrain: Authorities Should Provide a ‘Proof of Life’ to Confirm that Abdulhadi Al-Khawaja on Day 78 of Hunger Strike is Still Alive.” (2012). 1 June 2012 ‹http://www.frontlinedefenders.org/node/18153›. Guardian. “Denmark PM to Bahrain: Release Jailed Activist.” (11 April 2012). 1 June 2012 ‹http://www.guardian.co.uk/world/feedarticle/10189057›. Hammond, Andrew. “Bahrain ‘Day of Rage’ Planned for Formula One Grand Prix.” Huffington Post. (18 April 2012). 1 June 2012 ‹http://www.huffingtonpost.com/2012/04/18/bahrain-day-of-rage_n_1433861.html›. Hammond, Andrew, and Warda Al-Jawahiry. “Game of Brinkmanship in Bahrain over Hunger Strike.” (19 April 2012). 1 June 2012 ‹http://www.trust.org/alertnet/news/game-of-brinkmanship-in-bahrain-over-hunger-strike›. Harries-Jones, Peter. A Recursive Vision: Ecological Understanding and Gregory Bateson. Toronto: University of Toronto Press, 1995. Human Rights First. “Human Rights First Awards Prestigious Medal of Liberty to Bahrain Centre for Human Rights.” (26 April 2012). 1 June 2012 ‹http://www.humanrightsfirst.org/2012/04/26/human-rights-first-awards›. Juris, Jeffrey. Networking Futures. Durham, NC: Duke University Press, 2008. Kerr, Simeon. “Bahrain’s Forgotten Uprising Has Not Gone Away.” Financial Times. (20 April 2012). 1 June 2012 ‹http://www.ft.com/cms/s/0/1687bcc2-8af2-11e1-912d-00144feab49a.html#axzz1sxIjnhLi›. Lebanon Now. “Bahrain Hunger Striker Not Force-Fed, Hospital Says.” (29 April 2012). 1 June 2012 ‹http://www.nowlebanon.com/NewsArticleDetails.aspx?ID=391037›. Lobe, Jim. “‘Arab Spring’ Dominated TV Foreign News in 2011.” Nation of Change. (3 January 2012). 
1 June 2012 ‹http://www.nationofchange.org/arab-spring-dominated-tv-foreign-news-2011-1325603480›. Nallu, Preethi. “How the Media Failed Abdulhadi.” Jadaliyya. (2012). 1 June 2012 ‹http://www.jadaliyya.com/pages/index/5181/how-the-media-failed-abdulhadi›. Plunkett, John. “The Voice Pips Britain's Got Talent as Ratings War Takes New Twist.” Guardian. (23 April 2012). 1 June 2012 ‹http://www.guardian.co.uk/media/2012/apr/23/the-voice-britains-got-talent›. Pugliese, Joseph. “Penal Asylum: Refugees, Ethics, Hospitality.” Borderlands 1.1 (2002). 1 June 2012 ‹http://www.borderlands.net.au/vol1no1_2002/pugliese.html›. Reuters. “Protests over Bahrain F1.” (19 April 2012). 1 June 2012 ‹http://uk.reuters.com/video/2012/04/19/protests-over-bahrain-f?videoId=233581507›. Reyes, Hernan. “Medical and Ethical Aspects of Hunger Strikes in Custody and the Issue of Torture.” Research in Legal Medicine 19.1 (1998). 1 June 2012 ‹http://www.icrc.org/eng/resources/documents/article/other/health-article-010198.htm›. Rieber, Robert, ed. The Individual, Communication and Society: Essays in Memory of Gregory Bateson. Cambridge: Cambridge University Press, 1989. Roberts, David. “Blame Iran: A Dangerous Response to the Bahraini Uprising.” (20 August 2011). 1 June 2012 ‹http://www.guardian.co.uk/commentisfree/2011/aug/20/bahraini-uprising-iran›. Rose, Deborah Bird, and Libby Robin. “The Ecological Humanities in Action: An Invitation.” Australian Humanities Review 31-32 (April 2004). 1 June 2012 ‹http://www.australianhumanitiesreview.org/archive/Issue-April-2004/rose.html›. Russell, Sharman. Hunger: An Unnatural History. New York: Basic Books, 2005. Turner, Maran. “Bahrain’s Formula 1 is an Insult to Country’s Democratic Reformers.” CNN. (20 April 2012). 1 June 2012 ‹http://articles.cnn.com/2012-04-20/opinion/opinion_bahrain-f1-hunger-strike_1_abdulhadi-al-khawaja-bahraini-government-bahrain-s-formula?_s=PM:OPINION›. United Nations News & Media. 
“UN Chief Calls for Respect of Human Rights of Bahraini People.” (24 April 2012). 1 June 2012 ‹http://www.unmultimedia.org/radio/english/2012/04/un-chief-calls-respect-of-human-rights-of-bahraini-people›. Willis, David. “IRA Capitalises on Hunger Strike to Gain Worldwide Attention.” Christian Science Monitor. (29 April 1981): 1.
38
Quinan, C. L., and Hannah Pezzack. "A Biometric Logic of Revelation: Zach Blas’s SANCTUM (2018)." M/C Journal 23, no. 4 (August 12, 2020). http://dx.doi.org/10.5204/mcj.1664.
Full text
Abstract:
Ubiquitous in airports, border checkpoints, and other securitised spaces throughout the world, full-body imaging scanners claim to read bodies in order to identify if they pose security threats. Millimetre-wave body imaging machines—the most common type of body scanner—display to the operating security agent a screen with a generic body outline. If an anomaly is found or if an individual does not align with the machine’s understanding of an “average” body, a small box is highlighted and placed around the “problem” area, prompting further inspection in the form of pat-downs or questioning. In this complex security regime governed by such biometric, body-based technologies, it could be argued that nonalignment with bodily normativity as well as an attendant failure to reveal oneself—to become “transparent” (Hall 295)—marks a body as dangerous. As these algorithmic technologies become more pervasive, so too does the imperative to critically examine their purported neutrality and operative logic of revelation and readability. Biometric technologies are marketed as excavators of truth, with their optic potency claiming to demask masquerading bodies. Failure and bias are, however, an inescapable aspect of such technologies that work with narrow parameters of human morphology. Indeed, surveillance technologies have been taken to task for their inherent racial and gender biases (Browne; Pugliese). Facial recognition has, for example, been critiqued for its inability to read darker skin tones (Buolamwini and Gebru), while body scanners have been shown to target transgender bodies (Keyes; Magnet and Rodgers; Quinan). 
Critical security studies scholar Shoshana Magnet argues that error is endemic to the technological functioning of biometrics, particularly since they operate according to the faulty notion that bodies are “stable” and unchanging repositories of information that can be reified into code (Magnet 2). Although body scanners are presented as being able to reliably expose concealed weapons, they are riddled with incompetencies that misidentify and over-select certain demographics as suspect. Full-body scanners have, for example, caused considerable difficulties for transgender travellers, breast cancer patients, and people who use prosthetics, such as artificial limbs, colonoscopy bags, binders, or prosthetic genitalia (Clarkson; Quinan; Spalding). While it is not in the scope of this article to detail the workings of body imaging technologies and their inconsistencies, a growing body of scholarship has substantiated the claim that these machines unfairly impact those identifying as transgender and non-binary (see, e.g., Beauchamp; Currah and Mulqueen; Magnet and Rodgers; Sjoberg). Moreover, they are constructed according to a logic of binary gender: before each person enters the scanner, transportation security officers must make a quick assessment of their gender/sex by pressing either a blue (corresponding to “male”) or pink (corresponding to “female”) button. In this sense, biometric, computerised security systems control and monitor the boundaries between male and female. The ability to “reveal” oneself is henceforth predicated on having a body free of “abnormalities” and fitting neatly into one of the two sex categorisations that the machine demands. Transgender and gender-nonconforming individuals, particularly those who do not have a binary gender presentation or whose presentation does not correspond to the sex marker in their documentation, also face difficulties if the machine flags anomalies (Quinan and Bresser). 
Drawing on a Foucauldian analysis of power as productive, Toby Beauchamp similarly illustrates how surveillance technologies not only identify but also create and reshape the figure of the dangerous subject in relation to normative configurations of gender, race, and able-bodiedness. By mobilising narratives of concealment and disguise, heightened security measures frame gender nonconformity as dangerous (Beauchamp, Going Stealth). Although national and supranational authorities market biometric scanning technologies as scientifically neutral and exact methods of identification and verification and as an infallible solution to security risks, such tools of surveillance are clearly shaped by preconceptions and prejudgements about race, gender, and bodily normativity. Not only are they encoded with “prototypical whiteness” (Browne), but they are also built on “grossly stereotypical” configurations of gender (Clarkson).

Within this increasingly securitised landscape, creative forms of artistic resistance can offer up a means of subverting discriminatory policing and surveillance practices by posing alternate visualisations that reveal and challenge their supposed objectivity. In his 2018 audio-video artwork installation entitled SANCTUM, UK-based American artist Zach Blas delves into how biometric technologies, like those described above, both reveal and (re)shape ontology by utilising the affectual resonance of sexual submission.
Evoking the contradictory notions of oppression and pleasure, Blas describes SANCTUM as “a mystical environment that perverts sex dungeons with the apparatuses and procedures of airport body scans, biometric analysis, and predictive policing” (see full description at https://zachblas.info/works/sanctum/).

Depicting generic mannequins that stand in for the digitalised rendering of the human forms that pass through body scanners, the installation transports the scanners out of the airport and into a queer environment that collapses sex, security, and weaponry; an environment that is “at once a prison-house of algorithmic capture, a sex dungeon with no genitals, a weapons factory, and a temple to security.” This artistic reframing gestures towards full-body scanning technology’s germination in the military, prisons, and other disciplinary systems, highlighting how its development and use has originated from punitive—rather than protective—contexts.

In what follows, we adopt a methodological approach that applies visual analysis and close reading to scrutinise a selection of scenes from SANCTUM that underscore the sadomasochistic power inherent in surveillance technologies. Analysing visual and aural elements of the artistic intervention allows us to complicate the relationship between transparency and recognition and to problematise the dynamic of mandatory complicity and revelation that body scanners warrant. In contrast to a discourse of visibility that characterises algorithmically driven surveillance technology, Blas suggests opacity as a resistance strategy to biometrics’ standardisation of identity. Taking an approach informed by critical security studies and queer theory, we also argue that SANCTUM highlights the violence inherent to the practice of reducing the body to a flat, inert surface that purports to align with some sort of “core” identity, a notion that contradicts feminist and queer approaches to identity and corporeality as fluid and changing.
In close reading this artistic installation alongside emerging scholarship on the discriminatory effects of biometric technology, this article aims to highlight the potential of art to queer the supposed objectivity and neutrality of biometric surveillance and to critically challenge normative logics of revelation and readability.

Corporeal Fetishism and Body Horror

Throughout both his artistic practice and scholarly work, Blas has been critical of the above narrative of biometrics as objective extractors of information. Rather than looking to dominant forms of representation as a means for recognition and social change, Blas’s work asks that we strive for creative techniques that precisely queer biometric and legal systems in order to make oneself unaccounted for. For him, “transparency, visibility, and representation to the state should be used tactically, they are never the end goal for a transformative politics but are, ultimately, a trap” (Blas and Gaboury 158). While we would simultaneously argue that invisibility is itself a privilege that is unevenly distributed, his creative work attempts to refuse a politics of visibility and to embrace an “informatic opacity” that is attuned to differences in bodies and identities (Blas).

In particular, Blas’s artistic interventions titled Facial Weaponization Suite (2011-14) and Face Cages (2013-16) protest against biometric recognition and the inequalities that these technologies propagate by making masks and wearable metal objects that cannot be detected as human faces. This artistic-activist project contests biometric facial recognition and its attendant inequalities by, as detailed on the artist’s website, making ‘collective masks’ in workshops that are modelled from the aggregated facial data of participants, resulting in amorphous masks that cannot be detected as human faces by biometric facial recognition technologies.
The masks are used for public interventions and performances.

One mask explores blackness and the racist implications that undergird biometric technologies’ inability to detect dark skin. Meanwhile, another mask, which he calls the “Fag Face Mask”, points to the heteronormative underpinnings of facial recognition. Created from the aggregated facial data of queer men, this amorphous pink mask implicitly references—and contests—scientific studies that have attempted to identify sexual orientation through rapid facial recognition techniques.

Building on this body of creative work that has advocated for opacity as a tool of social and political transformation, SANCTUM resists the revelatory impulses of biometric technology by turning to the use and abuse of full-body imaging. The installation opens with a shot of a large, dark industrial space. At the far end of a red, spotlighted corridor, a black mask flickers on a screen. A shimmering, oscillating sound reverberates—the opening bars of a techno track—that breaks down in rhythm while the mask evaporates into a cloud of smoke. The camera swivels, and a white figure—the generic mannequin of the body scanner screen—is pummelled by invisible forces as if in a wind tunnel. These ghostly silhouettes appear and reappear in different positions, with some being whipped and others stretched and penetrated by a steel anal hook. Rather than conjuring a traditional horror trope of the body’s terrifying, bloody interior, SANCTUM evokes a new kind of feared and fetishised trope that is endemic to the current era of surveillance capitalism: the abstracted body, standardised and datafied, created through the supposedly objective and efficient gaze of AI-driven machinery. Resting on the floor in front of the ominous animated mask are neon fragments arranged in an occultist formation—hands or half a face.
By breaking the body down into component parts—“from retina to fingerprints”—biometric technologies “purport to make individual bodies endlessly replicable, segmentable and transmissible in the transnational spaces of global capital” (Magnet 8). The notion that bodies can be seamlessly turned into blueprints extracted from biological and cultural contexts has been described by Donna Haraway as “corporeal fetishism” (Haraway, Modest). In the context of SANCTUM, Blas illustrates the dangers of mistaking a model for a “concrete entity” (Haraway, “Situated” 147). Indeed, the digital cartography of the generic mannequin becomes no longer a mode of representation but instead a technoscientific truth.

Several scenes in SANCTUM also illustrate a process whereby substances are extracted from the mannequins and used as tools to enact violence. In one such instance, a silver webbing is generated over a kneeling figure. Upon closer inspection, this geometric structure, which is reminiscent of Blas’s earlier Face Cages project, is a replication of the triangulated patterns produced by facial recognition software in its mapping of distance between eyes, nose, and mouth. In the next scene, this “map” breaks apart into singular shapes that float and transform into a metallic whip, before eventually reconstituting themselves as a penetrative douche hose that causes the mannequin to spasm and vomit a pixelated liquid. Its secretions levitate and become the webbing, and then the sequence begins anew.

In another scene, a mannequin is held upside-down and force-fed a bubbling liquid that is being pumped through tubes from its arms, legs, and stomach. These depictions visualise Magnet’s argument that biometric renderings of bodies are understood not to be “tropic” or “historically specific” but are instead presented as “plumbing individual depths in order to extract core identity” (5).
In this sense, this visual representation calls to mind biometrics’ reification of body and identity, obfuscating what Haraway would describe as the “situatedness of knowledge”. Blas’s work, however, forces a critique of these very systems, as the materials extracted from the bodies of the mannequins in SANCTUM allude to how biometric cartographies drawn from travellers are utilised to justify detainment. These security technologies employ what Magnet has referred to as “surveillant scopophilia,” that is, new ways and forms of looking at the human body “disassembled into component parts while simultaneously working to assuage individual anxieties about safety and security through the promise of surveillance” (17). The transparent body—the body that can submit and reveal itself—is ironically represented by the distinctly genderless translucent mannequins. Although the generic mannequins are seemingly blank slates, the installation simultaneously forces a conversation about the ways in which biometrics draw upon and perpetuate assumptions about gender, race, and sexuality.

Biometric Subjugation

On her 2016 critically acclaimed album HOPELESSNESS, openly transgender singer, composer, and visual artist Anohni performs a deviant subjectivity that highlights the above dynamics that mark the contemporary surveillance discourse. To an imagined “daddy” technocrat, she sings:

Watch me… I know you love me
'Cause you're always watching me
'Case I'm involved in evil
'Case I'm involved in terrorism
'Case I'm involved in child molesters

Evoking a queer sexual frisson, Anohni describes how, as a trans woman, she is hyper-visible to state institutions. She narrates a voyeuristic relation where trans bodies are policed as threats to public safety rather than protected from systemic discrimination.
Through the seemingly benevolent “daddy” character and the play on ‘cause (i.e., because) and ‘case (i.e., in case), she highlights how gender-nonconforming individuals are predictively surveilled and assumed to already be guilty. Reflecting on daddy-boy sexual paradigms, Jack Halberstam reads the “sideways” relations of queer practices as an enactment of “rupture as substitution” to create a new project that “holds on to vestiges of the old but distorts” (226). Upending power and control, queer art has the capacity to both reveal and undermine hegemonic structures while simultaneously allowing for the distortion of the old to create something new.

Employing the sublimatory relations of bondage, discipline, sadism, and masochism (BDSM), Blas’s queer installation similarly creates a sideways representation that re-orientates the logic of the biometric scanners, thereby unveiling the always already sexualised relations of scrutiny and interrogation as well as the submissive complicity they demand. Replacing the airport environment with a dark and foreboding mise-en-scène allows Blas to focus on capture rather than mobility, highlighting the ways in which border checkpoints (including those instantiated by the airport) encourage free travel for some while foreclosing movement for others. Building on Sara Ahmed’s “phenomenology of being stopped”, Magnet considers what happens when we turn our gaze to those “who fail to pass the checkpoint” (107). In SANCTUM, the same actions are played out again and again on spectral beings who are trapped in various states: they shudder in cages, are chained to the floor, or are projected against the parameters of mounted screens. One ghostly figure, for instance, lies pinned down by metallic grappling hooks, arms raised above the head in a recognisable stance of surrender, conjuring up the now-familiar image of a traveller standing in the cylindrical scanner machine, waiting to be screened.
In portraying this extended moment of immobility, Blas lays bare the deep contradictions in the rhetoric of “freedom of movement” that underlies such spaces.

On a global level, media reporting, scientific studies, and policy documents proclaim that biometrics are essential to ensuring personal safety and national security. Within the public imagination, these technologies become seductive because of their marked ability to identify terrorist attackers—to reveal threatening bodies—thereby appealing to the anxious citizen’s fear of the disguised suicide bomber. Yet for marginalised identities prefigured as criminal or deceptive—including transgender and black and brown bodies—the inability to perform such acts of revelation via submission to screening can result in humiliation and further discrimination, public shaming, and even torturous inquiry – acts that are played out in SANCTUM.

Masked Genitals

Feminist surveillance studies scholar Rachel Hall has referred to the impetus for revelation in the post-9/11 era as a desire for a universal “aesthetics of transparency” in which the world and the body are turned inside-out so that there are no longer “secrets or interiors … in which terrorists or terrorist threats might find refuge” (127). Hall takes up the case study of Umar Farouk Abdulmutallab (infamously known as “the Underwear Bomber”) who attempted to detonate plastic explosives hidden in his underwear while on board a flight from Amsterdam to Detroit on 25 December 2009. Hall argues that this event signified a coalescence of fears surrounding bodies of colour, genitalia, and terrorism. News reports following the incident stated that Abdulmutallab tucked his penis to make room for the explosive, thereby “queer[ing] the aspiring terrorist by indirectly referencing his willingness … to make room for a substitute phallus” (Hall 289).
Overtly manifested in the Underwear Bomber incident is also a desire to voyeuristically expose a hidden, threatening interiority, which is inherently implicated in anxieties surrounding gender deviance. Beauchamp elaborates on how gender deviance and transgression have coalesced with terrorism, which was exemplified in the wake of the 9/11 attacks when the United States Department of Homeland Security issued a memo warning that male terrorists “may dress as females in order to discourage scrutiny” (“Artful” 359). Although this advisory did not explicitly reference transgender populations, it linked “deviant” gender presentation—to which we could also add Abdulmutallab’s tucking of his penis—with threats to national security (Beauchamp, Going Stealth). This also calls to mind a broader discussion of the ways in which genitalia feature in the screening process. Prior to the introduction of millimetre-wave body scanning technology, the most common form of scanner used was the backscatter imaging machine, which displayed “naked” body images of each passenger to the security agent. Due to privacy concerns, these machines were replaced by the scanners currently in place, which use a generic outline of a passenger (exemplified in SANCTUM) to detect possible threats.

It is here worth returning to Blas’s installation, as it also implicitly critiques the security protocols that attempt to reveal genitalia as both threatening and as evidence of an inner truth about a body. At one moment in the installation a bayonet-like object pierces the blank crotch of the mannequin, shattering it into holographic fragments. The apparent genderlessness of the mannequins is contrasted with these graphic sexual acts. The penetrating metallic instrument that breaks into the loin of the mannequin, combined with the camera shot that slowly zooms in on this action, draws attention to a surveillant fascination with genitalia and revelation. As Nicholas L.
Clarkson documents in his analysis of airport security protocols governing prostheses, including limbs and packies (silicone penis prostheses), genitals are a central component of the screening process. While it is stipulated that physical searches should not require travellers to remove items of clothing, such as underwear, or to expose their genitals to staff for inspection, prosthetics are routinely screened and examined. This practice can create tensions for trans or disabled passengers with prosthetics in so-called “sensitive” areas, particularly as guidelines for security measures are often implemented by airport staff who are not properly trained in transgender-sensitive protocols.

Conclusion

According to media technologies scholar Jeremy Packer, “rather than being treated as one to be protected from an exterior force and one’s self, the citizen is now treated as an always potential threat, a becoming bomb” (382). Although this technological policing impacts all who are subjected to security regimes (which is to say, everyone), this amalgamation of body and bomb has exacerbated the ways in which bodies socially coded as threatening or deceptive are targeted by security and surveillance regimes. Nonetheless, others have argued that the use of invasive forms of surveillance can be justified by the state as an exchange: that citizens should willingly give up their right to privacy in exchange for safety (Monahan 1). Rather than subscribing to this paradigm, Blas’s SANCTUM critiques the violence of mandatory complicity in this “trade-off” narrative. Because their operationalisation rests on normative notions of embodiment that are governed by preconceptions around gender, race, sexuality and ability, surveillance systems demand that bodies become transparent. This disproportionately affects those whose bodies do not match norms, with trans and queer bodies often becoming unreadable (Kafer and Grinberg).
The shadowy realm of SANCTUM illustrates this tension between biometric revelation and resistance, but it also suggests that opacity may be a tool of transformation in the face of such discriminatory violations that are built into surveillance.

References

Ahmed, Sara. “A Phenomenology of Whiteness.” Feminist Theory 8.2 (2007): 149–68.
Beauchamp, Toby. “Artful Concealment and Strategic Visibility: Transgender Bodies and U.S. State Surveillance after 9/11.” Surveillance & Society 6.4 (2009): 356–66.
———. Going Stealth: Transgender Politics and U.S. Surveillance Practices. Durham, NC: Duke UP, 2019.
Blas, Zach. “Informatic Opacity.” The Journal of Aesthetics and Protest 9 (2014). <http://www.joaap.org/issue9/zachblas.htm>.
Blas, Zach, and Jacob Gaboury. “Biometrics and Opacity: A Conversation.” Camera Obscura: Feminism, Culture, and Media Studies 31.2 (2016): 154-65.
Browne, Simone. Dark Matters: On the Surveillance of Blackness. Durham, NC: Duke UP, 2015.
Buolamwini, Joy, and Timnit Gebru. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research 81 (2018): 1-15.
Clarkson, Nicholas L. “Incoherent Assemblages: Transgender Conflicts in US Security.” Surveillance & Society 17.5 (2019): 618-30.
Currah, Paisley, and Tara Mulqueen. “Securitizing Gender: Identity, Biometrics, and Transgender Bodies at the Airport.” Social Research 78.2 (2011): 556-82.
Halberstam, Jack. The Queer Art of Failure. Durham, NC: Duke UP, 2011.
Hall, Rachel. “Terror and the Female Grotesque: Introducing Full-Body Scanners to U.S. Airports.” Feminist Surveillance Studies. Eds. Rachel E. Dubrofsky and Shoshana Amielle Magnet. Durham, NC: Duke UP, 2015. 127-49.
Haraway, Donna. “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective.” Feminist Studies 14.3 (1988): 575-99.
———. Modest_Witness@Second_Millennium.FemaleMan_Meets_OncoMouse: Feminism and Technoscience. New York: Routledge, 1997.
Kafer, Gary, and Daniel Grinberg. “Queer Surveillance.” Surveillance & Society 17.5 (2019): 592-601.
Keyes, O.S. “The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition.” Proceedings of the ACM on Human-Computer Interaction 2, CSCW, Article 88 (2018): 1-22.
Magnet, Shoshana Amielle. When Biometrics Fail: Gender, Race, and the Technology of Identity. Durham, NC: Duke UP, 2011.
Magnet, Shoshana, and Tara Rodgers. “Stripping for the State: Whole Body Imaging Technologies and the Surveillance of Othered Bodies.” Feminist Media Studies 12.1 (2012): 101–18.
Monahan, Torin. Surveillance and Security: Technological Politics and Power in Everyday Life. New York: Routledge, 2006.
Packer, Jeremy. “Becoming Bombs: Mobilizing Mobility in the War of Terror.” Cultural Studies 10.5 (2006): 378-99.
Pugliese, Joseph. “In Silico Race and the Heteronomy of Biometric Proxies: Biometrics in the Context of Civilian Life, Border Security and Counter-Terrorism Laws.” Australian Feminist Law Journal 23 (2005): 1-32.
———. Biometrics: Bodies, Technologies, Biopolitics. New York: Routledge, 2010.
Quinan, C.L. “Gender (In)securities: Surveillance and Transgender Bodies in a Post-9/11 Era of Neoliberalism.” Security/Mobility: Politics of Movement. Eds. Stef Wittendorp and Matthias Leese. Manchester: Manchester UP, 2017. 153-69.
Quinan, C.L., and Nina Bresser. “Gender at the Border: Global Responses to Gender Diverse Subjectivities and Non-Binary Registration Practices.” Global Perspectives 1.1 (2020). <https://doi.org/10.1525/gp.2020.12553>.
Sjoberg, Laura. “(S)he Shall Not Be Moved: Gender, Bodies and Travel Rights in the Post-9/11 Era.” Security Journal 28.2 (2015): 198-215.
Spalding, Sally J. “Airport Outings: The Coalitional Possibilities of Affective Rupture.” Women’s Studies in Communication 39.4 (2016): 460-80.
39
Filinich, Renzo, and Tamara Jesus Chibey. "Becoming and Individuation on the Encounter between Technical Apparatus and Natural System." M/C Journal 23, no. 4 (August 12, 2020). http://dx.doi.org/10.5204/mcj.1651.
Abstract:
This essay sheds light on the framing process at work in research on the crossing between natural and artificial systems. To approach this, we must outline the machine-natural system relation. From this notion, technology is not seen as an external thing, nor even in contrast to an imaginary of nature, but as an effect that emerges from our thinking and revealing being and that, in many cases, may be reduced to an issue of knowledge and action. Here, we want to consider the concept of transduction from Gilbert Simondon as one possible framework for considering the socio-technological actions at stake. His thought offers a detailed conceptual vocabulary for the question of individuation as a “revelation process”, a concern with how things come into existence and proceed temporally as projective entities.

Moreover, our approach to the work of the philosopher Simondon marked the starting point of our interest in and approach to the issue of technique and its politics. From this perspective, the reflection given by Simondon in his theses on individuation and on the mode of existence of technical objects traces certain reasons that are necessary for the development of this project and that help to explain it. In the first place, Simondon does not state a specific regime of “human individuation”. The possibility of a psychic and collective individuation is produced, as is manifested when addressing the structure of his main thesis, at the heart of biological individuation; Simondon strongly attacks the anthropocentric tendencies that attempt to establish a defining boundary between biological and psychic reality. We may presume, then, that the issue of language as a defining and differentiating element of the human does not interest him; it is at this point that our project begins to focus on employing the transduction of the téchnē as a metaphor of life (Espinoza Lolas et al.), regarding the limits that language may imply for the conformation and expression of psychic reality.
In the second place, this critique of the economy of attention, present across our research and in Simondon’s thinking, seeks to introduce a hypothesis raised in another direction: towards the issue of the technique. In the introduction to his Mode of Existence of Technical Objects, Simondon shows some urgency in the need to approach the reality of technical objects as an autonomous reality and as a configuring reality of psychic and collective individuation. Facing the general importance granted to language as a key element of the historical and hermeneutical, even ontological, aspects of the human being, Simondon considers that the technique is the reality that plays the fundamental role of mediating between the human being and the world.

Following these observations, a possible question that will guide our research arises: How do the technologisation and informatisation of cultural techniques alter the very nature of the knowing of the affection of being with others (people, things, animals)? In the hypothesis of this investigation we claim that—insofar as we deliver an approach and perspective on the technologisation of the world as a process of individuation (considering Simondon’s concept in this becoming, in which an artificial agent and its medium may get out of phase to solve their tensions and give rise to physical or living individuals that constitute their system and go through a series of metastable equilibria)—it is possible to prove this capacity of invention as a clear example of a form of transindividual individuation (referring to the human being): thanks to the information that the artificial agent acquires and recovers by means of its “imagination”, which it integrates into its perception and affectivity, it enables the creation of new norms or artifacts installed in its becoming, as is the case of bioeconomy and cognitive capitalism (Fumagalli 219).
It is imperative to observe and analyse the fact that the concept of nature must be integrated along with the concept of cosmotechnics (Hui 3) to avoid the opposition between nature and technique in conceptual terms, and that is the reason why in the following section we will mention a third memory that is inscribed in this concept. There is no linear development in human history from nature to technique, from nature to politics.

The Extended Mind

The idea of memory as something transmissible is important when thinking of the present: there is no humanity outside the technical, nor prior to the technical, and it is important to safeguard this idea to highlight the phýsis/téchnē dichotomy presented by Simondon and Stiegler. It is erroneous to think that some entity may exceed the human, that it has any exteriority, when it is the materialisation of human forms, or, even more, that the human is crossed by it and is not separable from it. For the French philosopher Bernard Stiegler there is no human nature without technique, and vice versa (Stiegler 223). Here appears the issue of knowing the limits where “the body of the human me might stop” (Hutinel 44). A first glimpse of externalised memory was the flint axe, which is made by using other tools, even when its use is unknown. Its mere existence preserves a knowledge that goes beyond who made it, and its transmission is preserved beyond the organic, whether genetic or epigenetic.

We raise the question of a phýsis coming from the téchnē; it is a central topic that dominates the discussion nowadays about technology and its ability to have a transforming effect over every area of contemporary life and over human beings themselves. It is being “revealed” that the true qualitative novelty of the technological improvements that happen in front of our eyes resides not only in the appearance of new practices that are related to any particular scientific research.
We must point out the evident tension between bíos and zôê during the process of this adaptation, which is an ontological one, but we also witness how recursivity becomes a modus operandi during this process, which is both social and technological. Just as the philosophy of nature does, the philosophy of biology confronts its own limit under the light shed by the recursive algorithms implemented as a dominant way of adaptation, which is what Deleuze called societies of control (Deleuze 165). At the same time, there is an artificial selection (instead of a natural selection) imposed by the politics of transhumanism (for example, human improvement, genetic engineering).

In this direction, a first aspect to consider resides in the fact that life, held as an object of power and politics, does not constitute a “natural life”, but the result of a technical production from which its “nature” develops, as well as the possibilities of its deployment. Now then, it is precisely due to this gesture that Stiegler longs to distinguish between what is originary in mankind and its artefactual or artificial becoming: “the prosthesis is not a simple extension of the human body, it is the constitution of said body insofar as ‘human’ (the quotation marks belong to the constitution). It is not a ‘medium’ for mankind, but its end, and it is known the essential mistakenness of the expression, ‘the end of mankind’” (Stiegler 9). Before such phenomena, it is appropriate to lay out a reflexive methodology centred on observing and analysing the aforementioned idea by Stiegler that there is no mankind without techniques, and there is no technique without mankind (Stiegler 223). This implies that this idea of téchnē comprises both the techniques needed to create things and the technical products resulting from these techniques.
The word “techniques” also becomes ambiguous between the modern technology of machines and the primitive “tools” and their techniques, whether they have become art or craft, things that we would not necessarily think of as “technology”. What Stiegler is suggesting here is to describe the scope of the term téchnē within an ontogenetic and phylogenetic process of the human being, providing us with a reflection on how what we “possess as a fundamental thing” for our being as humans is also fundamental to how “we experience time”, since the externalisation of our memory into our tools constitutes what Stiegler understands as a “third kind” of memory, separate from the internal memory that is individually acquired by our brain (epigenetic) and the biological evolutive memory that is inherited from our ancestors (phylogenetic); Stiegler calls this kind of evolutive process epiphylogenetic, or epiphylogenesis. Therefore, we could argue that we are defined by this process of epiphylogenesis, and that we are constituted by a past that we ourselves, as individuals, have not lived; this past is delivered to us through culture, which is the fusion of the “technical objects that embody the knowledge of our ancestors, tools that we adopt to transform our surroundings” (Stiegler 177). These supports of external memory (that is, exteriorisations of the consciousness) provide a new collectivisation of the consciousness that exists beyond the individual.

The current trend of investigation of ontogeny and phylogeny is driven by the growing consensus in both the sciences and the humanities that the living world, in every one of its aspects – biological, semiotic, economic, affective, social, etc. – escapes the finite scheme of description and representation.
It is for this reason that authors such as Matteo Pasquinelli refer, in a more modest way, to the idea of “augmented intelligence” (9), reminding us that there is a posthuman legacy between human and machine that is still problematic, “though the machines manifest different degrees of autonomous agency” (Pasquinelli 11). For Simondon, and this is his revolutionary contribution to philosophy, one should think individuation not from the perspective of the individual, but from the point of view of the process that originated it. In other words, individuation must be thought of in terms of a process that does not take the individual for granted, but understands it as a result. In Simondon’s words: “If, on the contrary, one supposes that individuation does not only produce the individual, one would not attempt to pass quickly through the stage of individuation in order to arrive at the final reality that is the individual – one would attempt to grasp the ontogenesis in the entire progression of its reality, and to know the individual through the individuation, rather than the individuation through the individual” (5). Therefore, the epistemological problem does not lie in how téchnē flees the human domain in its course to become technologies, but in how these “exteriorisation” processes (Stiegler 213) alter the very concepts of number, image, comparison, space, time, or city, to give a few examples. However, the anthropological category of “exteriorisation” does not entirely do justice to these processes, as they work in a retroactive and recursive manner on the original techniques. Along with the concepts of text and book, the practice of reading has changed during the course of the digitalisation and algorithmisation of the processing of knowledge; along with the concept of comparison, the practice of comparison has changed, since comparison (e.g. of images) has become an operation based on data extraction and machine learning.
On the other hand, inversely, we must consider, in a media-archaeological fashion, the technological state of life as a starting point from which we must ask what cultural techniques were employed in the first place. How does the informatisation of cultural techniques produce new forms of subjectivity? How does the concept of cultural techniques already imply the idea of “chains of operations” and, therefore, a permanent (retro)coupling between living and non-living agency? This reveals that classical cultural techniques such as indexation or labelling, for example, have acquired ontological powers in the Google era: only what is labelled exists; only what can be searched is absolute. At the same time, in the fantasies of the media corporations, the variety of objects that can be labelled (including people) tends to be coextensive with the world of phenomena itself (if not the real world), which will then always be only an augmented version of itself. Technology became important for contemporary knowledge only through mediation; therefore, the use of tools could not have been the consequence of an extremely well-developed brain. On the contrary, the development of increasingly sophisticated tools took place at the same pace as the development of the brain, as Leroi-Gourhan attempts to prove by studying the history of tools together with the history of the human skeleton and brain. What he managed to demonstrate is that the history of technique and the history of the human being run in parallel lines; they are, if not equal, at least inextricable. Even today, the progress of knowledge is still not completely subordinated to technological investment (Lyotard 37). In short, human evolution is inseparable from the evolution of téchnē, the evolution of technology. One cannot simply think of the human being as a natural animal, isolated from the external material world.
What the human being becomes and what he is are essentially bonded to techniques, from the very beginning. Leroi-Gourhan puts it this way in his text Gesture and Speech: “the appearance of tools as a species ... feature that marks the boundary between animals and humans” (90). Understanding the behavior of technological systems is essential to our ability to control their actions, to harvest their benefits, and to minimize their damage. Here it is argued that this requires a wide agenda of scientific investigation into machine behavior, one that incorporates and broadens the biotechnological discipline and includes knowledge coming from all the sciences. In some way, Simondon sensed this encounter of knowledges and proposed the concept of the Allagmatic, or theory of operations, “constituted by a systematized set of particular knowledges” (Simondon 469). We could begin by describing a set of questions that are fundamental for this emerging field, and then explore the technical, legal, and institutional limitations on the study of technological agency.

Information, Communication and Signification

To establish the relation between information and communication, we will speak from two perspectives: first with Norbert Wiener, then with Simondon. We will see how the concept of information is essential to start understanding communication in an artificial agent. On one side, we have Wiener’s notion of information, framed within his project of cybernetics. Cybernetics is the study of communication and control through the inquiry of messages in animals, human beings, and machines. This idea of information arises from the interrelation with the surroundings. Wiener defines it as “the content of what is exchanged with the external world, while we adjust to it and make it adjust to us” (Wiener 17-18). In other words, we receive and use information as we interact with the world in which we live.
It is in this sense that information is connected to the idea of feedback, defined as the exchange and interaction of information within our systems or other systems. In Wiener’s own words, feedback is “the property of adjusting the future behavior to facts of the past” (31). Information, for Wiener, is at the same time influenced by the mathematical and probabilistic idea of information theory. Wiener refers to an amount of information that finds its starting point in statistical mechanics, along with the concept of entropy, inasmuch as information is opposed to it. Therefore, information, by supplying a set of messages, indicates a measure of organisation. Argentinian philosopher Pablo Rodríguez adds that “information [for Wiener] is a new physical category of the universe. [It is] the measure of organization of any entity, an organization without which the material and energetic systems wouldn’t be able to survive” (2-3). Thus, information corresponds to the measure of organisation and self-regulation of a given system. Moreover, and almost in complete contrast, we have the concept given by Simondon, where information is applicable to the whole possible range: animals, machines, human beings, molecules, crystals, etc. In this sense, it is more versatile, as it exceeds the domain of technics. To understand the scope of this concept, we will approach it through two definitions. In the first place, Simondon, in his conference “Amplification in the Process of Information”, in the book Communication and Information, claims that information “is not a thing, but the operation of a thing that arrives at a system and produces a transformation there. Information cannot be defined beyond this act of transformative incidence, and the operation of reception” (Simondon 139).
From this definition follows the idea of modulation: when Simondon refers to “transformation” and the “act of transformative incidence”, modulation corresponds to the energy that flows, amplified, during the transformation that occurs within a system. There is a second definition of information that Simondon provides in his thesis Individuation in Light of Notions of Form and Information, in which he claims that “the information signal is not just what is to be transmitted … it is also that which must be received, that is, what must adopt a signification” (Simondon 281). In this definition Simondon clearly distances himself from Wiener’s cybernetics, insofar as he deals with information as that which must be received, and not that which is to be transmitted. Although Simondon refers to a link between information and signification, this last aspect is not measured in linguistic terms. Rather, it expresses the decoding of a given code. That is, signification, and information as well, are the result of a disparity of energies, namely, of the overlaying of two possible states (0 and 1, or on and off). This is a central point of divergence with Wiener, as he refers to information in terms of the transference of messages, while Simondon does so in terms of the transformation of energies. In this way, Simondon adds an energetic element to the traditional definition of information, which now works as an operation, based on the transformation of energies as a result of a disparity or the overlaying of two possible elements within a system (recipient). It is according to this innovative element that modulation operates in a metastable system. And this is precisely the last concept we need to clarify: the idea of metastability and its relationship with the recipient-system. Metastability is an expression that finds its origins in thermodynamics.
Philosophy traditionally operates around the idea of the stability of being, while Simondon’s proposal states that being is its becoming. In this way, metastability is the condition of possibility of individuation, insofar as the metastable medium leaves behind a remainder of energy for future individuation processes. Thus, metastability refers to the temporal equilibrium of a system that persists in time, as it maintains within itself potential energy useful for other future individuations. Returning to the conference “Amplification in the Process of Information”, Simondon points out that “the recipient metastability is the condition of efficiency of the incident information” (139). In this sense, we may claim that there is no information if the signal is not received. Therefore, the recipient is a necessary condition for said information to be given. Simondon understands the recipient as a mixed system (a quasi-system): on one hand, it must be isolated in terms of energy, and it must have a membrane that allows it not to spend all its energy at once; on the other hand, it must be heteronomous, as it depends on an external input of information to activate the system (recipient). The metastable medium is the appropriate one for understanding the artificial agent, as it leaves open the possibility for potential energy to manifest and not be spent all at once, but to leave a remainder useful for future modulations, so that new transformations may occur. At the same time, Simondon’s concept of information is the most convenient when referring to communication and the relationship with the medium, primarily for its property of modulating potential energy. Nevertheless, it is also necessary to retrieve the idea of feedback from Wiener, as it is in the relationship of the artificial agent with its surroundings (and the world) that information is given, and it may flow amplified through its system.
In this way, significations manage to decode the internal code of the artificial agent, which represents the first gesture towards the opening of communication.

Conclusion

The hypotheses on extended cognition are subject to a huge amount of debate in artistic, philosophical, and cognitive science circles nowadays, but their implications extend far beyond metaphysics and the sciences of the mind. It is apparent that we have only begun to scratch the surface of the broader social sphere: if our minds are partially poured into our smartphones and even into our homes, then this is not a transformation of human nature, but the latest manifestation of an ancient human ontology of dynamically assembled organic cognitive and informational systems. It is to this condition that the critical digital humanities, and every form of critique, should answer. This means attempting to dig out the delays and ruptures within the systems of mass media, against the relentless belief in real time as the future, to remind us that systems always involve an encounter with a radical “strangeness” or “alterity”, an incommensurability between the future and desire that turns into the radical potential of many of our contemporary social movements and politics. Our critical task is to dismantle the practice of representation and to reincorporate it into different forms of space and experience that are not reactionary but imaginary. What we attempt to bring to light here is the need to get every spectator to notice the limits of machinic vision and to acknowledge the role of the image in the recruitment of liminal energies for capital.
The final objective of this essay is to see that nature possesses the technique of an artist who renders contingency into necessity and inscribes the infinite within the finite. In the arts, it is not the figure of nature that corresponds to individuation, but rather the artist, whose task is not only to render contingency necessary as its operation, but also to aim for an elevation of the audience as a form of revelation. The artist is he or she who opens up, through his or her work, a process of transindividuation, meaning a psychical and collective individuation.

References

Deleuze, Gilles. “Post-Script on Control Societies.” Polis 13 (2006): 1-7. 14 Feb. 2020 <http://journals.openedition.org/polis/5509>.

Espinoza Lolas, Ricardo, et al. “On Technology and Life: Fundamental Concepts of Georges Canguilhem and Xavier Zubiri’s Thought.” Ideas y Valores 67.167 (2018): 127-47. 14 Feb. 2020 <http://dx.doi.org/10.15446/ideasyvalores.v67n167.59430>.

Fumagalli, Andrea. Bioeconomía y Capitalismo Cognitivo: Hacia un Nuevo Paradigma de Acumulación. Madrid: Traficantes de Sueños, 2010.

Hui, Yuk. “On Cosmotechnics: For a Renewed Relation between Technology and Nature in the Anthropocene.” Techné: Research in Philosophy and Technology 21.2/3 (2017): 319-41. 14 Feb. 2020 <https://www.pdcnet.org/techne/content/techne_2017_0021_42769_0319_0341>.

Leroi-Gourhan, André. El Gesto y la Palabra. Caracas: Universidad Central de Venezuela, 1971.

———. El Hombre y la Materia: Evolución y Técnica I. Madrid: Taurus, 1989.

———. El Medio y la Técnica: Evolución y Técnica II. Madrid: Taurus, 1989.

Lyotard, Jean-François. La Condición Postmoderna: Informe sobre el Saber. Madrid: Cátedra, 2006.

Pasquinelli, Matteo. “The Spike: On the Growth and Form of Pattern Police.” Nervous Systems 18.5 (2016): 213-20. 14 Feb. 2020 <http://matteopasquinelli.com/spike-pattern-police/>.
Rivera Hutinel, Marcela. “Techno-Genesis and Anthropo-Genesis in the Work of Bernard Stiegler: Or How the Hand Invents the Human.” Liminales, Escritos sobre Psicología y Sociedad 2.3 (2013): 43-58. 15 Dec. 2019 <http://revistafacso.ucentral.cl/index.php/liminales/article/view/228>.

Rodríguez, Pablo. “El Signo de la ‘Sociedad de la Información’: De Cómo la Cibernética y el Estructuralismo Reinventaron la Comunicación.” Question 1.28 (2010): 1-17. 14 Feb. 2020 <https://perio.unlp.edu.ar/ojs/index.php/question/article/view/1064>.

Simondon, Gilbert. Comunicación e Información. Buenos Aires: Editorial Cactus, 2015.

———. La Individuación: A la Luz de las Nociones de Forma y de Información. Buenos Aires: La Cebra/Cactus, 2009/2015.

———. El Modo de Existencia de los Objetos Técnicos. Buenos Aires: Prometeo, 2007.

———. “The Position of the Problem of Ontogenesis.” Parrhesia 7 (2009): 4-16. 4 Nov. 2019 <http://parrhesiajournal.org/parrhesia07/parrhesia07_simondon1.pdf>.

Stiegler, Bernard. La Técnica y el Tiempo I. Guipúzcoa: Argitaletxe Hiru, 2002.

———. “Temporality and Technical, Psychic and Collective Individuation in the Work of Simondon.” Revista Trilogía Ciencia Tecnología Sociedad 4.6 (2012): 133-46.

Wiener, Norbert. Cibernética y Sociedad. Buenos Aires: Editorial Sudamericana, 1958.
Leaver, Tama, and Suzanne Srdarov. "ChatGPT Isn't Magic." M/C Journal 26, no. 5 (October 2, 2023). http://dx.doi.org/10.5204/mcj.3004.
Abstract:
Introduction

Author Arthur C. Clarke famously argued that in science fiction literature “any sufficiently advanced technology is indistinguishable from magic” (Clarke). On 30 November 2022, technology company OpenAI publicly released their Large Language Model (LLM)-based chatbot ChatGPT (Chat Generative Pre-Trained Transformer), and instantly it was hailed as world-changing. Initial media stories about ChatGPT highlighted the speed with which it generated new material as evidence that this tool might be both genuinely creative and actually intelligent, in both exciting and disturbing ways. Indeed, ChatGPT is part of a larger pool of Generative Artificial Intelligence (AI) tools that can very quickly generate seemingly novel outputs in a variety of media formats based on text prompts written by users. Yet, claims that AI has become sentient, or has even reached a recognisable level of general intelligence, remain in the realm of science fiction, for now at least (Leaver). That has not stopped technology companies, scientists, and others from suggesting that super-smart AI is just around the corner. Exemplifying this, the same people creating generative AI are also vocal signatories of public letters that ostensibly call for a temporary halt in AI development, but these letters are simultaneously feeding the myth that these tools are so powerful that they are the early form of imminent super-intelligent machines. For many people, the combination of AI technologies and media hype means generative AIs are basically magical insomuch as their workings seem impenetrable, and their existence could ostensibly change the world. This article explores how the hype around ChatGPT and generative AI was deployed across the first six months of 2023, and how these technologies were positioned as either utopian or dystopian, always seemingly magical, but never banal. We look at some initial responses to generative AI, ranging from schools in Australia to picket lines in Hollywood.
We offer a critique of the utopian/dystopian binary positioning of generative AI, aligning with critics who rightly argue that focussing on these extremes displaces the more grounded and immediate challenges generative AI bring that need urgent answers. Finally, we loop back to the role of schools and educators in repositioning generative AI as something to be tested, examined, scrutinised, and played with both to ground understandings of generative AI, while also preparing today’s students for a future where these tools will be part of their work and cultural landscapes.

Hype, Schools, and Hollywood

In December 2022, one month after OpenAI launched ChatGPT, Elon Musk tweeted: “ChatGPT is scary good. We are not far from dangerously strong AI”. Musk’s post was retweeted 9400 times, liked 73 thousand times, and presumably seen by most of his 150 million Twitter followers. This type of engagement typified the early hype and language that surrounded the launch of ChatGPT, with reports that “crypto” had been replaced by generative AI as the “hot tech topic” and hopes that it would be “‘transformative’ for business” (Browne). By March 2023, global economic analysts at Goldman Sachs had released a report on the potentially transformative effects of generative AI, saying that it marked the “brink of a rapid acceleration in task automation that will drive labor cost savings and raise productivity” (Hatzius et al.). Further, they concluded that “its ability to generate content that is indistinguishable from human-created output and to break down communication barriers between humans and machines reflects a major advancement with potentially large macroeconomic effects” (Hatzius et al.).
Speculation about the potentially transformative power and reach of generative AI technology was reinforced by warnings that it could also lead to “significant disruption” of the labour market, and the potential automation of up to 300 million jobs, with associated job losses for humans (Hatzius et al.). In addition, there was widespread buzz that ChatGPT’s “rationalization process may evidence human-like cognition” (Browne), claims that were supported by the emergent language of ChatGPT. The technology was explained as being “trained” on a “corpus” of datasets, using a “neural network” capable of producing “natural language” (Dsouza), positioning the technology as human-like, and more than ‘artificial’ intelligence. Incorrect responses or errors produced by the tech were termed “hallucinations”, akin to magical thinking, which OpenAI founder Sam Altman insisted wasn’t a word that he associated with sentience (Intelligencer staff). Indeed, Altman asserts that he rejects moves to “anthropomorphize” (Intelligencer staff) the technology; however, arguably the language, hype, and Altman’s well-publicised misgivings about ChatGPT have had the combined effect of shaping our understanding of this generative AI as alive, vast, fast-moving, and potentially lethal to humanity. Unsurprisingly, the hype around the transformative effects of ChatGPT and its ability to generate ‘human-like’ answers and sophisticated essay-style responses was matched by a concomitant panic throughout educational institutions. The beginning of the 2023 Australian school year was marked by schools and state education ministers meeting to discuss the emerging problem of ChatGPT in the education system (Hiatt). Every state in Australia, bar South Australia, banned the use of the technology in public schools, with a “national expert task force” formed to “guide” schools on how to navigate ChatGPT in the classroom (Hiatt).
Globally, schools banned the technology amid fears that students could use it to generate convincing essay responses whose plagiarism would be undetectable with current software (Clarence-Smith). Some schools banned the technology citing concerns that it would have a “negative impact on student learning”, while others cited its “lack of reliable safeguards preventing these tools exposing students to potentially explicit and harmful content” (Cassidy). ChatGPT investor Musk famously tweeted, “It’s a new world. Goodbye homework!”, further fuelling the growing alarm about the freely available technology that could “churn out convincing essays which can't be detected by their existing anti-plagiarism software” (Clarence-Smith). Universities were reported to be moving towards more “in-person supervision and increased paper assessments” (SBS), rather than essay-style assessments, in a bid to out-manoeuvre ChatGPT’s plagiarism potential. Seven months on, concerns about the technology seem to have been dialled back, with educators more curious about the ways the technology can be integrated into the classroom to good effect (Liu et al.); however, the full implications and impacts of generative AI are still emerging. In May 2023, the Writer’s Guild of America (WGA), the union representing screenwriters across the US creative industries, went on strike, and one of their core issues was “regulations on the use of artificial intelligence in writing” (Porter). Early in the negotiations, Chris Keyser, co-chair of the WGA’s negotiating committee, lamented that “no one knows exactly what AI’s going to be, but the fact that the companies won’t talk about it is the best indication we’ve had that we have a reason to fear it” (Grobar).
At the same time, the Screen Actors’ Guild (SAG) warned that members were being asked to agree to contracts that stipulated that an actor’s voice could be re-used in future scenarios without that actor’s additional consent, potentially reducing actors to a dataset to be animated by generative AI technologies (Scheiber and Koblin). In a statement issued by SAG, they made their position clear that the creation or (re)animation of any digital likeness of any part of an actor must be recognised as labour and properly paid, also warning that any attempt to legislate around these rights should be strongly resisted (Screen Actors Guild). Unlike the more sensationalised hype, the WGA and SAG responses to generative AI are grounded in labour relations. These unions quite rightly fear the immediate future where human labour could be augmented, reclassified, and exploited by, and in the name of, algorithmic systems. Screenwriters, for example, might be hired at much lower pay rates to edit scripts first generated by ChatGPT, even if those editors would really be doing most of the creative work to turn something clichéd and predictable into something more appealing. Rather than a dystopian world where machines do all the work, the WGA and SAG protests railed against a world where workers would be paid less because executives could pretend generative AI was doing most of the work (Bender).

The Open Letter and Promotion of AI Panic

In an open letter that received enormous press and media uptake, many of the leading figures in AI called for a pause in AI development since “advanced AI could represent a profound change in the history of life on Earth”; they warned early 2023 had already seen “an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control” (Future of Life Institute).
Further, the open letter signatories called on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”, arguing that “labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” (Future of Life Institute). Notably, many of the signatories work for the very companies involved in the “out-of-control race”. Indeed, while this letter could be read as a moment of ethical clarity for the AI industry, a more cynical reading might just be that in warning that their AIs could effectively destroy the world, these companies were positioning their products as seemingly magical—“digital minds that no one – not even their creators – can understand”—making them even more appealing to potential customers and investors. Far from pausing AI development, the open letter actually operates as a neon sign touting the amazing capacities and future brilliance of generative AI systems. Nirit Weiss-Blatt argues that general reporting on technology industries up to 2017 largely concurred with the public relations stance of those companies, positioning them as saviours and amplifiers of human connection, creativity, and participation. After 2017, though, media reporting completely shifted, focussing on the problems, risks, and worst elements of these corporate platforms. In the wake of the open letter, Weiss-Blatt extended her point on Twitter, arguing that media and messaging surrounding generative AI can be broken down into those who are profiting and fuelling the panic at one end of the spectrum, and those who think the form of the panic (which positions AI as dangerously intelligent) is deflecting from the immediate real issues caused by generative AI at the other. 
Weiss-Blatt characterises the Panic-as-a-Business proponents as arguing “we're telling you will all die from a Godlike AI… so you must listen to us”, which coheres with the broader positioning narrative of generative AI’s seemingly magical (and thus potentially destructive) capabilities. Yet this rhetoric also positions the companies creating generative AI as the ones who should be making the rules to control it, an argument so effective that in July 2023 the Biden Administration in the US endorsed the biggest AI companies—Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI—framing future AI development with voluntary safeguards rather than externally imposed policies (Shear, Kang, and Sanger). Fig. 1: Promoters of AI Panic, extrapolating from Nirit Weiss-Blatt. (Algorithm Watch)

Stochastic Parrots and Deceitful Media

Artificial Intelligences have inhabited popular imaginaries via novels, television, and films far longer than they have been considered even potentially viable technologies, so it is not surprising that popular culture has often framed the way AI is understood (Leaver). Yet as Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell argue, Large Language Models and generative AI are most productively understood as “a stochastic parrot” insomuch as each is a “system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning” (Bender et al. 617). Generative AI, then, is not creating something genuinely new, but rather remixing existing data in novel ways that the systems themselves do not in any meaningful sense understand. Going further, Simone Natale characterises current AI tools as “deceitful media” insomuch as they are designed to deliberately appear generally intelligent, but this is always a deception.
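The “stochastic parrot” idea can be made concrete with a deliberately crude illustration: a word-level Markov chain that stitches together word sequences it has observed in a training text, guided only by co-occurrence probabilities and with no reference to meaning. This sketch is our own illustrative analogy, not how LLMs are actually built; real models use learned neural representations at vastly greater scale, and all names in the snippet are invented for the example.

```python
# Toy "stochastic parrot": generate text purely from observed
# word-to-word transitions, with no model of meaning at all.
# (Illustrative sketch only; LLMs are far more sophisticated.)
import random
from collections import defaultdict

def train(text):
    """Record which words have been observed following which."""
    words = text.split()
    successors = defaultdict(list)
    for current, following in zip(words, words[1:]):
        successors[current].append(following)
    return successors

def parrot(successors, start, length=8, seed=0):
    """Stitch together a sequence by sampling observed successors."""
    rng = random.Random(seed)
    output = [start]
    for _ in range(length - 1):
        options = successors.get(output[-1])
        if not options:  # dead end: this word was never followed by anything
            break
        output.append(rng.choice(options))
    return " ".join(output)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train(corpus)
print(parrot(model, "the"))
```

Every sentence such a generator produces is locally plausible (each word pair was observed in training) yet carries no intent or understanding, which is precisely the point Bender et al. make about far larger probabilistic systems.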
The deception makes these tools more engaging for humans to use but is also fundamental in selling and profiting from the use of AI tools. Rather than accepting claims made by the companies financing and creating contemporary AI, Natale argues for a more pedagogically productive path:

we must resist the normalization of the deceptive mechanisms embedded in contemporary AI and the silent power that digital media companies exercise over us. We should never cease to interrogate how the technology works, even while we are trying to accommodate it in the fabric of everyday life. (Natale 132)

Real Issues

Although even a comprehensive list is beyond the scope of this short article, it is nevertheless vital to note that in looking beyond the promotion of AI Panic and deceptive media, ChatGPT and other generative AI tools create or exacerbate a range of very real and significant ethical problems. The most obvious problem is the lack of transparency in terms of what data different generative AI tools were trained on. Generally, these tools are thought to get better by absorbing ever greater amounts of data, with most AI companies acknowledging that scraping the Web in some form has been part of the training data harvesting for their AI tools. Not knowing what data have been used makes it almost impossible to know which perspectives, presumptions, and biases are baked into these tools. While many forms of bias have plagued technology companies for many years (Noble), for generative AI tools, in “accepting large amounts of web text as ‘representative’ of ‘all’ of humanity we risk perpetuating dominant viewpoints, increasing power imbalances, and further reifying inequality” (Bender et al. 614). Even mitigating and working to correct biases in generative AI tools will be a huge challenge if these companies never share what was in their training data. As the WGA and SAG strike discussed above emphasises, the question of human labour is a central challenge for generative AI.
Beyond Hollywood, more entrenched forms of labour exploitation haunt generative AI. Very low-paid workers have done much of the labour in classifying different forms of data in order to train AI systems; data workers are routinely not acknowledged at all, even sometimes directly performing the tasks that are ascribed to AI, to the extent that “distracted by the specter of nonexistent sentient machines, an army of precarized workers stands behind the supposed accomplishments of artificial intelligence systems today” (Williams, Miceli, and Gebru). It turns out that people are still doing the work so that companies can pretend the machines can think. In one final but very important example, there is a very direct ecological cost to training, maintaining, and running generative AI tools. In the context of global warming, concerns already existed about the enormous data centres at the heart of the big technology platforms prior to ChatGPT’s release. However, the data and processing power needed to run generative AI tools are even larger, leading to very real questions about how much electricity and water (for cooling) are used by even the most rudimentary ChatGPT queries (Lizarraga and Solon). While not just an AI question, balancing the environmental costs of data centres with the actual utility of AI tools is not one that is routinely asked, or answered, in the hype around generative AI.

Messing Around and Geeking Out

Escaping the hype and hypocrisy deployed by AI companies is vital for repositioning generative AI not as magical, not as a saviour, and not as a destroyer, but rather as a new technology that needs to be critically and ethically understood.
In seminal work exploring how young people engage with digital tools and technologies, Mimi Ito and colleagues developed three genres of technology participation: hanging out, where engagement with any technologies is largely driven by friendships and social engagement; messing around, which includes a great deal of experimentation and play with technological tools; and geeking out, where some young people will find a particular focus on one platform, tool or technology that inspires them to focus enough to develop expertise in using and understanding that tool (Ito et al.). If young people, in particular, are going to be living in a world where generative AI tools are part of their social worlds and workplaces, then messing around with ChatGPT is, indeed, going to be important in testing out how these tools answer questions and synthesise information, what biases are evident in responses, and at what points answers are incorrect. Some young people may well move from messing around to completely geeking out with generative AI, a process that will be even more fruitful if these tools are not seen as impenetrable magic, but rather as commercial tools built by for-profit companies. While the idea of digital natives is an unhelpful myth (Bennett, Maton, and Kervin), if young people are going to be the first generation to have generative AI as part of their information, creative, and search landscapes, then safely messing around and geeking out with these tools will be more vital than ever. We mentioned above that most Australian state education departments initially banned ChatGPT, but a more optimistic sign arrived as we were finishing this article, inasmuch as the different Australian states agreed in mid-2023 to work together to create “a framework to guide the safe and effective use of artificial intelligence in the nation’s schools” (Clare).
Although there is work to be done, moving away from a ban to a setting that should allow students to be part of testing, framing, and critiquing ChatGPT and generative AI is a clear step in repositioning these technologies as tools, not magical systems that could never be understood.

Conclusion

Generative AI is not magic; it is not a saviour or destroyer; it is neither utopian nor dystopian; nor, unless we radically narrow the definition, is it intelligent. The companies and corporations driving AI development have a vested interest in promoting fantastical ideas about generative AI, as it drives their customers, investment, and future viability. When the hype is dominant, responses can be overdetermined, such as banning generative AI in schools. But in taking a less magical and more material approach to ChatGPT and generative AI, we can try and ensure pedagogical opportunities for today’s young people to test out, scrutinise, and critically understand the AI tools they are most likely going to be asked to use today and in the future. The first wave of generative AI hype following the public release of ChatGPT offers an opportunity to reflect on exactly what the best uses of these technologies are, what ethics should drive those uses, and how transparent the workings of generative AI should be before their presence in the digital landscape is so entrenched and mundane that it becomes difficult to see at all.

Acknowledgment

This research was supported by the Australian Research Council Centre of Excellence for the Digital Child through project number CE200100022.

References

Algorithm Watch [@AlgorithmWatch]. “Mirror, Mirror on the Wall, Who Is the Biggest Panic-Creator of Them All? Inspired by a Tweet from Nirit Weiss-Blatt, Check out Our Taxonomy of #AI Panic Facilitators and Those Fighting against the Fearmongering. Who Have We Forgotten to Add? Let Us Know! ⬇️” Instagram, 12 July 2023. <https://Instagram.com/p/Cump3losObg/>.
Bender, Emily M., Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. Virtual Event, Canada: ACM, 2021. 610–623. <https://dl.acm.org/doi/10.1145/3442188.3445922>.

Bender, Stuart Marshall. “Coexistence and Creativity: Screen Media Education in the Age of Artificial Intelligence Content Generators.” Media Practice and Education (2023): 1–16.

Bennett, Sue, Karl Maton, and Lisa Kervin. “The ‘Digital Natives’ Debate: A Critical Review of the Evidence.” British Journal of Educational Technology 39.5 (2008): 775–786.

Browne, Ryan. “Buzzy A.I. Tools like Microsoft-Backed ChatGPT Replaced Crypto as the Hot Tech Topic of Davos.” CNBC, 20 Jan. 2023. <https://cnbc.com/2023/01/20/chatgpt-microsoft-backed-ai-tool-replaces-crypto-as-hot-davos-tech-topic.html>.

Cassidy, Caitlin. “Queensland Public Schools to Join NSW in Banning Students from ChatGPT.” The Guardian, 23 Jan. 2023. <https://theguardian.com/australia-news/2023/jan/23/queensland-public-schools-to-join-nsw-in-banning-students-from-chatgpt>.

“Cheating with ChatGPT? Controversial AI Tool Banned in These Schools in Australian First.” SBS News, 22 Jan. 2023. <https://sbs.com.au/news/article/cheating-with-chatgpt-controversial-ai-tool-banned-in-these-schools-in-australian-first/817odtv6e>.

Clare, Jason. “Draft Schools AI Framework Open for Consultation.” Ministers’ Media Centre, 28 July 2023. <https://ministers.education.gov.au/clare/draft-schools-ai-framework-open-consultation>.

Clarence-Smith, Louisa. “‘Goodbye Homework!’ Elon Musk Praises AI Chatbot That Writes Student Essays.” The Telegraph, 5 Jan. 2023. <https://telegraph.co.uk/news/2023/01/05/homework-elon-musk-chatgpt-praises-ai-chatbot-writes-students/>.

Clarke, Arthur C. “Hazards of Prophecy: The Failure of Imagination.” Profiles of the Future: An Inquiry into the Limits of the Possible. New York: Harper and Row, 1973.

Dsouza, Elton Grivith. “How ChatGPT Works: Training Model of ChatGPT.” Edureka! 11 May 2023. <https://edureka.co/blog/how-chatgpt-works-training-model-of-chatgpt/>.

Future of Life Institute. “Pause Giant AI Experiments: An Open Letter.” Future of Life Institute, 22 Mar. 2023. <https://futureoflife.org/open-letter/pause-giant-ai-experiments/>.

Grobar, Matt. “WGA Negotiating Committee Co-Chair Chris Keyser on the Breakdown of Negotiations with ‘Divided’ AMPTP.” Deadline, 2 May 2023. <https://deadline.com/2023/05/wga-strike-chris-keyser-interview-failed-negotiations-amptp-ai-1235354566/>.

Hatzius, Jan, Joseph Briggs, Devesh Kodnani, and Giovanni Pierdomenico. “The Potentially Large Effects of Artificial Intelligence on Economic Growth.” Goldman Sachs: Global Economics Analyst, 26 Mar. 2023. <https://gspublishing.com/content/research/en/reports/2023/03/27/d64e052b-0f6e-45d7-967b-d7be35fabd16.html>.

Hiatt, Bethany. “National Expert Task Force to Be Set Up in Bid to Help Australian Schools Harness Tools Such as ChatGPT.” The West Australian, 1 Mar. 2023. <https://thewest.com.au/news/education/national-expert-task-force-to-be-set-up-in-bid-to-help-australian-schools-harness-tools-such-as-chatgpt-c-9895269>.

Intelligencer staff. “Sam Altman on What Makes Him ‘Super Nervous’ about AI: The OpenAI Co-Founder Thinks Tools like GPT-4 Will Be Revolutionary. But He’s Wary of Downsides.” On with Kara Swisher: Intelligencer, 23 Mar. 2023. <https://nymag.com/intelligencer/2023/03/on-with-kara-swisher-sam-altman-on-the-ai-revolution.html>.

Ito, Mizuko, et al. Hanging Out, Messing Around, and Geeking Out: Kids Living and Learning with New Media. Cambridge, Mass.: MIT P, 2012.

Leaver, Tama. Artificial Culture: Identity, Technology, and Bodies. New York: Routledge, 2012.

Liu, Danny, Adam Bridgeman, and Benjamin Miller. “As Uni Goes Back, Here’s How Teachers and Students Can Use ChatGPT to Save Time and Improve Learning.” The Conversation, 28 Feb. 2023. <https://theconversation.com/as-uni-goes-back-heres-how-teachers-and-students-can-use-chatgpt-to-save-time-and-improve-learning-199884>.

Lizarraga, Clara Hernanz, and Olivia Solon. “Thirsty Data Centers Are Making Hot Summers Even Scarier.” Bloomberg, 26 July 2023. <https://bloomberg.com/news/articles/2023-07-26/extreme-heat-drought-drive-opposition-to-ai-data-centers>.

Musk, Elon [@elonmusk]. “@sama. ChatGPT is scary good. We are not far from dangerously strong AI.” Twitter, 4 Dec. 2022. <https://twitter.com/elonmusk/status/1599128577068650498?lang=en>.

———. “@pmarca. It’s a new world. Goodbye homework!” Twitter, 5 Jan. 2023. <https://twitter.com/elonmusk/status/1610849544945950722?lang=en>.

Natale, Simone. Deceitful Media: Artificial Intelligence and Social Life after the Turing Test. New York: Oxford UP, 2021.

Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU P, 2018.

Porter, Rick. “Late Night Shows Shut Down with WGA Strike.” The Hollywood Reporter, 2 May 2023. <https://hollywoodreporter.com/tv/tv-news/wga-strike-late-night-shows-shut-down-1235477882/>.

Scheiber, Noam, and John Koblin. “Will a Chatbot Write the Next ‘Succession’?” The New York Times, 29 Apr. 2023. <https://nytimes.com/2023/04/29/business/media/writers-guild-hollywood-ai-chatgpt.html>.

Screen Actors Guild – American Federation of Television and Radio Artists. “SAG-AFTRA Statement on the Use of Artificial Intelligence and Digital Doubles in Media and Entertainment.” 17 Mar. 2023. <https://sagaftra.org/sag-aftra-statement-use-artificial-intelligence-and-digital-doubles-media-and-entertainment>.

Shear, Michael D., Cecilia Kang, and David E. Sanger. “Pressured by Biden, A.I. Companies Agree to Guardrails on New Tools.” The New York Times, 21 July 2023. <https://nytimes.com/2023/07/21/us/politics/ai-regulation-biden.html>.

Weiss-Blatt, Nirit [@DrTechlash]. “A Taxonomy of AI Panic Facilitators.” Twitter, 1 July 2023. <https://twitter.com/DrTechlash/status/1675155157880016898>.

———. The Techlash and Tech Crisis Communication. Bingley: Emerald Publishing, 2021.

Williams, Adrienne, Milagros Miceli, and Timnit Gebru. “The Exploited Labor behind Artificial Intelligence.” Noema, 13 Oct. 2022. <https://noemamag.com/the-exploited-labor-behind-artificial-intelligence/>.
41
Fedorova, Ksenia. "Mechanisms of Augmentation in Proprioceptive Media Art." M/C Journal 16, no. 6 (November 7, 2013). http://dx.doi.org/10.5204/mcj.744.
Full text
Abstract:
Introduction

In this article, I explore the phenomenon of augmentation by questioning its representational nature and analyzing aesthetic modes of our interrelationship with the environment. How can senses be augmented, and how do they serve as mechanisms of enhancing the feeling of presence? Media art practices offer particularly valuable scenarios of activating such mechanisms, as the employment of digital technology allows them to operate on a more subtle level of perception. Given that these practices are continuously evolving, this analysis cannot claim to be a comprehensive one, but rather aims to introduce aspects of the specific relations between augmentation, the sense of proprioception, technology, and art. Proprioception is one of the least detectable and trackable human senses because it involves our intuitive sense of positionality, which suggests a subtle equilibrium between a center (our individual bodies) and the periphery (our immediate environments). Yet, like any sense, proprioception implies a communicational chain, a network of signals traveling and exchanging information within the body-mind complex. The technological augmentation of this dynamic process produces an interference in our understanding of the structure and elements, the information sent/received. One way to understand the operations of the senses is to think about them as images that the mind creates for itself. Artistic intervention (usually) builds upon exactly this logic: representation of images generated in the mind, supplementing or even supplanting the existing collection of inner images with new, created ones. Yet, in the case of proprioception, the only means to interfere with and augment these inner images is on the bodily level. Hence, the question of communication through images (or representations) should be extended towards a more complex theory of embodied perception.
Drawing on phenomenology, cognitive science, and techno-cultural studies, I focus on the potential of biofeedback technologies to challenge and transform our self-perception by conditioning new pathways of apprehension (sometimes by creating mechanisms of direct stimulation of neural activity). I am particularly interested in how the awareness of the self (grounded in the felt relationality of our body parts) is most significantly activated at moments of disturbance of balance, in situations of perplexity and disorientation. Projects by Marco Donnarumma, Sean Montgomery, and other artists working with biofeedback aesthetically validate and instantiate current research about neuro-plasticity, with technologically mediated sensory augmentation as one catalyst of this process.

Augmentation as Representation: Proprioception and Proprioceptive Media

Representation has been one of the key ways to comprehend reality. But representation also constitutes a spatial relation of distancing and separation: the spectator encounters an object placed in front of him, external to him. Thus, representation is associated more with an analytical, rather than synthetic, methodology, because it implies detachment and division into parts. Both methods involve relation, yet in the case of representation there is a more distinct element of distance between the representing subject and represented object. Representation is always a form of augmentation: it extends our abilities to see the "other", otherwise invisible sides and qualities of the objects of reality. Representation is key to both science and art, yet in the case of the latter, what is represented is not a (claimed) "objective" scheme of reality, but rather images of the imaginary, inner reality (even figurative painting always presents a particular optical and psychological perspective, to say nothing of forms of abstract art). There are certain kinds of art (visual arts, music, dance, etc.)
that deal with different senses and thus build their specific representational structures. Proprioception is one of the senses that occupies a relatively marginal position in artistic production, exactly because of the specificity of its representational nature and because it does not create a sense of an external object. The term "proprioception" comes from the Latin proprius, or "one's own", "individual", and capio, cepi – "to receive", "to perceive". It implies a sense of one's self felt as a relational unity of parts of the body, most vividly discovered in movement and in the effort employed in it. The loss of proprioception usually means loss of bodily orientation and of the feeling of one's own body (Sacks 43-54). On the other hand, in the case of additional stimulation and training of this sense (not only via classical cyber-devices, like cyber-helmets and gloves that set a different optics, but also via techniques for inducing altered states of mind, e.g. psychotropics, or through the architecture of virtual space and acoustics), the sense of disorientation that appears at first changes towards some analogue of enthusiasm, excitement, discovery, and the emotion of approaching new horizons. What changes is not only perception of external reality, but the sense of one's self: the self is felt as fluid, flexible, with penetrable borders. Proprioception implies an initial co-existence of the inner and outer space on the basis of originary difference and the individuality/specificity of the occupied position. Yet, because they are related, the "external" and "other" already feel like "one's own", and this is exactly what causes the sense of presence. Among the many possible connections that the body, in its sense of proprioception, is always already ready for, only a certain amount gets activated. The result of proprioception is a special kind of meta-stable internal image. This image may not coincide with the optical, auditory, or haptic image.
According to Brian Massumi,

proprioception translates the exertions and ease of the body's encounters with objects into a muscular memory of relationality. This is the cumulative memory of skill, habit, posture. At the same time as proprioception folds tactility in, it draws out the subject's reactions to the qualities of the objects it perceives through all five senses, bringing them into the motor realm of externalizable response. (59)

This internal image is not mediated by anything, though it depends directly on the relations between the parts. It cannot be grasped because it is by definition fluid and dynamic. The position in one point is replaced here by a position-in-movement (point-in-movement). "Movement is not indexed by position. Rather, the position is born in movement, from the relation of movement towards itself" (Massumi 179). The philosopher of the "extended mind" Andy Clark notes that we should distinguish between a real body schema (non-conscious configuration) and a body image (conscious construct) (Clark). It is the former that is important to understand, and yet it is the most challenging. Due to its fluidity and self-referentiality, proprioception is not presentable to consciousness (the unstable internal image that it creates resides in consciousness but cannot be grasped and thus re-presented). A feeling/sense, it is not bound by sensible forms that would serve as means of objectification and externalization. As Barbara Montero observes, while the objects of vision and hearing, i.e. the most popular senses involved in the arts, are beyond one's body, the sense of proprioception relates directly to bodily sensation; it does not represent any external objects, but the sensory itself (231).
These characteristics of proprioception help to reframe the question of augmentation as mediation: in the case of proprioception, the medium of sensation is the very relational structure of the body itself, irrespective of the "exteroceptive" (tactile) or "interoceptive" (visceral) dimensions of sensibility. The body is understood, then, as the "body without image,” and its proprioceptive effect can then be described as "the sensibility proper to the muscles and ligaments" (Massumi 58).

Proprioception in (Media) Art

One of the most convincing ways of externalizing and (re)presenting the data of proprioception is through re-production of its structure and its artificial enhancement with the help of technology. This can be achieved in at least two ways: by setting up situations and environments that emphasize self-perspective and awareness of perception, and by presenting measurements of bio-data and inviting dialogue with them. The first strategy may be connected to the disorientation and shifted perspective created in immersive virtual environments, which make the role of the otherwise un-trackable, fluid sense of proprioception actually felt and cognized. These effects are closely related to the nuances of the perception of space, for instance, to spatial illusion. The practice of spatial illusion in the arts traces its history as far back as Roman frescos and trompe l’oeil, as well as phantasmagorias like the magic lantern. Geometrically, the system of the 360º image is still the most effective in producing a sense of full immersion—either in spaces from panoramas, the Stereopticon, and Cinéorama to CAVE (Computer Augmented Virtual Environments), or in devices for an individual spectator’s usage, like the stereoscope, Sensorama, and more recent Head Mounted Displays (HMD). All these devices provide a sense of hermetic enclosure and bodily engagement with their scenes (realistic or often fantastical).
Their images are frameless and thus immeasurable (the lack of a sense of proportion provokes a feeling of disorientation); image apparatus and the image itself converge here into an almost inseparable total unity: the field of vision is filled, and the medium becomes invisible (Grau 198-202; 248-255). Yet the constructed image is even more frameless and more peculiarly ‘mental’ in environments created on the basis of objectless or "immaterial" media, like light or sound; or in installations prioritizing haptic sensation and in responsive architectures, i.e. environments that transform physically in reaction to their inhabitants. Examples include works by Olafur Eliasson that are centered around the issues of conscious perception and employ various optical and other apparata (mirrors, curved surfaces, coloured glass, water systems) to shift the habitual perspective and make one conscious of the subtle changes in the environment depending on one's position in space (there have been instances of spectators in Eliasson's installations falling down after trying to lean against an apparent wall that turned out to be a mere optical construct). Figure 1: Olafur Eliasson, Take Your Time, 2008. © Olafur Eliasson Studio. In his classic H2OExpo project for Delta Expo in 1997, the Dutch architect Lars Spuybroek experimented with the perception of instability. There is no horizontal surface in the pavilion; floors, composed of interconnected elliptical volumes, transform into walls and walls into ceilings, promoting a sense of fluidity and making people respond by falling, leaning, tilting and "experiencing the vector of one’s own weight, and becoming sensitized to the effects of gravity" (Schwartzman 63). Along the way, specially installed sensors detect the behaviour of the ‘walker’ and send signals to the system to contribute further to the agenda of imbalance and confusion by changing light, image projection, and sound. Figure 2: Lars Spuybroek, H2OExpo, 1994-1997.
© NOX/ Lars Spuybroek. Philip Beesley’s Hylozoic Ground (2010) is also a responsive environment, filled by a dense organic network of delicate illuminated acrylic tendrils that can extend out to touch the visitor, triggering an uncanny mixture of delight and discomfort. The motif of pulsating movement was inspired by fluctuations in coral reefs and recreated via a system of precise sensors and microprocessors. This reference to an unfamiliar and unpredictable natural environment, which often makes us feel cautious and ultra-attentive, is a reminder of our innate ability of proprioception (a deeply ingrained survival instinct) and its potential for a more nuanced, intimate, empathic and bodily rooted communication. Figure 3: Philip Beesley, Hylozoic Ground, 2010. © Philip Beesley Architect Inc. Works of this kind stimulate awareness of both the environment and one's own response to it. Inviting participants to actively engage with the space, they evoke reactions of self-reflexivity, i.e. the self becomes the object of its own exploration and (potentially) transformation. Another strategy for revealing the processes of the "body without image" is representing various kinds of bio-data, bodily affective reactions to certain stimuli. Biosignal monitoring technologies most often employed include EEG (Electroencephalogram), EMG (Electromyogram), GSR (Galvanic Skin Response), ECG (Electrocardiogram), HRV (Heart Rate Variability) and others. Previously available only in medical settings and research labs, many types of sensors (bio and environmental) are now becoming increasingly available (bio-enabled products ranging from cardio watches—an instance of the "quantified self" trend—to brain wave-controlled video games). As representatives of the DIY makers community put it: "By monitoring some phenomena (biofeedback) you can train yourself to modulate them, possibly improving your emotional state.
Biosensing lets you interact more naturally with digital systems, creating cyborg-like extensions of your body that overcome disabilities or provide new abilities. You can also share your bio-signals, if you choose, to participate in new forms of communication" (Montgomery). What do these technologies offer besides a more accurate understanding of unconscious and invisible signals? The critical question in relation to biofeedback data is about the adequacy of the transference of the initial signal, about the "new" brought by the medium, as well as the ontological status of the resulting representation. These data are reflections of something real, yet themselves have a different weight, also providing the ground for all sorts of simulative methods and the creation of mixed realities. External representations, unlike internal ones, are often attributed a prosthetic nature and treated as extensions of existing skills. Besides serving their direct purpose (for instance, maps give a detailed picture of a distant location), these extensions provide certain psychological effects, such as disorientation, displacement, a shift in the sense of self, and enhancement of the sense of presence. Artistic experiments with bio-data started in the 1960s, most famously employing the method of sonification. Among the pioneers were the composers Alvin Lucier, Richard Teitelbaum, David Rosenboom, Erkki Kurenniemi, Pierre Henry, and others. Today's versions of biophysical performance may include not only acoustic, but also visual interpretation, as well as subtle narrative scenarios. An example is Marco Donnarumma's Hypo Chrysos, a piece that translates visceral strain into sound and moving images. The title refers to a type of punishing trial in one of the circles of hell in Dante's Divine Comedy: the eternal task of carrying heavy rocks is imitated by the artist-performer, while the audience can feel the bodily tension enhanced by sound and imagery.
The state of the inner body is, thus, amplified, or augmented. The sense of proprioception experienced by the performer is translated into media perceivable by others. In this externalized form it can also be shared, i.e. released into a space of inter-subjectivity, where it receives other, collective qualities and is not perceived negatively, in terms of pressure. Figure 4: Marco Donnarumma, Hypo Chrysos, 2011. © Marco Donnarumma. Another example is the installation Telephone Rewired by the artist-neuroscientist Sean Montgomery. Brainwave signals are measured from each visitor upon entrance to the installation site. These individual data then become part of the collective archive of the brainwaves of all the participants. In the second room, the viewer is engulfed by pulsing light and sound that mimic the endogenous brain waveforms of previous viewers. As in the experience of Donnarumma's performance, this process encourages tuning in to the inner state of the other and finding resonating states in one's own body. It becomes a tool for self-exploration, self-knowledge, and self-control, as well as for developing skills of collective being, of shared body-mind topologies. Synchronization of the mental and bodily states of multiple people serves here a broader and deeper goal of training collaborative and empathic abilities. An immersive experience, it triggers deep embodied neural circuits, reaching towards the most authentic reactions not mediated by conscious procedures and judgment. Figure 5: Sean Montgomery, Telephone Rewired, 2013. © Sean Montgomery.

Conclusion

The potential of biofeedback as a strategy for art projects is a rich area that artists have only begun to explore. The layer of the imaginary and the fictional (which makes art special and different from, for instance, science) can add a critical dimension to understanding the processes of augmentation and mediation.
As the described examples demonstrate, art is an investigative journey that can be engaging, surprising, and awakening towards the more subtle and acute forms of thinking and feeling. This astuteness and percipience are especially needed as media and technologies penetrate and affect our very abilities to apprehend reality. We need new tools to make independent and individual judgments. The sense of proprioception establishes a productive challenge not only for science, but also for the arts, inviting a search for new mechanisms of representing the un-presentable and making shareable and communicable what is, by definition, individual, fluid, and ungraspable. Collaborative cognition emerging from the augmentation of proprioception that is enabled by biofeedback technologies holds distinct promise for exploration of not only subjective, but also inter-subjective states and aesthetic strategies of inducing them.

References

Beesley, Philip. Hylozoic Ground. 2010. Venice Biennale, Venice.

Clark, Andy, and David J. Chalmers. “The Extended Mind.” Analysis 58.1 (1998): 7-19.

Donnarumma, Marco. Hypo Chrysos: Action Art for Vexed Body and Biophysical Media. 2011. Xth Sense Biosensing Wearable Technology. MADATAC Festival, Madrid.

Eliasson, Olafur. Take Your Time. 2008. P.S.1 Contemporary Art Centre; Museum of Modern Art, New York.

Grau, Oliver. Virtual Art: From Illusion to Immersion. Cambridge, Mass.: MIT Press, 2003.

Massumi, Brian. Parables of the Virtual: Movement, Affect, Sensation. Durham: Duke University Press, 2002.

Montero, Barbara. "Proprioception as an Aesthetic Sense." Journal of Aesthetics and Art Criticism 64.2 (2006): 231-242.

Montgomery, Sean, and Ira Laefsky. "Biosensing: Track Your Body's Signals and Brain Waves and Use Them to Control Things." Make 26. 1 Oct. 2013 ‹http://www.make-digital.com/make/vol26?pg=104#pg104›.

Sacks, Oliver. "The Disembodied Lady." The Man Who Mistook His Wife for a Hat and Other Clinical Tales. Philippines: Summit Books, 1985.

Schwartzman, Madeline. See Yourself Sensing: Redefining Human Perception. London: Black Dog Publishing, 2011.

Spuybroek, Lars. Waterland. 1994-1997. H2O Expo, Zeeland, NL.
42
Goggin, Gerard. "SMS Riot: Transmitting Race on a Sydney Beach, December 2005." M/C Journal 9, no. 1 (March 1, 2006). http://dx.doi.org/10.5204/mcj.2582.
Full text
Abstract:
My message is this in regard to SMS messages and swarming crowds; this is ludicrous behaviour; it is unAustralian. We all share this wonderful country. (NSW Police Assistant Commissioner Mark Goodwin, quoted in Kennedy)

The cops hate and fear the swarming packs of Lebanese who respond when some of their numbers are confronted, mobilising quickly via mobile phones and showing open contempt for Australian law. All this is the real world, as distinct from the world preferred by ideological academics who talk about “moral panic” and the oppression of Muslims. They will see only Australian racism as the problem. (Sheehan)

The Politics of Transmission

On 11 December 2005, as Sydney was settling into early summer haze, there was a race riot on the popular Cronulla beach in the city’s southern suburbs. Hundreds of people, young men especially, gathered for a weekend protest. Their target and pretext were visitors from the culturally diverse suburbs to the west, and the need to defend their women and beaches in the face of such unwelcome incursions and behaviours. In the ensuing days, there were violent raids and assaults criss-crossing back and forth across Sydney’s beaches and suburbs, involving almost farcical yet deadly earnest efforts to identify, respectively, people of “anglo” or “Middle Eastern” appearance (often specifically “Lebanese”) and to threaten or bash them. At the very heart of this state of siege and the fear, outrage, and sadness that gripped those living in Sydney were the politics of transmission. The spark that set off this conflagration was widely believed to have been the transmission of racist and violent “calls to arms” via mobile text messages. Predictably, perhaps, media outlets sought out experts on text messaging and cell phone culture for commentary, including myself, and most mainstream media appeared interested in portraying a fascination with texting and reinforcing its pivotal role in the riots.
In participating in media interviews, I found myself torn between wishing to attest to the significance and importance of cell phone culture and texting, on the one hand (or thumb perhaps), while being extremely sceptical about its alleged power in shaping these unfolding events, on the other — not to mention being disturbed about the ethical implications of what had unfolded. In this article, I wish to discuss the subject of transmission and the power of mobile texting culture, something that attracted much attention elsewhere — and to which the Sydney riots offer a fascinating and instructive lesson. My argument runs like this. Mobile phone culture, especially texting, has emerged over the past decade, and has played a central role in communicative and cultural practice in many countries and contexts, as scholars have shown (Glotz and Bertschi; Harper, Palen and Taylor). Among other features, texting often plays a significant, if not decisive, role in co-ordinated as well as spontaneous social and political organization and networks, if not, on occasion, in revolution. However, it is important not to over-play the role, significance and force of such texting culture in the exercise of power, or the formation of collective action and identities (whether mobs, crowds, masses, movements, or multitudes). I think texting has been figured in such a hyperbolic and technologically determinist way, especially, and ironically, through how it has been represented in other media (print, television, radio, and online). The difficulty then is to identify the precise contribution of mobile texting in organized and disorganized social networks, without the antinomies conferred alternatively by dystopian treatments (such as moral panic) or utopian ones (such as the technological sublime) — something which I shall try to elucidate in what follows.
On the Beach Again Largely caught unawares and initially slow to respond, the New South Wales state government responded with a massive show of force and repression. 2005 had been marked by the state and Federal enactment of draconian terror laws. Now here was an opportunity for the government to demonstrate the worth of the instruments and rationales for suppression of liberties, to secure public order against threats of a more (un)civil than martial order. Outflanking the opposition party on law-and-order rhetoric once again, the government immediately formulated new laws to curtail the rights of accused persons and offenders (Brown). The police “locked” down whole suburbs — first Cronulla, then others — and made a show of policing all beaches north and south (Sydney Morning Herald). The race riots were widely reported in the international press, and, not for the first time (especially since the recent Redfern and Macquarie Fields riots), the city’s self-image as a cosmopolitan, multicultural nation (or in Australian Prime Minister John Howard’s prim and loaded terms, a nation “relaxed and comfortable”) looked like a mirage. Debate raged on why the riots occurred, how harmony could be restored and what the events signified for questions of race and identity — the latter most narrowly construed in the Prime Minister’s insistence that the riots did not reflect underlying racism in Australia (Dodson, Timms and Creagh). There were suggestions that the unrest was rather at base about the contradictions and violence of masculinity, some two-odd decades after Puberty Blues — the famous account of teenage girls growing up on the (Cronulla) Shire beaches. Journalists agonized about whether the media amounted to reporter or amplifier of tensions. 
In the lead-up to the riots, at their height, and in their wake, there was much emphasis on the role mobile text messages played in creating the riots and sustaining the subsequent atmosphere of violence and racial tension (The Australian; Overington and Warne-Smith). Not only were text messages circulating in the Sydney area, but in other states as well (Daily Telegraph). The volume of such text messages and emails also increased in the wake of the riot (certainly I received one personally from a phone number I did not recognise). New messages were sent to exhort Lebanese-Australians and others to fight back. Those decrying racism, such as the organizers of a rally, pointedly circulated text messages, hoping to spread peace. Media commentators, police, government officials, and many others held such text messages directly and centrally responsible for organizing the riot and for the violent scuffles that followed: The text message hate mail that inspired 5000 people to attend the rally at Cronulla 10 days ago demonstrated to the police the power of the medium. The retaliation that followed, when gangs marauded through Maroubra and Cronulla, was also co-ordinated by text messaging (Davies). It is rioting for a tech-savvy generation. Mobile phones are providing the call to arms for the tribes in the race war dividing Sydney. More than 5000 people turn up to Cronulla on Sunday … many were drawn to the rally, which turned into a mob, by text messages on their mobiles (Hayes and Kearney). Such accounts were crucial to the international framing of the events as this report from The Times in London illustrates: In the days leading up to the riot racist text messages had apparently been circulating calling upon concerned “white” Australians to rally at Cronulla to defend their beach and women. 
Following the attacks on the volunteer lifeguards, a mobile telephone text campaign started, backed up by frenzied discussions on weblogs, calling on Cronulla locals to rally to protect their beach. In response, a text campaign urged youths from western Sydney to be at Cronulla on Sunday to protect their friends (Maynard). There were calls upon the mobile companies to intercept and ban such messages, with industry spokespeople pointing out that text messages were usually only held for twenty-four hours and were in many ways more difficult to intercept than phone calls were to tap (Burke and Cubby). Mobs and Messages I think there are many reasons to suggest that the transmission of text messages did constitute a moral panic (what I’ve called elsewhere a “mobile panic”; see Goggin), pace columnist Paul Sheehan. Notably the wayward texting drew a direct and immediate response from the state government, with legislative changes that included provisions allowing the confiscation of cell phones and outlawing the sending, receipt or keeping of racist or inflammatory text messages. For some days police proceeded to stop cars and board buses and demand to inspect mobiles, checking and reading text messages, arresting at least one person for being responsible for transmitting banned text messages. However, there is another important set of ideas adduced by commentators to explain how people came together to riot in Sydney, taking their cue from Howard Rheingold’s 2002 book Smart Mobs, a widely discussed and prophetic text on social revolution and new technologies. Rheingold sees text messaging as the harbinger of such new, powerful forms of collectivity, studying emergent uses around the world. A prime example he uses to illustrate the “power of the mobile many” is the celebrated overthrow of President Joseph Estrada of the Philippines in January 2001: President Joseph Estrada of the Philippines became the first head of state in history to lose power to a smart mob. 
More than 1 million Manila residents, mobilized and coordinated by waves of text messages, assembled … Estrada fell. The legend of “Generation Txt” was born (Rheingold 157-58). Rheingold is careful to emphasize the social as much as technical nature of this revolution, yet still sees such developments leading to “smart mobs”. As his earlier, prescient book Virtual Community (1993) did for the Internet, Smart Mobs has compellingly fused and circulated a set of ideas about cell phones and the pervasive, wearable and mobile technologies that are their successors. The received view of the overthrow of the Estrada government is summed up in a remark attributed to Estrada himself: “I was ousted by a coup d’text” (Pertierra et al. ch. 6). The text-toppling of Estrada is typically attributed to “Generation Txt”, underlining the power of text messaging and the new social category which marks it, and has now passed into myth. What is less well-known is that the overriding role of the cell phone in the Estrada overthrow has been challenged. In the most detailed study of text messaging and subjectivity in the Philippines, which reviewed accounts of the events of the Estrada overthrow, as well as conducting interviews with participants, Pertierra et al. discern in EDSA2 a “utopian vision of the mobile phone that is characteristic of ‘discourses of sublime technology’”: It focuses squarely on the mobile phone, and ignores the people who used it … the technology is said to possess a mysterious force, called “Text Power” ... it is the technology that does things — makes things happen — not the people who use it. (Pertierra et al. ch. 6) Given the recrudescence of the technological sublime in digital media (on which see Mosco) the detailed examination of precise details and forms of agency and coordination using cell phones is most instructive. Pertierra et al. 
confirm that the cell phone did play an important role in EDSA2 (the term given to the events surrounding the downfall of Estrada). That role, however, was not the one for which it has usually been praised in the media since the event — namely, that of crowd-drawer par excellence … less than half of our survey respondents who took part in People Power 2 noted that text messaging influenced them to go. If people did attend, it was because they were persuaded to by an ensemble of other reasons … (2002: ch. 6) Instead, they argue, the significance of the cell phone lay firstly, in the way it helped join people who disapproved of Pres. Estrada in a network of complex connectivity … Secondly, the mobile phone was instrumental as an organizational device … In the hands of activists and powerbrokers from politics, the military, business groups and civil society, the mobile phone becomes a “potent communications tool” … (Pertierra et al. 2002: ch. 6) What this revisionist account of the Estrada coup underscores is that careful research and analysis is required to understand how SMS is used and what it signifies. Indeed it is worth going further to step back from either the celebratory or minatory discourses on the cell phone and its powerful effects, and reframe this set of events as very much to do with the mutual construction of society and technology, in which culture is intimately involved. This involves placing both the technology of text messaging and the social and political forces manifested in this uprising in a much wider setting. For instance, in his account of the Estrada crisis Vicente L. Rafael terms the tropes of text messaging and activism evident in the discourses surrounding it as: a set of telecommunicative fantasies among middle-class Filipinos … [that] reveal certain pervasive beliefs of the middle classes … in the power of communication technologies to transmit messages at a distance and in their own ability to possess that power (Rafael 399). 
For Rafael, rather than possessing intrinsic politics in its own right, text messaging here is about a “media politics (understood in both senses of the phrase: the politics of media systems, but also the inescapable mediation of the political) [that] reveal the unstable workings of Filipino middle-class sentiments” (400). “Little Square of Light” Doubtless there are emergent cultural and social forms created in conjunction with new technologies, which unfreeze and open up (for a time) social relations. As my discussion of the Estrada “coup d’text” shows, however, the dynamics of media, politics and technology in any revolution or riot need to be carefully traced. A full discussion of mobile media and the Sydney uprising will need to wait for another occasion. However, it is worth noting that the text messages in question, to which the initial riot had been attributed, were actually read out on one of the country’s highest-rating and most influential talk-radio programs. The contents of such messages had also been detailed in print media, especially tabloids, and been widely discussed (McLellan; Marr). What remains unknown and unclear, however, is the actual use of text messages and cell phones in the conceiving, co-ordination, and improvisational dynamics of the riots, and the affective, cultural processing of what occurred. Little retrospective interpretation at all has emerged in the months since the riots, but it certainly felt as if the police and state’s over-reaction, and the arrival of the traditionally hot and lethargic Christmas, combined with the underlying structures of power and feeling to achieve the reinstitution of calm, or rather perhaps the habitual, much less visible, expression of whiteness as usual. The policing of the crisis had certainly been fuelled by the mobile panic, but setting law enforcement the task of bringing those text messages to book was much like asking them to catch the wind. 
For analysts, as well as police, the novelty and salience of texting also holds a certain lure. Yet in concentrating on the deadly power of the cell phone to conjure up a howling or smart mob, or in the fascination with the new modes of transmission of mobile devices, it is important to give credit to the formidable, implacable role of media and cultural representations more generally, in all this, as they are transmitted, received, interpreted and circulated through old as well as new modes, channels and technologies. References The Australian. “SMS Message Goes Out: Let’s March for Racial Tolerance.” The Australian. 17 September, 2005. 6. Brown, M. “Powers Tested in the Text”. Sydney Morning Herald. 20 December, 2005. 7. Burke, K. and Cubby, B. “Police Track Text Message Senders”. Sydney Morning Herald, 23-25 December, 2005. 7. Daily Telegraph. “Police Intercept Interstate Riot SMS — Race Riot: Flames of Fear.” Daily Telegraph. 15 December, 2005. 5. Davies, A. “Flying Bats Rang Alarm”. Sydney Morning Herald. 21 December, 2005. 1, 5. Dodson, L., Timms, A. and Creagh, S. “Tourism Starts Counting the Cost of Race Riots”. Sydney Morning Herald. 21 December, 2005. 1. Goggin, G. Cell Phone Culture: Mobile Technology in Everyday Life. London: Routledge, 2006. In press. Glotz, P., and Bertschi, S., eds. Thumb Culture: Social Trends and Mobile Phone Use. Bielefeld: Transcript Verlag, 2005. Harper, R., Palen, L. and Taylor, A., eds. The Inside Text: Social, Cultural and Design Perspectives on SMS. Dordrecht: Springer, 2005. Hayes, S. and Kearney, S. “Call to Arms Transmitted by Text”. Sydney Morning Herald. 13 December, 2005. 4. Kennedy, L. “Police Act Swiftly to Curb Attacks”. Sydney Morning Herald. 13 December, 2005. 6. Maynard, R. “Battle on Beach as Mob Vows to Defend ‘Aussie Way of Life.’ ” The Times. 12 December 2005. 29. Marr, D. “One-Way Radio Plays by Its Own Rules.” Sydney Morning Herald. 13 December, 2005. 6. McLellan, A. 
“Solid Reportage or Fanning the Flames?” The Australian. 15 December, 2005. 16. Mosco, V. The Digital Sublime: Myth, Power, and Cyberspace. Cambridge, MA: MIT Press, 2004. Overington, C. and Warne-Smith, D. “Countdown to Conflict”. The Australian. 17 December, 2005. 17, 20. Pertierra, R., E.F. Ugarte, A. Pingol, J. Hernandez, and N.L. Dacanay. Txt-ing Selves: Cellphones and Philippine Modernity. Manila: De La Salle University Press, 2002. 1 January 2006 <http://www.finlandembassy.ph/texting1.htm>. Rafael, V. L. “The Cell Phone and the Crowd: Messianic Politics in the Contemporary Philippines.” Public Culture 15 (2003): 399-425. Rheingold, H. Smart Mobs: The Next Social Revolution. Cambridge, MA: Perseus, 2002. Sheehan, P. “Nasty Reality Surfs In as Ugly Tribes Collide”. Sydney Morning Herald. 12 December, 2005. 13. Sydney Morning Herald. “Beach Wars 1: After Lockdown”. Editorial. Sydney Morning Herald. 20 December, 2005. 12. Citation reference for this article MLA Style Goggin, Gerard. "SMS Riot: Transmitting Race on a Sydney Beach, December 2005." M/C Journal 9.1 (2006). <http://journal.media-culture.org.au/0603/02-goggin.php>. APA Style Goggin, G. (Mar. 2006). "SMS Riot: Transmitting Race on a Sydney Beach, December 2005," M/C Journal, 9(1). Retrieved from <http://journal.media-culture.org.au/0603/02-goggin.php>.
43
Braun, Carol-Ann, and Annie Gentes. "Dialogue: A Hyper-Link to Multimedia Content." M/C Journal 7, no. 3 (July 1, 2004). http://dx.doi.org/10.5204/mcj.2361.
Full text
Abstract:
Background information Sandscript was programmed with the web application « Tchat-scene », created by Carol-Ann Braun and the computer services company Timsoft. It organizes a database of raw material into compositions and sequences, allowing larger episodes to be built. Multimedia resources are thus attributed to frames surrounding the chat space or to the chat space itself, thus “augmented” to include pre-written texts and graphics. Sandscript works best on a PC, with Internet Explorer. On Mac, use OS9 and Internet Explorer. You will have to download a chat application for the site to function. Coded conversation General opinion would have it that chat space is a conversational space, facilitating rather than complicating communication. Writing in a chat space is very much influenced by the current ideological stance which sees collaborative spaces as places to make friends, speak freely, flip from one “channel” to another, link with a simple click into related themes, etc. Moreover, chat users tend to think of the chat screen in terms of a white page, an essentially neutral environment. A quick analysis of chat practices reveals a different scenario: chat spaces are highly coded typographical writing spaces, quick to exclude those who don’t abide by the technical and procedural constraints associated with computer reading/writing tools (Despret-Lonné, Gentès). Chatters seek to belong to a “community;” conversely, every chat has “codes” which restrict its membership to the like-minded. The patterns of exchange characteristic of chats are phatic (Jakobson), and their primary purpose is to get and maintain a social link. It is no surprise then that chatters should emphasize two skills: one related to rhetorical ingenuity, the other to dexterity and speed of writing. To belong, one first has to grasp the banter, then very quickly manage the rules and rituals of the group, then answer by mastering the intricacies of the keyboard and its shortcuts. 
Speed is compulsory if your answers are to follow the communal chat; as a result, sentences tend to be very short, truncated bits, dispatched in a continuous flow. Sandscript attempts to play with the limits of this often hermetic writing process (and the underlying questions of affinity, participation and reciprocity). It opens up a social space to an artistic and fictional space, each with rules of its own. Hyper-linked dialogue Sandscript is not just about people chatting, it is also about influencing the course of these exchanges. The site weaves pre-scripted poetic content into the spontaneous, real-time dialogue of chatters. Smileys and the plethora of abbreviations, punctuations and icons characteristic of chat rooms are mixed in with typographical games that develop the idea of text as image and text as sound — using Morse Code to make text resonate, CB code to evoke its spoken use, and graphic elements within the chat space itself to oppose keyboard text and handwritten graffiti. The web site encourages chatters to broaden the scope of their “net-speak,” and take a playfully conscious stance towards their own familiar practices. Actually, most of the writing in this web-site is buried in the database. Two hundred or so “key words” — expressions typical of phatic exchanges, in addition to other words linked to the idea of sandstorms and archeology — lie dormant, inactive and unseen until a chatter inadvertently types one in. These keywords bridge the gap between spontaneous exchange and multimedia content: if someone types in “hi,” an image of a face, half buried in sand, pops up in a floating window and welcomes you, silently; if someone types in the word “wind,” a typewritten “wind” floats out into the graphic environment and oscillates between the left and right edges of the frames; typing the word “no” “magically” triggers the intervention of an anarchist who says something provocative*. 
*Sandscript works like a game of ping-pong among chatters who are intermittently surprised by a comment “out of nowhere.” The chat space, augmented by a database, forms an ever-evolving, fluid “back-bone” around which artistic content is articulated. Present in the form of programs that participate in their stead, artists share the spotlight, adding another level of mediation to a collective writing process. Individual and collective identities Not only does Sandscript accentuate the multimedia aspects of typed chat dialogues, it also seeks to give a “shape” to the community of assembled chatters. This shape is musical: along with typing in a nickname of her choice, each chatter is assigned a sound. Like crickets in a field, each sound adds to the next to create a collective presence, modified with every new arrival and departure. For example, if your nick is “yoyo-mama,” your presence will be associated with a low, electronic purr. When “pillX” shows up, his nick will be associated with a sharp violin chord. When “mojo” pitches in, she adds her sound profile to the lot, and the overall environment changes again. Chatters can’t hear the clatter of each other’s keyboards, but they hear the different rhythms of their musical identities. The repeated pings of people present in the same “scape” reinforce the idea of community in a world where everything typed is swept away by the next bit of text, soon to be pushed off-screen in turn. The nature of this orchestrated collective presence is determined by the artists and their programs, not by the chatters themselves, whose freedom is limited to switching from one nick to another to test the various sounds associated with each. Here, identity is both given and built, both individual and collective, both a matter of choice and pre-defined rules. (Goffman) Real or fictitious characters The authors introduce simulated bits of dialogue within the flow of written conversation. 
Some of these fake dialogues simply echo whatever keywords chatters might type. Others, however, point elsewhere, suggesting a hyper-link to a more elaborate fictionalized drama among “characters.” Sandscript also hides a plot. Once chatters realize that there are strange goings on in their midst, they become caught in the shifting sands of this web site’s inherent duality. They can completely lose their footing: not only do they have to position themselves in relation to other, real people (however disguised…) but they also have to find their bearings in the midst of a database of fake interlocutors. Not only are they expected to “write” in order to belong, they are also expected to unearth content in order to be “in the know.” A hybridized writing is required to maintain this ambivalence in place. Sandscript’s fake dialogue straddles two worlds: it melds in with the real-time small talk of chatters all while pointing to elements in a fictional narrative. For example, “mojo” will say: “silting up here”, and “zano” will answer “10-4, what now?” These two characters could be banal chatters, inviting others to join in their sarcastic banter… But they are also specifically referring to incidents in their fictional world. The “chat code” not only addresses its audience, it implies that something else is going on that merits a “click” or a question. “Clicking” at this juncture means more than just quickly responding to what another chatter might have typed. It implies stopping the banter and delving into the details of a character developed at greater length elsewhere. Indeed, in Sandscript, each fictional dialogue is linked to a blog that reinforces each character’s personality traits and provides insights into the web-site’s wind-swept, self-erasing world. Interestingly enough, Sandscript then reverses this movement towards a closed fictional space by having each character not only write about himself, but relate her immediate preoccupations to the larger world. 
Each blog entry mentions a character’s favorite URL at that particular moment. One character might evoke a web site about romantic poetry, another one on anarchist political theory, a third a web-site on Morse code, etc… Chatters click on the URL and open up an entirely new web-site, directly related to the questions being discussed in Sandscript. Thus, each character represents himself as well as a point of view on the larger world of the web. Fiction opens onto a “real” slice of cyber-space and the work of other authors and programmers. Sandscript mixes up different types of on-line identities, emphasizing that representations of people on the web are neither “true” nor “false.” They are simply artificial and staged, simple facets of identities which shift in style and rhetoric depending on the platform available to them. Again, identity is both closed by our social integration and opened to singular “play.” Conclusion: looking at and looking through One could argue that since the futurists staged their “electrical theater” in the streets of Turin close to a hundred years ago, artists have worked on the blurry edge between recognizable formal structures and their dissolution into life itself. And after a century of avant-gardes, self-referential appropriations of mass media are also second nature. Juxtaposing one “use” along another reveals how different frames of reference include or exclude each other in unexpected ways. For the past twenty years much artwork has fallen in between genres, most recently in the realm of what Nicolas Bourriaud calls “relational aesthetics.” Such work is designed not only to draw attention to itself but also to the spectator’s relation to it and the broader artistic context which infuses the work with additional meaning. By having dialogue serve as a hyper-link to multimedia content, Sandscript, however, does more. 
Even though some changes in the web site are pre-programmed to occur automatically, not much happens without the chatters, who occupy center-stage and trigger the appearance of a latent content. Chatters are the driving force: they are the ones who make text appear and flow off-screen, who explore links, who exchange information, and who decide what pops up and doesn’t. Here, the art “object” reveals its different facets around a multi-layered, on-going conversation, subjected to the “flux” of an un-formulated present. Secondly, Sandscript demands that we constantly vary our posture towards the work: getting involved in conversation to look through the device, all while taking some distance to consider the object and look at its content and artistic “mediations” (Bolter and Grusin; Manovich). This tension is at the heart of Sandscript, which insists on being both a communication device “transparent” to its user, and an artistic device that imposes an opaque and reflexive quality. The former is supposed to disappear behind its task; the latter attracts the viewer’s attention over and over again, ever open to new interpretations. This approach is not without pitfalls. One Sandscript chatter wondered if the authors of the web-site were not disappointed when conversation took the upper hand, and chatters ignored the graphics. On the other hand, the web site’s explicit status as a chat space was quickly compromised when users stopped being interested in each other and turned to explore the different layers hidden within the interface. In the end, Sandscript chatters are not bound to any single one of these modes. They can experience one and then the other, and — why not — both simultaneously. 
This hybrid posture brings to mind Herman’s metaphor of a door that cannot be closed entirely: “la porte joue” — the door “gives.” It is not perfectly fitted and closed — there is room for “play.” Such openness requires that the artistic device provide two seemingly contradictory ways of relating to it: a desire to communicate seamlessly all while being fascinated by every seam in the representational space projected on-screen. Sandscript is supposed to “run” and “not run” at the same time; it exemplifies the technico-semiotic logic of speed and resists it full stop. Here, openness is not ontological; it is experiential, shifting. About the Authors Carol-Ann Braun is a multimedia artist at the Ecole Nationale Superieure des Telecommunications, Paris, France. Email: carol-ann.braun@wanadoo.fr Annie Gentes is a media theorist and professor at the Ecole Nationale Superieure des Telecommunications, Paris, France. Email: Annie.Gentes@enst.fr Works Cited Adamowicz, Elza. Surrealist Collage in Text and Image, Dissecting the Exquisite Corpse. Cambridge: Cambridge University Press, 1998. Augé, Marc. Non-lieux, Introduction à une Anthropologie de la Surmodernité. Paris: Seuil, 1992. Bolter, Jay David and Richard Grusin. Remediation, Understanding New Media. Cambridge: MIT Press, 2000. Bourriaud, Nicolas. Esthétique Relationnelle. Paris: Les Presses du Réel, 1998. Despret-Lonnet, Marie and Annie Gentes. Lire, Ecrire, Réécrire. Paris: Bibliothèque Centre Pompidou, 2003. Goffman, Erving. Interaction Ritual. New York: Pantheon, 1967. Habermas, Jürgen. Théorie de l’Agir Communicationnel, Vol. 1. Paris: Fayard, 1987. Herman, Jacques. “Jeux et Rationalité.” Encyclopedia Universalis, 1997. Jakobson, Roman. “Linguistics and Poetics: Closing Statements.” In Thomas Sebeok, Style in Language. Cambridge: MIT Press, 1960. Latzko-Toth, Guillaume. “L’Internet Relay Chat, Un Cas Exemplaire de Dispositif Socio-technique.” In Composite. Montreal: Université du Québec à Montréal, 2001. 
Lyotard, Jean-François. La Condition Post-Moderne. Paris: les Editions de Minuit, 1979. Manovich, Lev. The Language of New Media. Cambridge: MIT Press, 2001. Michaud, Yves. L’Art à l’Etat Gazeux. Essai sur le Triomphe de l’Esthétique, Les essais. Paris: Stock, 2003. Citation reference for this article MLA Style Braun, Carol-Ann & Gentes, Annie. "Dialogue: A Hyper-Link to Multimedia Content." M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0406/05_Braun-Gentes.php>. APA Style Braun, C. & Gentes, A. (2004, July 1). Dialogue: A hyper-link to multimedia content. M/C: A Journal of Media and Culture, 7, <http://www.media-culture.org.au/0406/05_Braun-Gentes.php>
44
Muntean, Nick, and Anne Helen Petersen. "Celebrity Twitter: Strategies of Intrusion and Disclosure in the Age of Technoculture." M/C Journal 12, no. 5 (December 13, 2009). http://dx.doi.org/10.5204/mcj.194.
Full text
Abstract:
Being a celebrity sure ain’t what it used to be. Or, perhaps more accurately, the process of maintaining a stable star persona isn’t what it used to be. With the rise of new media technologies—including digital photography and video production, gossip blogging, social networking sites, and streaming video—there has been a rapid proliferation of voices which serve to articulate stars’ personae. This panoply of sanctioned and unsanctioned discourses has brought the coherence and stability of the star’s image into crisis, with an ever more heightened loop forming recursively between celebrity gossip and scandals, on the one hand, and, on the other, new media-enabled speculation and commentary about these scandals and gossip-pieces. Of course, while no subject has a single meaning, Hollywood has historically expended great energy and resources to perpetuate the myth that the star’s image is univocal. In the present moment, however, studios’ traditional methods for discursive control have faltered, such that celebrities have found it necessary to take matters into their own hands, using new media technologies, particularly Twitter, in an attempt to stabilise that most vital currency of their trade, their professional/public persona. In order to fully appreciate the significance of this new mode of publicity management, and its larger implications for contemporary subjectivity writ large, we must first come to understand the history of Hollywood’s approach to celebrity publicity and image management. A Brief History of Hollywood Publicity The origins of this effort are nearly as old as Hollywood itself, for, as Richard DeCordova explains, the celebrity scandals of the 1920s threatened to disrupt the economic vitality of the incipient industry such that strict, centralised image control appeared as a necessary imperative to maintain a consistently reliable product. 
The Fatty Arbuckle murder trial was scandalous not only for its subject matter (a murder suffused with illicit and shadowy sexual innuendo) but also because the event revealed that stars, despite their mediated larger-than-life images, were not only as human as the rest of us, but that, in fact, they were capable of profoundly inhuman acts. The scandal, then, was not so much Arbuckle’s crime, but the negative pall it cast over the Hollywood mythos of glamour and grace. The studios quickly organised an industry-wide regulatory agency (the MPPDA) to counter potentially damaging rhetoric and ward off government intervention. Censorship codes and morality clauses were combined with well-funded publicity departments in an effort that successfully shifted the locus of the star’s extra-filmic discursive construction from private acts—which could betray their screen image—to information which served to extend and enhance the star’s pre-existing persona. In this way, the sanctioned celebrity knowledge sphere became co-extensive with that of commercial culture itself; the star became meaningful only through knowledge of how she spent her leisure time and the type of make-up she used. The star’s identity was not found via unsanctioned intrusion, but through studio-sanctioned disclosure, made available in the form of gossip columns, newsreels, and fan magazines. This period of relative stability for the star image was ultimately quite brief, however, as the collapse of the studio system in the late 1940s and the introduction of television brought about a radical, but gradual, reordering of the star's signifying potential. The studios no longer had the resources or incentive to tightly police star images—the classic age of stardom was over. 
During this period of change, an influx of alternative voices and publications filled the discursive void left by the demise of the studios’ regimented publicity efforts, with many of these new outlets reengaging older methods of intrusion to generate a regular rhythm of vendible information about the stars.

The first to exploit and capitalize on star image instability was Robert Harrison, whose Confidential Magazine became the leading gossip publication of the 1950s. Unlike its fan magazine rivals, which persisted in portraying the stars as morally upright and wholesome, Confidential pledged on the cover of each issue to “tell the facts and name the names,” revealing what had been theretofore “confidential.” In essence, through intrusion, Confidential reasserted scandal as the true core of the star, simultaneously instituting incursion and surveillance as the most direct avenue to the “kernel” of the celebrity subject, obtaining stories through associations with call girls, out-of-work starlets, and private eyes. As extra-textual discourses proliferated and fragmented, the contexts in which the public encountered the star changed as well. Theatre attendance dropped dramatically, and as the studios sold their film libraries to television, the stars, formerly available only on the big screen and in glamour shots, were now intercut with commercials, broadcast on grainy sets in the domestic space. The integrity—or at least the illusion of integrity—of the star image was forever compromised. As the parameters of renown continued to expand, film stars, formerly distinguished from all other performers, migrated to television. The landscape of stardom was re-contoured into the “celebrity sphere,” a space that includes television hosts, musicians, royals, and charismatic politicians.
The revamped celebrity “game” was complex, but still playable: with a powerful agent, a talented publicist, and a check on drinking, drug use, and extra-marital affairs, a star and his or her management team could negotiate a coherent image. Confidential was gone, The National Enquirer was muzzled by libel laws, and People and E.T.—both sheltered within larger media companies—toed the publicists’ line. There were few widely circulated outlets through which unauthorised voices could gain traction.

Old-School Stars and New Media Technologies: The Case of Tom Cruise

Yet with the relentless arrival of various new media technologies beginning in the 1980s and continuing through the present, maintaining tight celebrity image control began to require the services of a phalanx of publicists and handlers. Here, the example of Tom Cruise is instructive: for nearly twenty years, Cruise’s publicity was managed by Pat Kingsley, who exercised exacting control over the star’s image. With the help of seemingly diverse yet essentially similar starring roles, Cruise solidified his image as the cocky, charismatic boy-next-door.

The unified Cruise image was made possible by shutting down competing discourses through the relentless, comprehensive efforts of his management company; Kingsley’s staff fine-tuned Cruise’s acts of disclosure while simultaneously eliminating the potential for unplanned intrusions, neutralising any potential scandal at its source. Kingsley and her aides performed for Cruise all the functions of a studio publicity department from Hollywood’s Golden Age. Most importantly, Cruise was kept silent on the topic of his controversial religion, Scientology, lest it incite domestic and international backlash.
In interviews and off-the-cuff soundbites, Cruise was ostensibly disclosing his true self, and that self remained the dominant reading of what, and who, Cruise “was.” Yet in 2004, Cruise fired Kingsley and replaced her with his own sister (and fellow Scientologist), who had no prior experience in public relations. In essence, he exchanged a handler who understood how to shape star disclosure for one who did not. The events that followed have been widely rehearsed: Cruise avidly pursued Katie Holmes; Cruise jumped for joy on Oprah’s couch; Cruise denounced psychology during a heated debate with Matt Lauer on The Today Show. His attempt at disclosing this new, un-publicist-mediated self became scandalous in and of itself. Cruise’s dismissal of Kingsley, his unpopular (but not necessarily unwelcome) disclosures, and his own massively unchecked ego all played crucial roles in the fall of the Cruise image. While these stumbles might have caused some minor career turmoil in the past, the hyper-echoic, spastically recombinatory logic of the technoculture brought the speed and stakes of these missteps to a new level; one of the hallmarks of the postmodern condition has been not merely an increasing textual self-reflexivity, but a qualitatively new leap forward in inter-textual reflexivity, as well (Lyotard; Baudrillard). Indeed, the swift dismantling of Cruise’s long-established image is directly linked to the immediacy and speed of the Internet, digital photography, and the gossip blog, as the reflexivity of new media rendered the safe division between disclosure and intrusion untenable. His couch-jumping was turned into a dance remix and circulated on YouTube; Mission Impossible 3 boycotts were organised through a number of different Web forums; gossip bloggers speculated that Cruise had impregnated Holmes using the frozen sperm of Scientology founder L. Ron Hubbard.
In the past, Cruise simply filed defamation suits against print publications that would deign to sully his image. Yet the sheer number of sites and voices reproducing this new set of rumors made such a strategy untenable. Ultimately, intrusions into Cruise’s personal life, including the leak of videos intended solely for Scientology recruitment use, had far more traction than any sanctioned Cruise soundbite. Cruise’s image emerged as a hollowed husk of its former self; the sheer amount of material circulating rendered all attempts at P.R., including a Vanity Fair cover story and “reveal” of daughter Suri, ridiculous. His image was fragmented and re-collected into an altered, almost uncanny new iteration. Following the lackluster performance of Mission Impossible 3 and public condemnation by Paramount head Sumner Redstone, Cruise seemed almost pitiable.

The New Logic of Celebrity Image Management

Cruise’s travails are expressive of a deeper development which has occurred over the course of the last decade, as the massively proliferating new forms of celebrity discourse (e.g., paparazzi photos, mug shots, cell phone video) have further decentered any shiny, polished version of a star. With older forms of media increasingly reorganising themselves according to the aesthetics and logic of new media forms (e.g., CNN featuring regular segments in which it focuses its network cameras upon a computer screen displaying the CNN website), we are only more prone to appreciate “low media” forms of star discourse—reports from fans on discussion boards, photos taken on cell phones—as valid components of the celebrity image. People and E.T. still attract millions, but they are rapidly ceding control of the celebrity industry to their ugly, offensive stepbrothers: TMZ, Us Weekly, and dozens of gossip blogs.
Importantly, a publicist may be able to induce a blogger to cover their client, but they cannot convince him to drop a story: if TMZ doesn’t post it, then Perez Hilton certainly will. With TMZ unabashedly offering pay-outs to informants—including those in law enforcement and health care, despite recently passed legislation—a star is never safe. If he or she misbehaves, someone, professional or amateur, will provide coverage. Scandal becomes normalised, and, in so doing, can no longer really function as scandal as such; in an age of around-the-clock news cycles and celebrity-fixated journalism, the only truly scandalising event would be the complete absence of any scandalous reports. Or, as aesthetic theorist Jacques Rancière puts it: “The complaint is then no longer that images conceal secrets which are no longer such to anyone, but, on the contrary, that they no longer hide anything” (22).

These seemingly paradoxical involutions of post-modern celebrity epistemologies are at the core of the current crisis of celebrity, and, subsequently, of celebrities’ attempts to “take back their own paparazzi.” As one might expect, contemporary celebrities have attempted to counter these new logics and strategies of intrusion through a heightened commitment to disclosure, principally through the social networking capabilities of Twitter. Yet, as we will see, not only have the epistemological reorderings of postmodernist technoculture affected the logic of scandal/intrusion, but so too have they radically altered the workings of intrusion’s dialectical counterpart, disclosure.

In the 1930s, when written letters were still the primary medium for intimate communication, stars would send lengthy “hand-written” letters to members of their fan club. Of course, such letters were generally not written by the stars themselves, but handwriting—and a star’s signature—signified authenticity.
This ritualised process conferred an “aura” of authenticity upon the object of exchange precisely because of its static, recurring nature—exchange of fan mail was conventionally understood to be the primary medium for personal encounters with a celebrity. Within the overall political economy of the studio system, the medium of the hand-written letter functioned to unleash the productive power of authenticity, offering an illusion of communion which, in fact, served to underscore the gulf between the celebrity’s extraordinary nature and the ordinary lives of those who wrote to them. Yet the criteria and conventions through which celebrity personae were maintained were subject to change over time, as new communications technologies, new modes of Hollywood’s industrial organization, and the changing realities of commercial media structures all combined to create a constantly moving ground upon which the celebrity tried to affix a stable image. The celebrity’s changing conditions are not unique to them alone; rather, they are a highly visible bellwether of changes which are more fundamentally occurring at all levels of culture and subjectivity. Indeed, more than seventy years ago, Walter Benjamin observed that when hand-made expressions of individuality were superseded by mechanical methods of production, aesthetic criteria (among other things) also underwent change, rendering notions of authenticity increasingly indeterminate.

So it is that, in today’s world, hand-written letters seem more contrived or disingenuous than Danny DeVito’s inaugural post to his Twitter account: “I just joined Twitter! I don't really get this site or how it works. My nuts are on fire.” The performative gesture in DeVito’s tweet is eminently clear, just as the semantic value is patently false: clearly DeVito understands “this site,” as he has successfully used it to extend his irreverent funny-little-man persona to the new medium.
While the truth claims of his Tweet may be false, its functional purpose—both effacing and reifying the extraordinary/ordinary distinction of celebrity and maintaining DeVito’s celebrity personality as one with which people might identify—is nevertheless seemingly intact, and thus mirrors the instrumental value of celebrity disclosure as performed in older media forms.

Twitter and Contemporary Technoculture

For these reasons and more, considered within the larger context of contemporary popular culture, celebrity tweeting has been equated with the assertion of the authentic celebrity voice; celebrity tweets are regularly cited in newspaper articles and blogs as “official” statements from the celebrity him/herself. With so many mediated voices attempting to “speak” the meaning of the star, the Twitter account emerges as the privileged channel to the star him/herself. Yet the seemingly easy discursive associations of Twitter and authenticity are in fact ideological acts par excellence, as fixations on the indexical truth-value of Twitter are not merely missing the point, but actively distracting from the real issues surrounding the unsteady discursive construction of contemporary celebrity and the “celebretification” of contemporary subjectivity writ large. In other words, while it is taken as axiomatic that the “message” of celebrity Twittering is, as Henry Jenkins suggests, “Here I Am,” this outward epistemological certainty veils the deeply unstable nature of celebrity—and by extension, subjectivity itself—in our networked society.

If we understand the relationship between publicity and technoculture to work as Žižek-inspired cultural theorist Jodi Dean suggests, then technologies “believe for us, accessing information even if we cannot” (40), such that technology itself is enlisted to serve the function of ideology, the process by which a culture naturalises itself and attempts to render the notion of totality coherent.
For Dean, the psycho-ideological reality of contemporary culture is predicated upon the notion of an ever-elusive “secret,” which promises to reveal us all as part of a unitary public. The reality—that there is no such cohesive collective body—is obscured in the secret’s mystifying function which renders as “a contingent gap what is really the fact of the fundamental split, antagonism, and rupture of politics” (40). Under the ascendancy of the technoculture—Dean’s term for the technologically mediated landscape of contemporary communicative capitalism—subjectivity becomes interpellated along an axis blind to the secret of this fundamental rupture. The two interwoven poles of this axis are not unlike structuralist film critics’ dialectically intertwined accounts of the scopophilia and scopophobia of viewing relations, simply enlarged from the limited realm of the gaze to encompass the entire range of subjectivity. As such, the conspiratorial mindset is that mode of desire, of lack, which attempts to attain the “secret,” while the celebrity subject is that element of excess without which desire is unthinkable. As one might expect, the paparazzi and gossip sites’ strategies of intrusion have historically operated primarily through the conspiratorial mindset, with endless conjecture about what is “really happening” behind the scenes. Under the intrusive/conspiratorial paradigm, the authentic celebrity subject is always just out of reach—a chance sighting only serves to reinscribe the need for the next encounter where, it is believed, all will become known. Under such conditions, the conspiratorial mindset of the paparazzi is put into overdrive: because the star can never be “fully” known, there can never be enough information about a star; therefore, more information is always needed.
Against this relentless intrusion, the celebrity—whose discursive stability, given the constant imperative for newness in commercial culture, is always in danger—risks a semiotic liquidation that will totally displace his celebrity status as such. Disclosure, e.g. tweeting, emerges as a possible corrective to the endlessly associative logic of the paparazzi’s conspiratorial mindset. In other words, through Twitter, the celebrity seeks to arrest meaning—fixing it in place around their own seemingly coherent narrativisation. The publicist’s new task, then, is to convincingly counter such unsanctioned, intrusive, surveillance-based discourse. Stars continue to give interviews, of course, and many regularly pose as “authors” of their own homepages and blogs. Yet as posited above, Twitter has emerged as the most salient means of generating “authentic” celebrity disclosure, simultaneously countering the efforts of the paparazzi, fan mags, and gossip blogs to complicate or rewrite the meaning of the star. The star uses the account—verified, by Twitter, as the “real” star—both as a means to disclose their true interior state of being and to counter ersatz narratives circulating about them. Twitter’s appeal for both celebrities and their followers comes from the ostensible spontaneity of the tweets, as the seemingly unrehearsed quality of the communiqués lends the form an immediacy and casualness unmatched by blogs or official websites; the semantic informality typically employed in the medium obscures their larger professional significance for celebrity tweeters. While Twitter’s air of extemporary intimacy is also offered by other social networking platforms, such as MySpace or Facebook, the latter’s opportunities for public feedback (via wall-posts and the like) work counter to the tight image control offered by Twitter’s broadcast-esque model.
Additionally, because of the uncertain nature of the tweet release cycle—has Ashton Kutcher sent a new tweet yet?—the voyeuristic nature of the tweet disclosure (with its real-time nature offering a level of synchronic intimacy that letters never could have matched), and the semantically displaced nature of the medium, it is a form of disclosure perfectly attuned to the conspiratorial mindset of the technoculture. As mentioned above, however, the conspiratorial mindset is an unstable subjectivity, insofar as it only exists through a constant oscillation with its twin, the celebrity subjectivity. While we can understand that, for the celebrities, Twitter functions by allowing them a mode for disclosive/celebrity subjectivisation, we have not yet seen how the celebrity itself is rendered conspiratorial through Twitter. Similarly, only the conspiratorial mode of the follower’s subjectivity has thus far been enumerated; the moment of the follower’s celebretification has so far gone unmentioned. Since we have seen that the celebrity function of Twitter is not really about discourse per se, we should instead understand that the ideological value of Twitter comes from the act of tweeting itself, of finding pleasure in being engaged in a techno-social system in which one’s participation is recognised. Recognition and participation should be qualified, though, as it is not the fully active type of participation one might expect in, say, the electoral politics of a representative democracy. Instead, it is a participation in a sort of epistemological viewing relations, or, as Jodi Dean describes it, “that we understand ourselves as known is what makes us think that there is a public that knows us” (122).
The fans’ recognition by the celebrity—the way in which they understood themselves as known by the star—was once the receipt of a hand-signed letter (and a latent expectation that the celebrity had read the fan’s initial letter); such an exchange conferred to the fan a momentary sense of participation in the celebrity’s extraordinary aura. Under Twitter, however, such an exchange does not occur, as that feeling of one-to-one interaction is absent; simply by looking elsewhere on the screen, one can confirm that a celebrity’s tweet was received by two million other individuals. The closest a fan can come to that older modality of recognition is by sending a message to the celebrity that the celebrity then “re-tweets” to his broader following. Beyond the obvious levels of technological estrangement involved in such recognition is the fact that the identity of the re-tweeted fan will not be known by the celebrity’s other two million followers. That sense of sharing in the celebrity’s extraordinary aura is altered by an awareness that the very act of recognition largely entails performing one’s relative anonymity in front of the other wholly anonymous followers. As the associative, conspiratorial mindset of the star endlessly searches for fodder through which to maintain its image, fans allow what was previously a personal moment of recognition to be transformed into a public one. That is, the conditions through which one realises one’s personal subjectivity are, in fact, themselves becoming remade according to the logic of celebrity, in which priority is given to the simple fact of visibility over that of the actual object made visible. Against such an opaque cultural transformation, the recent rise of reactionary libertarianism and anti-collectivist sentiment is hardly surprising.

References

Baudrillard, Jean. Simulacra and Simulation. Ann Arbor: Michigan UP, 1994.
Benjamin, Walter. Illuminations. New York: Harcourt, Brace and World, 1968.
Dean, Jodi.
Publicity’s Secret: How Technoculture Capitalizes on Democracy. Ithaca: Cornell UP, 2003.
DeCordova, Richard. Picture Personalities: The Emergence of the Star System in America. Urbana: Illinois UP, 1990.
Jenkins, Henry. “The Message of Twitter: ‘Here It Is’ and ‘Here I Am.’” Confessions of an Aca-Fan. 23 Aug. 2009. 15 Sep. 2009 <http://henryjenkins.org/2009/08/the_message_of_twitter.html>.
Lyotard, Jean-François. The Postmodern Condition: A Report on Knowledge. Minneapolis: Minnesota UP, 1984.
Rancière, Jacques. The Future of the Image. New York: Verso, 2007.
45
McCosker, Anthony, and Timothy Graham. "Data Publics: Urban Protest, Analytics and the Courts." M/C Journal 21, no. 3 (August 15, 2018). http://dx.doi.org/10.5204/mcj.1427.
Full text
Abstract:
This article reflects on part of a three-year battle over the redevelopment of an iconic Melbourne music venue, the Palace-Metro Nightclub (the Palace), involving the tactical use of Facebook Page data at trial. We were invited by the Save the Palace group, Melbourne City Council and the National Trust of Australia to provide Facebook Page data analysis as evidence of the social value of the venue at an appeals trial heard at the Victorian Civil and Administrative Tribunal (VCAT) in 2016. We take a reflexive ethnographic approach here to explore the data production, collection and analysis processes as these represent and constitute a “data public”. Although the developers won the appeal and were able to re-develop the site, the court accepted the validity of social media data as evidence of the building’s social value (Jinshan Investment Group Pty Ltd v Melbourne CC [2016] VCAT 626, 117; see also Victorian Planning Reports). Through the case, we elaborate on the concept of data publics by considering the “affordising” (Pollock) processes at play when extracting, analysing and visualising social media data. Affordising refers to the designed, deliberate and incidental effects of datafication and highlights the need to attend to the capacities for data collection and processing as they produce particular analytical outcomes. These processes foreground the compositional character of data publics, and the unevenness of data literacies (McCosker “Data Literacies”; Gray et al.) as a factor of the interpersonal and institutional capacity to read and mobilise data for social outcomes.

We begin by reconsidering the often-assumed connection between social media data and their publics.
Taking on board theoretical accounts of publics as problem-oriented (Dewey) and dynamically constituted (Kelty), we conceptualise data publics through the key elements of a) consequentiality, b) sufficient connection over time, and c) affective or emotional qualities of connection and interaction with the events. We note that while social data analytics may be a powerful tool for public protest, it equally affords use against public interests and introduces risks in relation to a lack of transparency, access or adequate data literacy.

Urban Protest and Data Publics

There are many examples globally of the use of social media to engage publics in battles over urban development or similar issues (e.g. Fredericks and Foth). Some have asked how social media might be better used by neighborhood organisations to mobilise protest and save historic buildings, cultural landmarks or urban sites (Johnson and Halegoua). And we can only note here the wealth of research literature on social movements, protest and social media. To emphasise Gerbaudo’s point, drawing on Mattoni, we “need to account for how exactly the use of these media reshapes the ‘repertoire of communication’ of contemporary movements and affects the experience of participants” (2). For us, this also means better understanding the role that social data plays in both aiding and reshaping urban protest or arming third sector groups with evidence useful in social institutions such as the courts.

New modes of digital engagement enable forms of distributed digital citizenship, which Meikle sees as the creative political relationships that form through exercising rights and responsibilities. Associated with these practices is the transition from sanctioned, simple discursive forms of social protest in petitions, to new indicators of social engagement in more nuanced social media data and the more interactive forms of online petition platforms like change.org or GetUp (Halpin et al.).
These technical forms code publics in specific ways that have implications for contemporary protest action. That is, they provide the operational systems and instructions that shape social actions and relationships for protest purposes (McCosker and Milne).

All protest and social movements are underwritten by explicit or implicit concepts of participatory publics as these are shaped, enhanced, or threatened by communication technologies. But participatory protest publics are uneven, and as Kelty asks: “What about all the people who are neither protesters nor Twitter users? In the broadest possible sense this ‘General Public’ cannot be said to exist as an actual entity, but only as a kind of virtual entity” (27). Kelty is pointing to the porous boundary between a general public and an organised public, or formal enterprise, as a reminder that we cannot take for granted representations of a public, or the public as a given, in relation to Like or follower data for instance.

If carefully gauged, the concept of data publics can be useful. To start with, the notions of publics and publicness are notoriously slippery. Baym and boyd explore the differences between these two terms, and the way social media reconfigures what “public” is. Does a Comment or a Like on a Facebook Page connect an individual sufficiently to an issues-public? As far back as the 1930s, John Dewey was seeking a pragmatic approach to similar questions regarding human association and the pluralistic space of “the public”. For Dewey, “the machine age has so enormously expanded, multiplied, intensified and complicated the scope of the indirect consequences [of human association] that the resultant public cannot identify itself” (157). To what extent, then, can we use data to constitute a public in relation to social protest in the age of data analytics?

There are numerous well-formulated approaches to studying publics in relation to social media and social networks.
Social network analysis (SNA) determines publics, or communities, through links, ties and clustering, by measuring and mapping those connections and to an extent assuming that they constitute some form of sociality. Networked publics (Ito 6) are understood as an outcome of social media platforms and practices in the use of new digital media authoring and distribution tools or platforms and the particular actions, relationships or modes of communication they afford, to use James Gibson’s sense of that term. “Publics can be reactors, (re)makers and (re)distributors, engaging in shared culture and knowledge through discourse and social exchange as well as through acts of media reception” (Ito 6). Hashtags, for example, facilitate connectivity and visibility and aid in the formation and “coordination of ad hoc issue publics” (Bruns and Burgess 3). Gray et al., following Ruppert, argue that “data publics are constituted by dynamic, heterogeneous arrangements of actors mobilised around data infrastructures, sometimes figuring as part of them, sometimes emerging as their effect”. The individuals of data publics are neither subjugated by the logics and metrics of digital platforms and data structures, nor simply sovereign agents empowered by the expressive potential of aggregated data (Gray et al.).

Data publics are more than just aggregates of individual data points or connections. They are inherently unstable, dynamic (despite static analysis and visualisations), or vibrant, and ephemeral. We emphasise three key elements of active data publics. First, to be more than an aggregate of individual items, a data public needs to be consequential (in Dewey’s sense of issues or problem-oriented). Second, sufficient connection is visible over time. Third, affective or emotional activity is apparent in relation to events that lend coherence to the public and its prevailing sentiment.
To these, we add critical attention to the affordising processes – or the deliberate and incidental effects of datafication and analysis, in the capacities for data collection and processing in order to produce particular analytical outcomes, and the data literacies these require. We return to the latter after elaborating on the Save the Palace case.

Visualising Publics: Highlighting Engagement and Intensity

The Palace theatre was built in 1912 and served as a venue for theatre, cinema, live performance, musical acts and as a nightclub. In 2014 the Heritage Council decided not to include the Palace on Victoria’s heritage register and hence opened the door for developers, but Melbourne City Council and the National Trust of Australia opposed the redevelopment on the grounds of the building’s social significance as a music venue. Similarly, the Save the Palace group saw the proposed redevelopment as affecting the capacity of Melbourne CBD to host medium-size live performances, and therefore impacting deeply on the social fabric of the local music scene. The Save the Palace group, chaired by Rebecca Leslie and Michael Raymond, maintained a 36,000+ strong Facebook Page and mobilised local members through regular public street protests, and participated in court proceedings in 2015 and February 2016 with Melbourne City Council and National Trust Australia. Joining the protesters in the lead up to the 2016 appeals trial, we aimed to use social media engagement data to measure, analyse and present evidence of the extent and intensity of a sustained protest public.
The evidence we submitted had to satisfy VCAT’s need to establish the social value of the building and the significance of its redevelopment, and to explain: a) how social media works; b) the meaning of the number of Facebook Likes on the Save The Palace Page and the timing of those Likes, highlighting how the reach and Likes pick up at significant events; and c) whether or not a representative sample of Comments is supportive of the group and the Palace Theatre (McCosker “Statement”). As noted in the case (Jinshan, 117), where courts have traditionally relied on one simple measure for contemporary social value – the petition – our aim was to make use of the richer measures available through social media data, to better represent sustained engagement with the issues over time.

Visualising a protest public in this way raises two significant problems for a workable concept of data publics. The first involves the “affordising” (Pollock) work of both the platform and our data analysis. This concerns the role played by data access and platform affordances for data capture, along with methodological choices made to best realise or draw out the affordances of the data for our purposes. The second concerns the issue of digital and data literacies in both the social acts that help to constitute a data public in the first place, and the capacity to read and write public data to represent those activities meaningfully. That is, Facebook and our analysis constitute a data public in certain ways that include potentially opaque decisions or processes. And citizens (protesters or casual Facebook commenters alike) along with social institutions (like the courts) have certain uneven capacity to effectively produce or read public protest-oriented data.
The risk here, which we return to in the final section, lies in the potential for misrepresentation of publics through data, exclusions of access and ownership of data, and the uneven digital literacies at each stage of data production, analysis and sensemaking.

Facebook captures data about individuals in intricate detail. Its data capture strategies are geared toward targeting for the purposes of marketing, although only a small subset of the data is publicly available through the Facebook Application Programming Interface (API), which is a kind of data “gateway”. The visible Page data tells only part of the story. The total Page Likes in February 2016 was 36,828, representing a sizeable number of followers, mainly located in Melbourne but spanning 45 countries and 38 different languages. We extracted a data set of 268,211 engagements with the Page between February 2013 and August 2015. This included 45,393 post Likes and 9,139 Comments. Our strategy was to demarcate a structurally defined “community” (in the SNA sense of that term as delineating clusters of people, activities and links within a broader network) by visualising the interactions of Facebook users with Posts over time, and then examine elements of intensity of engagement. In other words, we “affordised” the network data using SNA techniques to most clearly convey the social value of the networked public.

We used a combination of API access and Facebook’s native Insights data and analytics to extract use-data from the Page between June 2013 and December 2015. A two-mode or bipartite network consisting of users and Posts was compiled using vosonSML, a package in the R programming language created at Australian National University (Graham and Ackland), and visualised with Gephi software.
In this network, the nodes (or vertices) represent Facebook users and Facebook Posts submitted on the Page, and ties (or edges) between nodes represent whether a user has commented on and/or liked a Post. For example, a user U might have liked Post A and commented on Post B. Additionally, a weight value is assigned to Comment ties, indicating how many times a user commented on a particular Post (note that users can only like Posts once). We took these actions as demonstrating sufficient connection over time in relation to an issue of common concern.

Figure 1: Network visualisation of activity on the Save the Palace Facebook Page, June 2013 to December 2015. The colour of the nodes denotes which ‘community’ cluster they belong to (computed via the Infomap algorithm) and nodes are sized by out-degree (number of Likes/Comments made by users to Posts). The graph layout is computed via the Force Atlas 2 algorithm.

Community detection was performed on the network using the Infomap algorithm (Rosvall and Bergstrom), which is suited to large-scale weighted and directed networks (Henman et al.). This analysis reveals two large and two smaller clusters or groups, represented by colour differences (Fig. 1). Broadly, this suggests the presence of several clusters amongst a sustained network engaging with the Page over the three years. Beyond this, a range of other colours denoting smaller clusters indicates a diversity of activity and actors co-participating in the network as part of a broader community.

The positioning of nodes within the network is not random – the visualisation is generated by the Force Atlas 2 algorithm (Jacomy et al.), which spatially sorts the nodes through processes of attraction and repulsion according to the observed patterns of connectivity.
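The two-mode structure just described – Likes capped at one per user per Post, Comment ties weighted by repetition – can be sketched in a few lines. The study itself used vosonSML in R and Gephi; the following Python stand-in, with invented sample records, only illustrates how such weighted user-to-Post edges, and the out-degree used to size nodes, might be computed.

```python
from collections import defaultdict

# Hypothetical engagement records (user, post, action). In the study these
# came from the Facebook API; here they are invented purely to illustrate
# the two-mode (user -> post) structure.
engagements = [
    ("user_a", "post_1", "like"),
    ("user_a", "post_2", "comment"),
    ("user_a", "post_2", "comment"),  # repeat comments accumulate weight
    ("user_b", "post_1", "like"),
    ("user_b", "post_1", "comment"),
    ("user_c", "post_2", "like"),
]

# Build weighted, directed user->post edges. Likes are capped at 1 per
# (user, post) pair, as on Facebook; comment edges carry a count weight.
edges = defaultdict(int)
for user, post, action in engagements:
    key = (user, post, action)
    if action == "like":
        edges[key] = 1
    else:
        edges[key] += 1

# Weighted out-degree per user (total Likes/Comments made), the quantity
# used to size user nodes in the network visualisation.
out_degree = defaultdict(int)
for (user, post, action), weight in edges.items():
    out_degree[user] += weight

print(dict(out_degree))  # user_a: 1 like + 2 comments = 3
```

In a real pipeline, an edge list of this shape would then be handed to a community-detection routine such as Infomap and a force-directed layout for visualisation.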
As we would expect, the two-dimensional spatial arrangement of nodes conforms to the community clustering, helping us to visualise the network in the form of a networked public and to build a narrative interpretation of “what is going on” in this online social space.

Social value for VCAT was loosely defined as a sense of connection, sentiment and attachment to the venue. While we could illustrate the extent of the active connections of those engaging with the Page, the network map does not in itself reveal much about the sentiment, or the emotional attachment to the Save the Palace cause. This kind of affect can be understood as “the energy that drives, neutralizes, or entraps networked publics” (Papacharissi 7), and its measure presents a particular challenge, but also interest, for understanding a data public. It is often measured through sentiment analysis of content, but we targeted reach and engagement events – particular moments that indicated intense interaction with the Page and associated events.

Figure 2: Save the Palace Facebook Page: organic post reach, November–December 2014.

The affective connection and orientation could be demonstrated through two dimensions of post “reach”: average reach across the lifespan of the Page, and specific “reach-events”. Average reach illustrates the sustained engagement with the Page over time. Average unpaid reach for Posts with links (primarily news and legal updates) was 12,015, or 33% of the total follower base – a figure well above the standard for Community Page reach at that time. Reach-events indicate particular points of intensity and illustrate the Page’s ability to resonate publicly. Figure 2 points to one such event in November 2015, when news circulated that the developers were defying stop-work orders and demolishing parts of The Palace. The 100k reach indicated intense and widespread activity – Likes, Shares, Comments – in a short timeframe.
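The distinction between average reach and reach-events lends itself to a simple computation. The sketch below uses invented daily figures and an arbitrary 3x-of-median threshold (our illustrative choices, not the authors’ method); the real analysis drew on Facebook Insights exports.

```python
import statistics

# Hypothetical daily organic reach figures for a Page; the real values
# came from Facebook Insights and are not reproduced here.
follower_count = 36828
daily_reach = {
    "2015-11-01": 9500,
    "2015-11-02": 12400,
    "2015-11-03": 101200,  # spike as news of the demolition circulates
    "2015-11-04": 14100,
}

# Use the median as the baseline so a single reach-event does not drag
# the "typical" figure upward.
baseline = statistics.median(daily_reach.values())
baseline_share = baseline / follower_count  # cf. the article's ~33% average

# Flag "reach-events": days whose reach exceeds a multiple of the baseline.
# The 3x threshold is an illustrative choice.
events = {day: r for day, r in daily_reach.items() if r > 3 * baseline}

print(baseline, round(baseline_share, 2), events)
```

Flagged days can then be cross-read against the Comment stream, as the analysis below does for the demolition story.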
We examined Comment activity in relation to specific reach-events to qualify this reach-event and to illustrate the sense of outrage directed toward the developers, and the expressions of solidarity toward those attempting to stop the redevelopment.

Affordising Data Publics and the Transformative Work of Analytics

Each stage of deriving evidence of social value through Page data, from building public visibility and online activity to analysis and presentation at VCAT, was affected by the affordising work of the protesters involved (particularly the Page Admins), civil society groups, platform features and data structures, and our choices in analysis and presentation. The notion of affordising is useful here because, as Pollock defines the term, it draws attention to the transformative work of metrics, analytics, platform features and other devices that re-package social activity through modes of datafication and analysis. The Save the Palace group mobilised in a particular way so as to channel their activities, make them visible and archival, and capture the resonant effects of their public protest through a platform that would best make that public visible to itself. The growth of interest in the Facebook Page feeds back on itself reflexively as more people encounter it and participate. Contrary to critiques of “clicktivism”, these acts combine digital-material events and activities that were to become consequential for the public protest – such as the engagement activities around the November 2015 event described in Figure 2.

In addition, presenting the research in court introduced particular hurdles: finding “the meaningful data” appropriate to the needs of the case, “visualizing social data for social purposes”, and the need to be “evocative as well as accurate” (Donath, 16). The visualisation and presentation of the data needed to afford a valid and meaningful expression of the social significance of the Palace. Which layout algorithm to use?
What scale do we want to use? Which community detection algorithm and colour scheme for nodes? These choices involve challenges regarding the legibility of visualisations of public data (McCosker and Wilken; Kennedy et al.).

The transformative actions at play in these tactics of public data analysis can inform other instances of data-driven protest or social participation, but they also leave room for misuse. The interests of developers, for example, could equally be served by monitoring protesters’ actions through the same data, or by targeting disagreement or ambiguity in the data. Similarly, moves by Facebook to restrict access to Page data will disproportionately affect those without the means to pay for access. These tactics call for further work on ethical principles of open data, standardisation, and data literacies for the courts and for those who would benefit from the use of their own public data in this way.

Conclusions

We have argued through the case of the Save the Palace protest that in order to make use of public social media data to define a data public, multiple levels of data literacy, access and affordising are required. Rather than assuming that public data simply constitutes a data public, we have emphasised: a) the consequentiality of the movement; b) sufficient connection over time; and c) the affective or emotional qualities of connection and interaction with public events. This includes the activities of the core members of the Save the Palace protest group, and the tens of thousands who engaged in some way with the Page. It also involves Facebook’s data affordances as these allow for the extraction of public data, alongside our choices in analysis and visualisation, and the court’s capacity and openness to accept all of this as indicative of the social value (connections, sentiment, attachment) it sought for the case.
The Senior Member and Member presiding over the case had little knowledge of Facebook or other social media platforms, did not use them, and hence had limited capacity to recognise the social and cultural nuances of the activities that took place through the Facebook Page. This did not exclude the use of the data, but it made it more difficult to present a picture of the relevance and consequence of the data for understanding the social value evident in the contested building. While the court’s acceptance of the analysis as evidence is a significant starting point, further work is required to ensure openness, standardisation and ethical treatment of public data within public institutions like the courts.

References

Bruns, A., and J. Burgess. “The Use of Twitter Hashtags in the Formation of Ad Hoc Publics.” 6th European Consortium for Political Research General Conference, University of Iceland, Reykjavík, 25-27 August 2011. 1 Aug. 2018 <http://eprints.qut.edu.au/46515/>.
Baym, N.K., and d. boyd. “Socially Mediated Publicness: An Introduction.” Journal of Broadcasting & Electronic Media 56.3 (2012): 320-329.
Dewey, J. The Public and Its Problems: An Essay in Political Inquiry. Athens, Ohio: Swallow P, 2016 [1927].
Donath, J. The Social Machine: Designs for Living Online. Cambridge: MIT P, 2014.
Fredericks, J., and M. Foth. “Augmenting Public Participation: Enhancing Planning Outcomes through the Use of Social Media and Web 2.0.” Australian Planner 50.3 (2013): 244-256.
Gerbaudo, P. Tweets and the Streets: Social Media and Contemporary Activism. New York: Pluto P, 2012.
Gibson, J.J. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin Harcourt, 1979.
Graham, T., and R. Ackland. “SocialMediaLab: Tools for Collecting Social Media Data and Generating Networks for Analysis.” CRAN (The Comprehensive R Archive Network). 2018. 1 Aug. 2018 <https://cran.r-project.org/web/packages/SocialMediaLab/SocialMediaLab.pdf>.
Gray, J., C. Gerlitz, and L. Bounegru. “Data Infrastructure Literacy.” Big Data & Society 5.2 (2018). 1 Aug. 2018 <https://doi.org/10.1177/2053951718786316>.
Halpin, T., A. Vromen, M. Vaughan, and M. Raissi. “Online Petitioning and Politics: The Development of Change.org in Australia.” Australian Journal of Political Science (2018). 1 Aug. 2018 <https://doi.org/10.1080/10361146.2018.1499010>.
Henman, P., R. Ackland, and T. Graham. “Community Structure in e-Government Hyperlink Networks.” Proceedings of the 14th European Conference on e-Government (ECEG ’14), 12-13 June 2014, Brasov, Romania.
Ito, M. “Introduction.” Networked Publics. Ed. K. Varnelis. Cambridge, MA: MIT P, 2008. 1-14.
Jacomy, M., T. Venturini, S. Heymann, and M. Bastian. “ForceAtlas2, a Continuous Graph Layout Algorithm for Handy Network Visualization Designed for the Gephi Software.” PLoS ONE 9.6 (2014): e98679. 1 Aug. 2018 <https://doi.org/10.1371/journal.pone.0098679>.
Jinshan Investment Group Pty Ltd v Melbourne CC [2016] VCAT 626, 117. 2016. 1 Aug. 2018 <https://bit.ly/2JGRnde>.
Johnson, B., and G. Halegoua. “Can Social Media Save a Neighbourhood Organization?” Planning, Practice & Research 30.3 (2015): 248-269.
Kennedy, H., R.L. Hill, G. Aiello, and W. Allen. “The Work That Visualisation Conventions Do.” Information, Communication & Society 19.6 (2016): 715-735.
Mattoni, A. Media Practices and Protest Politics: How Precarious Workers Mobilise. Burlington, VT: Ashgate, 2012.
McCosker, A. “Data Literacies for the Postdemographic Social Media Self.” First Monday 22.10 (2017). 1 Aug. 2018 <http://firstmonday.org/ojs/index.php/fm/article/view/7307/6550>.
McCosker, A. “Statement of Evidence: Palace Theatre Facebook Page Analysis.” Submitted to the Victorian Civil Administration Tribunal, 7 Dec. 2015. 1 Aug. 2018 <https://www.academia.edu/37130238/Evidence_Statement_Save_the_Palace_Facebook_Page_Analysis_VCAT_2015_>.
McCosker, A., and M. Esther. “Coding Labour.” Cultural Studies Review 20.1 (2014): 4-29.
McCosker, A., and R. Wilken. “Rethinking ‘Big Data’ as Visual Knowledge: The Sublime and the Diagrammatic in Data Visualisation.” Visual Studies 29.2 (2014): 155-164.
Meikle, G. Social Media: Communication, Sharing and Visibility. New York: Routledge, 2016.
Papacharissi, Z. Affective Publics: Sentiment, Technology, and Politics. Oxford: Oxford UP, 2015.
Pollock, N. “Ranking Devices: The Socio-Materiality of Ratings.” Materiality and Organizing: Social Interaction in a Technological World. Eds. P.M. Leonardi, B.A. Nardi, and J. Kallinikos. Oxford: Oxford UP, 2012. 91-114.
Rosvall, M., and C.T. Bergstrom. “Maps of Random Walks on Complex Networks Reveal Community Structure.” Proceedings of the National Academy of Sciences of the United States of America 105.4 (2008): 1118-1123.
Ruppert, E. “Doing the Transparent State: Open Government Data as Performance Indicators.” A World of Indicators: The Making of Governmental Knowledge through Quantification. Eds. R. Rottenburg, S.E. Merry, S.J. Park, et al. Cambridge: Cambridge UP, 2015. 1-18.
Smith, N., and T. Graham. “Mapping the Anti-Vaccination Movement on Facebook.” Information, Communication & Society (2017). 1 Aug. 2018 <https://doi.org/10.1080/1369118X.2017.1418406>.
Victorian Planning Reports. “Editorial Comment.” VCAT 3.16 (2016). 1 Aug. 2018 <https://www.vprs.com.au/394-past-editorials/vcat/1595-vcat-volume-3-no-16>.
Teague, Christine, Lelia Green, and David Leith. "An Ambience of Power? Challenges Inherent in the Role of the Public Transport Transit Officer." M/C Journal 13, no. 2 (April 15, 2010). http://dx.doi.org/10.5204/mcj.227.
Abstract:
In the contemporary urban environment of mass transit, it falls to a small group of public officers to keep large numbers of travellers safe. The small size of their force and the often limited powers they exert mean that these public safety ‘transit officers’ must project more authority and control than they really have. In most of the situations they encounter and seek to influence, it is this ambience of authority and control that is enough to keep the public safe. This paper examines the ambience of a group of transit officers working on the railway lines of an Australian capital city. We seek to show how transit officers are both influenced by, and seek to influence, the ambience of their workplace and the public spaces they inhabit whilst on duty, and here we take ambience to apply to the surrounding atmosphere, the aura, and the emotional environment of a place or situation: the setting, tone, or mood. For these transit officers to keep the public safe, they must themselves remain safe. A transit officer who is disabled in a confrontation with a violent offender is unable to provide protection to his or her passengers. Thus, in the culture of the transit officers, their own workplace safety takes on a higher significance. It affects not just themselves. The ambience exuded by transit officers, and how transit officers see their relationship with the travelling public, their management and other organisational work groups, is an important determinant of their work group’s safety culture.

Researching the Working Lives of Transit Officers in Perth

Our discussion draws on an ethnographic study of the working lives and communication cultures of transit officers (TOs) employed by the Public Transport Authority (PTA) of Western Australia (WA). Transit officers have argued that to understand fully the challenges of their work it is necessary to spend time with them as they undertake their daily duties: roster in, roster out.
To this end, the research team and the employer organisation secured an ARC Linkage Grant in partnership with the PTA to fund doctoral candidate and ethnographer Christine Teague to research the workers’ point of view, and the workers’ experiences within the organisation. The two hundred TOs are unique in the PTA. Neither of the other groups who ride with them on the trains, the drivers and revenue protection staff (whose sole job is to sell and check tickets), experiences the combination of intense contact with passengers, danger of physical injury, and group morale. The TOs of the PTA in Perth operate from a central location at the main train station and the end stations on each line. Here there are change lockers where they can lock up their uniforms and equipment such as handcuffs and batons when not on duty, an equipment room where they sign out their radios, and ticket-checking machines. At the main train station there is also a gym, a canteen and holding cells for offenders they detain. From these end stations and the central location, the TOs fan out across the network to all suburbs, where they operate either from stations or onboard the trains. The TOs also do ‘delta van’ duty, providing rapid, mobile back-up support for their colleagues on stations or trains, and providing transport for arrested persons to the holding cell or police lock-up. TOs are on duty whenever the trains are running – but the evenings and nights are when they are mainly rostered on. This is when trouble mostly occurs. The TOs’ work ends only after the final train has completed its run and all offenders who may require detaining and charging have been transferred into police custody. While the public perceive that security is the TOs’ most frequent role, much of the work involves non-confrontational activity such as assisting passengers, checking tickets and providing a reassuring presence. One way to deal with an ambiguous role is to claim an ambience of power and authority regardless.
Various aspects of the TO role permit and hinder this, and the paper goes on to consider aspects of ambience in terms of fear and force, order and safety, and role confusion.

An Ambience of Fear and Force

The TOs are responsible for front-line security in WA’s urban railway network. Their role is to offer a feeling of security for passengers using the rail network after the bustle of the work day finishes and is replaced by the mainly recreational travels of the after-hours public. This is the time when some passengers find the prospect of evening travel on the public transport rail network unsettling – so unsettling that it was a 2001 WA government election promise (WA Legislative Council) that every train leaving the city centre after 7pm would have two TOs riding on it. Interestingly, recruitment levels have never been high enough for this promise to be fully kept. The working conditions of the TOs reflect the perception, and to an extent the reality, that some late-night travel on public transport involves negotiating an edgy ambience with an element of risk, rubbing shoulders with people who may be loud, rowdy, travelling in a group, and/or drug and alcohol affected. As Fred (all TO names are pseudonyms) comments:

You’re not dealing with rational people, you’re not dealing with ‘people’: most of the people you’re dealing with are either drunk or under the influence of drugs, so they’re not rational, they don’t hear you, they don’t understand what you’re saying, they just have no sense of what’s right or wrong, you know? Especially being under the influence, so I mean, you can talk till you’re blue in the face with somebody who’s drunk or on drugs, I mean, all you have to say is one thing. ‘Oh, can I see your ticket please’, ‘oh, why do I need a fucking ticket’, you know? They just don’t get simple everyday messages.

Dealing with violence and making arrests is a normal part of this job.
Jo described an early experience in her working life as a TO:

Within the first week of coming out of course I got smacked on the side of the head, but this lady had actually been certified, like, she was nuts. She was completely mental and we were just standing on the train talking and I’ve turned around to say something to my partner and she was fine, she was as calm as, and I turned around and talked to my partner and the next thing I know I ended up with her fist to the side of my head. And I went ‘what the hell was that’? And she went off, she went absolutely ballistic. I ended up arresting her because it was assault on an officer whether she was mental or not so I ended up arresting her.

Although Jo here is describing how she experienced an unprovoked assault in the early days of her career as a TO, one of the most frequent precursors to a TO injury occurs when the TO is required to make an arrest. The injury may occur when the passenger to be arrested resists or flees, and the TO gives chase in dark or treacherous circumstances such as railway reserves and tunnels, or when other passengers, maybe friends or family of the original person of concern, involve themselves in an affray around the precipitating action of the arrest. In circumstances where capsicum spray is the primary way of enforcing compliance, with batons used as a defence tool, group members may feel that they can take on the two TOs with impunity, certainly in the first instance. Even though there are security cameras on trains and in stations, and these can be cued to cover threatening or difficult situations confronting TOs, the conflict is located in the here-and-now of the exchanges between TOs and the travelling public. This means the longer-term consequence of trouble in the future may hold less sway with unruly travellers than the temptation to try to escape from trouble in the present.
In discussing the impact of remote communications, Rupert Murdoch commented that these technologies are “a powerful influence for civilised behaviour. If you are arranging a massacre, it will be useless to shoot the cameraman who has so inconveniently appeared on the scene. His picture will already be safe in the studio five thousand miles away and his final image may hang you” (Shawcross 242). Unfortunately, whether public aggression in these circumstances is useless or not, the daily experience of TOs is that the presence of closed-circuit television (CCTV) does not prevent attacks upon them: nor is it a guarantee of ‘civilised behaviour’. This is possibly because many of the more argumentative and angry members of the public are disinhibited by alcohol or other drugs. Police officers can employ the threat or actual application of stun guns to control situations in which they are outnumbered, but TOs can remain outnumbered and vulnerable until reinforcements arrive. Such reinforcements are available, but the situation has to be managed through the communication of authority until the point where the train arrives at a ‘manned’ station, or the staff on the delta vehicle are able to support their colleagues.

An Ambience of Order and Safety

Some public transport organisations take this responsibility to sustain an ambience of order more seriously than others. The TO ethnographer, Christine Teague, visited public transport organisations in the UK, USA and Canada which are recognised as setting world-class standards for injury rates of their staff. In the USA particularly, there is a commitment to what is called ‘the broken windows’ theory, where a train is withdrawn from service promptly if it is damaged or defaced (Kelling and Coles; Maple and Mitchell).
According to Henry (117):

The ‘Broken Windows’ theory suggests that there is both a high correlation and a causal link between community disorder and more serious crime: when community disorder is permitted to flourish or when disorderly conditions or problems are left untended, they actually cause more serious crime. ‘Broken windows’ are a metaphor for community disorder which, as Wilson and Kelling (1982) use the term, includes the violation of informal social norms for public behaviour as well as quality of life offenses such as littering, graffiti, playing loud radios, aggressive panhandling, and vandalism.

This theory implies that the physical ambience of the train, and by extension the station, may be highly influential in terms of creating a safe working environment. In these ‘no broken window’ organisations, the TO role is to maintain a high ‘quality of life’ rather than being predominantly about restraining and bringing to justice those whose behaviour is offensive, dangerous or illegal. The TOs in Perth achieve this through personal means such as taking pride in their uniforms, presenting a good-natured demeanour to passengers and assisting in maintaining the high standard of train interiors. Such a priority, and its link to reduced workforce injury, suggests that a perception of order impacts upon safety. It has long been argued that the safety culture of an organisation affects the safety performance of that organisation (Pidgeon; Leplat); but it has been more recently established that different cultural groupings in an organisation conceive and construct their safety culture differently (Leith). The research on ‘safety culture’ raises a problematic which is rarely addressed in practice. That problematic is this: managers frequently engage with safety at the level of instituting systems, while workers engage with safety in terms of behaviour.
When Glendon and Litherland comment that, contrary to expectations, they could find no relationship between safety culture and safety performance, they were drawing attention to the fact that much managerial safety culture is premised upon systems involving tick boxes and the filling in of report forms. The broken windows approach combines the managerial tick box with managerial behaviour: a disordered train is removed from service. To some extent, a general lack of fit between safety culture and safety performance endorses Everett’s view that it is conceptually inadequate to conceive organisations as cultures: “the conceptual inadequacy stems from the failure to distinguish between culture and behavioural features of organizational life” (238). The general focus upon safety culture as a way of promoting improvements in safety performance assumes that compliance with a range of safety systems will guarantee a safe workplace. Such an assumption, however, risks positioning the injured worker as responsible for his or her own predicament, and sets up an environment in which some management officials are wont to seek ways in which that injured worker’s behaviour failed to conform with safety rules or safety processes. Yet there are roles which place workers in harm’s way, including military duties, law enforcement and some emergency services. Here, the work becomes dangerous as it becomes disorderly.

An Ambience of Roles and Confusion

As the research reported here progressed, it became clear that the ambience around the presentation of the self in the role of a TO (Goffman) was an important part of how ‘safety’ was promoted and enacted in their work on the PTA (WA) trains, face to face with the travelling public.
Goffman’s view of all people, not specifically TOs, is that:

Regardless of the particular objective which the individual has in mind and of his motive for having this objective, it will be in his interests to control the conduct of the others, especially their responsive treatment of him. This will largely be through influencing the perception and definition that others will come to formulate of him. He will influence them by expressing himself in such a way that the kind of impression given off will lead them to act voluntarily in accordance with his own plan. (3)

This ‘influencing of perception’ is an important element of performing the role of a TO. The task of the TOs is made all the more difficult because of confusions about their role in relation to two other groups of officers: police (who have more power to act in situations of public safety) and revenue protection officers (who have less), as we now discuss. The aura of the TO role borrows somewhat from those quintessential law and order officers: the police. TOs work in pairs, like many police, to support each other. They have a range of legal powers including the power of arrest, and they carry handcuffs, a baton and capsicum spray as a means of helping ensure their safety and effectiveness in circumstances where they might be outnumbered. The tools of their trade are accessibly displayed on heavy leather belts around their waists, and their uniforms have similarities with police uniforms. However, in some ways these similarities are problematic, because TOs are not afforded the same respect as police. This situation underlines the ambiguities negotiated within the ambience of what it is to be a TO, and how it is to conduct oneself in that role. Notwithstanding the TOs’ law and order responsibilities, public perceptions of the role and some of the public’s responses to the officers can position these workers as “plastic cops” (Teague and Leith).
The penultimate deterrent of police officers, the stun gun (Taser), is not available to TOs, who are expected to control all incidents arising on duty through the fact that they operate in pairs, with capsicum spray available and, as a last resort, authorisation to use their batons in self-defence. Furthermore, although TOs are the key security and enforcement staff in the PTA workforce, and are managed separately from related staff roles, they believe that the clarity of this distinction is compromised because of similarities in the look of Revenue Protection Officers (RPOs). RPOs work on the trains to check that passengers have tickets and have paid the correct fares, and obtain names and addresses to issue infringement notices when required. They are not PTA employees, but contracted staff from an outside company. They also work in pairs. Significantly, the RPO uniform is in many respects identical to that of the TO, and this appears to be a deliberate management choice to make the number of TOs seem greater than it is: extending the TO ambience through to the activities of the RPOs. However, in the event of a disturbance, TOs are required and trained to act, while RPOs are instructed not to get involved: even though RPOs appear to the travelling public to be operating in the role of a law-and-order keeper, they are specifically instructed not to intervene in breaches of the peace or disruptive passenger behaviour. From the point of view of the travelling public, who observe the RPO waiting for TOs to arrive, it may seem as if a TO is passively standing by while a chaotic situation unravels. As Angus commented: I’ve spoken to quite a few members of public and received complaints from them about transit officers and talking more about the incident have found out that it was actually [RPOs] that are dealing with it. So it’s creating a bad image for us ….
It’s Transits that are copping all the flak for it … It is dangerous for us and it’s a lot of bad publicity for us. It’s hard enough, the job that we do and the lack of respect that we do get from people, we don’t need other people adding to it and making it harder. Indeed, it is not only the travelling public who can mistake the two uniforms. Mike tells of an “incident where an officer [TO] has called for backup on a train and the guys have got off [the train at the next station] and just stood there, and he didn’t realise that they are actually [revenue protection] officers, so he effectively had no backup. He thought he did, but he didn’t.” The RPO uniform may confer an ambience of power borrowed from TOs and communicated visually, but the impact is to compromise the authority of the TO role. Unfortunately, what could be a role complementary to the TOs becomes one which, in the minds of the TO workforce, serves to undermine their presence. The effect of this role confusion is to dilute the aura of authority of the TOs. At one end of a power continuum the TO role is minimised by those who see it as a second-rate ‘Wannabe cop’ (Teague and Leith), while its impact is diluted at the other end by an apparently deliberate confusion between the TOs’ broader ‘law and order’ role and the more limited RPO revenue collection activities.

Postlude

To the passengers of the PTA in Perth, the presence and actions of transit officers appear as unremarkable as the daily commute. In this ethnographic study of their workplace culture, however, the transit officers have revealed ways in which they influence the ambience of the workplace and the public spaces they inhabit whilst on duty, and how they are influenced by it. While this ambient inter-relationship is not documented in the organisation’s occupational safety and health management system, the TOs are aware that it is a factor in their level of safety at work, both positively and negatively. 
Clearly, an ethnographic study is conducted at a certain point in time and place, and culture is a living and changing expression of human interaction. The Public Transport Authority of Western Australia is committed to continuous improvement in safety and to the investigation of all ways and means in which to support TOs in their daily activities. This is evident not only in their support of the research and their welcoming of the ethnographer into the workforce and onto the tracks, but also in their robust commitment to change as the findings of the research have progressed. In particular, changes in the ambient TO culture and in the training and daily practices of TOs have already resulted from this research or are under active consideration. Nonetheless, this project is a cogent indicator of the fact that a safety culture is critically dependent upon intangible but nonetheless important factors such as the ambience of the workplace and the way in which officers are able to communicate their authority to others.

References

Everett, James. “Organizational Culture and Ethnoecology in Public Relations Theory and Practice.” Public Relations Research Annual. Vol. 2. Eds. Larissa Grunig and James Grunig. Hillsdale, NJ, 1990. 235-251.
Glendon, Ian, and Debbie Litherland. “Safety Climate Factors, Group Differences and Safety Behaviour in Road Construction.” Safety Science 39.3 (2001): 157-188.
Goffman, Erving. The Presentation of Self in Everyday Life. London: Penguin, 1959.
Henry, Vincent. The Compstat Paradigm: Management Accountability in Policing, Business and the Public Sector. New York: Looseleaf Law Publications, 2003.
Kelling, George, and Catherine Coles. Fixing Broken Windows: Restoring Order and Reducing Crime in Our Communities. New York: Touchstone, 1996.
Leith, David. Workplace Culture and Accidents: How Management Can Communicate to Prevent Injuries. Saarbrücken: VDM Verlag, 2008.
Leplat, Jacques. “About Implementation of Safety Rules.” Safety Science 29.3 (1998): 189-204.
Maple, Jack, and Chris Mitchell. The Crime Fighter: How You Can Make Your Community Crime-Free. New York: Broadway Books, 1999.
Pidgeon, Nick. “Safety Culture and Risk Management in Organizations.” Journal of Cross-Cultural Psychology 22.1 (1991): 129-140.
Shawcross, William. Rupert Murdoch. London: Chatto & Windus, 1992.
Teague, Christine, and David Leith. “Men of Steel or Plastic Cops? The Use of Ethnography as a Transformative Agent.” Transforming Information and Learning Conference Transformers: People, Technologies and Spaces, Edith Cowan University, Perth, WA, 2008. ‹http://conferences.scis.ecu.edu.au/TILC2008/documents/2008/teague_and_leith-men_of_steel_or_plastic_cops.pdf›.
Wilson, James, and George Kelling. “Broken Windows.” The Atlantic Monthly (Mar. 1982): 29-38.
WA Legislative Council. “Metropolitan Railway – Transit Guards 273 [Hon Ed Dermer to Minister of Transport Hon. Simon O’Brien].” Hansard 19 Mar. 2009: 2145b.
47. O'Malley, Nicholas. "Telemental Health." Voices in Bioethics 8 (March 2, 2022). http://dx.doi.org/10.52214/vib.v8i.9166.
Abstract:
ABSTRACT

The COVID-19 pandemic has brought about the advent of many new telehealth technologies as providers have been forced to shift their practice from the clinic to the cloud. Perhaps none of these fields has been as widely advertised and expanded as telemental health. While many have lauded this change, it is important to question whether this method of practice is truly beneficial for patients, and further whether it benefits all patients. This paper critically examines the current structure of telemental health interventions and compares them to more traditional in-person interactions, reflecting on the unique benefits and challenges of each method, and ultimately concluding that telemental health is the wrong modality for certain patients and certain forms of therapy.

INTRODUCTION

As the e-health revolution rapidly progresses, scientists, healthcare professionals, and technology experts are attempting to determine which areas of medical practice will best adapt to changing dynamics. Two key professions that are ripe for this kind of disruption are psychiatry and psychology. The American Psychiatric Association, along with its partners in the American Telemedicine Association, states that “telemental health in the form of interactive videoconferencing has become a critical tool in the delivery of mental health care. It has demonstrated its ability to increase access and quality of care, and, in some settings, to do so more effectively than treatment delivered in-person.”[1] This claim, though appearing bombastic, is also reflected, though with more nuance, by the American Psychological Association. 
For its part, the American Psychological Association states that “the expanding role of technology and the continuous development of new technologies that may be useful in the practice of psychology present unique opportunities, considerations, and challenges to practice.”[2] Thus, the point of this paper will be to examine whether the rapidly expanding system of telemental health is ethical, based on its adherence to accepted standards of care, privacy concerns, and concerns about the boundaries of the patient-provider relationship.

I. Standard of Care Concerns

One of the most considerable objections to the broader implementation of telemental health services is the speculation that it is less effective than in-person treatment. It would follow that a system that is broadly implemented would not only fail to be beneficent; it would also fail to be non-maleficent. Providers would be knowingly providing an ineffective treatment. Some may argue that such a system would also violate the principle of justice: it would create an unequal system of care in which those patients who could afford to see their therapist in person would benefit more than those who could not. However, data from a wide variety of sources, at first glance, would seem to contradict these fears.[3] A review of the literature regarding the implementation of telemental health in geriatric patients, for example, showed that telemental health was as good as in-person psychiatric care in several areas, including the diagnosis of dementia, nursing home consultations, and the conduct of psychotherapy for geriatric patients and their caregivers.[4] On the other end of the age spectrum, a review of nineteen randomized controlled trials and one clinical trial demonstrated the high comparative effectiveness of telemental health interventions in children and adolescents.[5] Hailey et al. found that telemental health interventions were effective in over half of the 65 studies reviewed. 
These studies encompassed a diverse and wide-ranging number of psychiatric disciplines, including child psychiatry, post-traumatic stress disorder, dementia, cognitive decline, smoking cessation, and eating disorders. Methods included phone- and web-based interventions.[6] Indeed, the data is not limited to outpatient settings. For example, Reinhardt et al. conducted a literature review of studies about telemental health visits for psychiatric emergencies and crises. They found that no studies reported a statistically significant difference in diagnosis or disposition among psychiatric patients who presented to the Emergency Department. In addition, their review demonstrated a reduction in length of stay, a reduction in time to care, and decreased costs among these patients. The authors also reviewed literature pertaining to crisis response teams and patients with severe mental illness. Both studies demonstrated that telemental health visits for these patients were similar, if not better than, face-to-face visits. In addition, both patients and practitioners showed high satisfaction with these services.[7] Thus, the implementation of telemental health is not limited to out-patient settings and could feasibly be extended to in-patient and emergency settings. There is, however, one particularly glaring gap in telemental health services: group therapy. Perhaps the most famous example of group therapy is Alcoholics Anonymous, but group therapy has expanded to include many different modalities. Group therapy is a common intervention for many mental illnesses and can be incredibly effective in treating conditions ranging from PTSD to borderline personality disorder.[8] In a pilot study comparing a video teleconference-based Dialectical Behavioral Therapy (DBT) group to an in-person DBT group, Lopez et al. 
found that while patients had similar levels of cohesion with the facilitator, participants in the video teleconference group saw less group cohesion than their peers in the in-person group. Further, while many patients in the video teleconference group believed that the convenience offset the adverse effects, many also wished for an in-person group. Attendance was also significantly higher in the video teleconference group.[9] Thus, while the video teleconference group did report some positives, there are significant differences that raise ethical questions. How well does a group do without cohesion? For example, if a person needing to be consoled breaks down and cries in front of the group, the in-person response may differ from that of the video conference. In the in-person group, other group members may place a gentle hand on the shoulder of the grieving person or even hug them. The group facilitator or group members in the video conference group could say the same words of consolation as those in the in-person group, yet there still seems to be some missing action. Physical touch, in this way, can mean a lot more than just a small gesture. Van Wynsberghe and Gastmans argue that this kind of deprivation may lead to feelings of depersonalization.[10] And, to an extent, their supposition is supported by the data presented by Lopez et al. The low level of group cohesion in the video conference group could suggest that other group members seem unimportant to the participants: they are simply things on a screen, not real people. Dr. Thomas Insel, former Director of the National Institute of Mental Health, writes that while technology may hold the key to improving mental health at the population level, there is a human-sized piece of the puzzle missing from these interventions. 
The solution, he asserts, lies somewhere in the integration of these two types of experiences, one that he terms “high-tech and high-touch.”[11] The lack of touch and physical presence is an obstacle for both patients and providers. At best this may lead to a slightly poorer provider-patient relationship, and at worst it may result in poorer quality care.

II. Privacy & Confidentiality Concerns

Privacy and confidentiality are among the most serious concerns for practitioners and patients, made more complex by the advent of e-health. Major news outlets provide plenty of examples of breaches of confidentiality of people’s electronic records. Even significant systems used to facilitate direct contact between people in the wake of COVID-19, like Zoom, and often thought to be secure, have been breached. Not too long ago, “Zoom bombing” was a national phenomenon, with intruders appearing in online classrooms and often sharing explicit or politically motivated content. Psychiatric patients are susceptible to issues surrounding privacy and confidentiality, and they may even come from communities that ostracize and stigmatize mental illness. These concerns must be taken seriously. Of course, both the American Psychiatric Association and the American Psychological Association address privacy concerns. Both organizations note in their guidelines that relevant HIPAA regulations apply to telehealth and that doctors must use apps and videoconferencing tools with the highest levels of security.[12] Interestingly, the American Psychiatric Association takes these instructions one step further. It requires providers to be in a private room during telehealth videoconferences or calls, and that people seeking care also have a private space so that conversations are not overheard. 
This not only prevents violations of privacy but also protects the therapeutic relationship between provider and patient.[13] While providers can take these steps to ensure their patients’ privacy, an internet connection does not guarantee it. Many privacy issues are more easily mitigated in a clinical space. For example, walls and doors can be soundproofed, or white noise can be played in the waiting room, to ensure that therapeutic conversations are not overheard. And while the American Psychiatric Association asks providers to mitigate these risks as they would in their respective clinics, there is another layer to online privacy. Providers should be concerned about telecommunications providers, how they collect information, and what types of information they collect.[14] If, for example, the patient must navigate to the practitioner’s webpage to enter the therapy portal, that information might be tracked and used to generate personalized ads for the patient. If a person suffering from severe paranoia started receiving ads for psychiatric medication, they might react negatively to the invasion of privacy. That type of targeted advertising could even exacerbate a mental health condition. The scandals surrounding the National Security Agency (NSA) in recent years have added another layer of complexity to the issue of privacy. Whistleblowers like Edward Snowden revealed that the government was collecting metadata from text messages, videos, and social media. Government surveillance is an added risk of mental health videoconferencing.[15] Unlike a clinical care provider, who is bound by rules requiring privacy with few exceptions (such as the Tarasoff duty, which can require disclosure to stop a violent act), the government would not be bound by those rules. 
The government might judge someone a risk based on ill-gotten surveillance data, wrongly add a person to a watch list, or engage in further surveillance of a patient whom non-clinicians working in government assess to be a potential danger. Protection from government surveillance is a fundamental ethical endeavor, yet government as a collector of data without a warrant, or with easily attained FISA and other warrants, is problematic. Such scenarios may seem far-fetched but are within the realm of possibility. Secondly, the provider must envision how this might hinder care. For example, patients aware of the possibility of government surveillance may be reluctant to show up to online meetings, if they show up at all. Perhaps they are so sensitive to these issues that they stop checking in with their therapist altogether. It is easy to see how a person who has schizophrenia and shows signs of paranoia may avoid telehealth for fear of being tracked. Of course, one could also have privacy concerns about a therapist’s office. Perhaps patients are nervous about being seen in the office or parking lot. They might worry about being overheard. These concerns, however, can be mitigated fairly simply: for example, patients could find anonymous means of transportation, and practitioners can soundproof their offices. Thus, in both the office and the videoconference, concerns can be mitigated easily and tangibly, but not eliminated entirely. Mental health providers should use the highest quality communication services with end-to-end encryption to bolster online privacy.

III. Boundary Issues and Professionalism

The boundaries here are philosophical, not physical. Both the American Psychiatric Association and the American Psychological Association work to ensure that patient-professional boundaries are kept as close to normal as possible. 
Both organizations expect practitioners to maintain the highest levels of professionalism when dealing with patients using telemental health services.[16] Practitioners are responsible for enforcing boundaries by informing their patients about appropriate behavior, so that patients are discouraged from calling at inappropriate times absent an emergency. Videoconferencing systems and multi-layered protections, such as passwords and gatekeeping, can prevent patients from logging into another patient’s appointment. These boundaries exist for good reason. A 2017 report demonstrated that there is an escalating shortage of psychiatrists.[17] Nearly 1 in 5 people in the US has a mental health condition.[18] Mental health providers are nearly overwhelmed; inappropriate, frequent, and unnecessary contact adds another level of complexity to treating patients. Mental health providers need to be stewards of the resource they provide. They must concentrate on the patient they are with. They must also guard against burnout, because dealing with patients too often, even though technology allows for it, will leave them less effective for the rest of their patients. While these professional boundaries must be policed carefully, practitioners should also be careful of setting boundaries that are too high. Thus, providers must strike a balance between too much intimacy and too little.[19] Presence and physical touch have symbolic meaning. Being with a person reaffirms their personhood, and both provider and patient can feel that. Humans are relational beings, and a physical relationship often comforts people. It may also legitimize and reinforce the patient through sensation and perception. There may be something inherently missing from the practice of telemental health, as exemplified by group members’ inability to console others in group therapy sessions over teleconference.[20] The screen may also be an agent of depersonalization. 
It may make the patient’s complaints seem less real, or the patient may feel as though they are not being heard. Although the evidence of telemedicine’s successes above may seem to contradict this, none of the studies that extol the benefits of telemental health have follow-up periods greater than one year. And while many studies show that patients are highly satisfied with telemental health, measurements of satisfaction are not standardized. It remains unclear whether patients benefit enough from their telemental sessions, or whether they require more regular sessions to stay as satisfied as they were with in-person mental health care. Perhaps, as time goes on, patients will become more frustrated with telemental health. Research must answer these questions, but currently it does not sufficiently address the metaphysical arguments against telemental health.

CONCLUSION

Privacy is a key practical issue that remains. Although providers try to combat issues of privacy by using high-level conferencing software with end-to-end encryption,[21] surveillance and breaches may still occur. While not suitable for all kinds of patients, telemental health services have proven effective for groups of people who otherwise may not have been able to receive care over the past two years. Some settings, such as group therapies, are best suited to in-person meetings. Although online sessions encourage individuals to show up regularly, their downsides are not yet known. There is incredible power in the idea of presence, and humans are inherently relational beings. For some, a lack of contact is unwelcome and makes therapy less satisfying. Opportunities for in-person clinical care remain a priority for some patients, and healthcare providers should further investigate prioritizing in-person care for those who want it. 
Telemental health could be beneficial for emergencies, natural disasters, vulnerable groups, or when patients cannot get to their provider's office. However, for now, telemental health should not take a leading role in providing mental health treatment.

[1] Chiauzzi E, Clayton A, Huh-Yoo J. Videoconferencing-Based Telemental Health: Important Questions for the COVID-19 Era from Clinical and Patient-Centered Perspectives. JMIR Ment Health, 2020. doi:10.2196/24021
[2] Joint Task Force for the Development of Telepsychology Guidelines for Psychologists. Guidelines for the practice of telepsychology. American Psychologist, 2020. 791–800. doi:10.1037/a0035001
[3] Gentry MT, Lapid MI, Rummans TA. Geriatric Telepsychiatry: Systematic Review and Policy Considerations. Am J Geriatr Psychiatry, 2019. doi:10.1016/j.jagp.2018.10.009; Campbell R, O'Gorman J, Cernovsky ZZ. Reactions of Psychiatric Patients to Telepsychiatry. Ment Illn. 2015;7(2):6101. doi:10.4081/mi.2015.6101; Malhotra S, Chakrabarti S, Shah R. Telepsychiatry: Promise, potential, and challenges. Indian J Psychiatry, 2013. doi:10.4103/0019-5545.105499; Reinhardt I, Gouzoulis-Mayfrank E, Zielasek J. Use of Telepsychiatry in Emergency and Crisis Intervention: Current Evidence. Curr Psychiatry Rep, 2019. doi:10.1007/s11920-019-1054-8
[4] Gentry, Lapid, and Rummans, Geriatric Telepsychiatry
[5] Abuwalla Z, Clark M, Burke B, Tannenbaum V, Patel S, Mitacek R, Gladstone T, Voorhees B. Long-term telemental health prevention interventions for youth: A rapid review. Internet Interventions, 2017. doi:10.1016/j.invent.2017.11.006
[6] Hailey D, Roine R, Ohinmaa A. The effectiveness of telemental health applications: a review. Can J Psychiatry, 2008. doi:10.1177/070674370805301109
[7] Reinhardt, Gouzoulis-Mayfrank, and Zielasek, Use of Telepsychiatry in Emergency and Crisis Intervention
[8] Kealy D, Piper W, Ogrodniczuk J, Joyce A, Weideman R. Individual goal achievement in group psychotherapy: The roles of psychological mindedness and group process in interpretive and supportive therapy for complicated grief. Clinical Psychology & Psychotherapy, 2018. doi:10.1002/cpp.2346; Schwartze D, Barkowski S, Strauss B, Knaevelsrud C, Rosendahl J. Efficacy of group psychotherapy for posttraumatic stress disorder: Systematic review and meta-analysis of randomized controlled trials. Psychother Res, 2019. doi:10.1080/10503307.2017.1405168; Wetzelaer P, Farrell J, Evers SM, Jacob GA, Lee CW, Brand O, van Breukelen G, Fassbinder E, Fretwell H, Harper RP, Lavender A, Lockwood G, Malogiannis IA, Schweiger U, Startup H, Stevenson T, Zarbock G, Arntz A. Design of an international multicentre RCT on group schema therapy for borderline personality disorder. BMC Psychiatry, 2014. doi:10.1186/s12888-014-0319-3
[9] Lopez A, et al. “Therapeutic groups via video teleconferencing and the impact on group cohesion.” mHealth, 2020. doi:10.21037/mhealth.2019.11.04
[10] Van Wynsberghe A, Gastmans C. Telepsychiatry and the meaning of in-person contact: a preliminary ethical appraisal. Med Health Care Philos, 2009. doi:10.1007/s11019-009-9214-y
[11] Thomas Insel, “Tech Can Help Solve Our Mental Health Crisis. But We Can’t Forget the Human Element.,” Substack newsletter, Big Technology (blog), January 27, 2022, https://bigtechnology.substack.com/p/tech-can-help-solve-our-mental-health
[12] Armstrong CM, Ciulla RP, Edwards-Stewart A, Hoyt T, Bush N. Best practices of mobile health in clinical care: The development and evaluation of a competency-based provider training program. Professional Psychology: Research and Practice, 2018. doi:10.1037/pro0000194
[13] Armstrong CM, Ciulla RP, Edwards-Stewart A, Hoyt T, Bush N. Best practices of mobile health in clinical care: The development and evaluation of a competency-based provider training program
[14] Sabin JE, Skimming K. A framework of ethics for telepsychiatry practice. Int Rev Psychiatry, 2015. doi:10.3109/09540261.2015.1094034
[15] Lustgarten SD, Colbow AJ. Ethical concerns for telemental health therapy amidst governmental surveillance. American Psychologist, 2017. doi:10.1037/a0040321
[16] Armstrong CM, Ciulla RP, Edwards-Stewart A, Hoyt T, Bush N. Best practices of mobile health in clinical care: The development and evaluation of a competency-based provider training program
[17] Merritt Hawkins. An Overview of the Salaries, Bonuses, and Other Incentives Customarily Used to Recruit Physicians, Physician Assistants and Nurse Practitioners, 2018. http://physicianresourcecenter.com/wp-content/uploads/2018/09/Merritt-Hawkins-2018-Review-of-Physician-and-Advanced-Practitioner-Incentives.pdf
[18] Bose J, Hedden S, Lipari R, Park-Lee E. Key Substance Use and Mental Health Indicators in the United States: Results from the 2015 National Survey on Drug Use and Health, 2015. https://www.samhsa.gov/data/sites/default/files/NSDUH-FFR1-2015/NSDUH-FFR1-2015/NSDUH-FFR1-2015.pdf
[19] Sabin and Skimming, A Framework of Ethics for Telepsychiatry Practice
[20] Van Wynsberghe and Gastmans, Telepsychiatry and the Meaning of In-Person Contact
[21] Lustgarten and Colbow, Ethical Concerns for Telemental Health Therapy amidst Governmental Surveillance