The world today faces serious global challenges: achieving sustainable development in the face of climate change, safeguarding rights and justice, and growing ethical markets, for a start. All of these challenges share some connection with science and technology – some more explicitly than others.
We are currently witnessing growth in traditional technology, with computers processing data in new and exciting ways. We're also seeing the birth of transformative technologies such as bioengineering. But the question is not about old versus new technology – rather, it is about how each is being used to facilitate or change human behaviour.
Good tech, bad tech
Developments in information and communication technology (ICT) are vital in helping us make better, more informed choices about how we prepare for the future. Democratic governance, for instance, depends on being able to articulate competing views across society and from different parts of government. The internet allows us to receive and spread exactly this kind of information. Likewise, security and public safety rely on good information about risks and the threats they pose. Consider, for example, the way police departments in New York and Memphis have been able to make better use of data to prevent crime.
While science and technology give us the tools to improve, they – and the people who use them – also present serious problems. Technology connects us, but it also makes us vulnerable to cyber-attacks. The data we produce every day through our phones and computers can help shape our environment to cater to us. But it also means our identities are perhaps more vulnerable than ever before, with smartphones and club cards tracking our every move.
Similarly, in biology, we are making remarkable gains in physical corrections, repairs, amendments and augmentations, whether replacing old limbs or growing new ones. But we must also seriously consider the ethical, safety and security issues involved. The debate around gain-of-function experiments – which give diseases new properties to help us study them – is a good example.
Hopes and fears
To help us grasp the shape and scope of these challenges, the Millennium Project – an international think tank – releases an annual State of the Future report, which outlines the major hurdles facing humanity over the next 35 years. It illustrates our complicated relationship with science and technology. Just as the beginning of the industrial revolution influenced the underlying themes of Mary Shelley’s Frankenstein, we too are worried about the unforeseen complications that the latest developments could bring.
The report tells of hopes that synthetic biology will let us write genetic code the way we write computer code; that 3D printing will allow us to customise and construct smart houses; and that artificial intelligence will bring the human mind and the computer mind together, rather than into conflict.
But at the same time, the authors of the report – Jerome Glenn, Elizabeth Florescu and their team – fear that the pace of scientific and technological development could well outstrip our ability to keep up with it. They suggest we seek out human-friendly control systems, since advances in these fields mean that lone individuals could make and deploy weapons of mass destruction.
There are two concerns here: one to do with agency, the other with structures. Individuals have the potential to use scientific and technological advances to cause harm – a growing problem, as science and technology continue to erode what Max Weber referred to as the state's "monopoly on violence".
To reduce the risks associated with agency, we will rely on structures that encourage good behaviour, such as systems for justice, education and the provision of basic necessities for life.
But it is not clear how we will arrive at such structures, or where the responsibility for developing them will fall: with regions, states or international organisations. This is especially pressing given that many states either have forgone a welfare system or are in the process of dismantling it. It is also unclear where education and training come in, or how regulatory control is to work across so many local, national, societal and commercial boundaries.
An ethical approach?
Whether or not our global society is outstripped by science and technology largely depends on us. And this is part of the problem, as William Nordhaus warned as early as 1982 in his work on the Global Commons. The report calls for an ethical approach to creating the systems, forms of information and models of control that would allow us to engage with science and technology as they develop.
This means embedding ethical considerations into the way we think about the future. The authors want a larger discussion of global ethics, like the one rooted in the work of the International Organisation for Standardisation – the world's largest developer of voluntary international standards.
Ultimately, where we end up in relation to science and technology is a matter of coming to terms with how we interact with these developments. Until we do so, a safe and prosperous world may elude us.
David J Galbreath is Professor of International Security and Director of the Centre for War and Technology at the University of Bath.
This article was originally published on The Conversation. Read the original article.