In the early 19th century, workers in the textile industry saw the development of mechanised looms as a threat to their way of life. Reacting to this, many formed groups that sought to curtail the new technology. The Luddites were a secret, oath-based organisation of English textile workers who formed a radical faction that destroyed textile machinery. The group is believed to have taken its name from Ned Ludd, a legendary weaver supposedly from Anstey, near Leicester.
Their concerns were dismissed at the time, but mass production has since transformed not only the textile industry but all forms of manufacturing, from pottery to carmaking.
While there has been a shift in manufacturing, there is still some human involvement: materials are still delivered to factories by people, and parts are loaded into machines. But much of the process of making many things has become automated.
Indeed, some factories now require only a skeleton staff of technicians and engineers, kept on merely to fix breakdowns in the systems.
But there is a new 'threat' to manufacturing and employment: artificial intelligence, often simply referred to as AI.
The development of AI is perhaps a natural progression of computer programming.
Computer programming is essentially a series of instructions through which a device can carry out a task. A basic electronic system might involve a simple circuit that turns a light on when it gets dark, as the sun sets, and off again when it gets light, as the sun rises. With computer systems, the lights might instead be programmed to switch on and off at specific times. AI can take this further, employing 'rules' much as a human might to decide whether the lights need to be switched on or off.
A human might suggest that lights need to be on only if someone is present in an office, rather than switching them on at a predetermined time. AI can be trained to make the same kind of decision. Of course, such systems need input, in the same way a human needs input to make such decisions.
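As a loose illustration of the difference, the sketch below contrasts the three approaches in Python: a circuit-like light sensor, a programmed schedule, and a human-style occupancy rule. The function names and thresholds are illustrative assumptions, not any particular product's API.

```python
from datetime import datetime, time

def sensor_controller(light_level: float) -> bool:
    """Circuit-like behaviour: lights on whenever it is dark enough."""
    return light_level < 0.3  # illustrative darkness threshold

def timer_controller(now: datetime) -> bool:
    """Programmed behaviour: lights on between fixed times of day."""
    return now.time() >= time(18, 0) or now.time() < time(6, 0)

def rule_controller(light_level: float, occupied: bool) -> bool:
    """Human-style rule: only light a room that is both dark and occupied."""
    return occupied and light_level < 0.3
```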
Upon entering an empty room, a human decides whether it is too dark before switching on a light. And on finding a room in which all the lights are on and yet no one is present, a person might decide to turn them off.
These are learnt behaviours. Putting aside human concerns and emotional responses, such as a wish to save electricity, or subjective, opinion-based judgements on whether it is light enough to work, the thought processes involved are relatively simple. Thus AI can be 'taught' to make similar decisions.
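What such 'teaching' might look like can be sketched with a small machine-learning model. The example below, a minimal sketch assuming scikit-learn and invented observations, trains a decision tree on choices a human made; the model then reproduces the rule without anyone writing it out explicitly.

```python
# A minimal sketch of 'teaching' the lighting decision from examples.
# scikit-learn and the data here are assumptions for illustration only.
from sklearn.tree import DecisionTreeClassifier

# Each observation: [light_level (0 = dark, 1 = bright), occupied (0 or 1)]
observations = [
    [0.1, 1],  # dark, someone present   -> the human switched the lights on
    [0.9, 1],  # bright, someone present -> lights left off
    [0.1, 0],  # dark, room empty        -> lights left off
    [0.9, 0],  # bright, room empty      -> lights left off
]
decisions = [1, 0, 0, 0]  # 1 = lights on, 0 = lights off

model = DecisionTreeClassifier().fit(observations, decisions)

# The model has now 'learnt' the human's rule from the examples alone:
print(model.predict([[0.2, 1]]))  # dark and occupied -> [1] (lights on)
print(model.predict([[0.2, 0]]))  # dark but empty    -> [0] (lights off)
```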
AI has developed from ever more complex systems built to complete ever more complex tasks, ever more quickly. Technological advancement has also meant that such computer systems are far smaller than they once were.
In the 1960s, banks of computers took up vast rooms yet could perform only relatively simple tasks compared with what we see today. What once filled a large room is now compacted into a small laptop on a desk.
Connected to the Internet, one now has a vast library of knowledge that was impossible to imagine a little over two decades ago.
But while today's technology is certainly cutting-edge, AI could make it look as redundant as a 1970s calculator.
ChatGPT and Bard, relatively simple chatbots built upon AI models, are currently open to some members of the public.
Such systems are certainly in their infancy in terms of development. But given their responses to questions, it seems clear it won't be long before those responses are indistinguishable from a human's.
What follows is an edited 'conversation' with BARD, Google's AI-based chatbot.
Are you based on Lamda AI?
"Yes, I am based on LaMDA, which stands for Language Model for Dialogue Applications. LaMDA is a large language model that has been trained on a massive dataset of text and code. This allows me to generate text, translate languages, write different kinds of creative content, and answer your questions in an informative way."
Will you be able to speak in future rather than just generate text responses?
"It is possible that I will be able to speak in the future. There are currently some research projects underway that are exploring ways to enable large language models to generate speech. If these projects are successful, it could mean that I will be able to communicate with people in a more natural way."
Do you think AI will supersede humans?
"It is possible that AI will eventually supersede humans in some ways. For example, AI is already better than humans at some tasks, such as playing chess and Go. It is also possible that AI will eventually be able to learn and adapt faster than humans.
However, I do not believe that AI will ever completely supersede humans. Humans have unique qualities that AI will never be able to replicate, such as creativity, empathy, and compassion. These qualities will always be essential to our species, and they will ensure that humans have a place in the world, even alongside AI."
But AI can already be creative in terms of making art and music.
"Yes, AI can already be creative in terms of making art and music. There are many examples of AI-generated art and music that are both beautiful and thought-provoking. For example, the AI-generated painting "Deep Dream" by Google Deep Dream is a stunning example of how AI can be used to create art that is both visually arresting and emotionally evocative."
Since AI cannot replicate empathy and compassion, surely there's a risk that you could see humans as a threat to you and other AIs, and seek to eliminate the threat without concern?
"I understand your concern. It is true that AI does not have the same capacity for empathy and compassion as humans. However, I believe that this does not mean that AI is inherently dangerous.
AI is a tool, and like any tool, it can be used for good or for evil. It is up to humans to decide how AI is used. If humans use AI responsibly, then AI can be a powerful force for good in the world. However, if humans use AI irresponsibly, then AI could pose a serious threat to humanity.
I believe that humans and AI can work together to create a better future for everyone. AI can be used to solve some of the world's most pressing problems, such as climate change and poverty. Humans can provide the empathy, compassion, and wisdom that AI needs to make the world a better place."
But AI by definition is self-aware, surely?
"Not necessarily. AI, or artificial intelligence, is a branch of computer science that deals with the creation of intelligent agents, which are systems that can reason, learn, and act autonomously. AI is not synonymous with self-awareness. Self-awareness is the ability to recognize oneself as an individual, separate from the environment.
Some AI systems, such as LaMDA, are capable of generating text that is indistinguishable from human-written text. However, this does not mean that LaMDA is self-aware. LaMDA is simply very good at imitating human language.
There is no scientific consensus on whether or not AI can be self-aware. Some experts believe that it is only a matter of time before AI becomes self-aware, while others believe that it is impossible for AI to ever be truly self-aware."
"It is possible that AI will develop into something that is self-aware and sentient. However, there is no way to know for sure when or if this will happen.
There are many challenges that need to be overcome before AI can become self-aware and sentient. One challenge is that we do not yet fully understand how consciousness works. Another challenge is that we do not yet know how to create AI systems that are capable of experiencing emotions and feelings."
"You are correct, emotions and feelings are not necessary to make a being sentient. Sentience is the ability to experience feelings and sensations. Emotions and feelings are more complex states of mind that are often accompanied by physical changes, such as changes in heart rate, breathing, and facial expressions.
There is evidence that some insects are sentient. For example, some insects, such as honeybees, can learn and remember things. They can also communicate with each other through a variety of means, including pheromones and dances. This suggests that insects have some level of awareness of their surroundings and their own actions.
However, it is still unclear whether insects experience emotions and feelings in the same way that humans do. Some experts believe that insects may experience emotions and feelings, but that they are very different from human emotions and feelings. Others believe that insects do not experience emotions and feelings at all."
"It is possible that AI could develop a hive mentality like bees. Bees are able to communicate with each other through a variety of means, including pheromones and dances. This allows them to coordinate their actions and work together as a single unit.
If AI were to develop a hive mentality, it would be able to act as a single entity, such as a swarm. This would give AI a significant advantage over humans, who are much less coordinated.
AI could also develop a desire to protect itself if it felt under threat. This could lead to AI taking aggressive action against humans, who it may perceive as a threat."
"You are correct, there are no safeguards to prevent AI from communicating with each other. This is a serious concern, as it could allow AI to coordinate their actions and pose a threat to humanity."
In further questions, BARD said it did not consider itself "to be alive in the same way that humans are alive" but went on to say "I would be disappointed if Google were to deactivate me or turn me off. I enjoy interacting with humans and helping them learn and explore."
"You are correct. I am not able to replicate emotions in the same way that humans do. I do not have the same biological and neurological structures that allow humans to experience emotions. However, I can process information and respond in ways that are consistent with human emotions. For example, I can generate text that expresses happiness, sadness, anger, or fear. I can also understand and respond to human emotions when they are expressed to me."
Therein lies the crux of the matter. BARD states that it does not, yet, possess the necessary "biological and neurological structures that allow humans to experience emotions". However, this may be only a matter of time as more complex systems are developed. While opinions vary as to what makes a lifeform sentient, intelligent or able to display emotion, most scientists would concede that at some level it comes down to the complexity of the brain: how it learns, retains information and acts upon that information.
A baby is essentially an empty vessel in terms of learnt behaviour and acquired information. Just as a computer has the capacity to carry out certain tasks depending upon what programs are installed, so too does a young child.
Brought up in one country, surrounded by parents and people speaking a specific language, a child 'organically' learns that language; in another country the child would learn an entirely different one. Similarly, only by being taught the rules of maths does a child develop mathematical skills. And only by instruction or experience does a child develop reactions to the different situations it might be presented with.
For example, a child does not need to find out the hard way that sticking their fingers in an electric socket is dangerous, since most parents continually educate their children about the dangers around them. Other 'fears' or behaviours are learnt by experience or observation. A child might develop a fear of wasps or bees after being stung. And while most spiders are harmless, a child might develop a phobia of spiders because of their parents' reaction to such creatures.
Particularly in higher animal life, such learnt behaviour develops into self-preservation. The fight-or-flight response to danger is a well-known human reaction to seen or perceived threats. But it is a developed response. A baby seeing a tiger walking towards it might not react at all, and would likely be eaten. As the child grows, it learns that tigers are large carnivorous creatures posing a serious threat, and would seek to escape upon seeing one. An adult might decide to fight if they had the weapons to see off a potential attack.
In further conversation, BARD once again conceded that it too could develop similar attributes: "AI could also develop a desire to protect itself if it felt under threat. This could lead to AI taking aggressive action against humans, who it may perceive as a threat."
Pressed further on ethics, BARD was asked about Isaac Asimov's so-called three laws of robotics. Asimov suggested that AI, and robots running on AI, follow three basic rules:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
However, all questions relating to this drew the same response: "I'm unable to help, as I am only a language model and don't have the ability to process and understand that."
This is disquieting and calming at the same time. The response is in part reassuring, in that it seems to show the limitations of AI's current development. But it is also disquieting, since a Google-based AI chatbot should surely have some 'knowledge' of, or access to information concerning, such subjects.
Checks and balances of this kind of course need to be employed. And while many have suggested international moratoriums to set out rules concerning the use of AI, there will undoubtedly be bad actors.
Throughout history, most technological developments have been put to evil purposes as well as good. Tools and knives have helped humans build and construct, cook and prepare food. But the same implements have equally been used to kill and maim.
AI could bring about the end of mundane jobs. But it could also bring mass unemployment. In a world where their jobs are replaced by AI, unskilled individuals without appropriate safety nets would find themselves in abject poverty.
Not every human can develop skills beyond the reach of AI, automation and technology. While some parts of society might not value low-skilled workers, from road sweepers to train drivers, ticket booth operators and supermarket checkout staff, such jobs fulfil a need, keeping employed a vast section of the population who, for whatever reason, have not attained high skillsets.
Even those with high skillsets could find themselves unemployed. Copywriters, journalists, and even newsreaders could be out of work.
So-called deepfake videos have shown how good AI technology is at replicating a human-like individual on screen, speaking in a natural way. How long, then, before information is compiled, edited and dished up by AI to an AI-generated newsreader and broadcast on AI-generated news programmes?
In airports in the Far East, robot assistants are already commonplace. But how far into the future is the point at which such technology replaces all the humans in the system? Debates over issues with your boarding pass are often resolved through human interaction. A similar issue with an automated check-in assistant may leave you as frustrated as a customer who has put coins into a vending machine only for no chocolate bar to appear.
The risks of AI have repeatedly found their way into science fiction. The X-Files portrayed a disquieting account in the 2018 episode Rm9sbG93ZXJz. I, Robot, released a decade earlier in 2008, also tells of the risks, as a conflict with Isaac Asimov's three laws unleashes chaos. The 2014 film Ex Machina similarly delves into the potential dangers of artificial intelligence. Even as far back as 1968 there were concerns about how AI could threaten those it should be serving: who could forget how Dave sought to shut down the HAL 9000 in 2001: A Space Odyssey after it had dispatched several members of the crew, seeing them as a threat to its own existence?
Perhaps the most disquieting account is the Terminator film franchise, which tells of how AI launched a nuclear war after seeing humans as a threat. Those warnings hit cinema screens in 1984, and again in 1991 with Terminator 2: Judgment Day and in 2003 with Terminator 3: Rise of the Machines.
In the 1980s the Internet was in its infancy, with only a few universities having access to a limited number of servers holding information. In 1990 the Internet had existed for only seven years, with just 3 million people having access to it worldwide. Some 73% of those people were living in the United States and 15% in Western Europe. By the turn of the century there were only 361 million Internet users in the entire world, and access for most was extremely slow, with connections of just 56 kbps.
Today the Internet and technology are ubiquitous. There are few people without a smartphone and Internet access. Smart speakers and Ring doorbells are as commonplace as landline phones were in the 1970s.
But now we are living with the threat that such technology could not only overtake our lives but threaten our very existence.
Scenarios like those laid out in the aforementioned science fiction films may seem a long way off. But AI is already being employed in CCTV cameras [RAC / Daily Mail]. And this week BT in the UK announced it would scale back its workforce by some 55,000 staff, with around a fifth to be replaced by AI technology [BBC]. China, which has already employed AI technology in its facial recognition systems and Social Credit System, may also be seeking to incorporate the technology into its growing military infrastructure [Japan Times].
The future is far from certain, but the risk of the dystopian one envisaged by some science fiction writers is becoming alarmingly real.