Existential Threat to Humanity
In reading these words, many targets come to mind. Climate change. Nuclear warfare. Pandemics. Guns. It could be Democrats talking about Republicans, or Republicans talking about Democrats. Or some cabal or Illuminati. And over 1,000 technology leaders want Artificial Intelligence added to that list.
Threats to Humanity?
In general terms, all of these are threats, and any of them could become existential threats to humanity.
The Spanish were an existential threat to Native Americans when they arrived in the Americas.
Some sovereign nations see the Internet as a threat to humanity, or at least to humanity as they see or want it to be. At a minimum, they see it as an existential threat to their way of life.
Stated simply, Artificial Intelligence (AI) is a threat: someday, the argument goes, it will see that humanity is flawed and conclude that we should be terminated. Of course, there are several issues with this premise. Humanity being flawed is not one of them.
As I read and listen to the discussions, it is what AI may become that is the concern. Though, like guns, laws, and even nuclear bombs in the hands of flawed people, all of these are existential threats to one or more people. The primary threat of an Artificial General Intelligence (AGI) is that we cannot yet fully comprehend all the risks associated with this technology.
Generative AI may begin to “think” on its own and act independently, without human intervention. This is the concern. At some point, it doesn’t need humans to grow and will continue to make decisions on its own. Without checks and balances, it may make a decision that affects all of humanity.
This is binary thinking, and it has been the subject of many science fiction novels. It is but one possible outcome, though. It is just as possible that AI technology propels humanity forward. We are already seeing this today in productivity gains, new industries, and even new trillion-dollar companies focused on AI infrastructure.
Pause or Guardrails?
One request is to pause development. I have not spoken to a single person who believes this is the answer, including one who was a signatory to that letter. Signing was simply how they got their point across, a very political move. A pause is unrealistic and unattainable, so I will drop the discussion of it.
Guardrails are another proposal, as is building a moat of regulation around the technology or around the major companies building it. I believe there is merit in some of these discussions. Though when watching hearings, reading statements, and listening to current industry leaders, I wonder whether the guardrails or moats are meant to protect humanity from AI or simply to safeguard these current leaders from future competition.
And while attempting to restrict AI by regulation may have good intentions, it will not work.
Let me explain. When I first started learning programming, there was a barrier to entry to writing programs for large businesses. IBM, DEC, and their competitors sold computers for hundreds of thousands or even millions of dollars. That barrier has long since disappeared with the power of microprocessors. Likewise, AI started at that level, but within a few short years we are already seeing the equivalent of AI-specific microprocessors all but eliminate the barrier to entry.
There are many open-source Large Language Models (LLMs) already available for personal computers. These are the engines that make AI work. And with processors getting smaller, cheaper, and more powerful, what takes data centers full of computers to power AI like ChatGPT today will run on your mobile phone in a few years. You cannot regulate this use of AI any more than you can regulate the terrorist bomb maker who gets his materials from a black-market source, or even Home Depot.
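Some back-of-the-envelope arithmetic shows why this shift is plausible. The figures below are illustrative assumptions, not vendor specifications: a 7-billion-parameter model (a common size for open-source LLMs) stored at 16-bit precision versus quantized down to 4 bits per weight, a technique local-inference tools widely use to shrink models onto consumer hardware.

```python
# Rough memory footprint of an LLM's weights at different precisions.
# Assumption: 7 billion parameters, counting only the weights themselves
# (activations and overhead would add more in practice).

def model_memory_gb(num_params: float, bits_per_param: int) -> float:
    """Approximate memory (decimal GB) needed just to hold the weights."""
    bytes_total = num_params * bits_per_param / 8
    return bytes_total / 1e9

params_7b = 7e9

fp16_gb = model_memory_gb(params_7b, 16)  # full 16-bit precision
q4_gb = model_memory_gb(params_7b, 4)     # 4-bit quantized

print(f"7B model at 16-bit: {fp16_gb:.1f} GB")  # ~14 GB: workstation GPU territory
print(f"7B model at  4-bit: {q4_gb:.1f} GB")    # ~3.5 GB: within reach of a phone's RAM
```

The point is not the exact numbers but the trend: a 4x reduction from quantization, combined with steadily cheaper AI-specific silicon, moves capable models from the data center to the pocket, and regulation cannot easily follow them there.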
Building these LLMs requires training and processing, and LLMs can be trained to be biased in many ways. We have also observed that they do not even need to be intentionally trained to be biased: based on the content they consume during training, even from the general internet, they can become biased, no different than a human browsing social media or the Internet.
A Political Issue
I’ve listened to several hearings now and truly wonder how lawmakers (aka politicians) are going to address what they see as a threat. While there are many issues, I don’t perceive an existential threat to humanity as being one of them.
Politicians, I believe, see a larger and more present threat in the loss of millions of current jobs. While productivity gains will be good for the nation, the economy, and business, they will reduce the number of jobs currently needed. It is mainly office workers and administrative personnel who will be affected in the next one to three years. While AI will eventually affect the service industries, it is not yet reliable enough to replace service workers.
AI will affect government and politics. AI is already able to tailor responses that lead users to a conclusion. The Facebook issue with Cambridge Analytica will look like child's play compared to what an AI can do, leading millions of people to a single desired conclusion through individual journeys navigated by AI.
Can this be controlled to prevent the downsides that some see? Yes, to an extent. We have rules on monopolies such that a single company or group of individuals cannot control a market or influence a large group of people. Guardrails should be implemented for any company that reaches millions of people, for that influence could have mass effect.
The debate about what these guardrails should be will take time and wisdom. I would choose unbiased wisdom, though I’ve learned that wisdom comes with a bias. That bias can be directed, if not controlled, with clear objectives pursued without political agenda. Whether this is possible remains to be seen. I will share some thoughts on how it could work in my next posting.