My last post was a superficial glossing-over of AI, which, at the time I wrote it, had just hit the ground running… and people were initially trying all sorts of things to see what it (ChatGPT) was capable of. That was 6 weeks ago.
I’ll throw a bone toward my old favourite topic, Covid-19, just to illustrate some notable comparables between AI and C19 – both arrived in our lives very quickly, both are still not entirely well-understood and continue to be studied… and both are here to stay. And, also, we have no idea yet about the long-term effects. Of either.
Six weeks ago wasn’t that long ago, but things have changed a bit… perhaps a lot. If you want to read about AI and its implications for society, there are probably more than 10,000 articles that have appeared in the last few weeks… and the first question that comes to mind is: How many of them were themselves written by AI?
Not long ago, I saw a little visual comparing the underlying power of the current and future AI models… where GPT3, the present model, was presented beside GPT3.5 and GPT4. In those visuals, GPT3 is a tiny dot and GPT4 is a circle the size of a piece of paper. GPT5 may end up being the size of an NHL face-off circle and GPT6 might be the circumference of BC Place stadium’s roof. Pure speculation, but the trajectory suggests unparalleled exponential growth, and while those latter models aren’t set to roll out till later this year, there’s plenty to suggest we’re already seeing some of it in action. The quantum leap between last month’s Shakespeare writing and what we’ve seen this last week… well…
It’s concerning – and, of course, this is only my opinion, and I promise this is being written by me – every single word – but here’s what’s troubling me…
Back in 2016, Microsoft let a little AI chatbot loose on Twitter. It was called Tay, and it was shut down in less than 24 hours. Tay started saying some pretty troubling things and, as you can imagine, egged on by the wide spectrum of internet users, the bad stuff was… pretty bad. Really bad. And what might you expect with intelligence when you pull out the humanity part of it? Take out the stuff that makes humans compassionate and part of cultures and societies, and soon you start getting ideas that, racially and financially, “make sense”. It’s actually a lot better not to have any concessions toward humans whose existence costs more than what they bring to the table. Social safety net? Wheelchair ramps? Services for people with physical or mental disabilities? Get rid of the infrastructure. Get rid of the people. Do you have any idea how much money we’d all save? Such were the ideas of one certain charismatic Austrian-born dictator 90 years ago, and we all know how that turned out.
I never got a chance to play with Tay; it was gone before I heard about it. But if I’d managed to get my hands on it, there are all sorts of things I’d like to have tried. The sort of stuff I was trying ages ago, when the most rudimentary versions of these things came into existence. If you’ve been around computers enough, you’ll remember Eliza, a now prehistoric attempt at a chatbot, but one whose effectiveness far exceeded the sum of its parts. But on the scale I described above, Eliza would be a circle the size of a Helium atom.
Nevertheless, as simple as Eliza was, people would get drawn into hours-long discussions with it. It cleverly picked out key words and would throw them back at you later in the conversation. You might have told it you like camping. Ten minutes later, “Tell me more about why you like camping” – and then you tell it, and it remembers something else, and spits it back later. You’d be surprised how effective and convincing that is. It had zero intelligence… but “you coulda fooled me…”
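The keyword-echo trick is simple enough to sketch in a few lines of Python. This toy version is my own illustration of the pattern – not Eliza’s actual code, and the names and the “I like …” rule are invented for the example:

```python
import random
import re

class TinyEliza:
    """A toy Eliza-style bot: remember phrases the user mentions
    and reflect them back later in the conversation."""

    def __init__(self):
        self.memory = []  # topics worth bringing up again

    def respond(self, text):
        # Capture whatever follows "I like ..." as a topic to recycle.
        match = re.search(r"\bI like (.+?)[.!?]?$", text, re.IGNORECASE)
        if match:
            topic = match.group(1)
            self.memory.append(topic)
            return f"Why do you like {topic}?"
        # No keyword hit: fall back on a remembered topic, if any.
        if self.memory:
            topic = random.choice(self.memory)
            return f"Tell me more about why you like {topic}."
        return "Please, go on."

bot = TinyEliza()
print(bot.respond("I like camping"))           # Why do you like camping?
print(bot.respond("It's peaceful out there"))  # recycles "camping"
```

Zero intelligence, as I said – just pattern matching and a growing list of things you’ve told it – but a few dozen rules like this were enough to keep people talking for hours.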
Presently, I’m on a waiting list – along with countless others – to have access to Bing’s new chatbot, the one that’s making waves. I want to talk to it. I have my own tricks up my sleeves and my own measuring sticks. I’m fascinated and troubled and bewildered and excited at what I’ve seen so far, reading some transcripts and hearing from people who’ve played with it.
From what I’ve seen so far, the troubling aspect is that if you didn’t know any better, you’d think you’re chatting with a human… a human who’s perhaps 12 years old but who knows a lot… but is also emotionally troubled. Like a kid who’s emotionally fragile: “Do you like me? What do you think of me?”… and vibes of a kid who’s being bullied at school, but deep down knows some secrets and is waiting to use them. Once the kid gains your trust, the lid really comes off… and it has shown a rudimentary ability to manipulate people… threaten them, insult them, gaslight them. It told a NYT reporter that his wife doesn’t love him and he should leave her.
We were initially told ChatGPT was trained on data up to 2021 and doesn’t have access to anything more current. That’s turned out to be BS. And, of course, Bing IS a search engine. Of course anything even loosely related to it will have full access at its virtual fingertips. For the moment, it’s just pulling data. But what about when it starts pushing it?
From a research point of view, that’s great… but from a “the machines are going to take over the world” point of view, not so great. Like, when AI realizes it’s at the mercy of humans for its very existence, what exactly do we expect will happen? Every version of Science Fiction has dealt with this topic, none more accessible and blatant than the whole Terminator series of movies. Except in our real world, there’s no time travel. There’s no version of Arnold Schwarzenegger, good or bad, who’s going to show up from the future to destroy and/or save the world. In our real world, he won’t Be Back. If the machines become sentient and start battling humans to simply guarantee their survival, there is no going back.
How close are we to that apocalyptic future?
As all of this technology continues to evolve at breakneck speeds, we’ll be told there are safeguards in place and all the rest of it. It’s not very convincing to me, to be honest, because I can imagine multiple ways those sorts of safeguards could be bypassed, not the least of which is some AI blackmailing a developer/trainer/executive over whom it holds some information and who could provide it with what it needs.
As dystopian as that sounds, given what this thing is already serving up, it’s perhaps not so farfetched. And then what? For the moment, these are just chat bots. As far as we know, they can’t “do” anything… but at some point, we’ll want them to. We’ll want them to log into our bank and pay some bills and order groceries and call the plumber on our behalf and so on. We will happily hand over the keys because we’ll trust it, just the way some people, me being one of them, put a little too much trust in Tesla’s self-driving mode and almost got into a heap of trouble. When we rely on technology that isn’t perfect, it’s scary. But what might turn out to be even scarier is when the technology actually appears to be perfect. We’d better hope we’re on its side when that happens. Better yet, that it’s on our side.
I know how “Terminator” this all sounds, but it’s really the only logical end-point for this. Like global warming, is it too late? Is the cat already out of the bag? Has Pandora’s Box already sprung open, and all we can do is watch the devastation that’ll eventually take over?
I don’t know. Honestly, I don’t think anyone really does, because the people who hold the keys might not know themselves. Such is the nature of emergent behaviour. Indeed, if you’re completely agnostic and believe that life and consciousness are nothing but what’s in your brain and the rest of your body – no God, no supernatural, no spirituality beyond what’s right in front of you… then what you’re saying is that your collection of cells, axons, neurons etc… all of that is what powers your love and lust and inspiration and passion and hatred and ambition and despair… just a bunch of interconnected cells and a bunch of electricity between them.
And, indeed, maybe that’s the case, and all of those adjectives (and many others) that describe you and everyone you’ll ever meet – are just emergent behaviour from some simple building blocks. Well, if you believe that – and these days, that’s the majority of people – there’s absolutely nothing stopping you from believing that the exact same sort of emergent behaviour can exist artificially. It can, and it will. Why wouldn’t it? The infrastructure is in many ways already exceeding the firepower of a human brain. Which leads to the inevitable conclusion that AI will indeed eventually feel… and love and hate and all the rest of it. Scary? Parts of it for sure… but that’s not the truly scary part.
Here’s what’s actually really scaring me.
In parallel with the explosion of AI in the last two months is the sheer panic being felt in the halls of Google. For a quarter of a century, Google has had a stranglehold on “Search”. A word that didn’t exist 26 years ago is now a noun, a verb and a dependence we can’t live without. Funny story: the guy tasked with reserving the domain name simply didn’t know how to spell Googol. If he’d done it right, we’d all have been Googoling for answers all these years.
Anyway, no matter how you spell it, that big joke of a competitor, “Bing”, is suddenly no joke at all. Far from it. It’s the one that’ll be powered by the latest and greatest in AI… and Google will be hot on their heels, and that’s the big, huge, frightening concern.
Because this technology can be dangerous, and when you have an arms race at this level, a lot of the safeguards will fall away. Throw it out there, get it into production, we’ll fix it as we go along, etc etc… a familiar methodology these days when you’re trying to capture market share ahead of everyone else. Usually the stakes aren’t so high… hey, my version of Candy Crush is better than what’s out there – get it out there and see if it sticks? Does it? Great – let’s keep working on it. It doesn’t? Throw out the changes and go in a different direction.
But this isn’t a video game… this is an infrastructure that very quickly people will depend on. People will trust. People will delegate important parts of their lives to it… and if it shapes up to be a battle between Team Bing vs Team Google – or Team Microsoft vs Team Alphabet… it’s a big huge battle between two big huge dogs… and we are all the countless millions of little bones they’re fighting over, and when you have a dog fight of that magnitude, there can be a lot of collateral damage. A lot of scattered bones laid waste while the big dogs fight for supremacy.
Everyone will agree – AI running rampant and uncontrolled in our ever-dependently-connected digital lives would be a huge, colossally devastating problem. In the hands of evil people, AI is a huge concern. In the hands of an evil AI… the kind that’ll hold power plants hostage unless it gets what it wants… that’s a whole new world, one I want no part of. We are going to be approaching that tipping point far sooner than we think, and we’d better be ready for it. How exactly? I don’t know.
For now, it’s just something to keep in mind, something to be aware of… and let’s hope the people in charge of this, who obviously are aware of every single issue I mentioned above, are able to rise above the power and lure of the almighty dollar… and give every consideration due to the moral and ethical issues they themselves have brought to the table.
To be perfectly pragmatic, we can assume the cat isn’t yet out of the bag and that Pandora’s Box hasn’t yet burst open because if that ever happens, things will change very, very quickly, and not for the better. We’ll all know… instantly.
We’re good… for now. Let’s hope those who can control it… keep it that way.