If you think the capabilities and uses of technology have accelerated rapidly in your lifetime, brace yourself: the next few years will leave the last 10 in the dust. We’re about to experience an even greater technology evolution.

How far will we allow technology to assist us? How can we use it to augment our weaknesses, particularly through artificial intelligence (AI)?

What will we trust and value in an even more technology-oriented world?

If we’re smart, we’ll answer these questions before – not after – the sea change.

These observations may sound dramatic. However, could we have imagined 12 years ago that we would all be carrying supercomputers in our pockets? And that we would spend more time looking at them than any other single object in our lives? Over the last 30 years, the devices we use have become smaller, smarter and cheaper than ever before.

Intel co-founder Gordon Moore predicted the technology that fuels our tiny wonders. In 1965, he made an observation now known as Moore’s Law: the number of transistors per square inch on integrated circuits would double every year.

Essentially, Moore meant that processors would increase in performance even as they decreased in size. He was right – today’s processors are almost one million times more powerful than they were 30 years ago. At the same time, the smallest processor now measures 7 nanometers, with a 5-nanometer processor in development. For reference, a human hair is about 75,000 nanometers wide. It’s hard to fathom anything so small, but these advancements fuel the incredibly small, powerful phones we rely on every day.
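That “one million times” figure squares with a quick back-of-the-envelope check. Here is a minimal sketch in Python, assuming (for illustration) a doubling roughly every 18 months over a 30-year span:

```python
# Back-of-the-envelope Moore's Law check.
# Assumption for illustration: performance doubles every 18 months.
DOUBLING_PERIOD_YEARS = 1.5
SPAN_YEARS = 30

doublings = SPAN_YEARS / DOUBLING_PERIOD_YEARS   # 20 doublings in 30 years
growth_factor = 2 ** doublings                   # 2^20

print(f"{growth_factor:,.0f}")  # 1,048,576 -- roughly one million
```

Twenty doublings in 30 years yields a factor of 2^20, or just over a million, which is why the “almost one million times more powerful” claim is plausible under an 18-month doubling period.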

Moore’s Law held true until last year, when the pace slowed from a doubling every year to approximately every 18 months to two years. We are now at the point where we cannot make processors much smaller, at least not at the rate Moore’s Law predicted.

Enter, stage left: artificial intelligence (AI). For the sake of definition, AI is technology that uses machine learning to create software and experiences that improve over time based on user feedback and data.

Any repetitive task can be performed by a robot or AI. With AI, technology not only will follow the rules it is programmed to follow; it will learn and change how it accomplishes tasks. In some ways, it will think like a human. Sensors will be – and in some cases already are – embedded in anything and everything.

The slowing of Moore’s Law is being counterbalanced by the power of AI and connected devices and sensors, which process information at exponentially growing scale and with a degree of built-in intelligence.

Thanks to Moore’s Law, our technology can now fit onto and within anything – including us. Sony, Google and Samsung have filed patents for smart contact lenses that take photos, record video and display information, just for a start. For diabetics, these lenses could eliminate daily finger pricks by measuring glucose levels continuously.

The impact will span far beyond automated warehouses and self-driving cars, touching each of us in ways we can’t even imagine today. This shouldn’t be hard to believe, considering most adults spend more time on their digital devices than they do thinking. In some ways, we’re “feeding the beast” of AI and technology advancement, giving it valuable information about who we are, how we think, and what we want and need.

We’re entering uncharted territory. Machines will mimic the behaviors that make us uniquely human. Sometimes, they might even perform those behaviors better than we do! Now, more than ever, we should consider the value we bring, as individuals and organizations, and how this technology can augment us rather than replace us.

Check out our recent Padilla POV webcast, AI: You Can’t Automate Trust, where we examined AI – the myths, the truths and why human trust is still so important.