Does Google’s Language Model for Dialogue Applications, or LaMDA, have consciousness? Is their artificial intelligence-driven conversational agent (commonly known as a chatbot) sentient? Google says, “no.” An employee who published a transcript of his exchanges with LaMDA and was fired for violating the company’s confidentiality policy disagrees.
What I find intriguing about the situation is the advanced degree of machine learning LaMDA demonstrates, and how chatbots and other applications enabled by artificial intelligence (AI) have become part of our lives. Furthermore, AI has significantly impacted business by helping to optimize daily operations, elevate customer experiences and ignite new revenue opportunities. It’s quickly becoming an essential part of any business strategy.
While it’s interesting to learn about amazing AI-powered use cases, let’s take a step back to AI basics, a move that can help take some of the “fuzziness” out of an increasingly complex technology.
Artificial intelligence uses algorithms, or sets of programmable instructions, to automate tasks and solve problems. Many functions that would normally require human intelligence are now executed by smart machines. These machines don’t have IQs the way humans do, so there are many aspects of intelligence AI cannot master, such as multitasking, social interaction, self-awareness and relationships. However, that’s not to say computer scientists cannot come close.
AI technologies are categorized by their capacity to mimic human characteristics, the technology they use to do this, their real-world applications and the theory of mind.1 Real or hypothetical, all AI systems fall into one of three categories: artificial narrow intelligence (ANI), which is built for a single, well-defined task; artificial general intelligence (AGI), which would match human-level reasoning across domains; and artificial superintelligence (ASI), which would surpass it.
Fortunately, for the time being, we are spared the unknown consequences of machines becoming humanoid, or even far more “intelligent” than people, as AGI and ASI are not yet possible. And while AI is still in its infancy, practices like machine learning and deep learning are inching us closer to that reality.
Machine learning (ML) is an AI technique that uses data, algorithms and models to understand and learn from experience over time. Essentially, it finds insights in data, adjusts itself based on the data being processed and continues to improve. With machine learning, AI systems get better at predicting and computing without being explicitly programmed for each task.
As an example of machine learning, streaming music services learn from the songs, artists and albums that you’ve previously listened to and suggest music you might like. The algorithms simply determine the genre(s) of music you are most likely to enjoy and provide similar options.
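To make that concrete, here is a minimal sketch in Python of how such a recommender might work, using scikit-learn’s NearestNeighbors to suggest songs whose genre profiles resemble what a listener has already played. The catalog, genre features and listening history are all invented for illustration; a real streaming service learns these signals from millions of sessions.

```python
# A minimal, hypothetical sketch of genre-based music recommendation.
# The catalog and listener history below are invented for illustration.
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Each song is described by a simple genre profile: [rock, pop, jazz, electronic]
catalog = {
    "Song A": [0.9, 0.1, 0.0, 0.0],
    "Song B": [0.8, 0.2, 0.0, 0.1],
    "Song C": [0.0, 0.1, 0.9, 0.0],
    "Song D": [0.1, 0.9, 0.0, 0.2],
    "Song E": [0.0, 0.0, 0.8, 0.3],
}
titles = list(catalog)
features = np.array([catalog[t] for t in titles])

# The listener's taste is the average profile of songs they already played.
history = ["Song A", "Song B"]
taste = features[[titles.index(t) for t in history]].mean(axis=0)

# Find the catalog songs closest to that taste profile and drop ones
# the listener has already heard.
model = NearestNeighbors(n_neighbors=3).fit(features)
_, idx = model.kneighbors([taste])
suggestions = [titles[i] for i in idx[0] if titles[i] not in history]
print("You might like:", suggestions)
```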
Deep learning, a subfield of machine learning, processes large amounts of data with complex patterns through neural network architectures. Without getting into the weeds, that just means the program passes data through layers of interconnected nodes, with each layer learning progressively more abstract patterns, and adjusts its own parameters to self-correct as it trains. Deep learning is far more advanced than traditional machine learning.
Deep learning requires large amounts of data to return the right results. Translation services, for example, rely on this type of learning to quickly and accurately translate while ensuring proper grammar.
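To show what “learning from data and self-correcting” looks like in code, here is a toy neural network in Python that learns the XOR function. It is nowhere near a translation model, but it runs on the same principle: pass data through layers, measure the error against known answers and nudge the weights to shrink it. The layer sizes, learning rate and step count are arbitrary choices for this sketch.

```python
# A toy two-layer neural network learning XOR with plain numpy.
# Illustrative only: real deep learning models have millions of
# parameters and train on far larger datasets.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
y = np.array([[0], [1], [1], [0]])              # XOR targets

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)  # input -> hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)  # hidden -> output layer

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(20_000):
    # Forward pass: data flows through the layers.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: measure the error and "self-correct" the weights.
    err_out = (output - y) * output * (1 - output)
    err_hid = (err_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ err_out
    b2 -= 0.5 * err_out.sum(axis=0)
    W1 -= 0.5 * X.T @ err_hid
    b1 -= 0.5 * err_hid.sum(axis=0)

print(output.round(3))  # should approach [0, 1, 1, 0] as training progresses
```

After enough passes over the data, the network’s outputs settle near the correct answers: the same feedback loop that, at vastly larger scale, lets a translation model refine its grammar.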
Today, all of us are walking around with not-exactly-humanoid smart devices in our hands. Our houses have smart thermostats and doorbells, our cars can predict where we are driving and our computers can recognize our faces to log in. No matter the technology, AI has slowly crept into our everyday lives without us even realizing it.
Supply chains, factories, connected cars and smart homes all deploy millions of sensors that require actions based on data. Many of these devices capture information and transmit it to the cloud, where instructions are executed and a decision is returned to the device. Because of the number of parallel operations required for most machine learning models, they typically live in data centers that have the processing capability required to learn from and reflect device behavior.
Now, many applications need to gather, process and analyze data close to where it is generated, reducing the time between data ingestion and response. Enter decision-making at the edge.
Edge data centers can enable AI decision-making at the edge, thanks to their proximity to the application. For example, healthcare IoT systems can apply machine learning to analyze data, identify issues and alert caregivers. To be effective, the latency needs to be in the sub-10 millisecond range.
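A simplified sketch of that placement decision might look like the following Python, which routes each request to the nearest site that can answer within its latency budget. The site names, latency figures and budgets are hypothetical assumptions, not measurements from a real deployment.

```python
# Hypothetical sketch: route an inference request to the fastest site
# that fits its latency budget. All numbers are illustrative, not measured.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Site:
    name: str
    round_trip_ms: float  # network round trip from the device
    compute_ms: float     # time to run the model at this site

SITES = [
    Site("edge-dc",     round_trip_ms=2.0,  compute_ms=4.0),
    Site("regional-dc", round_trip_ms=15.0, compute_ms=2.0),
    Site("cloud",       round_trip_ms=40.0, compute_ms=1.0),
]

def pick_site(budget_ms: float) -> Optional[Site]:
    """Return the fastest site whose total latency fits the budget."""
    viable = [s for s in SITES if s.round_trip_ms + s.compute_ms <= budget_ms]
    return min(viable, key=lambda s: s.round_trip_ms + s.compute_ms, default=None)

# A patient-monitoring alert with a sub-10 ms requirement can only be
# served at the edge; tighten the budget further and nothing qualifies.
print(pick_site(10.0))  # edge-dc (2 + 4 = 6 ms total)
print(pick_site(5.0))   # None: no site can respond in time
```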
AI-assisted cybersecurity is another application. Part of that includes using AI to monitor web traffic behavior for bots, and then taking action to protect mobile apps, networks and cloud infrastructure. In this instance, the AI system can reside in an edge data center, while the traffic data is sent back to the cloud or a central data center to be aggregated for further analysis, continuous learning and even smarter decision-making.
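One way to picture the edge half of that loop is a lightweight anomaly detector flagging sessions whose behavior deviates from normal users, while the raw features stream back to a central model for retraining. The sketch below uses scikit-learn’s IsolationForest; the traffic features and values are invented for illustration.

```python
# Hypothetical sketch: flag likely bots at the edge from simple traffic
# features, using an IsolationForest trained on "normal" sessions.
# All numbers are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Features per session: [requests_per_minute, avg_seconds_between_clicks]
normal_sessions = np.array([
    [12, 4.0], [8, 6.5], [15, 3.2], [10, 5.1], [9, 7.0], [14, 3.8],
])
detector = IsolationForest(random_state=0).fit(normal_sessions)

incoming = np.array([
    [11, 4.5],    # looks like a person
    [480, 0.05],  # hundreds of requests with near-zero think time: likely a bot
])
for session, label in zip(incoming, detector.predict(incoming)):
    verdict = "bot?" if label == -1 else "ok"
    print(session, verdict)

# In a deployment like the one described above, flagged sessions would be
# blocked or challenged locally, while the features are shipped back to
# the cloud to retrain the detector on fresh traffic.
```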
This combination speeds up data processes, allows the most critical data to drive intelligent real-time decisions and delivers better real-world outcomes. It’s no wonder that businesses see edge computing as key to driving success. In 2018, only 10% of data was created and processed outside the data center, but this number is expected to rise to 75% by 2025.2
In this world of ever-changing technology, AI will continue to reshape businesses around the world. The rapid growth of knowledge and learning within machines is powering new digital services that push the technology toward its full potential. It’s not exactly science fiction anymore, and the possibilities are infinite.
When you are ready to discuss how colocation data centers can help you when adopting or optimizing AI in your business, get in touch. We would welcome the opportunity to learn about your goals and exchange thoughts on how we can help.