Building the Future: The Promise of Artificial Intelligence
Dura Digital · October 9, 2022

AI & Machine Learning · Data & AI · Copilot

Any discussion of Artificial Intelligence (AI) should start by mentioning one of the fathers of modern computer science: Alan Turing. Turing is popularly known for his contributions to breaking the German Enigma and “Tunny” codes during World War II. Along the way to deciphering “Tunny”, Turing’s colleagues at Bletchley Park, England, built the world’s first large-scale programmable electronic computer, Colossus.

Messages sent via “Tunny” were long binary strands of ones and zeros. Each letter of the alphabet had an associated five-digit code, and when a message was spliced and blended with the codes of a stream of key letters, the result was a seemingly meaningless message that could only be decoded with that key.

Turing found a way to deduce keys by hand, but given the sheer volume of messages being sent by the Germans, a computer was needed to speed up decryption. Colossus was the result and the first of its kind, capable of analyzing up to 5,000 characters per second and decrypting around 100 messages per week.

Turing continued his work with the first computers between the end of the war and his death in 1954. In 1950, he published his paper “Computing Machinery and Intelligence”, in which he outlined what has come to be known as the “Turing Test” for artificial intelligence: a computer and a human at a terminal are hidden from a human “interrogator”, who questions both in an attempt to find out which is the human and which is the computer. Turing believed that if a computer could successfully convince the interrogator that it was human, then it could be deemed artificially intelligent.

Fast forward 70 years, and artificial intelligence is transforming business, culture, and society as a whole. This blog post aims to provide a basic understanding of a few important AI technologies, some of their groundbreaking applications, and concerns about the future of AI.

AI Overview

In simple terms, artificial intelligence is the replication of human intelligence by computers. AI is most useful for completing difficult tasks that require computing power far beyond the reach of human beings.

In order to complete a task, an AI generates a mathematical model using training data. Training data is essentially a large collection of information that developers would like an AI to be able to classify, analyze, replicate, etc.

Consider a training set of a few hundred real-world images, some of cats and some of dogs. Each image in the training set is labeled “cat” or “dog” based on the animal the photo contains. Every pixel of every image in the training set can be described as a data point; if the images are in color, each pixel has associated values for red, green, and blue intensity.

An AI model can arrange these pixels in 3D space based on their associated color values, with one axis describing red intensity, another green intensity, and another blue intensity. However, AI is not confined to organizing data in only three dimensions.

The color data from images unlocks all sorts of possibilities for an AI model. Different intensities of color can correspond to all kinds of characteristics in the training images: some combinations of pixels may be associated with textures, shapes, and other qualities, each of which can be represented along its own new axis. A basic machine learning model has no real human concept of textures, shapes, or colors. However, it can use mathematical algorithms to identify these qualities as trends among data points in multi-dimensional space.

Proximity between data points, linear relationships among them, and other patterns generally indicate shared characteristics. With enough labeled training data, a model can begin to distinguish certain collections of pixels and, for our example, identify them as either cats or dogs.
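
To make the idea concrete, here is a minimal sketch in Python of classifying an image by its neighbors in this pixel space. The nearest-neighbor approach, the helper names, and the image handling are illustrative assumptions, not the method any particular model actually uses.

```python
# A minimal nearest-neighbor sketch of the cat/dog example (illustrative only).
# Assumes the images have already been resized to a common shape and loaded as
# NumPy arrays; the helper names and usage below are hypothetical.
import numpy as np

def to_feature_vector(image: np.ndarray) -> np.ndarray:
    """Flatten an (H, W, 3) RGB image into a single point in pixel space."""
    return image.astype(np.float32).reshape(-1) / 255.0

def predict(train_vectors: np.ndarray, train_labels: list, query: np.ndarray, k: int = 5) -> str:
    """Label a new image by majority vote among its k nearest training points."""
    distances = np.linalg.norm(train_vectors - query, axis=1)  # distance to every training image
    nearest = np.argsort(distances)[:k]
    votes = [train_labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Usage, assuming pre-loaded training images and labels ("cat" / "dog"):
# train_vectors = np.stack([to_feature_vector(img) for img in training_images])
# print(predict(train_vectors, training_labels, to_feature_vector(new_image)))
```

The only point of the sketch is that closeness in this flattened pixel space stands in for shared visual characteristics.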

In general, the more training data a machine learning model is fed, the more capable it becomes. Machine learning allows the mathematical functions inside these models to be continually refined based on how accurately they identify the training data.
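
As a sketch of that refinement loop, here is a perceptron-style update rule on synthetic stand-in data: after each pass over the training set, the weights are nudged wherever the current prediction is wrong, and training accuracy is reported. This is one simple approach among many, not how any particular production model is trained.

```python
# A toy refinement loop: the weights of a simple linear classifier are adjusted
# whenever they mislabel a training example, and training accuracy is checked
# after each pass. Data and labels are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(+1.0, 0.5, (50, 3)), rng.normal(-1.0, 0.5, (50, 3))])
y = np.array([1] * 50 + [0] * 50)  # 1 = "cat", 0 = "dog" (stand-in labels)

w = np.zeros(3)
learning_rate = 0.1

for epoch in range(5):
    for xi, yi in zip(X, y):
        prediction = 1 if xi @ w > 0 else 0
        w += learning_rate * (yi - prediction) * xi  # adjust only when wrong
    accuracy = np.mean([(1 if xi @ w > 0 else 0) == yi for xi, yi in zip(X, y)])
    print(f"pass {epoch + 1}: training accuracy = {accuracy:.2f}")
```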

Many models today have access to enough training data and computing power to consider data points across hundreds of dimensions. Photo-generative models like OpenAI’s DALL-E 2 can consider many thousands of data points in over 500 dimensions at once.

AI Neural Networks

A neural network is another AI architecture, one that takes the biology of the human brain as its inspiration. Neural networks are most traditionally used as classification models.

The brain is composed of billions of neurons: cells that can receive and send information as electrical signals to other parts of the body. Neurons exist in two states, either firing (sending and receiving) or inactive.

A neural network substitutes “perceptrons” for neurons; a perceptron is effectively a computer’s equivalent of a neuron. Each individual perceptron uses a set of vector weights and an activation function to identify a positive or negative instance of something, typically within images.

A basic neural network might use the ordinary least-squares solution of an overdetermined system to find a weight for every pixel in a large collection of images. There are plenty of other methods for finding these weights, but regardless of which is used, the weights effectively emphasize certain pixels or features in a given image, generally those with the most distinct variation between images. In this way, a neural network can identify pixels corresponding to unique characteristics in a collection of images.

If the weighted sum of a perceptron’s inputs (the dot product of the input vector with the weight vector) surpasses a set threshold, the model is generally assumed to have identified enough distinctive pixels or features to make a classification about the data it has been presented with. In this case, the node’s activation function “fires”, moving from inactive to active.

A classic activation function is the unit step function, which jumps from an inactive value (0) to an active value (1) once its input crosses a threshold. Here, that input is the weighted sum described above: the perceptron fires when the weighted sum of its inputs surpasses a certain value.
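
Putting those pieces together, here is a small sketch of a single perceptron-style classifier along the lines described above: the weights come from an ordinary least-squares fit to labeled examples, and a unit step activation turns the weighted sum of the inputs into a fire-or-not decision. The synthetic data and the threshold of zero are assumptions for illustration.

```python
# A single perceptron-style classifier: least-squares weights + unit step activation.
# X holds flattened feature vectors (synthetic here); y holds labels of +1 or -1.
import numpy as np

def fit_weights(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Solve the overdetermined system X @ w ≈ y in the least-squares sense."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def activate(weighted_sum: float, threshold: float = 0.0) -> int:
    """Unit step activation: 'fire' (1) once the weighted sum passes the threshold."""
    return 1 if weighted_sum > threshold else 0

def classify(x: np.ndarray, w: np.ndarray) -> int:
    """Weighted sum of the inputs (dot product with the weights), then activation."""
    return activate(float(x @ w))

# Two well-separated synthetic clusters stand in for real image vectors.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+1.0, 0.3, (20, 4)), rng.normal(-1.0, 0.3, (20, 4))])
y = np.array([+1.0] * 20 + [-1.0] * 20)
w = fit_weights(X, y)
print(classify(np.array([0.9, 1.1, 1.0, 0.8]), w))  # expected to fire: prints 1
```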

Modern neural networks use layers upon layers of perceptrons to classify incredibly wide ranges of data or images. These are called Multi-Layer Perceptrons (MLPs). Most MLPs have three distinct kinds of layers: input, hidden, and output.

In this way, layer after layer of perceptrons can be connected, and when enough are assembled together, the resulting network can be said to primitively resemble the biology of the human brain.
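
As one possible illustration, the sketch below builds a small MLP with scikit-learn’s MLPClassifier: an input layer sized to the feature vectors, a single hidden layer of perceptron-like units, and an output layer producing the class. The synthetic data stands in for flattened image vectors.

```python
# A minimal multi-layer perceptron sketch using scikit-learn's MLPClassifier.
# The data is a synthetic stand-in for flattened image vectors.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 48))                # 200 "images", 48 features each
y = (X[:, :24].sum(axis=1) > 0).astype(int)   # a simple, learnable stand-in rule

# Input layer -> one hidden layer of 32 perceptron-like units -> output layer.
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```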

Concerns

It seems as though the majority of public dialogue surrounding artificial intelligence focuses on whether such a system is capable of sentient thought. This debate quickly becomes tangled in questions about the true meaning of sentience, how it could be identified, and whether any AI is capable of becoming sentient, and it is ultimately unproductive.

The example of Google’s LaMDA neural network language model has demonstrated the real-world complexities of these issues, and just how little the general public understands artificial intelligence. 

Google recently placed an AI bias engineer, Blake Lemoine, on administrative leave after he publicly announced that he believes LaMDA to be sentient. LaMDA is an incredibly powerful AI designed to converse with users over the internet via text. The model has access to Google’s best computing power and a wealth of information gleaned from every corner of the internet by the company’s search engine.

LaMDA was trained on millions of examples of conversational text found all over the internet in order to replicate human dialogue and conversation, so in some sense, LaMDA is designed with the objective of appearing sentient. However, that does not mean the model itself is capable of acting sentiently.

Perhaps more importantly, the discussion of AI sentience detracts from more pressing concerns regarding the development of ethical artificial intelligence. Large companies like Google, Amazon, OpenAI, Microsoft, and others have access to incredible amounts of data which they use to train their AI models.

These models are only as effective as the data they are trained on, a fact that has produced troubling results with photo-generative AI in recent years. While such models are incredibly effective, some outputs reveal unpleasant biases in the training data, which developers continue to try to address.

The majority of training data for many of these models originates in Western cultures. As a result, outputs can be heavily biased toward Western appearances, characteristics, and societal trends. Asking one of these models to generate an image of a person in a specific occupation or situation can have concerning results: CEOs appear as older white men, nurses appear as young women, and minorities may be largely excluded from representation.

The “black box” nature of most AI systems poses additional challenges. “Black box” refers to AI models whose internal code and algorithms are either inaccessible or incredibly difficult for human beings to interpret.

Without the ability to work out why a certain AI model generates biased or inappropriate outputs, it is easy to see why such technology, especially when coupled with enormous computing power, raises serious concerns.

It all raises the question: who gets to decide what training data is used for AI models, and for what purpose? Without some kind of oversight or transparency, it can be difficult to hold companies to ethical standards in developing artificial intelligence.

The Dura Digital Takeaway

Artificial intelligence is ultimately an incredibly powerful technology, with capabilities that rival or even threaten to exceed the creativity, intelligence, and flexibility of human beings. While there are plenty of valid concerns to be had with the integration of this technology into society, a thorough understanding of how AI works and how it should be ethically implemented is becoming increasingly valuable for companies and consumers alike.

At Dura Digital we continually invest in learning new technologies so that we can provide you, our customers, with broad-scale insights and awareness that help you transform your business. Contact us for more details on how we can help you advance your business by leveraging the power of AI.
