Part 1: Data & AI Ethics, Machine Learning, and $100,000+ Expert Insights with Reid Blackman
Understanding Artificial Ethics, Machine Learning, And AI
Terms like “Artificial Ethics”, “Data Ethics”, and “Machine Learning” were discussed a lot in this episode, and before we go any further, it’s important to know what they actually mean.
You may have a rudimentary idea of what these things mean, but to truly understand these ideas and how they impact the modern world, we need to take a deeper dive, and Reid was more than happy to do that.
Narrow AI Vs General AI
Artificial Intelligence (AI) is a subject we’re all somewhat familiar with. We’ve all seen films like The Terminator, Blade Runner, Ex Machina, and Her. We know that AI, in general, refers to a form of manmade intellect that has similar capabilities to the human brain and can be used to assist, improve, and—in the case of The Terminator—destroy the human race.
But none of that applies to business. Arnie isn’t going to break down the door of your HQ and start advising you on a growth hacking strategy.
That brings us to the distinction between general AI and “narrow AI”. In simple terms, general AI refers to artificial intelligence in the broad sense: synthetic “brains” that can be applied in many different contexts. Narrow AI, on the other hand, focuses on specific tasks and purposes.
What Is Machine Learning?
Machine learning is a form of narrow AI. Reid succinctly described machine learning as AI that “learns by example”, which is an excellent way of explaining it.
Reid used pictures of a dog as an example. Let’s suppose that you create a software program to recognize pictures of your dog so they can be properly organized and stored. Every time you upload an image, the software detects whether it’s a picture of your dog and processes it accordingly.
To reach that level of understanding, the machine first needs to learn what your dog looks like and pick up on the patterns, so you upload thousands of labeled images in advance. From those examples, the software learns what to look for.
AI has yet to reach the point where it can process and understand in the same way as the human brain. In this context, it looks for visual cues and patterns to determine how the image needs to be filed.
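To make that concrete, here’s a minimal sketch of “learning by example” in Python. The folder names, image size, and the choice of a simple logistic regression model are my own illustrative assumptions, not a production-grade recognizer.

```python
# Minimal "learning by example" sketch: teach a model what your dog looks like
# from labeled photos, then ask it about a new upload. Paths and parameters
# are hypothetical.
from pathlib import Path

import numpy as np
from PIL import Image
from sklearn.linear_model import LogisticRegression

def load_folder(folder, label, size=(64, 64)):
    """Load every JPEG in a folder, resize it, and flatten it to a pixel vector."""
    X, y = [], []
    for path in Path(folder).glob("*.jpg"):
        pixels = np.asarray(Image.open(path).convert("RGB").resize(size))
        X.append(pixels.flatten() / 255.0)  # normalize pixel values to 0..1
        y.append(label)
    return X, y

# Thousands of labeled examples uploaded in advance: your dog vs. everything else.
dog_X, dog_y = load_folder("photos/my_dog", label=1)
other_X, other_y = load_folder("photos/not_my_dog", label=0)

model = LogisticRegression(max_iter=1000)
model.fit(np.array(dog_X + other_X), dog_y + other_y)

# A new upload: the model looks for the pixel patterns it learned during training.
new_image = np.asarray(Image.open("photos/upload.jpg").convert("RGB").resize((64, 64)))
print(model.predict([new_image.flatten() / 255.0]))  # 1 = your dog, 0 = not your dog
```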
However, it’s getting better and its potential is increasing.
I recently saw a brilliant and bizarre example of just how powerful machine learning can be and how much promise it holds for the future. It’s called MeowTalk, and it’s designed to help you understand your cat’s meow.
It sounds like something right out of a Sci-Fi film (or The Onion) but it’s actually quite ingenious how it works.
The idea is that every app user records their cat’s meow and leaves a comment to establish context. If it’s early in the morning and you’re fast asleep, only to be awoken by a persistent cat crawling over your bed and pawing your face, it’s fair to say that it’s hungry.
As you clamber out of bed and wander downstairs, little Felix begins to meow, desperate for food. At this point, you record the meow and label it as “I’m Hungry”.
In a year or two, the app will have millions of recordings from thousands of cat owners, and these recordings will cover every possible pitch and tone, as well as relevant labels.
At this point, the software can look for patterns in the audio. Using the same software employed by smart voice devices like Amazon Alexa (unsurprisingly, the creator of the app is a former Amazon Alexa engineer), it can listen to new sounds, detect those familiar pitches and tones, and then decipher what the cat is saying.
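Under the hood, that is a supervised audio-classification problem. Here’s a rough sketch of the idea; the file names, labels, and the MFCC-plus-random-forest pipeline are assumptions for illustration, not MeowTalk’s actual implementation.

```python
# Crowdsourced, labeled meow recordings become training examples for an audio
# classifier. Everything here (paths, labels, model choice) is hypothetical.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier

def meow_features(path):
    """Summarize a recording's pitch and tone patterns as a fixed-length MFCC vector."""
    signal, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)  # average over time

# Each owner-supplied label records the context of the meow.
training = [
    ("meows/morning_pawing.wav", "I'm hungry"),
    ("meows/door_scratching.wav", "Let me out"),
    ("meows/vet_carrier.wav", "I'm stressed"),
    # ...millions more recordings from thousands of owners...
]

X = np.array([meow_features(path) for path, _ in training])
y = [label for _, label in training]

model = RandomForestClassifier(n_estimators=200).fit(X, y)

# A brand-new meow: the model matches its pitch and tone against learned patterns.
print(model.predict([meow_features("meows/unknown.wav")]))
```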
If we extrapolate a decade or two into the future, we’ve essentially just taken huge leaps toward understanding what our feline friends are saying, and there’s no Doctor Dolittle in sight!
Machine learning is also helping to find new drugs and could be instrumental in the search for a cancer cure. It’s not just something that can help you with simple apps and programs—its potential is limitless.
What Are Artificial Ethics?
We know what ethics are. We talk about them on a daily basis. They concern the things that we do and the ways in which we act—the rights and the wrongs.
In essence, ethics are the principles that govern your behavior, and that’s true whether you’re a medical professional considering the best outcome for your patients, or you’re trying to live a morally just life.
Artificial ethics, contrary to what you might think, isn’t simply about whether it is ethical to create and then destroy AI. We’re still a long way from AI that can replicate the human mind closely enough to consciously think and feel.
Instead, artificial ethics concerns the way that AI and data interact with our environment.
A great example is the data gathered by companies like Google and Facebook. They know everything about us, from our names and addresses to our likes and dislikes. By using machine learning, it’s easy to see how companies like Google can make predictions about your life and learn things that even your closest friends don’t know.
After all, Google has access to your browsing habits and your search queries. Have you suddenly grown curious about decomposition while simultaneously shopping for shovels? There’s a good chance you’re up to no good, and as Google can even track your location, there’s an equally good chance it knows everything that you just did.
But in processing and exposing all of this data, it’s treading some morally ambiguous ground.
The Biggest Issues With AI
As with any revolutionary idea, AI is facing some serious issues. These are the things that Reid deals with on a daily basis, the dilemmas that keep his clients awake at night.
A little later, we’ll look at how the evolution of AI will impact employment, which is a concern that many consumers have. But we’re not quite there yet, and it will be a while before millions of people start losing their jobs to machines.
For now, let’s address some of the AI ethics issues occurring right now.
1. Companies Need Lots Of Data
For machine learning to work, companies need to gather lots of data, and that often raises a series of challenges.
Let’s return to the Google example cited above. If Google wanted to test whether it was possible to predict crimes and detect criminals, it would certainly have the means to do so.
It could track where you go, which locations you search for, what you buy, and what queries you enter into the Google search engine.
Using this data, it can paint a pretty accurate picture concerning your habits and it can use this to predict the likelihood of you committing certain acts.
It seems preposterous and like something plucked out of a Sci-Fi novel, but it’s not that far removed from where we are right now.
If you have ever used Google Ads, you will know just how powerful Google’s algorithms are. Google can create visual and text ads based purely on your page and product content, and it can serve them to people in specific locations and demographics.
Furthermore, it knows which of its users are more likely to make a purchase, and if sales are your only goal, it will focus on those users.
The problem is, to gather all of that data, Google would need to create a detailed database of every single user of its services. It would break many ethical codes and even a few privacy laws, and if any of that data were to fall into the wrong hands, it would give an immense amount of power to hackers, scammers, and spammers.
It’s a veritable nuclear bomb of information—whether it’s in the wrong hands or the right hands, it’s still pretty deadly.
These concerns are not just limited to potential Minority Report scenarios, either.
Let’s focus on the problems that you might face as a small business.
Imagine that you want to know what kind of income bracket your customers fall into, whether they’re homeowners, and how many children they have. Are you dealing with students struggling to make ends meet, or are you selling to 1-percenters with several kids and properties?
You don’t want to ask them directly or rely on social media analytics, so you take the names and addresses from your order data and compare them against national census data.
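To see how little effort this takes, here is a hypothetical sketch using pandas. The column names and the census file are assumptions; the point is how easily ordinary order data becomes a demographic profile.

```python
# Hypothetical sketch: enrich order data with public census data by ZIP code.
import pandas as pd

orders = pd.read_csv("orders.csv")         # name, street, zip_code, order_total
census = pd.read_csv("census_by_zip.csv")  # zip_code, median_income, pct_homeowners, avg_children

# One merge turns a shipping list into a neighborhood-level demographic profile.
profiles = orders.merge(census, on="zip_code", how="left")

# Segment customers into rough income brackets.
profiles["income_bracket"] = pd.cut(
    profiles["median_income"],
    bins=[0, 40_000, 100_000, float("inf")],
    labels=["lower", "middle", "upper"],
)
print(profiles[["name", "income_bracket", "pct_homeowners", "avg_children"]].head())
```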
You’re using information that was willingly supplied to you, along with publicly available data, but in doing so, you’re creating a list that every customer (and probably most regulators) would be uncomfortable with.
You could argue that the data was given to you with consent, and that’s true, but a customer gives you their address because they expect you to send them a product, not because they want to be added to a database and exploited for profit.
The more data a company needs, the more problematic the situation becomes. It has to clear multiple legal obstacles and make sure that what it is doing is not only legal, but ethical.
The problem, as noted already, is that this is a new industry and the rules are not yet clearly defined, but that’s where AI ethics experts like Reid Blackman come in.
2. Explainability
Let’s return to the dog-recognizing software mentioned earlier and suppose that you feed thousands of images into the software until it becomes adept at recognizing your dog.
You’re so happy with the results that you develop the software further and teach it to recognize different breeds of dog. You then sell the software commercially, positioning it as an easy way to catalog and organize dog pictures.
After a few days, you’re shocked to discover that some cats are being recognized as Pomeranians, it thinks foxes are Chihuahuas, and for some reason, it’s recognizing newborn babies as Pugs.
Your users are angry. Not only is your app seemingly incapable of telling a cat from a dog, but by labeling their beautiful little babies as Pugs, it’s offensive. Surely this is some massive troll program designed to mock new mothers—how could you be so heartless?
To remedy the issue, you try to get into the AI’s head, so to speak. You try to understand why it’s making those mistakes, but you can’t, and while you’re working on that problem, new ones keep appearing.
Your app has gone from slightly offensive to incredibly racist, and now you have a trainwreck on your hands.
In the space of just a few days, you’ve changed from an innovative software developer into a useless creator and a borderline racist.
The unfortunate truth is that we don’t always know the hows or the whys behind AI.
When you fed thousands of images into the program, it became adept at recognizing your dog, but not in the way a stranger would. A stranger looks for distinctive marks, colors, and expressions; the AI works at the pixel level and picks up on things that you can’t see.
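One common way teams try to get into the AI’s head is occlusion testing: hide one patch of the image at a time and watch how the model’s confidence changes. This is a minimal, model-agnostic sketch; the model is assumed to be any classifier with a predict_proba method, like the pixel-level recognizer sketched earlier.

```python
# Occlusion testing sketch: grey out one patch at a time and measure how much
# the model's confidence drops. Large drops mark the pixels the model relies on.
# Assumes `model` exposes predict_proba and `image` is an RGB array (H, W, 3).
import numpy as np

def occlusion_map(model, image, patch=8):
    """Return a grid of confidence drops, one cell per occluded patch."""
    h, w, _ = image.shape
    base = model.predict_proba([image.flatten() / 255.0])[0][1]  # confidence it's "your dog"
    heat = np.zeros(((h + patch - 1) // patch, (w + patch - 1) // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 128  # grey out one square
            prob = model.predict_proba([occluded.flatten() / 255.0])[0][1]
            heat[i // patch, j // patch] = base - prob  # big drop = important region
    return heat
```

Plotting that grid as a heatmap often reveals that the model is latching onto the background, the lighting, or the photo’s border rather than the dog itself, which is exactly the kind of surprise that makes explainability so difficult.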
This is not a huge issue for an app that catalogs images. I’m sure a few people would be irate if their babies were mistaken for Pugs, but most of what I said above was tongue-in-cheek, and I think the majority of users would simply use those mistakes as justification for a bad review and maybe an amusing Twitter post.
It becomes a bigger problem, however, if you’re mislabeling people or using machine learning to make important decisions.
For instance, machine learning is becoming increasingly common as an employment tool. By processing thousands of CVs and other datasets, it can determine which applicants are best suited for any particular role.
Similar methods have been used to assign priorities in the healthcare and law enforcement sectors.
If those programs start discriminating against certain groups, there will be uproar, and rightly so. But it’s not always easy to understand why those issues exist, which means it isn’t easy to fix them.
This is something that programmers and AI creators deal with on a regular basis. They know they don’t have 100% control over how the AI operates, but they also know that their necks will be on the line if anything goes wrong.
3. Dealing With Bias
Humans are biased; there’s no getting away from that fact. You might consider yourself to be a fairly tolerant person, and that could be true, but does that tolerance extend to every group and every situation? Does it come naturally?
We all suffer from something known as unconscious bias or implicit bias, which is to say that we’re biased without being aware of it.
I can use an old riddle as an example:
A father and son are involved in a terrible traffic accident that kills the father. The son, heavily wounded, is rushed to the hospital and quickly prepped for surgery. But just as he’s about to go under the knife, the surgeon states, “I can’t operate on this boy. He is my son!”
There is a good chance you’ve heard it already, in which case you’ll have to think back to your reaction when it was first told to you. If not, and if you genuinely don’t have any bias, subconscious or not, the answer should have been obvious immediately (and not just after the context provided by this paragraph!).
The answer, of course, is that the surgeon is a woman.
Countless female doctors have been mistaken for nurses, and countless female pilots have been mistaken for flight attendants.
And even if you’ve never had those specific preconceptions, there’s a good chance you’ve assumed that a group of youths are up to no good, just because they’re in a group, or have been surprised to see someone with a strong accent, tattoos, or piercings in a position of authority.
Some of the biggest biases can be seen during the job application process.
Female applicants have a lower chance of being accepted for roles perceived to involve physical activity or assertiveness. Many black and Latinx applicants also find it more difficult to get jobs, even when they have the necessary qualifications and experience.
This is key, and it causes some of the issues that we see with AI.
Imagine, for instance, that Company A is headed by a racist and Company B is managed by a sexist. They both process thousands of CVs every year, and they both produce employee evaluation reports.
Your goal is to create a system that can effectively predict which applicants are best suited for a role, and so you feed CVs from Company A and Company B into an algorithm and combine these with evaluation reports.
On the surface, it seems like a pretty airtight way of finding the best employees and automating the entire process. But because those companies are less likely to hire non-white and female employees, and less likely to evaluate them on merit, your software will learn to give priority to white men.
Despite having the best of intentions, you’ve just created an incredibly racist piece of software.
This is not just a theory, either. It’s something we have seen multiple times already and something that continues to create problems for programmers.
Simply put, if the data is biased then the AI will be biased as well.
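Here’s a toy, fully synthetic demonstration of that point: train a screening model on hiring decisions made by a biased manager, and the model reproduces the bias even though gender is never included as a feature. Everything below is made up purely for illustration.

```python
# Synthetic demo: label bias leaks into the model through a proxy feature,
# even when the sensitive attribute itself is excluded from training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)   # 0 = man, 1 = woman (synthetic)
skill = rng.normal(0, 1, n)      # true ability, identical across groups

# Proxy feature: in this toy world, women got fewer career opportunities,
# so "years of relevant experience" quietly encodes gender.
experience = skill + rng.normal(0, 0.5, n) - 0.8 * gender

# Historical decisions from a biased manager: skill matters, but women are penalized.
hired = (skill - 1.0 * gender + rng.normal(0, 0.3, n)) > 0

# Train on the "neutral" proxy only; gender itself never enters the model.
model = LogisticRegression().fit(experience.reshape(-1, 1), hired)
preds = model.predict(experience.reshape(-1, 1))

print("Recommended hire rate (men):  ", preds[gender == 0].mean())
print("Recommended hire rate (women):", preds[gender == 1].mean())
# The model recommends far fewer women, despite never seeing the gender column.
```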
When this happens, people become outraged, the regulators get involved, and companies find themselves in serious hot water.
4. Bad Intentions
Machine learning, like all technologies, has a dark side.
It can be used by criminal organizations and even governments to track, monitor, and manipulate people.
In the past, facial recognition apps have been used to identify government protestors in countries like Russia, and we’ve also seen the rise of deepfakes, which use existing images to edit a video and make it look like someone is doing or saying something they never did or said.
To date, deepfakes have mainly targeted celebrities and world leaders, but if you are active on social media, there’s no reason why they can’t be used against you as well.
Imagine a world where someone can superimpose images of you and a friend onto an existing video to make it look like you’re having an affair. It could be posted to defame you or it could be used to blackmail you.
Spammers can also use AI and machine learning to fine-tune their phishing attempts and gather user data. It’s something we’re already seeing and you may have already been a victim, but it will get worse.
Machine learning could even replicate your handwriting, which means it could write and sign letters and checks with such accuracy that even you would struggle to tell the difference.
If it has enough data from your emails and messages, these could be simulated as well, with the AI capturing everything from your punctuation and grammar quirks to the length and style of your messages.
In other words, nothing is safe. Everything that you do online, and even some of the things that you do offline, can be replicated using machine learning.
Some of the responsibility falls on social media networks and on companies like Google, as they need to protect their users as best they can. But they can only do so much. If you decide to share images of yourself to the world, those images can be used against you. If you create a blog detailing your habits, likes, and dislikes, Google will index it for everyone to see.
Experts are increasingly stressing the importance of monitoring your online activity because scammers are finding more and more ways to use it against you.
How Will Machine Learning Impact Business?
Machine learning has been hailed as the next major evolution in business. We’re on the precipice of a major change, one that will have significant implications for the healthcare, employment, and education sectors while also impacting your role as a business owner or entrepreneur.
I discussed many of these changes with Reid and what follows is a list of the most important changes.
Machine Learning’s Impact On Employment
In the late 18th century, we saw the emergence of the industrial revolution. It began on farms and worked its way into factories, with machines assuming roles previously occupied by the labor force. After another century or so, all major factories operated primarily on machines, and we gradually moved from an industry run by humans to one dominated by machines.
The fear that machines will take over and decimate the workforce has persisted ever since. It’s often referred to as “technological unemployment”.
It’s a genuine concern, but the future isn’t as bleak as you might expect.
Take the digital revolution as an example.
On the one hand, computers and the internet have led to the loss of millions of jobs. Brick-and-mortar stores moved online, swapping teams of shelf-stackers and shop assistants for a few remote customer service reps.
Bookstores, libraries, video stores, music stores, travel agents, newspapers, and flea markets all saw a massive decline in activity or were rendered completely obsolete.
At the same time, however, the internet gave rise to new industries and new possibilities. Without the web, there would be no online freelancing or social media. App companies, mobile gaming companies, social media networks, and digital media brands all supply millions of jobs to the global workforce and none of those would exist.
Many have argued that the same thing will happen with the AI revolution. They often cite the industrial revolution as their main example and note, as I have done, that it created more jobs than it lost.
But as Reid mentioned during our interview, the AI revolution could be a little more problematic.
There are a couple of issues here.
Firstly, let’s assume that AI does create more jobs than it destroys. Maybe we will see a rise in the robotics industry, for instance.
After all, if we suddenly start producing machines to help with housework, the creation of those machines requires quality control and supervision. People need to apply finishing touches and deal with returns, sales, and customer support.
The problem is that this process may take a generation or more, and in that time, the job market will suffer.
The carpenters, electricians, welders, and engineers who have trained for a lifetime to attain a certain skill set now have to give it up and retrain in something else. They’re the veritable fish out of water, and they won’t provide the same value as a child or grandchild who knows that new world intimately.
For comparison, imagine if your technophobic parents or grandparents suddenly lost their jobs and were forced to work as remote IT technicians. They panic every time they need to send an email and they hold a smartphone like it’s some kind of magical, fragile gift from the gods.
They will struggle to adapt and, as a result, they’ll be left behind—jobless, penniless, and adrift. Their kids will be okay, as that world is not so alien to them. Their grandkids will be even better off, as they were raised with smartphones in their hands, but that transition period will leave many people behind.
The second issue is a little more obvious and a lot more destructive: What happens if we just don’t need those employees anymore?
What happens if factory workers, builders, and technicians are no longer needed? Sure, someone needs to maintain whatever machines are taking over the workforce, but that’s a role for a small number of well-trained professionals, not the bulk of the labor force.
AI is even being used to create art. It’s writing novels, articles, and film scripts; it’s painting pictures and composing music. If even our creative workers aren’t safe, what hope does everyone else have?
This is another aspect of AI ethics that businesses need to consider. Their ultimate goal is to increase profits and reduce expenses, but at what cost?
If a business can use AI to automate 90% of their tasks, how many employees will lose their jobs? And if one company makes those moves, how long will it be before their competitors follow suit?
Imagine that you use machine learning to automate the Live Chat process. It’s something that many companies are already doing, but if you’ve ever been forced to chat to a customer service bot, you’ll know it’s not very effective.
If you can find a solution, you could reduce your team of 20 staff to just 1 or 2 people, whose job it is to deal with the requests that the AI can’t solve. That program can then be sold to other companies.
The only “employees” involved in that process are the writers creating the preset messages and the engineers developing/maintaining the software. That’s a handful of people who could potentially replace most of the 3.1 million customer service reps in the US.
Who Is Responsible?
If AI breaches some of the aforementioned codes of ethics, who is responsible and why does it matter?
If a Facebook technician creates a piece of software that gathers user data and uses this to predict trends, only for that data to be leaked and user privacy to be exposed, is the technician at fault?
As far as Facebook is concerned, the technician might be. If the technician’s code was at fault, then they probably were. If there was a security flaw, the blame could also be placed on the shoulders of the development and implementation team, as well as anyone tasked with testing the software.
Maybe they’ll be briefed on what went wrong and warned against future issues. Maybe they will be fired. As far as the public is concerned, it doesn’t matter, because they blame Facebook as a whole, and not any of its employees.
In the eyes of the public, a company is always responsible for the actions of its employees.
If Coca-Cola tweets something offensive, gets a lot of hate and then announces that it was the result of a rogue customer service agent, does everyone stop hating on Coca-Cola and start directing their hate towards that employee? Of course not.
If you call your cable company with an issue and the customer support rep is rude to you, is your reaction to find that rep on Facebook and comment on their wall to tell them how offended you were, or do you go straight to the company page and leave a bad review?
Obviously, it’s the latter, because we expect companies to be in control of their employees and their systems and when they are not, we hold them responsible.
This is a problem when it comes to AI and data ethics.
Every time new advancements are made in this field, there is a level of uncertainty and risk. The company doesn’t know if that software will perform exactly as intended, it doesn’t know how regulators will react, and it has no way of knowing whether hackers and cybercriminals will seek to exploit it.
If any of those things happen and something goes wrong, they get blamed, their reputation suffers, and they may never recover.
What About Consumer Responsibility?
One of the potential issues that we addressed during our discussion was whether or not an individual is responsible if they are using a machine that kills someone.
It is a question that has been raised many times before and one that is always interesting to address, especially when you consider how soon it might be relevant.
Imagine, for instance, that you’re in a self-driving or assisted-driving car and it malfunctions. The car hits and kills a pedestrian—who is to blame?
Reid’s argument, which I agree with, is that the driver is only responsible if that car is not 100% self-driving and the driver is expected to intervene in the process, such as by keeping an eye on the road and applying the brakes.
It’s hard to pinpoint the company’s liability here, but it all comes down to whether or not they are deemed negligent. If the car is 100% self-driving and it fails because its sensors don’t detect the pedestrian or the brakes are not applied in time, it may be dismissed as an accident.
The family of that pedestrian can sue, and they will almost certainly have a strong case, but it’s unlikely that anyone will end up behind bars. However, the regulators may come down hard on the company, especially if they are deemed negligent, as might be the case if there was a lapse in quality control.
It might seem like a massively complicated and convoluted scenario, and one that the laws of the future will struggle to manage, but in a way, we do have precedent for this sort of thing.
As an example, if you’re going for a walk with your dog and it attacks and kills someone, you’re not necessarily to blame. Unless, that is, the dog is a banned breed and you were breaking the law by owning it, or it is proven that you provoked the dog.
The victim’s family can and probably will sue you, and there’s a good chance they will win, but if you show remorse and it’s clear this was just one of those rare things, it’s unlikely you’ll be charged.
Maybe it was a rescue dog suffering from PTSD, and something just snapped inside its mind. Maybe it deemed the person to be a threat, attacked as a warning, and opened a carotid artery.
Of course, there are circumstances that change the picture, though even these rarely result in serious charges. If, for example, it was proven that you mistreated the dog and that this resulted in it developing aggressive tendencies, it may give the court some pause for thought.
By the same token, if a tree falls down in your yard and kills someone, you won’t be charged, but if it fell because you didn’t maintain it properly and ignored warning signs, it’s a different story.
The same logic could apply to a car. It’s scary to think that we’re heading toward a future where delaying an oil change and running your vehicle into the ground could result in a death and a serious charge, but maybe future generations will be much more fastidious with their vehicular maintenance.
We can only hope!
Learn More About Data And Business
Big data plays a significant role in today’s high-tech world. It will redefine how we discover new medicines, develop new products, and provide better service. To a small business, it may seem a little irrelevant and out of reach, but if you use social media marketing, Google Ads, or even Google Analytics, you’re already taking advantage of this technology.
To learn more about the benefits of using data for your business, take a look at some previous This Week With Sabir episodes, including a guide to improving conversion rates with Gajan Retnasaba and tips on understanding analytics with Avinash Kaushik.
Reid was the 23rd guest on This Week With Sabir for 2020 and there’s still more to come, so if you haven’t already, make sure you subscribe to my YouTube channel and bookmark the Growth By Sabir website.
About Our Guest: Reid Blackman
Meet Reid Blackman, Ph.D. Reid is the Founder and CEO of Virtue. In that capacity, he works with senior leaders of organizations to integrate ethics and ethical risk mitigation into the company culture and the development, deployment, and procurement of digital products. He is also a Senior Advisor to Ernst & Young and sits on their Artificial Intelligence Advisory Board, and is a member of IEEE’s Ethically Aligned Design Initiative and the EU AI Alliance.
Reid’s work has been profiled in The Wall Street Journal and Dell Perspectives and he has contributed pieces to The Harvard Business Review, TechCrunch, VentureBeat, and Risk & Compliance Magazine. He has been quoted in numerous news articles, and he regularly speaks at various venues including The World Economic Forum, SAP, Cannes Lions, Forbes, NYU Stern School of Business, Columbia University, and AIG.
Prior to founding Virtue, Reid was a professor of philosophy at Colgate University and a Fellow at the Parr Center for Ethics at the University of North Carolina, Chapel Hill. His research appears in numerous prestigious professional journals, including the European Journal of Philosophy, The Canadian Journal of Philosophy, The British Journal for the History of Philosophy, and Erkenntnis. He also founded a fireworks wholesaling company and was even a flying trapeze instructor. He received his B.A. from Cornell University, his M.A. from Northwestern University, and his Ph.D. from the University of Texas at Austin.