Meet Your New Colleague: AI

Put the quirky, sexy or scary robots of sci-fi aside; in reality, communicating with nonhuman tools like artificial intelligence is more transactional than fantastical.

By Andie Burjek

Artificial intelligence is increasingly people’s interviewer, colleague and competition. As it burrows further into the workplace and into different job functions, it can take over certain tasks, learn over time and even hold conversations. Many of us may not be aware that who we’re talking to isn’t a “who” but a “what.”

In 2017, 61 percent of businesses said they had implemented AI, up from 38 percent in 2016, according to the “Outlook on Artificial Intelligence in the Enterprise 2018” report from Narrative Science, an artificial intelligence company, in collaboration with the National Business Research Institute. In the communication arena, 43 percent of these businesses said they send AI-powered communications to employees.

Defining ‘Artificial Intelligence’

For all the attention and conversation surrounding artificial intelligence, many people have a skewed idea of what it really means, according to Eric Shangle, director of people operations at San Francisco-based AI platform Figure Eight. “We need to have a broader conversation about what AI is in the context of work,” he said. The reality is far from the sci-fi version of AI: most chatbots, for example, are nothing more than automated tools that lack the intelligence or the capability to learn over time. Automation is not the same as AI.

Here’s what AI really is, along with definitions of key terms in the space:

Artificial Intelligence: “A branch of computer science in which computers are programmed to do things that normally require human intelligence. This includes learning, reasoning, problem-solving, understanding language and perceiving a situation or environment.” (Source: “2018 Tech Trends Report,” created by Amy Webb, founder of The Future Today Institute)

Chatbots: “A service, powered by rules and sometimes artificial intelligence, that you interact with via a chat interface.” (Source: Chatbots Magazine)

Machine Learning: “Describes a system that uses algorithms to analyze big data sets in order to perform a wide array of tasks better than we can. Over time, the system gets better at those tasks. … Within the field of AI, machine learning is useful because it can help computers to predict and make real-time decisions without human intervention.” (Source: “2018 Tech Trends Report”)

Predictive Analytics: “The branch of advanced analytics used to make predictions about unknown future events. It uses many techniques from data mining, statistics, modeling, machine learning and artificial intelligence to analyze current data to make predictions about the future.” (Source: Predictive Analytics Today)

Natural Language Processing: “A branch of AI that helps computers understand, interpret and manipulate human language.” (Source: SAS)

— Andie Burjek

Many candidates don’t even realize that they’re not speaking to a human, according to Sahil Sahni, co-founder of computer software company AllyO, which uses an AI-enabled chatbot to speak to candidates and answer questions in the recruiting process.

Based on data from AllyO’s applicants, he found that fewer than 30 percent of candidates think they’re speaking to something not human. The other 70 percent either did not disclose what they thought or believed there was a person behind the chatbot.

AllyO does not disclose up front that the candidate is not speaking to a human. If a candidate asks outright whether they are speaking to a person or an AI-enabled chatbot, however, the system discloses that information. “The goal is not to goof anyone here. The goal is to have the best candidate experience. Lying about it is not the best candidate experience,” Sahni said.

Candidates don’t behave differently when speaking to an AI as opposed to a human, Sahni added.

“When you’re a job seeker, it’s not like you’re calling customer service to complain about something. You’re at your best behavior,” he said. “You tend to be a lot more tolerant, you tend to be a lot more respectful, no matter what the process might be.”

Dennis R. Mortensen, CEO and founder of New York-based technology company X.ai, also has access to conversations between people and machine agents; his team spent the past four years assembling a data set of more than 10 million emails capturing these dialogues. They similarly found that people don’t communicate differently just because they’re speaking to a robot.

Citing X.ai’s own AI personal assistants, Amy and Andrew, as an example, he said, “It would be very easy to imagine that I will treat them like machines and remove any level of emotion otherwise applied to a traditional conversation with a human, or that the system as a whole would not leave any room for empathy toward the machine. I am happy to say that it is not the case.”

This is not to say that everyone treats a machine with respect. If people tend to be more aggressive or rude with a real person, that same communication style can be seen in how they converse with a machine. The same goes for people who are neutral or overly friendly in how they speak to others.

How potential employees actually speak to AI is a different conversation than how potential employees should speak to AI, he added. That is, it’s unclear whether how a person treats a machine says anything about how that person would treat other people, and it’s unclear whether something like a person being rude to a machine agent should impact their job prospects.

“We can certainly agree that we do care if it’s a human recruiting coordinator,” Mortensen said. But machines have no feelings or emotions and cannot be offended, so it would be easy to argue why employers shouldn’t care. Ultimately, “I do think we should care even if it is a machine,” Mortensen said. “I understand why we might care a little bit less, but I don’t think we can just discard that as a signal.”

He gave the example of a report that found this technology could have implications for how kids learn to communicate, teaching them that speaking harshly or impolitely to people has no consequences.

“In real life there’s a penalty to being an asshole,” Mortensen said.

Limits and Capabilities of AI in the Hiring Process

Machine learning allows AI to gain knowledge over time and learn from its interactions, much like a person would. But even though it can mature in its own way and become more humanlike over time, that still doesn’t make it human. Certain questions, such as those about company culture, may still require a person to answer, according to Sahni.

AI systems can account for this. AllyO, for example, can recognize when a candidate asks a question that a machine cannot answer and bring in a person who can, Sahni said. This way, the candidate has a positive experience and doesn’t feel they’ve lost out by not speaking to a real person.

“If the process is objective, AI knocks it out of the park. If the process has any subjectivity to it, AI does really well looping in the hiring team,” he said. “A good AI system typically has human support behind it.”
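That human-in-the-loop pattern is easy to picture in code. The sketch below is purely illustrative and is not AllyO’s implementation; the FAQ entries, keyword list and canned responses are all hypothetical. It shows the escalation logic Sahni describes: answer objective questions directly and hand subjective or uncertain ones to a person.

```python
# Illustrative sketch of a chatbot with human escalation, in the spirit of
# Sahni's description. Not AllyO's code; all data here is invented.

# Objective questions the bot can answer on its own.
FAQ = {
    "what is the salary range": "The posted range is $60,000 to $75,000.",
    "when would i start": "Most hires start within two weeks of an offer.",
}

# Markers of subjective questions (e.g., company culture) a person should field.
SUBJECTIVE_KEYWORDS = ("culture", "team like", "manager like")

def answer_candidate(question: str) -> str:
    q = question.lower().strip("?! .")
    if any(keyword in q for keyword in SUBJECTIVE_KEYWORDS):
        # Subjective: loop in the hiring team rather than guess.
        return "Good question! Let me loop in someone from our hiring team."
    if q in FAQ:
        # Objective: answer directly.
        return FAQ[q]
    # Unrecognized: escalate instead of giving a wrong answer.
    return "I'm not sure; a recruiter will follow up with you shortly."

print(answer_candidate("What is the salary range?"))         # answered by the bot
print(answer_candidate("What's the company culture like?"))  # escalated to a human
```

Whether the handoff is keyword-based, as in this toy, or driven by a statistical confidence score, the design choice is the same: when the system is unsure or the question is subjective, it defers to a person rather than improvising.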

Much like people themselves, AI has the potential for bias, Shangle said. For example, Wired reported in July 2018 that Amazon’s facial recognition system Rekognition confused many black members of Congress with publicly available mugshots, and that facial recognition technology’s difficulty detecting darker skin tones is well established.

One reason a tool may be biased is training data bias, Shangle said. To build a machine learning tool, its creator must feed a data set to the training algorithm; if that data set isn’t diverse, employers using the tool may run into bias blind spots.

“What are the biases of this tool?” is a legitimate question for employers who are looking to purchase a machine learning tool such as facial recognition software, Shangle said. A recruiting tool may, for example, have a bias toward college-educated job seekers.
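A toy model makes the mechanism concrete. The sketch below is deliberately contrived and reflects no vendor’s real screening tool; the features and labels are invented. Because the hypothetical training data contains no successful hires without degrees, the model learns to penalize otherwise-identical candidates who lack one.

```python
# Toy illustration of training data bias: a screener fit on historical hires
# where every hire held a degree "learns" that the degree is what matters.
# Invented data; not any real vendor's model.
from sklearn.linear_model import LogisticRegression

# Features: [years_of_experience, has_degree]; label: 1 = was hired.
X_train = [[2, 1], [5, 1], [3, 1], [7, 1], [1, 0], [6, 0]]
y_train = [1, 1, 1, 1, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

# Two candidates identical except for the degree flag:
print(model.predict_proba([[6, 1]])[0][1])  # with degree: high "hire" probability
print(model.predict_proba([[6, 0]])[0][1])  # without degree: sharply penalized
```

Auditing a tool with questions like Shangle’s means probing exactly this: would two candidates who differ only on an irrelevant or protected attribute receive the same score?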

David Dalka, founder of Chicago-based management consulting company Fearless Revival, agreed that AI has its limits. He has a more traditional view of what recruiting should look like, arguing that companies should invest less in technology and more in human recruiters who stay at the company long term, know its culture and know what kind of person would best fit the job, rather than looking for trendy keywords or job titles in résumés.

“I’m not opposed to AI tools if someone built the full data library of all the factors and stopped focusing trivially on things like job titles,” he said.

He suggested that companies should more carefully consider the attributes that matter in a candidate — Do they read any books? Are they naturally curious? What are their skills and degrees? — and consider how they would weigh these attributes in an AI system. Ultimately AI is simply a tool that analyzes content.
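Dalka’s point can be made concrete with simple arithmetic: if the hiring team, not a black box, chooses the attributes and their weights, the AI’s role reduces to transparent scoring. The attribute names and weights below are hypothetical, for illustration only.

```python
# Hypothetical weighted-attribute scoring of the kind Dalka describes: the
# hiring team decides which attributes matter and how much; the system
# merely computes the total. All names and weights are invented.
WEIGHTS = {"curiosity": 0.4, "relevant_skills": 0.4, "degree": 0.2}

def score(candidate: dict) -> float:
    # Each attribute is rated 0.0 to 1.0 by the hiring team beforehand.
    return sum(w * candidate.get(attr, 0.0) for attr, w in WEIGHTS.items())

print(score({"curiosity": 0.9, "relevant_skills": 0.7, "degree": 0.0}))  # 0.64
```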

“This idea that some wizard will magically create this black box that will hire the right people without you thinking of these things is a fallacy,” Dalka said.

Andie Burjek is a Talent Economy associate editor. To comment, email editor@talenteconomy.io.