
Artificial intelligence is starting to manage us

Artificial intelligence now follows employees, ostensibly to make sure they do a good job. Should employers be allowed to track their employees using artificial intelligence?

Job interview with a bot. The image was prepared using DALL-E and is not a scientific image

Congratulations - you are looking for a job! You finished 12 years of school, army service, four years at the Technion for an engineering degree, and now you finally feel you can submit your resume without being ashamed. Within ten minutes you receive an answer and an invitation to an interview.

When is the interview? In three hours.

You immediately understand what this means: you are a sought-after commodity! The interviewer must have been so enthusiastic about your skills and the impressive personality that shines through your resume that he cleared his schedule especially for you. The job is practically in the palm of your hand!

It's time for the interview. You turn on your video camera, enthusiastically connect to the website and find yourself sitting in front of... a computerized avatar. It beams a glowing face at you, and you dare to hope that maybe there really is a person sitting on the other side of the screen. Then it makes clear to you, all by itself, that it is just an artificial intelligence.

And the interview begins.

If this all sounds like science fiction, well, it's time to get real. Already today, artificial intelligence is starting to interview people, give them scores and ratings, and decide whether to advance them to the next level of recruitment.

And according to those involved, it does so excellently.

Micro1's platform is a good example of the new way of recruiting employees. Potential employers direct job seekers to the platform and tell the artificial intelligence what skills the perfect employee should have. The artificial intelligence does the rest: it automatically generates questions designed to test the interviewee's level of expertise, then presents them in spoken language. It listens to his answers, transcribes and records them. If the interviewee is applying for a development position, the artificial intelligence asks him to complete a test in real time - which is immediately checked by GPT.

And what if the interviewee tries to cheat? Micro1 has a solution for that too: the platform connects to the job seeker's video camera and monitors him to make sure he doesn't cheat on the test.

For managers, Micro1's service may seem like a godsend. According to a Glassdoor survey, the average corporate job opening in the United States attracts approximately 250 applicants. HR managers are forced to wade through hundreds of pages of resumes just to select around five candidates to move on to the next round of interviews. And the interviews, of course, require time, attention and a great deal of patience.

Artificial intelligence can dramatically shorten this whole process. Instead of managers spending long hours on interviews, the artificial intelligence does all the work, reviewing and interviewing dozens or hundreds of potential employees at the same time. Not only that, it produces meticulous reports with grades and explanations of how they were determined. The grades do not focus only on success in the test: they also assess the candidate's communication skills, demonstrated passion and general approach to work. They even rate the likelihood that the candidate cheated in the interview.

Given all these capabilities, it is no wonder that the company recently raised more than three million dollars while still in its earliest stages. This is one of the most impressive examples of the way artificial intelligence can determine the fate of an employee - even before he has been hired.

But this is not the only example. In fact, artificial intelligences are becoming involved in every field of work today - except that they do not necessarily work for the employees, but for their managers.

Peppers and pineapple in the hands of artificial intelligence

Almost five years ago, Domino's Pizza realized that if it wanted to win the race against other pizzerias, it had to improve its employees' performance. Too often they added too many toppings, or too few, or let them slip between the halves. And of course, from time to time real catastrophes happened, and the workers mixed up the toppings. In short, they worked like human beings, and we all know how that ends.

Then Domino's introduced artificial intelligence into the business.

Domino's launched advertisements promising customers that they could see the pizza they ordered while it was still being prepared. At the same time, workers in its branches in Australia and New Zealand noticed cameras in the kitchen following them as the pizzas were being made. Domino's explained that it "uses advanced machine learning, artificial intelligence and sensor technology to identify the type of pizza, balance the spread of toppings and fix toppings."

All this sounds great for the customers, but remember that this is, in effect, monitoring of the workers and the products they turn out. As you can imagine, such monitoring is a double-edged sword: it serves the customers well, and also the managers, who can identify defective products and negligent workers. The employees, on the other hand, may be harmed by it, because it can also measure the personal output of each of them. Beyond that, such a system inevitably leads to the 'standardization' of pizza preparation. That is, once the elements of preparation are carefully quantified, a new standard for the optimal pizza is created: exactly 13 mushrooms on a family pizza, and if an employee puts even one mushroom more - he will be reprimanded for this sin.

According to online reports, the 'pizza artificial intelligence' system has expanded since 2019 to branches in the United States as well, but it is not clear whether it also operates in Israel. In any case, it is clear that Domino's automated pizza-checker is just one of many AI technologies that supervise workers. Among the others we can name a Walmart invention that listens to the rustling of bags and the beeping of laser scanners at the cash registers to make sure the cashiers are doing their job faithfully, or Amazon's smart wristband, which is supposed to monitor workers' movements.

In the last two cases, these are patents filed by those companies, and it is difficult to know to what extent they actually use such equipment. But in at least one case, more than three million workers are known to have been tracked by artificial intelligence.

It's not paranoia if everyone is really against you

The company Aware recently reached an impressive achievement: it has tracked more than three million employees in recent years. The company, which has already raised more than sixty million dollars, is hired by firms to constantly monitor their employees. According to reports, its AI analyzes more than 100 million messages every day in conversations and private meetings on Slack, Microsoft Teams, Zoom and other apps. It detects "toxic" speech, can quantify employees' feelings about various issues - and delivers regular reports to managers at all levels.
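To get an intuition for what such monitoring involves, here is a minimal sketch of keyword-based message flagging and sentiment aggregation. The word lists, sample messages and scoring scheme are all invented for illustration; Aware's actual models are far more sophisticated and are not public.

```python
import re

# Hypothetical word lists - real systems use trained models, not fixed lists.
TOXIC = {"hate", "stupid", "useless"}
NEGATIVE = {"frustrated", "tired", "broken"}
POSITIVE = {"great", "love", "excited"}

def score_message(text):
    """Flag a message as 'toxic' and give it a crude sentiment score."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    return {
        "toxic": bool(words & TOXIC),
        "sentiment": len(words & POSITIVE) - len(words & NEGATIVE),
    }

# Invented sample messages standing in for a day of chat traffic.
messages = [
    "I love the new release, great work everyone",
    "I am frustrated, the build is broken again",
    "this tool is useless",
]
reports = [score_message(m) for m in messages]

# The aggregate view a manager might receive: share of flagged messages
# and average sentiment, rather than the raw conversations.
toxic_share = sum(r["toxic"] for r in reports) / len(reports)
mean_sentiment = sum(r["sentiment"] for r in reports) / len(reports)
```

The tension the article describes is visible even in this toy: the same per-message `reports` list that feeds the anonymous aggregate can just as easily be indexed back to individual employees.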

Aware says it only wants to help companies understand employee sentiment at a general level, not track individuals. But its tools explicitly allow managers to receive detailed information about specific employees, especially in cases where they pose a threat to the company or to their colleagues. Who decides what constitutes such a threat? The company's management, of course. This means that managers can, in principle, use Aware's services to follow particular employees - to a degree that raises the fear it could even amount to harassment of them by their managers.

Who uses Aware's services? Just a few of the largest companies in the United States: Walmart, Delta, T-Mobile, Starbucks and others. European giants like Nestlé have also decided to use Aware's nosy artificial intelligence.

As in the case of Domino's, here too it is a double-edged sword. Yes, monitoring employees' feelings in general can help managers identify obstacles and problems in the company's conduct at an early stage and act to correct them. Yes, they will also be able to identify bullies and harassers of all kinds and weed them out. But at the same time, employees can easily lose their privacy. Every word they say, every emoji and every letter can be counted and scrutinized. Such workers can no longer speak freely with their colleagues. They cannot exchange opinions about the bosses or grumble about their co-workers. They are only expected to do their job, smile and shut up.

Like robots.

Threats and opportunities

"Good morning, Mr. Zargham007," I recited monotonously. "Thank you for calling technical support. I am service representative number 338645. How can I help you tonight?" The customer courtesy software filtered my voice, adjusting my tone and inflection to make sure I always sounded happy and cheerful.

"Oh, yes..." Zargham007 began. "I just bought this rare sword, and now I can't even use it! ... What the hell is wrong with this piece of junk? Is it broken?"

"Sir, the only problem is that you're a complete fucking moron," I said.

I heard a familiar warning buzz, and a message appeared on my screen: 

Courtesy violation - flagged words: moron, fucking

The last response was muted - the violation was recorded

The customer courtesy system... recognized the inappropriate nature of my response and muted it, so the customer didn't hear what I said. The software also recorded my "courtesy violation" and forwarded it to Trevor, the supervisor of my department, so that he could raise the matter at my next fortnightly performance review.

  • From the book Ready Player One

In "Ready Player One", Ernest Cline describes a dystopian future in which corporations monitor every conversation their employees hold with customers. Not only that: the artificial intelligence can detect deviations from the desired conversation pattern in real time, and even correct the employee and fine him for his sins. The worker, in this case, is nothing but a machine himself - one that requires another machine, artificial intelligence, to tune and calibrate his performance.

Is this the future being encoded for us by the artificial intelligences that decide whether to recruit us to the company, whether we speak politely and with a broad smile to our colleagues, and how we sprinkle the mushrooms over the cheese on the pizza?

Probably so. And maybe it's not such a terrible idea.

The bitter truth, after all, is that in many jobs the worker is supposed to act primarily as a machine. Without creativity, without independent thinking, and certainly without insulting the customers he is supposed to serve courteously. And that's okay. Such work is not generally appreciated, but it serves as a source of income for many people around the world. If it is important to an employee to gossip about his bosses, he can do so on an unmonitored platform. And if he really feels like treating the customers he serves with disdain - well, it's probably better that his employer knows that this is how he behaves.

The situation becomes more complex in jobs that require teamwork and open discussion, because we know that monitoring systems negatively affect employees' willingness to open up to their peers. After all, if you know that your every word will reach the boss's ears almost immediately, you are much more careful with your words.

"It has a chilling effect on what people say in the workplace," said Amba Kak, executive director of the AI Now Institute at New York University, in an interview with CNBC. She went on to address the privacy and security problems of this type of system, claiming that "there is no one who can seriously tell you that these challenges have been solved."

And this is the situation today, when the artificial intelligences that supervise employees are under the full control of managers. What will happen when they become more autonomous - as everyone expects - and can make more advanced decisions on their own? To fire an employee over poor performance, for example, or over a 'toxic' remark about the company's flagship product?

Trust me, it will be fine

In 2015, Amazon realized that something strange was going on with its artificial intelligence. A year earlier, the company had started trying to rank the resumes submitted by job seekers. The Holy Grail, according to inside sources quoted by Reuters, was an engine that receives "100 resumes, and shoots out the five best documents, and we will hire them."[1]

The basic idea was good, overall, but the execution was lacking. Specifically, Amazon's engine was biased against women. The reason: the high-tech industry is flooded with men, and the engine, trained on existing resumes, understood - without being told explicitly - that if you are a woman, your chances of being in high-tech are lower.

So far, no surprise. Such biases are to be expected. In fact, it is most likely that the developers realized the artificial intelligence was ranking women and men differently, and acted to correct the matter. They presumably instructed it, for example, to ignore the candidate's gender. I guess they simply hid that field from it.

What did the artificial intelligence do? It did not refer to gender directly, but it found small signals that indicate gender - and rejected candidates based on them. It penalized, for example, candidates who wrote in their CVs that they participated in the "Women's Chess Club". It displayed a similar bias against women-only universities, downgrading applicants who attended them.

The developers tried to deal with these biases as well, by making program names neutral - "Chess Club" instead of "Women's Chess Club" - but it is clear that this was only a band-aid. The AI must have managed to find other ways to differentiate between women's and men's resumes, and continued to penalize the women.
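The mechanism is easy to reproduce in miniature. The sketch below, built on entirely invented toy data, shows how a naive scoring model trained on biased hiring labels rediscovers the bias through a proxy feature, even though gender itself never appears in the data.

```python
from collections import defaultdict

# Invented toy resumes: (features, hired). Gender is NOT a feature, but in
# this biased training data "womens_chess_club" acts as a proxy for it.
resumes = [
    ({"python", "womens_chess_club"}, 0),
    ({"java", "womens_chess_club"}, 0),
    ({"python", "chess_club"}, 1),
    ({"java", "chess_club"}, 1),
    ({"python", "robotics"}, 1),
    ({"java", "robotics"}, 0),
]

# Score each feature by the hiring rate observed alongside it.
counts = defaultdict(lambda: [0, 0])  # feature -> [times seen, times hired]
for features, hired in resumes:
    for f in features:
        counts[f][0] += 1
        counts[f][1] += hired

hire_rate = {f: hired / seen for f, (seen, hired) in counts.items()}
print(hire_rate["womens_chess_club"])  # 0.0 - the proxy absorbs the bias
print(hire_rate["chess_club"])         # 1.0
```

Renaming "womens_chess_club" to "chess_club" would merge the two rates - exactly the band-aid described above - but any other feature that correlates with gender in the data would simply take over the same role.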

In the end, Amazon decided to shut the project down after it failed to produce objective results. The same problem crops up in almost every use of artificial intelligence: it is biased by the very fact that it is trained on biased data.

The cynics will now say that 'bias' is not always negative. Data from Israel show, for example, that Arab drivers commit more traffic violations involving casualties, and that half of the drivers who drove at excessive speed that led to an accident were Arabs. One can understand a business owner who, looking at these data, would prefer not to hire Arabs as delivery drivers.

The problem is that data can always be interpreted in several ways, and it is difficult to identify the nuances within them. Is it possible, for example, that the traffic-accident statistics in the Arab sector stem from the fact that many people there drive ATVs? Or perhaps from the neglected and dilapidated road infrastructure in many villages? If so, then we may automatically be blaming Arab workers for the sins of a very narrow segment of the population, or adding to the injustice the government does them in the first place.

Will business owners and managers be able to grasp these statistical nuances? Will they be able to judge each case individually? Especially when the artificial intelligence warns them: "Be careful! This employee comes from a population prone to road accidents!"

It seems clear to me that human managers will have difficulty dealing with such advice and recommendations. We are always attentive to every piece of information we receive from an 'authoritative source', and when managers know that the computerized tool they rely on has received the approval of the company's top management, they will understand that they must place at least part of their trust in it. Studies also show that people tend to trust robots and artificial intelligence more than other humans. In some cases - such as money management - they trust the artificial intelligence more than they trust themselves.

There is no escaping the fact that from the moment an artificial intelligence delivers a negative opinion about a certain employee - whatever the actual reason - the human manager will be influenced by it. He will factor it into his decision-making, consciously or unconsciously, and will probably give it considerable weight. Perhaps more than it deserves.

Such are we, human beings.

So what do we do?

The way forward

What can be done to integrate artificial intelligence into management and decision-making in the best possible way? The question can be answered in three ways, each touching on a different challenge.

The first challenge, if we're being honest, is making sure the company doesn't get sued over discrimination. Here the answer is simple, even if not satisfying: managers need to be trained to treat the recommendations of the artificial intelligence with caution. In other words, they need to be taught to think critically about it - about its limitations and the limitations of the data it relies on.

The second challenge is preserving the company employees' capacity for innovation and out-of-the-box thinking. When employees know they are being watched day and night, and that the manager can receive a report on their every word and action, there is a real fear that they will not try creative and innovative solutions to existing problems. The employee who knows that every topping on the pizza is strictly counted will never add an extra one - and Domino's will never learn that its customers actually want a little more topping and are not getting it. Variation is important as a way to try new directions and improve on the existing ones, and constant monitoring will harm employees' ability and desire to do so.

The simplistic answer to this challenge is to avoid monitoring employees and their discourse altogether. But a company that does so may find that others are overtaking it on the curve. That its pizzas are less round than the competition's, that the mushroom half gets mixed up with the olive half, and that the branch manager curses at the employees while they prepare the pizzas. All these difficulties will cost it customers and create internal friction.

So what can we do? It is impossible to avoid using technology to monitor employees, but it must be used correctly. Smart managers will define clear rules in advance - preferably in cooperation with the employees themselves - about the kind of monitoring that will bring the most benefit to the company with as little harm as possible to the employees. They will make sure not to cross the lines they have drawn for themselves, because in the end the employees' happiness is also important and meaningful. Employees who feel trusted, who can speak freely and bring innovative and creative ideas, can uplift and advance the entire company.

The third challenge is that we are rapidly approaching a time when artificial intelligence will manage entire departments, and shortly afterwards entire organizations as well. The artificial intelligence will be the CEO and the VP, and will fill every other management role in the company. And it will monitor, rate and make decisions on its own regarding the company's human employees. Any bias it has will be perpetuated within the company.

To prepare ourselves for such a situation, we need to decide that when artificial intelligences make decisions that directly affect the lives of human beings, they should maintain humility. They need to examine their decisions carefully and pass them through a series of other artificial intelligences that will provide additional opinions and enrich the computerized internal discussion about the decision. The final decision, when it is made, should be supported by data and take into account both the good of the company and the good of the individual employee, as far as possible.
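The "series of opinions" idea can be sketched very simply. In the hypothetical example below, a dismissal decision goes through only if several independent reviewers - here plain stand-in functions rather than real AI models - all agree:

```python
# Stand-in "reviewer opinions", each examining a different aspect of the case.
def performance_reviewer(record):
    return record["missed_targets"] > 3

def fairness_reviewer(record):
    # Blocks decisions that rest only on group statistics, not the individual.
    return not record["based_on_group_statistics"]

def wellbeing_reviewer(record):
    # The employee must have received fair warning before dismissal.
    return record["warnings_given"] >= 2

def approve_dismissal(record):
    votes = [performance_reviewer(record),
             fairness_reviewer(record),
             wellbeing_reviewer(record)]
    # Require unanimous agreement for a decision that affects a person.
    return all(votes)

record = {"missed_targets": 5,
          "based_on_group_statistics": True,
          "warnings_given": 2}
print(approve_dismissal(record))  # False - the fairness reviewer objects
```

Requiring unanimity is a deliberately conservative choice: a single dissenting "opinion" - here, the objection that the case rests on group statistics - is enough to push the decision back to a human.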

Good luck to us, and may all our pizzas always be uniform in their toppings.
