Stop Overengineering People Management
For decades, the business world has embraced worker empowerment. But recently a countermovement—workforce optimization—has been on the rise. It treats labor as a commodity and seeks to cut it to a minimum by using automation and artificial intelligence, tightly controlling how people do their jobs, and replacing employees with contractors. This approach is especially prevalent in the tech sector and the gig economy. And it is cause for deep concern, says Wharton professor Peter Cappelli.
Optimization appeals to most executives because they’ve been taught how to do it and understand it. It aligns with hard priorities, like lowering costs, that make Wall Street happy. Yet there’s no evidence that it improves business results. Moreover, history suggests that seeing people management as solely an engineering challenge leads to enormous problems. Taking responsibility away from workers demotivates them and undermines productivity and innovation. When algorithms make all the decisions, it isn’t even clear how employees can make suggestions.
Though many processes can still be improved by optimization, managers shouldn’t choose it over empowerment. The key is to find the right mix of the two approaches, as the successful “lean production” model first introduced by Toyota does.
For four decades the belief in worker empowerment has been ascendant. But in recent years a movement to optimize labor has been gaining strength. It treats labor as a commodity and strives to cut it to a minimum by using automation and software, tightly controlling how people do their jobs, and replacing employees with contract and gig workers.
There is no evidence that this new form of “scientific management” is an improvement. By taking responsibility away from workers, companies demotivate them, undermining their productivity and innovative contributions.
Don’t choose optimization over empowerment. Instead, make an effort to find the right mix of the two, as the highly successful “lean production” approach has. The notion that you can treat people like machines is dangerous.
The long march toward enlightened management is typically seen as beginning in the 1930s, when researchers and, more important, corporate leaders began to abandon the assumption that workers should be treated like machines and required to perform tasks according to precisely engineered specifications. They started to embrace the belief that business performance would improve if employees were actually involved in work decisions. For decades the camp that favored empowering employees grew. But now there are strong signs that the pendulum is swinging the other way—that the old engineering model is reasserting itself with gusto. And that’s cause for deep concern.
While many organizations—especially ones that are flatter or have adopted agile methods—still claim to believe that engaged employees matter, a significant and rising number seem to be following an optimization approach, wherein decision-making and control are pushed back to experts and algorithms. Labor is treated as a commodity, and the goal is to cut it to a minimum by replacing employees with contract and gig workers and by using automation and software to reduce the need for human judgment. Ideal behaviors are dictated to the remaining employees, who are closely monitored for compliance. So far, this change has not been backed up by evidence that it’s an improvement.
Optimization appeals to most executives because they’ve been taught how to do it and understand it. History suggests, though, that knock-on problems caused by seeing worker productivity solely as an engineering challenge have been enormous and persistent. So we should know better this time around. Generations of evidence about the benefits of employee empowerment and the costs of taking it away are being ignored. It is possible to strike a balance between the two models and get benefits from both, but that requires backing away from the idea that worker performance is fundamentally an engineering issue.
Labor is treated as a commodity, and the goal is to cut it to a minimum.
The popularity of the engineering approach has increased during economic downturns—when workers don’t quit even though they hate being treated like machines—and has fallen in upturns, when workers do jump ship or protest. The coronavirus recession will most likely further entrench it. Without resistance from the labor market or careful internal measurement of its effects, optimization will easily carry the day. That would be a terrible mistake.
“Scientific management” and its goal of operating organizations efficiently began with Frederick Taylor in the early 1900s. His view was that there was one best way to perform work tasks. Engineers could figure it out, and the role of workers was only to execute it. These arguments soon extended from production work into white-collar jobs, shaping everything from pay systems to the design of offices and buildings.
In the 1930s, Western Electric and other employers saw problems with this approach—in particular, evidence that employees were holding back effort—and began experimenting with programs in which workers were given more say. Piece rates (paying individual workers for the amount they produced) and performance targets were relaxed. The changes led to sizable improvements. Elton Mayo and his colleagues at Harvard Business School documented these results and put together lessons about how to get them, launching the human relations movement. It centered on paying attention to the psychological and social needs of employees: They wanted to have relationships with other employees, to feel as though their work mattered, and to be involved in decisions. When those conditions were met, workers’ performance skyrocketed; when they weren’t, it plummeted.
In 1957 the renowned management scholar Douglas McGregor observed in Harvard Business Review that management views on how to get the most out of workers were deeply divided: One camp subscribed to the view that workers had to be tightly controlled and directed; the other believed that workers contributed much more when they had the freedom to express their ideas and take initiative. In his seminal 1960 book The Human Side of Enterprise, McGregor labeled the first approach Theory X and the second Theory Y.
In the past four decades the Theory Y model has been on the rise. Joint employer-employee health and safety committees, quality circles, and empowered factory teams have proliferated. The big push toward Theory Y began in the late 1970s, when there was overwhelming evidence of the poor quality of work being done in U.S. manufacturing and in the other parts of the world to which Taylor’s ideas had spread. At least part of the problem was that automation had made jobs so boring that workers were disengaged from their tasks. When management responded to their lack of effort by monitoring them more closely and punishing them more severely, performance and quality declined further. The antidote was arrangements whereby the employees doing the work, not quality inspectors at the end of the production line, found problems and took charge of fixing them. Japanese companies were early adherents. Toyota’s lean production method, for example, had several components, but its core idea was granting frontline employees the authority to improve quality and productivity—to the point of giving them the power to stop production lines. The clear superiority of cars and other products made at such factories soon caught managers’ attention.
By the 2000s lean production (also known as the Toyota Production System) had spread from automobiles to health care to government and every industry in between. Quality and productivity improved, as did worker outcomes such as turnover. But it was often a struggle to introduce lean production, most famously in unionized U.S. auto factories, where work rules were extensive, distrust between managers and workers ran deep, and a “not-invented-here” attitude prevailed. In recent years, however, the trend toward agile project management has helped spread Theory Y ideas further.
One could argue that the popularity of the behavioral model started to wane with the Great Recession, whose effects lingered so long that many younger managers came of age knowing nothing else. But other factors were at work as well.
One big concern of companies was always that while market demand fluctuated a lot, their workforces were pretty fixed. They were hard to cut when business was down and hard to bring back quickly if things suddenly picked up. The gig economy suggested a different approach.
Runaway-growth stories like Uber, whose drivers were paid only when there was something to do right at that moment, made a big impression on other employers, which opted to cut full-time staff and add contractors who didn’t get benefits or need to be paid when business fell. Shifting to a workforce that was like a faucet—turn it on when you need it, turn it off as soon as you don’t—and squeezing fixed costs in the process became an explicit goal. Staffing firms and recruitment process outsourcing (RPO) companies stepped in to enable the transition. They introduced terms like “liquid workforce” and “talent on demand” to describe systems in which contractors were paid by the task and vendors provided just-in-time staffing. Now RPO firms offer “full cycle” engagement, managing the balance of hiring, layoffs, and contracting for employers to secure the minimum level of staffing required to get the work done each day.
When we take away all decisions from employees, they no longer feel accountable.
The talent-on-demand model is now widespread. Studies show that about a third of the individuals working in U.S. corporations are not employees of those companies. Google has more contractors and temp workers than full-time employees (130,000-plus versus 123,000, according to a 2020 story by Daisuke Wakabayashi in the New York Times), a phenomenon not uncommon among tech firms. Contract work is at the core of virtually all the car service companies and of delivery businesses such as Amazon Flex and Deliveroo. They push the legal boundary between employees and contractors by effectively supervising much of what contractors do: monitoring exactly where drivers are and plotting out turn-by-turn routes for them. According to a New York Times story by Patricia Callahan, Amazon Flex even requires an eye-popping 999/1,000 standard for on-time delivery. (Amazon didn’t respond to a request for a comment about its practices.)
And there’s no proof that shrinking the workforce actually improves business results. On average, cutting employees early and hard in recessions is not associated with better financial performance, and according to studies, including one by Wayne Cascio, Arjun Chatrath, and Rohan Christie-David, companies that hold off on layoffs do better. Moreover, every contract requires someone to manage it, and that counts against any cost savings—something that Lauren Weber of the Wall Street Journal found in the computer games industry.
In addition, my research and that of others has shown that using agency workers alongside employees has negative effects on the permanent staff, weakening loyalty and relationships with peers, and lowers operational performance. We don’t know much yet about how the productivity of individual contractors compares with that of employees, but we do know that, unlike employees, they have no legal or psychological obligation to look after the company’s interests. So while there are certainly plenty of engaged contractors, companies shouldn’t expect discretionary efforts from them—it might actually violate their contracts to jump in and do something companies didn’t ask for. Nor should they be expected to go out of their way to pass along good ideas to companies (as employees often do) when they can sell them to those clients or their competitors.
A final reason that the assumptions behind the liquid workforce don’t hold is that contractors do not actually seem to go away when business heads south. (The pandemic-related shutdowns, which caused Great Depression–level unemployment among both regular employees and contractors, are an obvious exception.) Research shows that contractors often stay with clients just as long as regular employees do because they start to take on more-vital roles. If they leave, their knowledge and information go with them. Consulting engineer Tim Near, for example, finds that he is pretty valuable as the only person who knows the original specifications and design for an aircraft component, now back in demand, that he began work on as a contractor 15 years ago.
A simple but important practice from optimization theory—price differentiation—is now being applied to starting salaries. It’s easy to forget that employers used to have fixed starting salaries, especially for entry-level jobs; now negotiating them is in vogue. Fifty-two percent of employers responding to a 2017 survey conducted by CareerBuilder reported that they offered prospective hires salaries lower than what they were willing to pay, undoubtedly hoping that some people wouldn’t try, or be able, to bargain them up. They were right: Most new hires didn’t.
Workplace experts know that in the long run few issues cause more difficulties, including legal problems, than paying people with similar skills different amounts for doing the same job. But the up-front savings generated by minimizing starting pay—which we can easily measure—seem to have enticed companies to take that chance.
The most powerful force pushing companies toward Theory X is artificial intelligence. At present, AI tools are virtually all algorithms derived from machine-learning programs: sets of equations that optimize staffing requirements, the fit of job candidates, marketing moves, and so on. Algorithms take decision-making away from employees and move it to experts—the data scientists who build them. This is exactly the shift that Taylor advocated: finding the one best way using engineering principles.
Consider a job that used to be a bastion of individualism and autonomy: long-haul trucking. Once upon a time, truckers could drive how and when they wanted as long as they got to the destination on time. Now algorithms dictate routes and schedules, driving practices, and everything else. Truck cabs are outfitted with equipment that monitors drivers and collects information, both to enforce the requirements and to improve the algorithms. Cameras record whether drivers take their hands off the wheel, allowing companies to dock their pay if they do; speed and driving time are watched minute by minute; and drivers are given turn-by-turn instructions for getting to each destination (which, say, reduce left-hand turns because they account for more accidents and take more time).
A good example of where this can lead comes from Amazon and its more than 125,000 warehouse employees, who are given targets, created by algorithms, for how long they should take to pick each item in an order. Failure to meet a target leads to a warning, also issued by the algorithm, and three warnings are grounds for dismissal, according to a 2019 New York Times article by Scott Shane. The supervisor still has the final call on firing the employee, but how long that will last isn’t clear.
When we take away all decisions from employees, they no longer feel accountable, and their interest in contributing extra falls. With AI-based algorithms calling all the shots, it isn’t even clear how they could help. Suppose a truck driver discovers a better way to get in and out of loading docks: Whom does the driver tell? Yes, the algorithms save gas and money, on average, but worker-generated innovations won’t happen if we pull away from empowerment and institute the planning and controls associated with optimization.
Transferring decisions from line managers and workers to experts and software has significant costs that are harder to track. One is that it undermines supervisors and line managers whose responsibility for hiring, scheduling, assessing performance, and the like was the source of their authority. What does a supervisor say to an unhappy employee who has been slotted to work three Saturdays in a row by scheduling software? How can that supervisor later ask the employee for extra help when the supervisor can’t do anything for her? The exchange of favors that builds relationships and gives employees the sense that the organization supports them disappears in this environment.
Then we come to monitoring white-collar work, something that used to be extremely difficult to do, keeping optimization in that realm at bay. No longer. New performance-management software that counts keystrokes and captures and analyzes screenshots to track goof-offs is just the tip of the data collection iceberg. Vendors such as Teramind and InterGuard sell off-the-shelf systems that provide all these functions and more. Popular software such as Microsoft Outlook Calendar and Slack already identifies whom we meet with and how much time we spend with them; that information then goes into models of how long it should take to get given projects done.
Just by measuring how long motion-detector lights stay on, software can already tell us how much time people are spending in their offices. The time clock is back in the form of badges that swipe us in and out of buildings, tracking when we arrive and leave as well as which areas we enter to see other people. Indoor-mapping software goes much further, identifying where individual employees are in facilities in real time. Vendors now offer software that purportedly identifies employees by how they walk when their faces can’t be observed. Sensors measure who is meeting with whom, how long we sit at our desks, and so forth. As Sarah Krause of the Wall Street Journal found, employers are listening in on conference rooms and analyzing the conversations to better organize and manage teams. The fitness company Life Time, for instance, analyzes conversations from team meetings as a development exercise for new managers.
A revealing moment came with the Covid-19 shutdowns, when vast numbers of organizations sent people home to work. Would companies trust employees to be productive—or try to monitor them? The answer appears to be the latter: Drew Harwell of the Washington Post reported a rise in the use of “tattleware” software that literally watches everything that employees working at home do on their computers. One vendor quoted in Harwell’s article said its clients feel “completely entitled to know what employees are doing” at home.
Konrad Putzier and Chip Cutter of the Wall Street Journal reported that as companies prepared to bring employees back to their workplaces following the shutdowns, some were setting up indoor-mapping software to monitor whether employees were complying with new social-distancing requirements. Observers noted that there would be no good reason to take it down after the pandemic passes.
All this information can be used for constructive purposes, such as designing better office layouts. But it could also identify which employees duck out of the building for extended periods of time, who is organizing March Madness betting pools, and so forth. Ethan Bernstein and Ben Waber note that top-down efforts to design workspaces to produce desired effects often backfire—for instance, reducing collaboration rather than increasing it. They recommend that firms experiment to see what practices get the outcomes that matter. (See “The Truth About Open Offices,” HBR, November–December 2019.)
Employees have never liked being monitored. The wave of strikes that created industrial unions in the 1930s was motivated as much by a desire to push back on management control and Taylorist job requirements, such as the demeaning timing of bathroom breaks, as by dissatisfaction with wages. What’s more, monitoring rarely works as intended, because employees find ways to get around it. More than a quarter of employees admit to covering their work computers’ webcams, and almost one-third switch from their company phones to their personal cell phones when talking to coworkers to prevent their employers from listening in, according to a survey by SimplyHired, an online provider of job services.
The challenge for managers is to find the mix of practices that actually works.
Moving toward AI-based optimization isn’t free, either. Just as Taylor’s scientific management required firms to hire a slew of experts from the then-emerging field of industrial engineering, today’s optimization efforts are feeding demand for data scientists. Jobs for the folks who build algorithms are rapidly increasing, and the average base salary for them is $113,309, according to Glassdoor.
One could argue that the deck is stacked against Theory Y. By some estimates, executives with engineering and computer science degrees account for as many as one-third of the CEOs of major corporations. Forty-seven percent of CEOs have a background in finance, a field where cost minimization, formulas, and numerical targets—not empowerment—hold sway. Behavioral approaches associated with Theory Y appear in only a modest way in business schools’ curricula, and they’re bookended by microeconomics, accounting, finance, and operations courses—all of which rely on optimization processes. Meanwhile, corporate management-training programs that teach behavioral ideas have largely disappeared.
Finally, Theory Y approaches require a lot of leaders’ and managers’ time and energy and are squishy. In contrast, optimization approaches can be stipulated by rules, delegated, and aligned with hard priorities, like maximizing efficiency and lowering costs, that make CFOs and Wall Street happy.
A sad example of the disdain for Theory Y management that prevails in C-suites can be found in Alec MacGillis’s New Yorker story about Boeing’s restructuring and how that contributed to its travails with the 737 Max jetliner. The company’s lean-production-like program, in which engineers sought process improvements, once was a hallmark of quality and cost-effectiveness. When a top executive announced that Boeing was cutting funding for it, an engineer involved in it objected at a labor-management breakfast, pointing out how much money the program had saved. The executive responded, “The decisions I make have more influence over outcomes than all the decisions you make.”
The grand challenge for managers isn’t to choose between Theory X and Theory Y. Rather, it’s to find the mix of practices that actually, not theoretically, works. When scientific management was first introduced, it was spectacularly more effective than the chaos in manufacturing that had preceded it, and it was a key factor in helping U.S. corporations dominate global markets. Many business practices are still done poorly and could be far more effective, and even fairer, if optimized. Hiring comes to mind: At most companies, managers with little if any training in how to hire still make choices based on their gut and biases.
Incorporating optimization and employee empowerment in tandem works far better, though. One of the strengths of lean production is that it captures both by turning over the task of improving productivity and quality to frontline workers, teaching them how to design jobs better. It’s therefore dispiriting to see companies replacing that approach with software. A similar phenomenon is happening with scheduling and flextime. Workers as a group once figured out the best way to get the work done while accommodating employee needs. Now software is available that promises to “optimize” work schedules for business needs. As companies juggle staffing schedules to achieve social distancing in offices, it will be revealing to see whether they use the employee-driven approach or go with algorithms.
The biggest constraint at play seems to be the same one it has always been: the intellectual appeal of optimization and its promise of one simple, best way to manage that you can put in place and then be done with. Managers can then avoid the hard work of engaging employees in solving workplace problems and move on to the more exciting tasks of strategy. As Kurt Vonnegut put it in his novel Player Piano, “If it weren’t for the people, the god-damn people always getting tangled up in the machinery…the world would be an engineer’s paradise.” It may be easier to ignore people, but we’re still here. It matters greatly to consider our needs and interests, and effective leaders have to take that into account.
Peter Cappelli is the George W. Taylor Professor of Management at the Wharton School and a director of its Center for Human Resources. He is the author of several books, including Will College Pay Off? A Guide to the Most Important Financial Decision You’ll Ever Make (PublicAffairs, 2015).