Early pockets were bags sewn to the inside of the waistband and otherwise hanging loose. They were significantly larger than modern pockets—a rare surviving example from 1567 is a foot deep—and sometimes included drawstrings. Regardless of size, the critical change was that the pocket became part of the clothing and thus a more secure and intimate extension of the wearer. “Yoking bag to breeches in what looks like an improvisational ‘lash-up’ created a tool demonstrably more private and personal than the public-facing purse,” writes Hannah Carlson in her newly published Algonquin Books release, Pockets: An Intimate History of How We Keep Things Close.
To tell the story of pockets, Carlson, a dress historian at the Rhode Island School of Design, mines literature and art, including once-popular works that have lapsed into obscurity. She reads old newspapers, magazines, and advice books. She scours descriptions of runaway indentured or enslaved workers. She examines historical garments. Organized in thematic chapters across a loosely chronological arc, Pockets readably presents the results of Carlson’s impressive research, demonstrating the many ways in which material culture can illuminate social, economic, and political history.
Pockets changed the nature of clothes, allowing them to encompass hidden tools and meaningful treasures. “Once the wearer places something inside their pocket,” Carlson notes, “that thing disappears, enfolded and seemingly absorbed into uncertain depths.” If that thing is a handgun, the pocket’s combination of convenience and secrecy poses a threat. Western gun control was a response to pocket pistols.
Invented around the same time as the pocket, the wheel-lock pistol could be loaded in advance, and was small enough to be secreted in a pocket, leading to its slang term “pocket dag.” In 1549, an intruder broke into Edward VI’s apartments. When the eleven-year-old king’s dog started barking, the intruder—an ambitious and notorious rake named Thomas Seymour—pulled a pistol out of his pocket and shot the dog. The monarch promptly banned anyone within three miles of his residence from carrying a pocket pistol. In 1584, a Spanish sympathizer used one to kill Prince William (the Silent) of Orange, the leader of the Dutch revolt against Spanish rule. The killing is recorded as the first political assassination by handgun.
Once the layered ensemble of knee-length coat, vest, and breeches introduced in the seventeenth century made multiple pockets practical and easily accessible, they quickly proliferated. By 1726, Jonathan Swift portrayed the Lilliputians in Gulliver’s Travels discovering ten pockets in the clothing of the “man mountain”: two each in Gulliver’s coat and waistcoat and six in his breeches. Their contents include a handkerchief, a snuff box, a leather-bound journal, a comb, two pistols, silver and copper coins, a razor, a knife, a watch on a chain, and a net purse holding gold coins. Gulliver also records that he had an undetected “secret pocket,” containing spectacles, a pocket telescope, and “some other little conveniences.”
Pockets, Carlson suggests, encouraged craftsmen to “miniaturize useful instruments,” such as knives and sextants, “with the notion of portability in mind.” In The Theory of Moral Sentiments, published in 1759, Adam Smith suggested the opposite causality. He singled out the era’s popular small gadgets, which included watches, nutmeg grinders, and “tweezer cases,” or etuis, as “trinkets of frivolous utility” whose usefulness was less important than their ingenuity. Their owners, he wrote, “contrive new pockets, unknown in the clothes of other people, in order to carry a greater number.”
For all the appeal of pocket gadgets, however, the most frequent contents of pockets may well be the owner’s hands. Carlson devotes an enjoyable chapter to the centuries-long debates over the propriety of sticking one’s hands in one’s pockets. “Anxious mothers and schoolmasters,” she reports, “stitched boys’ trouser pockets closed to prevent them from ‘the trick of putting their hands into them’ through the end of the nineteenth century.” Depending on posture and circumstances, the gesture could be sexually provocative or fashionably nonchalant, menacing or friendly.
Carlson presents contrasting portraits of Walt Whitman, dressed in a loose open shirt and canvas trousers, and W.E.B. Du Bois wearing a top hat, starched collar, waistcoat, and frock coat. Whitman’s clothes are timeless work garb, Du Bois’s the height of 1900 fashion. With his right hand propped at his waist, Whitman has his left hand in his pocket. Du Bois puts both hands in his pockets, sweeping his coat behind him. In each case, the resulting pose is self-assured, arresting, and quintessentially American.
It’s also implicitly masculine. Du Bois’s pose mirrors another proud image in the book—an 1860 magazine illustration titled A Boy’s First Trousers. Pockets and their contents defined boys as explorers, collectors, and tool wielders—as “real boys” and future men.
“Our fondness for pockets calls for a revision of what it means to be dressed, to acknowledge that we’ve achieved our sense of self-sufficiency with an array of concealed compartments,” Carlson writes. Pockets foster a sense of personal independence, allowing the owner to march into the world unencumbered yet prepared. So, Carlson argues, the common absence of pockets in women’s garments rankles.
Women used to have their own pockets, tied around the waist and layered where needed—outside the skirt when taking money at a market, above the petticoat for ordinary use. Like men’s original sewn-in pockets, tie-on pockets were long and capacious. And, just as men’s pockets needed enormous breeches before the three-piece suit divided up their contents, so women’s tie-on pockets required full skirts. The slim, high-waisted silhouettes of the early nineteenth century, often in the gauziest of muslin, made them impractical. To accommodate the classically inspired fashions of the day, women began carrying small bags known as reticules.
Skirts eventually widened, resurrecting pockets, only to have bustles make them difficult to place. By the twentieth century, most women relied on purses. Derived from sturdy luggage, modern handbags “came to be seen not just as fashionable accessories but as a sign of independence,” writes Carlson. A certain British prime minister famously wielded one, as did her sovereign.
In taking up the pocket gender gap, Carlson answers a question my husband once posed: Why do women carry their cell phones in their insecure back pockets? She cites evidence that the front pockets of women’s jeans are 48 percent shorter than comparable men’s pockets. “Such findings support the contention that midrange fashion is at times driven entirely by aesthetics rather than consideration of wearers’ needs,” she concludes. Women often put their phones in their back pockets because they don’t fit in the front.
The closer she gets to the present, the more Carlson emphasizes runway fashion, where pockets make decorative or social statements rather than seeking to unobtrusively carry stuff. (The book’s cover features a woman with an enormous, utterly impractical red breast pocket on her slim gray 1941 suit.) A call for pocket equality begs for specifics about the design challenges of everyday garments. Are pockets more difficult to fit to women’s bodies? What do designers in the outdoor apparel industry do about women’s pockets? How has the spread of casual, sports-derived clothing changed the pocket calculation? Carlson betrays her academic orientation by focusing on Miuccia Prada and Marine Serre rather than Champion and Columbia Sportswear.
Reading Pockets sent me back to a faded photo showing two smiling blonde girls in matching red plaid dresses. We were best friends, dressed for our first day of first grade in dresses our mothers sewed and hand-smocked for us. Marjory’s dress hangs with graceful symmetry while mine is pulled awkwardly to the right. With a thrust of my hand, I’m demonstrating the subtle difference in the otherwise identical outfits. Mine has a pocket.
For its first two-thirds, Walter Isaacson’s mammoth biography of Elon Musk is an epic romance, like The Lord of the Rings (a Musk favorite) or the Arthurian legends. It portrays the hero and his comrades overcoming seemingly insurmountable obstacles through daring, determination, cleverness, and skill, all in the pursuit of noble goals.
The critical moment in that tale comes in 2008, which Musk described to Isaacson as "the most painful year of my life." His marriage broke up. One after another, the first three SpaceX rockets exploded before reaching orbit. The first Tesla Roadsters came off the line, but only with hand fitting at an exorbitant and unsustainable cost. He ran out of money. His audacious ventures appeared doomed. Everyone told Musk that his best chance was to try to save one company and let the other go out of business. But he refused to choose between Tesla and SpaceX.
"For me emotionally, this was like, you got two kids and you’re running out of food," he told Isaacson. "You can give half to each kid, in which case they might both die, or give all the food to one kid and increase the chance that at least one kid survives. I couldn’t bring myself to decide that one was going to die, so I decided I had to give my all to save both." (Lest you think the analogy callous, consider that Musk had seen his first child die in infancy. "He cried like a wolf," his mother told Isaacson. "Like a wolf.")
SpaceX was saved by a $20 million venture capital infusion from an unlikely source: the PayPal founders who’d kicked Musk out as CEO eight years earlier. "It was an interesting exercise in karma," he told Isaacson, a reward for not holding a grudge. The infusion would allow SpaceX to try one last launch before running out of cash.
First, however, the last remaining rocket had to get from the Los Angeles factory to the launch site in the South Pacific, leading to one of the wilder—though not the wildest—stories in the book. To save weeks in shipping, Musk agreed to charter an Air Force C-17 transport plane. Twenty employees rode along in the hold. It’s a good thing they were there.
As the plane descended to refuel in Hawaii, the pressure outside the rocket exceeded that inside. The precious cargo began to collapse. One employee dashed to ask the pilot to halt the descent. Others attacked the rocket’s wrapping with pocket knives, rushing to open the valves before the plane had to resume its descent or run out of fuel. They saved the rocket, but it suffered a dented side and a broken interior part.
Musk told them to fix it at the launch site, deploying his personal jet to bring the replacement parts and launch director Tim Buzza. On site, Buzza estimated it would take five weeks to repair the rocket if they followed the procedures they’d adopted to reduce risk after the first three disasters. Abandoning those checks would take the time down to five days. "Go as fast as you can," Musk said. Working frantically, they hit the five-day estimate. "It was unlike anything that the bloated companies in the aerospace industry could possibly have imagined," Buzza told Isaacson. Musk’s ruthless, risk-taking approach to getting things done triumphed—for neither the first time nor the last.
The all-or-nothing launch went perfectly. "Falcon 1 had made history as the first privately built rocket to launch from the ground and reach orbit," Isaacson writes. "Musk and his small crew of just 500 employees (Boeing’s comparable division had 50,000) had designed the system from the ground up and done all the construction on its own." In late December, Musk got news that SpaceX would be awarded a $1.6 billion NASA contract to make 12 round trips to the space station. Unlike NASA’s traditional contractors, SpaceX "would get paid only if and when they succeeded. There were no subsidies or cost-plus contracts." The company was saved and a new era in American space exploration began.
Meanwhile, Tesla’s perils continued. The company was bleeding cash, paying bills with the deposits customers had put down on future Roadsters. Neither the company nor Musk had enough money to make the year-end payroll. On Christmas Eve, existing investors agreed to fund a new equity round of $20 million, enough to keep the company going for a few more months. "Musk broke down in tears," Isaacson writes. "‘Had it gone the other way, Tesla would have been dead,’ he says, ‘and maybe too the dream of electric cars for many years.’"
In January, Musk dazzled executives from Daimler by demonstrating an electric model of the German company’s Smart car—a Mexican model with its gasoline engine replaced with a Roadster motor and battery pack. A few months later Daimler agreed to buy about 9 percent of Tesla for $50 million. "If Daimler had not invested in Tesla at that time, we would have died," Musk told Isaacson. More substantial, if less critical, was a $465 million loan from the Department of Energy, whose first check arrived in early 2010. Tesla repaid the loan and interest in 2013.
Its finances secured, Tesla faced its next survival test in July 2017, with the introduction of its relatively affordable sedan, the Model 3. Musk calculated that, to become a sustainable enterprise, Tesla needed to produce 5,000 Model 3s a week. That meant its battery plant first had to reach the same goal. The man who designed the battery production line told Musk the target was insane. The best they could do would be 1,800. "If you’re right, Tesla’s dead," said Musk. "We either have 5,000 cars a week or we can’t cover our costs." In a characteristic Musk move, he replaced the skeptic with a more gung-ho executive, brought in his most trusted lieutenants, and put himself in charge of production, first at the battery plant, then at the car factory.
Thus began "production hell," a period of intense, round-the-clock work that Musk seems to crave. On just a few hours of sleep a night, often on the factory floor, he and his hardcore associates critiqued each and every production step, looking for ways to speed up the line. Out went plastic caps that were added to battery prongs in one factory only to be removed in the other. Out went patches in the small holes that let paint drain. Out went fiberglass strips between the battery pack and the floor panel. Out went sensors deemed less than critical. Out went robots that were slower than humans. Musk admitted he’d automated too much.
By May 2018, the plant was up to 3,500 cars a week—a huge gain but well short of the goal. The only way to get to 5,000 was to add capacity. But Tesla had neither the time nor the planning permission to build more factory space. Borrowing a technique Musk remembered from World War II plane production and exploiting a zoning provision allowing a "temporary vehicle repair facility," workers cleared an old parking lot behind the plant and set up an enormous tent. A thousand feet long and 150 feet wide, it housed a new assembly line. From idea to execution took just three weeks. A little before 2 a.m. on July 1, the week’s 5,000th Model 3 left the factory. "If conventional thinking makes your mission impossible," Musk told a visiting reporter, "then unconventional thinking is necessary." Through relentless effort and creative problem solving, Musk and his team had again defied the odds.
Within days, however, Musk’s brother Kimbal found his honeymoon interrupted by an urgent email. "You have to come back right away," it said. "Elon is having a meltdown." A decade after his "most painful year," Musk experienced what he called his most agonizing one. In 2018, the agony was self-inflicted.
"If Musk had been the type of person who could pause and savor success, he would have noticed that he had just brought the world into the era of electric vehicles, commercial space flight, and reusable rockets," writes Isaacson. "Each was a big deal. But for Musk, good times are unsettling." That’s an understatement. Musk craves stress and drama. He can’t cope with success. If he doesn’t face enough genuine obstacles, he impulsively acts to create trouble for himself.
In 2018, Musk baselessly called a cave explorer in Thailand "pedo guy," driving down Tesla’s stock price and landing himself in court. He declared he was going to take Tesla private and claimed, inaccurately, that he had secured funding. The Securities and Exchange Commission charged him with fraud. He gave a New York Times business reporter a long, emotional interview, fueling concerns about his health. To reassure shareholders, he then went on Joe Rogan’s podcast, where he rambled for two-and-a-half hours and lit up a cigar-style joint of marijuana and tobacco. More bad press and falling stock prices followed. Top Tesla executives, including cofounder JB Straubel, began to flee.
Musk’s sanity, and with it his companies’ fortunes, was eventually saved by two new problems: designing what became the futuristic-looking Cybertruck and radically simplifying the satellites that would provide Starlink’s internet service. That the truck was years away from production only made the problem more therapeutic. Nothing calms Musk’s mind more than contemplating the future. At SpaceX every week included a meeting devoted to planning life on Mars. "When Musk gets stressed, he often retreats into the future," Isaacson writes. But the turmoil of 2018 foreshadowed what was to come.
In its final third, the story shifts from epic romance to tragedy. The once-triumphant hero faces a downfall precipitated by a fatal flaw—in this case, a form of hubris all too common among brilliant technologists, especially when they’ve gotten rich. Musk confuses intelligence with knowledge and engineering with, well, pretty much everything. He assumed that his gifts for understanding materials and reworking manufacturing processes qualified him to run an influential but stubbornly unprofitable media company.
With $10 billion burning a hole in his pocket and increasing concerns about the "woke mind virus," he started buying shares in Twitter. He amassed a 9 percent stake, joined the board, then just a day later declared his intention to buy the company. Friends and family warned that Twitter would be a dangerous distraction from his other work. But Musk persisted. How hard could it be? "I don’t think from a cognitive standpoint it’s nearly as hard as SpaceX or Tesla," he told Isaacson. "It’s not like getting to Mars. It’s not as hard as changing the entire industrial base of Earth to sustainable energy." Musk’s grandiose dreams and hard-won successes made him underestimate—and misunderstand—the challenge. Cognitive power wasn’t enough.
One of Musk’s maxims is, "The only rules are the ones dictated by the laws of physics. Everything else is a recommendation." As long as he is dealing with materials and manufacturing, it’s a useful heuristic. But it doesn’t apply to social relations, which means it didn’t apply to Twitter. If you pay way too much, don’t understand advertising, haven’t considered the difficult tradeoffs between freewheeling speech and a platform people want to use, don’t appreciate why people value features like short posts or blue checks, and generally have no clue about human interactions, you might as well be trying to go faster than the speed of light.
Musk was correct that Twitter was dramatically overstaffed. He was correct that it had lost the trust of many people on the right. But he had no idea how to make it work. "Twitter is a tech company, a programming company," Musk told his friend Ari Emanuel, rejecting an offer to have Emanuel’s agency run the place. He was wrong. Good software may be necessary to a successful media platform, but it is neither central nor sufficient. Musk squandered billions on Twitter, renamed X, without improving its credibility or its financial prospects.
Musk’s story isn’t over, of course. In the long run, the Twitter fiasco may prove a mere detour—an expensive learning experience rather than a tragic fall. Information from its feeds may, as he hopes, combine with data from Tesla’s cameras to fuel valuable new forms of artificial intelligence. Or its mercurial owner may decide that, like SolarCity’s rooftop panels, the platform doesn’t interest him after all.
Isaacson ends his book with the test launch of Starship, a huge reusable rocket designed to get 100 passengers to Mars. Musk envisions a fleet of a thousand. "It’s worth keeping in mind as you go through all the tribulations," he told the engineers before liftoff in April, "that the thing you are working on is the coolest fucking thing on Earth. By a lot. What’s the second coolest? This is far cooler than whatever is second coolest." (Certainly not Twitter.) As expected, the rocket exploded before reaching orbit. It was intended as the first of many trials to come—a dramatic but useful failure on the way to eventual success, a symbol of the Elon Musk way.
Isaacson is ambivalent about Musk’s personality, as any honest observer would be. He acknowledges the difficult truth that his protagonist’s achievements and drive are inextricable from his dark side. "Sometimes great innovators are risk-seeking man-children who resist potty training," he concludes. "They can be reckless, cringeworthy, sometimes even toxic. They can also be crazy. Crazy enough to think they can change the world."
That’s as close as he gets to stating the book’s implicit argument: that by tolerating a man who disregards social norms of empathy and balance, allowing him to take enormous risks and reap concordant rewards, we make the world a better place. We might not want to work for Elon Musk, marry him, or be him. But if we’re wise, neither will we try to eliminate opportunities for people like him.
September 25 marks the 250th anniversary of the birth of the most important scientist you’ve never heard of. His name was Agostino Bassi, and he was the first person to identify the specific microorganism that caused a contagious disease—the first to prove the germ theory of disease. How he did it is a remarkable story of scientific passion and persistence. It deserves to be more widely known.
Bassi wasn’t meant to be a scientist. He was born into a well-to-do farming family in a small village in Lombardy in northern Italy. Following his father’s wishes, he studied law at the University of Pavia. But his first love was science. During his university years, he supplemented his official studies by informally taking courses in science, medicine and mathematics. Among the professors whose lectures he attended was Lazzaro Spallanzani, famed for his opposition to the theory of spontaneous generation. Another, with whom Bassi became friends, was Giovanni Rasori, a supporter of the then-unpopular idea that contagious diseases were caused by microorganisms.
After receiving his law degree in 1798, Bassi settled in Lodi, a town about 20 miles southeast of Milan. Plagued by recurring bouts of an eye inflammation that made reading and writing difficult, he moved in and out of bureaucratic posts. On the side, and between positions, he used the family farm as a laboratory. Over the years, he conducted experiments and published treatises on breeding sheep, cultivating potatoes, aging cheese and making wine. His most important—and time-consuming—research was on silkworms.
Lustrous, soft and easy to dye, silk has been Europe’s favorite luxury fabric since ancient Rome, when it arrived from China. It comes from the cocoons of Bombyx mori, a moth domesticated in China thousands of years ago and unable to survive in the wild. By Bassi’s day, sericulture—the raising and harvesting of silkworms—was a major industry in Italy and France.
Sericulture is a precise and demanding process. Cultivators raise silkworms on trays protected from the weather and supply them with fresh mulberry leaves, the only food they will eat. Mulberry orchards are as essential to sericulture as the insects themselves. When the caterpillars are ready to build cocoons, cultivators provide them with sticks and monitor their pupation. Just before the moths emerge, they harvest the cocoons and heat them to kill the insects before they can break the precious silk. Each intact cocoon is a continuous filament that can be reeled off, combined with others and turned into fine thread. Each sericulture stage requires precision: just the right density of silkworms and leaves, just the right temperatures, just the right timing. Disease can devastate a harvest.
In late 1807, Bassi embarked on what turned out to be 30 years of experiments aimed at identifying and countering the cause of a mysterious ailment that was wiping out silkworms. They would stop eating, become limp and die. Their corpses would then grow stiff, brittle and coated in white. The disease was variously known as mal del segno, muscardine or, in a nod to the white powder, calco, calcino or calcinaccio. Breeders believed that it must be caused by a toxin in the insects’ environment, and Bassi set out to figure out what that was.
His first eight years of experiments proved frustrating and apparently futile. He later wrote: “I used many different methods, subjecting the insects to the cruelest treatments, employing numerous poisons—mineral, plant and animal. I tried simple substances and compounds; irritating, corrosive and caustic; acidic and alkaline; soils and metals; solids, liquids and gases—all the most harmful substances known to be fatal to animal organisms. Everything failed. There was no chemical compound or pest that would generate this terrible disease in the silkworms.”
By 1816, Bassi was deeply discouraged. He had expended enormous effort and nearly all his money on fruitless studies. He was losing his eyesight. “Oppressed by a great melancholy,” he abandoned his research. But a year later, he rallied and resolved to “defy misfortune, turning to interrogate nature in new ways with the firm resolution of never abandoning her until she responded sincerely to my questions.”
A major clue came when Bassi observed that silkworms raised in the same conditions and fed the same food but housed in adjacent rooms had different outcomes. The disease would sweep through one room while its neighbor suffered little or no damage. The difference, he concluded, was that “there was no calcino germ, or very few, in one room and large numbers in the other. The mal del segno or muscardine is never born spontaneously” in reaction to a toxin, as everyone had previously believed.
After more experiments, Bassi realized that living insects wouldn’t infect one another. Rather, the disease was carried by the corpses’ white coating. Introduced into the body of a living insect, whether caterpillar, pupa or moth, the powder would multiply inside, feeding on the insect’s body until it killed it. Only then would it spread. Bassi concluded that the invader was a fungus and the white substance its spores. It was the first experimental proof that a contagious disease spreads as microorganisms travel from an infected to an uninfected animal.
By placing a dead insect in a warm, humid environment, Bassi found he could cultivate the fungus enough to detect hints of stems with the naked eye. Under a simple microscope, he could see the curves that marked the invader as a living organism rather than a crystal.
Having determined the culprit, Bassi experimented with ways of killing the fungi without harming the silkworms, identifying several effective disinfectants. He advised sanitary measures that included treating all silkworm eggs with disinfecting solutions; boiling instruments; disinfecting trays, tables and workers’ clothing; and requiring everyone tending the silkworms to wash their hands with disinfectants.
As these hospital-style measures suggest, Bassi’s discovery was a breakthrough with implications beyond sericulture. His research anticipated the more famous work of Louis Pasteur and Robert Koch in developing the germ theory of disease. Nine years after Bassi’s death in 1856, the well-funded, publicity-savvy Pasteur turned his own attention to silkworms, conducting his first research on animals. Among the resources he had at his disposal were French translations of Bassi’s work. The provincial lawyer was a scientist ahead of his time.
Draped over a neat mound of rice, the slice of raw salmon glistens. I follow sushi chef Jun Sog’s directions and eat the nigiri in a single large bite. The salmon’s flavor is delicate, not fishy, the texture silky against the grains of the rice. Then the hidden wasabi kicks in, a sharp contrast to the mild fish. I relish the punch while stifling a cough.
Before taking this job, Chef Jun spent three years preparing 14-course offerings at a Michelin-starred San Francisco restaurant. Sophisticated diners paid a couple hundred dollars each for a chef’s choice meal, or omakase, whose inventive dishes featured fish flown in from Tokyo’s Toyosu Market. The nigiri and salmon rolls he’s making today are just as special, but their extraordinary character is harder to discern. The only hint is the shape of the salmon from which Chef Jun slices his elegant portions. It’s a fat rectangular block with rounded edges, like a Milky Way bar. Fish markets don’t sell salmon that looks like that.
We are at Wildtype, a San Francisco startup that grows sushi-grade salmon from cells. The product I’m sampling descends from cells taken from a small fish more than three years ago. “We haven’t had the need to go back to the animal since that time,” says co-founder Aryé Elfenbein, a cardiologist who earned a Ph.D. by researching how blood vessels form.
Wildtype scientists coaxed the original fish cells into becoming what are known as induced pluripotent stem cells. Like early embryonic cells, these stem cells can grow into any type of tissue, depending on the cues they get from the environment. Using the right nutrient mix and a mesh-like scaffold, Wildtype gets them to become muscle, including the connective tissue that forms salmon’s distinctive white lines. The resulting salmon has no bones, no skin, no blood and guts—no waste. “We only create what we eat,” says Elfenbein.
He grew up in Australia and says his aha moment came on a trip home during his medical residency. He was distressed to see former rainforests converted to raising cattle. “That made me wonder,” he recalls, “Could we eat meat and not eat animals? Can we grow the same thing, just outside of the animal?”
Founded in 2016, Wildtype is one of a host of new companies turning to cutting-edge biological techniques, known collectively as synthetic biology (or synbio), in search of more environmentally friendly, less ethically fraught materials. Some offer alternatives to existing products, such as the popular vegan burgers Impossible Foods introduced in 2016. They get their beefy flavor from heme, the iron-rich molecule in blood. Others, like Wildtype’s salmon or Huue’s indigo dye, provide duplicates of existing substances, created in new ways.
Synthetic biology is a process, not a product. Unlike corn genetically modified to grow faster or repel insects, synbio goods carry no DNA tweaks into the final product. Impossible Foods gets its heme by giving yeast a soybean gene that makes it produce a heme-rich molecule. It grows the yeast in fermentation vats and separates out the heme.
“What we’re talking about here is a revolution fundamentally changing the way that materials are made,” says Michelle Zhu, the chief executive and co-founder of Huue. She envisions a “future where we eliminate reliance on petroleum and fossil fuels and polluting production processes, instead being able to work in harmony with nature to create nontoxic colors, and other kinds of nontoxic materials.”
Synbio executives talk like nature lovers and environmental activists. “We are a company that makes meat from plants to turn back the clock on climate change and restore biodiversity,” says Jessica Appelgren, vice president of marketing at Impossible Foods. Dan Widmaier, the co-founder and chief executive of Bolt Threads, says, “We see the world as a four-billion-year-running experiment of inventing materials that are perfectly sustainable and circular.” Bolt’s products include a silk protein to replace silicone elastomers in cosmetics and a leather alternative made from mycelium, the tissue forming the roots of mushrooms.
Someday soon, goes the new biological vision, we’ll wear jeans dyed with indigo made using bacteria and walk on flooring formed from mycelium. We’ll dine on cruelty-free beef grown from cow cells and eat ice cream whose flavors and milk proteins were excreted by microorganisms. Corn farmers will replace synthetic fertilizers with soil microbes engineered to convert nitrogen from the air. Instead of animal hides, leather will come from cell cultures—animal cells for traditionalists, mycelium for vegans. Chemical companies will abandon petroleum feedstocks for corn syrup and customized enzymes.
And that’s just the beginning. Who knows what unknown flavors, fibers, or construction materials the new biology might yield? Given a few decades, its enthusiasts imagine, substances grown with biology will be as much a part of our everyday lives as petroleum-derived products are now. Pastureland will return to forest, wild salmon will again swarm the streams, and carbon emissions will fall. The world will enjoy ecologically benign abundance.
“We have spent the last century looking at what can we do with chemistry. And at this point, we’re kind of tapped out in what we can do with chemistry,” says Ena Cratsenburg, the chief business officer at Ginkgo Bioworks Inc., an industry pioneer. People still want the chemical products that improve human life, but without the environmental costs. “We think there’s a better way to do it,” she says. “Biology is a better way.”
That approach represents a significant cultural shift.
Since the first Earth Day in 1970, businesses large and small have grown from the conviction that “natural” foods, fibers, cosmetics, and other products are better for people and the planet. It’s an attitude that harkens back to the 18th- and 19th-century Romantics, who rejected industrialism in favor of sublime landscapes and rural nostalgia: What’s given is good; what’s made is suspicious, especially if it’s mass-produced or of recent origin. The natural is safe and pure, authentic and virtuous. The artificial is tainted and deceptive, a dangerous fake.
That view is still culturally potent, with its own intellectual ecosystem of publications and advocacy groups. They want nothing to do with the new biology, however fired with environmental zeal its advocates may be. “Cell-cultured meats are imitation foods synthesized from animal cells, not meat or poultry that consumers know,” says Jaydee Hanson, the policy director for the Center for Food Safety. The activist group is lobbying the U.S. government to require that lab-grown meat carry off-putting labels like “synthetic protein product made from beef cells.”
“If you are eating ‘animal-free’ dairy or meat products that taste nearly identical to a traditional animal product, you should be asking plenty of questions,” warns organic-food guru Max Goldberg in an essay. “And more often than not, what you will discover is that these foods are anything but ‘natural.’”
He has a point. Ginkgo’s Cratsenburg, who has been in the industry since 2006, defines synthetic biology as “a form of science that takes the engineering principles that one would apply to other engineering disciplines and applies them to biology.” Engineering identifies regularities, establishes repeatable processes, and makes outcomes predictable. Nature, by contrast, is out of control and indifferent to human purposes. Engineering bends nature to human ends. It is a science of the artificial.
Take Brave Robot ice cream from Perfect Day, founded in 2014 by two self-described “struggling new vegans.” Goldberg uses a photo of its booth at a natural foods trade show to illustrate his anti-synbio article. He sees the booth as a misleading abomination. The ice cream is an animal-free dairy product—something that does not exist in nature. (Neither, of course, does ice cream itself.) Perfect Day genetically tweaks microflora so they turn out whey protein. It’s the same substance in cow’s milk but without milk’s other ingredients, such as lactose or animal fats. For its ice cream or cream cheese, Perfect Day adds in plant oils. Voilà: animal-free dairy.
Reviewers and my own taste tests confirm that Brave Robot’s ice cream is indistinguishable from the traditional sort. The Perfect Day customer, says company spokeswoman Anne Gerow, is “anyone who loves to eat but really cares. They care about animal cruelty or they care about the future of our planet.” If artificial methods make their goals easier and more delightful to achieve, so much the better. The new biology enables ethical living without sacrifice. Bring on the animal-free mint chocolate chip!
Purists aren’t convinced. One advocate of “clean eating” relentlessly posts links to Goldberg’s warning on the reviews on Brave Robot’s Facebook page. To her, clean eating means eschewing artificial ingredients. Animal-free dairy products are clearly taboo. Like the ancient prohibitions of kashrut, this concept of “clean” draws tribal boundaries, affirms identity, and makes food meaningful. The impurities it shuns are as much spiritual as physical. But while this notion of cleanliness is powerful to adherents, its appeal is limited.
The new biologists counter with their own purity claims. “This is the cleanest salmon you will ever have in your life,” boasts Elfenbein. It contains nothing but fish: no parasites, no mercury, no microplastics. Wildtype can tell the exact amount of omega-3 fatty acids in each portion.
Elfenbein bristles when reminded that the salmon’s purity comes from its artificial nature. He’d rather talk about transparency, a word with nicer connotations, and envisions detailed labels listing everything from the salmon’s carbon footprint to the day it was made. But Wildtype knows everything about the salmon because it grew the tissue in a vat. And it’s the precisely controlled environment of the cell culture that ensures that the raw salmon is free of dangerous worms. (Wild or farmed sushi-grade fish must be frozen to kill parasites.) Nature isn’t clean.
The new biology faces a more suspicious market than the postwar America that embraced the gospel of miracle fabrics, wonder drugs, and convenience foods. That naive message produced a backlash. Our era is more like the economically and technologically tumultuous 19th century. Progress comes with obvious disruptions, giving rise to muckrakers and intellectuals eager to demonstrate its dark side.
In a more affluent world where tolerance for risks has fallen, the predictability of artifice can deliver a sense of security, just as it did around the turn of the 20th century. Americans then began to enjoy “artificial ice.” Instead of blocks cut from frozen lakes and shipped to cities or southern climes, people began to buy ice made from distilled water in factories using ammonia-based refrigeration. At first more expensive than natural ice, factory-made ice nonetheless found a market among customers anxious about impure food and water-borne disease. Both were serious problems in burgeoning industrial cities.
“The demand for artificial ice has been increased by all citizens who are careful to look after the wholesomeness of their food and the general health of their homes,” reported the Fort Wayne, Indiana, newspaper in 1900, noting that “butchers who want no impurities in their ice chests are making a great demand for artificial ice” and “a dutiful mother will have nothing but pure ice for her children.”
People didn’t buy artificial ice because they were wowed by the technology, although it did get some gee-whiz press. They bought it because they wanted to be good mothers and dependable butchers. They wanted to live in big cities without eating rotten food. They wanted to go ice skating, eat ice cream, and enjoy cold beer. Artificial ice made everyday life better. And its story made sense. People understood that ice was frozen water and that pure water made pure ice. They didn’t have to understand the stuff about condensing ammonia.
Wildtype hires sushi chefs so its fish makes sense. While it waits for regulatory approval, the company invites guests to see and taste the product the way they would in a restaurant. The familiar ritual sparks curiosity rather than fear. How long does it take to grow, people want to know, and where do the white stripes come from? Could you make the flavor more intense? Once the product is on the market, Wildtype hopes restaurants can tell its story. Most people don’t, after all, make their own sushi.
Over time, growing meat or silk or leather in a vat could make the “natural” alternatives seem aesthetically and morally repugnant. Eating pond ice sounds repulsive nowadays. Who knows what might be in it? And, as uncomfortable as the thought may be, economics and technology can transform ethical expectations and practices. Infanticide dwindled in Europe as condoms spread and living standards rose. The lower the cost of virtue, the more willing people are to embrace it. Most contemporary diners don’t want to give up meat but also don’t want to see exactly where it comes from. By offering kinder alternatives that don’t sacrifice taste or tradition, synthetic biology can change mores.
Ideals and stories also matter. By making muscle power less essential, steam engines probably helped along the abolition of slavery. But novels, slave narratives, and Christian lessons of common humanity were essential. For a half century we’ve been telling ourselves a story about technology as a fall from grace, about artifice as the source of human suffering and environmental ruin—even as we consumed more and more of its products. The idealistic scientists and entrepreneurs building the new biology tell a different story, a story of life and renewal. If we cherish nature, they suggest, we’ll embrace artifice. In this story, synthetic biology offers a kinder, safer, more planet-friendly way forward.
Pocket gadgets were all the rage in Adam Smith's day. Their popularity inspired one of the most paradoxical, charming, and insightful passages in his work.
The best known are watches. A pocket timepiece was an 18th-century man's must-have fashion accessory, its presence indicated by a ribbon or bright steel chain hanging from the owner's waist, bedecked with seals and a watch key. Contemporary art depicts not just affluent people but sailors and farm workers sporting watch chains. One sailor even wears two. "It had been the pride of my life, ever since pride commenced, to wear a watch," wrote a journeyman stocking maker about acquiring his first in 1747.
Laborers could buy watches secondhand and pawn them when they needed cash. A favorite target for pickpockets, "watches were consistently the most valuable item of apparel stolen from working men in the eighteenth century," writes historian John Styles, who analyzed records from several English jurisdictions.
But timepieces were hardly the only gizmos stuffing 18th-century pockets, especially among the well-to-do. At a coffeehouse, a gentleman might pull out a silver nutmeg grater to add spice to his drink or a pocket globe to make a geographical point. The scientifically inclined might carry a simple microscope, known as a flea glass, to examine flowers and insects while strolling through gardens or fields. He could gaze through a pocket telescope and then, with a few twists, convert it into a mini-microscope. He could improve his observations with a pocket tripod or camera obscura and could pencil notes in a pocket diary or on an erasable sheet of ivory. (Not content with a single sheet, Thomas Jefferson carried ivory pocket notebooks.)
The coolest of all pocket gadgets were what antiquarians call etuis and Smith referred to as "tweezer cases." A typical 18th-century etui looks like a slightly oversized cigarette lighter covered in shagreen, a textured rawhide made from shark or ray skin. The lid opens up to reveal an assortment of miniature tools, each fitting into an appropriately shaped slot. Today's crossword puzzle clues often describe etuis as sewing or needle cases, but that was only one of many varieties. An etui might contain drawing instruments—a compass, ruler, pencil, and set of pen nibs. It could hold surgeon's tools or tiny perfume bottles. Many offered a tool set handy for travelers: a tiny knife, two-pronged fork, and snuff spoon; scissors, tweezers, a razor, and an earwax scraper; a pencil holder and pen nib; perhaps a ruler or bodkin. The cap of a cylindrical etui might separate into a spyglass.
All these "toys," as they were called, kept early manufacturers busy, especially in the British metal-working capital of Birmingham. A 1767 directory listed some 100 Birmingham toy makers, producing everything from buttons and buckles to tweezers and toothpick cases. "For Cheapness, Beauty and Elegance no Place in the world can vie with them," the directory declared. Like Smith's famous pin factory, these preindustrial plants depended on hand tools and the division of labor, not automated machinery.
Ingenious and ostensibly useful, pocket gadgets and other toys epitomized a new culture of consumption that also included tea, tobacco, gin, and printed cotton fabrics. These items were neither the traditional indulgences of the rich nor the necessities of life. Few people needed a pocket watch, let alone a flea glass or an etui. But these gadgets were fashionable, and they tempted buyers from a wide range of incomes.
A fool "cannot withstand the charms of a toyshop; snuff-boxes, watches, heads of canes, etc., are his destruction," the Earl of Chesterfield warned his son in a 1749 letter. He returned to the subject the following year. "There is another sort of expense that I will not allow, only because it is a silly one," he wrote. "I mean the fooling away your money in baubles at toy shops. Have one handsome snuff-box (if you take snuff), and one handsome sword; but then no more pretty and very useless things." A fortune, Chesterfield cautioned, could quickly disappear through impulse purchases.
In The Theory of Moral Sentiments, first published in 1759, Smith examined what made these objects so enticing. Pocket gadgets claimed to have practical functions, but these "trinkets of frivolous utility" struck Smith as more trouble than they were worth. He deemed their appeal less practical than aesthetic and imaginative.
"What pleases these lovers of toys is not so much the utility," Smith wrote, "as the aptness of the machines which are fitted to promote it. All their pockets are stuffed with little conveniences. They contrive new pockets, unknown in the clothes of other people, in order to carry a greater number." Toys embodied aptness, "the beauty of order, of art and contrivance." They were ingenious and precise. They were cool. And they weren't the only objects of desire with these qualities.
The same pattern applied, Smith argued, to the idea of wealth. He portrayed the ambitious son of a poor man, who imagines that servants, coaches, and a large mansion would make his life run smoothly. Pursuing a glamorous vision of wealth and convenience, he experiences anxiety, hardship, and fatigue. Finally, in old age, "he begins at last to find that wealth and greatness are mere trinkets of frivolous utility, no more adapted for procuring ease of body or tranquillity of mind than the tweezer-cases of the lover of toys."
Yet Smith didn't condemn the aspiring poor man or deride the lover of toys. He depicted them with sympathetic bemusement, recognizing their foibles as both common and paradoxically productive. We evaluate such desires as irrational only when we're sick or depressed, he suggested. In a good mood, we care less about the practical costs and benefits than about the joys provided by "the order, the regular and harmonious movement of the system…. The pleasures of wealth and greatness, when considered in this complex view, strike the imagination as something grand and beautiful and noble, of which the attainment is well worth all the toil and anxiety."
Besides, Smith suggested, pursuing the false promise of tranquility and convenience had social benefits. It was nothing less than the source of civilization itself: "It is this which first prompted them to cultivate the ground, to build houses, to found cities and commonwealths, and to invent and improve all the sciences and arts, which ennoble and embellish human life; which have entirely changed the whole face of the globe, have turned the rude forests of nature into agreeable and fertile plains, and made the trackless and barren ocean a new fund of subsistence, and the great high road of communication to the different nations of the earth."
Then Smith gave his analysis a twist. The same aesthetic impulse that draws people to ingenious trinkets and leads them to pursue wealth and greatness, he argued, also inspires projects for public improvements, from roads and canals to constitutional reforms. However worthwhile one's preferred policies might be for public welfare, their benefits—like those of a pocket globe—are secondary to the beauty of the system.
"The perfection of police, the extension of trade and manufactures, are noble and magnificent objects," he wrote. "The contemplation of them pleases us, and we are interested in whatever can tend to advance them. They make part of the great system of government, and the wheels of the political machine seem to move with more harmony and ease by means of them. We take pleasure in beholding the perfection of so beautiful and grand a system, and we are uneasy till we remove any obstruction that can in the least disturb or encumber the regularity of its motions." Only the least self-aware policy wonk can fail to see the truth in Smith's claim.
Here, however, the separation of means and end can be more serious than in the case of a trinket of frivolous utility. Buying a gadget you don't need because you like the way it works doesn't hurt anyone but you. Enacting policies because they sound cool can hurt the public they're supposed to benefit. "All constitutions of government," Smith reminded readers, "are valued only in proportion as they tend to promote the happiness of those who live under them. This is their sole use and end." Elsewhere in The Theory of Moral Sentiments, Smith criticized the "man of system" who imposed his ideal order, heedless of the wishes of those he governed.
Smith appreciated the beauty and allure of systems. His life's work was to formulate systematic understandings of social orders, and he concluded his discussion on an optimistic note. Depicting an appealingly intricate system, he suggested, could enlist those otherwise uninterested in beneficial reforms. Seduce them with organization charts!
"You will be more likely to persuade," Smith wrote, "if you describe the great system of public police which procures these advantages, if you explain the connexions and dependencies of its several parts, their mutual subordination to one another, and their general subserviency to the happiness of the society; if you show how this system might be introduced into his own country, what it is that hinders it from taking place there at present, how those obstructions might be removed, and all the several wheels of the machine of government be made to move with more harmony and smoothness, without grating upon one another, or mutually retarding one another's motions."
It's hard not to see this argument as the inspiration for The Wealth of Nations. If Smith could make an economy of free exchange seem as cool as the latest intricate gadget, he might just be able to sell it.
On the first Monday in May 2015, Rihanna arrived at the annual gala hosted by the Metropolitan Museum of Art’s Costume Institute in an outfit that created a global sensation. Even for the Met ball, it was extraordinary: a yellow silk cape embroidered in silver and gold, with a lavish collar of fox fur dyed to match. The train stretched sixteen feet and required three attendants to arrange it for photos. Dubbed the “Yellow Empress” by its designer Guo Pei and “the omelette dress” by internet memes, the cape weighed fifty-five pounds. Unlike the model who had shakily walked Guo’s creation down a Chinese runway, Rihanna pulled it off with apparent ease. “Only women who have the confidence of a queen could wear it,” says the designer.
The image of Rihanna in Guo’s creation now seems like a relic from the twilight of a lost time: a sloe-eyed beauty with café au lait skin wearing robes in the color once reserved for Chinese emperors—robes designed by a modern woman steeped in Chinese culture and eager to learn from the world. The increasing estrangement between China and the West, the turning inward of countries including the United States, and tribalist taboos against appropriation make such moments rarer now. But for a time, cultural synthesis seemed like the future. Walls had tumbled and empires receded. Global communication, travel, and exchange were merging the world’s most fertile minds into the first world civilization. High and low, art and science, culture and commerce, East and West—all, it seemed, could flourish in a common human enterprise. Anyone’s heritage could be a font of creativity, every culture honored and shared. No one had a monopoly on excellence or its benefits.
That was the world that gave us Guo Pei, China’s most acclaimed couturier.
Guo developed her art without formal training beyond the basics of making everyday apparel. She studied high-end design from books and later from museums. She thus avoided the ideological assumptions that Western students pick up in design schools. She also came of age when China was opening to the world. With the backing of her husband and business partner Cao Bao Jie, a Taiwanese textile importer who goes by Jack, she built her business before the political tightening under Xi Jinping, affording her artistic freedom. She was free to pursue her personal ideals of beauty and strength, constrained only by the need to find paying customers.
The cultural confidence of Guo’s outsider perspective represents something greater than a single woman’s work. She forces Western audiences to reconsider supposedly enlightened assumptions from the outside. Her art challenges three convictions that increasingly shape—and cripple—fashion, art, and culture in the West.
The first is that comfort, whether physical or psychological, is paramount. As Rihanna’s cape demonstrates, Guo rejects this idea. “I use the weight of the clothes, the height of the shoes, and the unwieldiness of the dress to represent the inner strength and confidence of a woman,” she says. Her runway clothes aren’t designed for daily wear, of course, but their spirit is. In Guo’s art, the most powerful individuals are those who meet challenges with apparent ease. They don’t cower, wobble, or complain. Her opulent luxury is anti-decadent.
The second notion is that adopting motifs from other cultures, particularly religious symbolism taken out of context, constitutes an insult or an outright crime. Among Western gatekeepers, “cultural appropriation” is a sure ticket to controversy and possible cancellation. By looking at Western motifs through foreign eyes, Guo shows how essential such appropriation is to artistic advancement and demonstrates the way an artist can honor the beauty of cultural artifacts without adopting their traditional meanings.
Finally, there is the belief that the crimes of the past negate its accomplishments. To overcome historical evils, zealots condemn the perpetrators’ achievements along with their sins. Guo grew up in a China that had taken this path, and she has emphatically turned against it, valuing excellence from imperial culture without wanting to return to it.
Recently, two California museums have given the public a closer look at Guo’s work. Since November, the Bowers Museum of cultural arts in Orange County has featured about forty works, including bridal ensembles and the blood-red gown worn by then-eighty-five-year-old model Carmen Dell’Orefice at the climax of Guo’s second haute couture show in Paris.
Last year, the Legion of Honor in San Francisco hosted a once-in-a-lifetime extravaganza of Guo’s collections. Like the Met’s blockbuster show of Alexander McQueen’s work in 2011, the San Francisco exhibition helped lay to rest the question of whether fashion belongs in art museums. It included works that were both exquisitely crafted and, especially in recent collections, conceptually intriguing. They were unquestionably art.
Supplementing spaces dedicated to the couturier, curators brought some of Guo’s creations into the museum’s permanent galleries, where they interacted with the surrounding artworks. Miniskirts with vertical pleats resembling book pages flanked the Rubens portraits of an Antwerp silk merchant and his wife wearing similarly pleated ruffs. Deep-blue dresses with elaborate gold embroidery posed among the gilded, blue-brocaded chairs of the neoclassical Salon Doré. A white dress with a three-dimensional surface of embroidery and synthetic gems stood beneath a 15th-century carved and gilded ceiling, its mandorla-like hood recalling devotional portraits of the Virgin Mary.
At the center of another gallery shone Guo’s masterwork, Da Jin (Magnificent Gold), a gold dress that took fifty thousand hours to make. Its bell skirt is constructed of twenty-four vertical panels embroidered with lotus pods and trailing plants, which are Chinese symbols of everlasting exuberance and purity of mind. With its golden color and a panel for each hour of the day, says Guo, “Da Jin represents the sun.” It also symbolizes the rebirth of Chinese culture after Mao’s campaign to obliterate it. “I see this as a kind of reincarnation,” she says, “like the sun rising out of the darkness.”
Born in 1967, Guo grew up in a land where fashion was forbidden. The people’s clothes came in a few politically approved styles and three colors: blue drab, gray drab, and olive drab. Adornment invited persecution. To stay safe from the Cultural Revolution’s Red Guards, Guo’s grandmother threw her jewelry into the river and burned her precious dresses. They represented the ostracized Four Olds: “old ideas, old customs, old habits, old culture.” All she kept was a string of silk flowers she’d worn at her wedding. She couldn’t bear to part with them.
“When I was little, I didn’t know what fashion was,” says Guo. “The word didn’t exist.” At night, her grandmother told her tales of life in the waning days of the Qing dynasty, describing the glorious garments she’d seen and worn. As the little girl drifted off to sleep, she imagined dresses made of the smoothest silk satin, embroidered with brilliantly colored butterflies and flowers. “Although I’d never seen these clothes,” she recalls, “they were, in my mind, the most beautiful things that had ever existed.”
By Guo’s teenage years, China was liberalizing under Deng Xiaoping. At fifteen, Guo entered a trade school, joining the country’s first class of fashion students. After a stint at a state-owned garment maker, she spent a decade churning out thousands of designs for one of China’s new private apparel companies. In 1997, she opened her Rose Studio, selling bespoke garments to Chinese celebrities and the country’s wealthy elite. Her first couture show was in 2006.
“Fashion,” argues the French historian Gilles Lipovetsky, “attests to the human capacity to change, the ability of men and women to invent new modes of appearance. Fashion is one of the faces of modern artifice, of the effort of human beings to make themselves masters of the conditions of their own existence.” Like scientific, technological, and economic innovation, fashion is dynamic and open-ended. It, too, emerges spontaneously from imagination, experimentation, and competition.
But fashion is distinct from progress. Whether in clothing, music, art, or children’s names, fashion pursues novelty for its own sake. Its innovators challenge habituation rather than addressing discontent. The problems they solve are aesthetic and expressive. Fashion doesn’t improve. It renews.
Despite their differences, fashion and progress thrive in similar environments. We rarely find one without the other. Both do best in commercial societies where no one has a monopoly on deciding what works. Both require tolerance for unpredictability and change. Both prosper where ideas flow freely and individuals can follow their own curiosity. Both suffer when knowledge is lost or forbidden.
The violent rupture of the Cultural Revolution meant that to pursue her fashion ambitions, Guo had to become a Renaissance woman. Like a Florentine sculptor or humanist, she had to recover a lost heritage. The knowledge she sought wasn’t written down. It was the tacit understanding of skilled artisans. Although fashion itself doesn’t make progress, it depends on techniques that can improve—or be forgotten.
Guo began with embroidery, scouring Hebei, the province that rings Beijing, for living practitioners. In a rural village, she spotted an embroidered door curtain and asked the woman within whether she had done the work. Did she have more embroidery? The woman brought out pillows and shoes she’d decorated.
“But the embroidery was very rough,” Guo recalls. She was looking for finer work. She pointed to a little green leaf on a pillow. About a centimeter long, it was made up of fifteen stitches. Could the woman do the same thing but with fifty stitches? With finer thread, she said she could. Given the right supplies, she decorated a qipao with the kind of intricate embroidery Guo imagined. “It became the first dress from which I learned embroidery but also a standard I would apply in the future,” says Guo, who hired the embroiderer to teach others. She now employs a team of three hundred, most of them trained at Rose Studio.
Guo also wanted to recover the art of fashioning silk flowers like the ones her grandmother had cherished. Known as “palace flowers,” they were made from extremely fine layers of raw silk, rolled by hand into petals and colored with a smudging technique. Guo sought out artisans who might know anyone who could still create them. “No one makes them anymore,” she was told. “It’s a lost art.” Those who might remember it were too old to teach it.
But a man who’d once run a silk flower factory told her where she might be able to find a forgotten inventory. Following his instructions, she located a tiny warehouse stacked with paper boxes. They contained thousands of flowers. Unfortunately, after decades of storage and neglect the forgotten treasures were in bad shape, flattened and deformed. Guo was crestfallen.
Don’t worry, counseled the old factory manager. Just get a pot of boiling water and steam them. “The instant the flowers came into contact with the steam,” Guo recalls, “they bloomed and came alive. At that moment, it felt as if they are living things. It is because culture is life. It is the life of mankind.” For a 2012 collection titled “Legend of the Dragon,” Guo covered a dress with the rescued blossoms. On display in San Francisco, they burst like a spring field from under a robe embroidered with rivers of silver and gold. The skill of making palace flowers has vanished, but the evidence of it endures.
Guo is both proudly Chinese and a cultural magpie. Wherever she finds beauty and inspiration, she claims it as her own. Her husband remembers once remarking that something she was working on was very Western. “She said to me, ‘No Jack, it’s not. It’s Chinese.’ Because for her, it’s not two separate worlds. It’s one world.”
For all its Chinese resonance, the skirt of Da Jin harks back to the costumes the teenage Guo loved in Gone with the Wind and Sissi. She credits Napoleon’s uniform, which she saw on an early trip to Paris, with inspiring Da Jin’s embroidery. “When I stood in front of his uniform, embroidered with metallic threads, I was especially moved,” she recalls. She didn’t think of Napoleon the way a European would, as a fraught historical figure. She saw universal humanity. “I realized that when a person was facing death, they could still dress in such a fastidious and exquisite way,” she says. “It was a kind of dignity in human life.”
In her 2020 Himalaya collection, Guo used the reverse sides of Japanese obi, the sashes used to tie kimonos. She cut up hundreds of antique obi and recombined them to create abstract patterns with the loose threads behind their embroidery. “Sewing together the various fabrics symbolizes the convergence of different civilizations and cultures,” she says, “weaving into each other like history through the centuries.” Her atelier doesn’t simply resurrect Chinese embroidery techniques. It combines them with those of Europe and India. “Rose Studio’s techniques are from all over the world,” she says.
Guo even engages in, and mostly gets away with, that most forbidden form of cultural appropriation: repurposing religious imagery. The Himalaya collection incorporates embroidered versions of the Tibetan Buddhist paintings known as thangkas. It sparked a brief social media protest but was featured in the San Francisco exhibit. Guo’s intentions were too poetic and spiritual to sustain ongoing protest. The Himalayas, she says, “are like a gloriously divine image for me. For thousands of years, the place has symbolized the road to truth, the residence of the gods, the temple of the soul. I like the sense of stillness and purity.”
Two of her Paris collections reference the arches, spires, rose windows, and crosses of European churches. In the 2018 documentary Yellow Is Forbidden, we see her asking the European printers whether anyone might be offended by fabrics with images from the frescoes of a Baroque cathedral. “Are angels okay?” she asks. “All religions are okay with angels, right?”
Guo’s church-inspired creations are more uplifting than the self-referential jeweled crosses that Christian Lacroix put on clothes in 1988, which Anna Wintour featured on her first cover of Vogue. But they’re divorced from Christian belief. “I was inspired by Western religion and churches, but I don’t really understand this religion,” she admits. “I was deeply moved by its beauty.”
Such sincere appreciation dissolves the claims that appropriation is an act of aggression. One need not understand a culture from the inside to find its artifacts valuable and moving, or to share them with the world. “Any country or culture is the wealth and treasure of mankind,” Guo maintains.
In her Beijing home, the designer collects kaleidoscopes, calling them “my happy place.” Invented by the 19th-century Scottish physicist David Brewster, kaleidoscopes generate ever-renewing beauty by reflecting and remixing colors and shapes. Guo’s work demonstrates what humanity can gain by ignoring demands for cultural purity, breaking our isolation, and seeing ourselves in others. Both art and progress depend on the freedom to experiment with new combinations. In the beauty of a kaleidoscope, no new arrangement is off-limits.
“This is the cleanest salmon you will ever have in your life,” boasted Wildtype co-founder Aryé Elfenbein. Vat-grown salmon contains no mercury, no arsenic, no microplastics, no parasites—nothing but fish cells. The precisely controlled environment of the cell culture ensures that the raw salmon is free of contaminants. Wild or farmed fish intended as sushi, by contrast, must be frozen to kill parasites. Nature isn’t clean.
Wildtype’s salmon is pure because it’s artificial, I suggested. Elfenbein bristled, preferring to emphasize transparency. The company can provide much more labeling information than you usually get on fish, he suggested, citing the amount of omega-3 fatty acids. But when you’re dealing with animals, I responded, every fish is different. To label their contents you’d have to test each individual salmon. Here, you control what’s in the fish. It’s that artifice that makes labeling possible. “So maybe there’s less variability in this process,” Elfenbein acknowledged.
That’s when Dalton Thomas, a sushi chef who works as a food service sales lead for Wildtype, brought up artificial ice. He’d heard the story in How We Got to Now, a TV series based on Steven Johnson’s book by that name. An ice baron who’d made a fortune shipping ice south from lakes and streams in New England found his business threatened when technology was developed to freeze water commercially around 1850. So he sowed fear of “artificial ice,” hurting the new technology’s prospects. I watched the episode on Amazon and, sure enough, Bernard Nagengast of the American Society of Heating, Refrigerating and Air-Conditioning Engineers told the story just as Thomas remembered it.
“They saw a threat to their business by a machine that could actually make the ice,” he said. “And of course, they were the ones who came up with the term ‘artificial ice.’ In other words, fake ice. It’s not real ice.” The ice industry even claimed that artificial ice was likely to carry disease. As Johnson writes, “It was a classic case of a dominant industry disparaging a much more powerful new technology.” Unable to raise money, inventor John Gorrie died broke, without selling any of his refrigeration machines.
Gorrie wasn’t the only one fooling around with refrigeration in the mid-19th century. “The conceptual building blocks were finally in place, and so the idea of creating artificially cold air was suddenly ‘in the air,’” writes Johnson. Boosted by the Civil War, which cut off ice shipments to the South, the refrigeration business eventually took off, despite the naysayers. That’s the story Johnson tells, anyway.
But it’s too simple. And the claim that public suspicion fed by natural-ice magnates seriously hindered artificial ice is simply wrong. When I went through 19th-century archives at Newspapers.com, I found an entirely positive attitude toward artificial ice. The problem was more mundane than bad publicity. It took decades of technical innovation to get the cost of refrigeration low enough to compete with ice cut from ponds.
In 1865, the Times Union in Brooklyn lamented the (natural) “ice monopoly” that the paper claimed was keeping prices high despite warehouses bulging with unsold inventories. Households were paying wholesale prices of $20 a ton while “market men and butchers” bought at $10 a ton. Artificial ice, although a promising concept, wasn’t yet a viable alternative. “Freezing mixtures are so expensive that no artificial ice has ever been made at a less cost than 5-1/2 cents a pound,” or $110 a ton, the paper reported. But the writer clearly deemed artificial ice a good idea: “Something of this kind, made to work with simplicity and safety, at a small cost, would be a desideratum in many parts of the United States where natural ice cannot easily be obtained.”
Three decades later, the industry was finally taking off. An 1897 article reprinted in many U.S. newspapers began by noting that “it is only within the last ten or fifteen years that the making of artificial ice has become a general industry.” A new process, it reported, was lowering the price so fast that natural ice would soon be too expensive to compete. Far from fretting, the article presented artificial ice as a human triumph: “If these figures are realized nature will be driven from the commercial ice business, and the inventive genius of man will have scored another improvement over his clumsy and antiquated ancestor.”
Natural ice wasn’t deemed authentic or superior. It was “clumsy and antiquated.” Far from an insult, calling ice “artificial” made it more desirable to consumers.
Factory-made ice found a ready market among customers anxious about water-borne diseases and tainted food. Both were serious problems in burgeoning industrial cities. “The demand for artificial ice has been increased by all citizens who are careful to look after the wholesomeness of their food and the general health of their homes,” reported the Fort Wayne News in 1900, noting that “butchers who want no impurities in their ice chests are making a great demand for artificial ice” and “a dutiful mother will have nothing but pure ice for her children.”
In the end, artificial ice triumphed because it was cheaper and more reliable. But the promise of purity and the aura of technological progress gave it an important early allure. To our 19th-century ancestors “nature” was a source of peril rather than purity. That we’re quick to believe they found the label artificial off-putting says more about our culture than about theirs.
In February, as I walked past a drugstore display rack, a greeting card caught my eye. On a bright blue background, a stylized image of black aviator sunglasses sat above the phrase, “COME ON, MAN!!” in white, with the E represented as three red stripes. It was Joe Biden in a graphical nutshell.
The stripes, lifted from Biden’s campaign logos, evoke the American flag. His catchphrase, repeated throughout his debates with Donald Trump, is an Everyman expression of incredulity. The shades are Biden’s signature accessory, the embodiment of American swagger. Aviators are for bad boys who want to do good.
Biden and his team have taken great pains to make the iconic lenses a central element of his personal brand. The first photo posted on his vice presidential Instagram account in 2014 was a close-up of Ray-Ban Aviators posed on his desk. His 2020 campaign created a “Team Joe Aviators” Instagram filter to let supporters put the glasses on their own photos. For the 80-year-old president, whose age makes even partisan Democrats nervous, the sunglasses offer a reminder that classics go in and out of fashion but never disappear.
Aviator sunglasses date back to the 1930s, when the U.S. Army Air Corps commissioned them for pilots. (Contrary to popular belief, the first versions were made by American Optical, founded in 1833 and still in business, not Ray-Ban.) The slightly curved, teardrop shape blocks peripheral glare. The sunglasses became iconic in World War II, when they were worn by soldiers and sailors as well as airmen. After the war, civilians adopted the style, which was in plentiful supply thanks to military surplus stores.
Aviators remain imbued with the aura of Gen. Douglas MacArthur, who was photographed wearing them as he waded ashore on the island of Leyte in 1944, fulfilling his promise to return in victory to the Philippines. In surveys in the 1980s, Ray-Ban found that “macho” and “American” were the words most often associated with its Aviator shades. Those associations haven’t changed much in the intervening years.
“The aviator has a butch quality that few other sunglasses can match,” Financial Times style columnist Nicholas Foulkes wrote in 2002. Women wear the style, but it has a particularly masculine appeal. Suggesting that the sunglasses’ enduring popularity stemmed from their U.S. sensibility, Foulkes quoted a London optician who declared that “aviators meant the very essence of America.”
That essence is closely related to a constant projection of confidence. In mirrored form, aviators intimidate. But they can also reassure. Something about these sunglasses, style journalist Teo van den Broeke wrote in a 2020 column for British GQ, “speaks of mastery and dependability.” Top Gun’s Maverick breaks rules but triumphs in the end. MacArthur did return. Sure, these guys boast, but they aren’t just talk. They get the job done. That’s the image—of America and of himself—that Biden wants us to see in his eyes.
With the swagger comes sex appeal. “[N]o man has ever worn a pair of aviators and looked anything other than sex on a plate,” van den Broeke enthused, pronouncing the aviator-wearing Biden his “personal #ManCrushMonday.” Big enough to cover aging eyes yet graceful enough not to seem like deliberate camouflage, the style offers the suggestion of eternal youth.
Much of the style’s allure emanates from its original purpose. From the days of biplanes and silk scarves, the aviator—a more glamorous term than pilot—has been an archetype of masculine glamour. Whether in a biplane or an F-22, the aviator combines youth, daring, grace, bravery, technical mastery, and forward-looking modernity. World War I aces, historian Robert Wohl writes in his book A Passion for Wings, “exemplified more purely than any other figure of their time what it meant to be a man.” Unlike the grunts in the trenches, they were the masters of their fates, of their machines, of the air itself.
For Biden, the sunglasses also represent loyalty and persistence. He says he has worn Ray-Ban Aviators since his teenage years, regardless of current fashion. During the Obama years, aviator shades weren’t exactly on trend. A 2008 article in Biden’s hometown paper, hopefully titled “Joe Cool,” twitted the vice presidential candidate for “a look that was super-hip when Maverick and Iceman were roaming the deck of an aircraft carrier in ‘Top Gun’ in 1986.” The veep was a throwback. At a 2010 rally, President Barack Obama said, “Joe looks cool in those glasses, too, doesn’t he?” It wasn’t clear whether he was complimenting his vice president or teasing him.
In Vanity Fair in August 2020, Erin Vanderhoof skewered Biden as insufficiently radical, writing that the glasses “stand in as a symbol for why so many young people feel disillusioned by the candidate. Six decades ago, Biden picked an accessory and he has stuck with it ever since … . It seems to reflect his approach to ideas like bipartisanship and respect for norms.”
But that continuity—including the promise of respect for norms—appealed to much of the electorate, which wasn’t ready to write off the United States as an irredeemably awful country or make a virtue of demonizing their fellow citizens. Like Trump’s MAGA hats, Biden’s sunglasses hark back to the triumphs of the 20th century but without the sense of loss. Aviators suggest an America that is feisty, nonconformist, powerful, competent, and ultimately good. Like the classic lenses, that vision of the country goes in and out of fashion but never disappears.
Now Biden’s look is back in style. Late-night TV host Jimmy Kimmel got a laugh last year when he introduced the president: “Our very special guest tonight is to aviator sunglasses what Tom Cruise is to aviator sunglasses.” Maverick was once again at the top of the box office, as cocky as ever in his signature shades.
Biden isn’t Joe Cool. He’s the guy at the bar, the talkative uncle at Thanksgiving, the fan yelling, “C’mon, man!” at the referee. He was born the year MacArthur escaped the Philippines for Australia. He’s older than the oldest baby boomer. But he wears aviator shades for the simple reason that he wants to look like a badass.
This article appears in the Spring 2023 print issue of Foreign Policy magazine.
Behind chicken abundance is the efficient production that critics call factory farming. Bred for maximum meat in minimum time, confined to crowded sheds, and subjected to assembly line slaughter and disassembly, chickens destined for mass consumption endure short, unhappy lives. Cheap chicken also exacts a human toll. Although automation is improving conditions, chicken processing may be the country’s worst job: smelly, noisy, bloody, cold and injury-prone from slippery floors and repetitive motions. Plus the pay is low.
Most Americans aren’t about to give up chicken, but we’d rather not dwell on where it comes from. In the not-too-distant future, however, the trade-off between conscience—or ick factors—and appetite may no longer be relevant. Instead of slaughtering animals, we’ll get our meat from cells grown in brewery-like vats, with no blood and guts. In November, that science-fiction vision came a crucial step closer to reality when the Food and Drug Administration gave its OK to the slaughter-free chicken from Upside Foods, a San Francisco-based startup originally known as Memphis Meats. The company must still work with the Agriculture Department to establish inspection procedures and win labeling approval. It plans to first offer the meat to high-end restaurants.
Upside Foods is one of a host of startups using cutting-edge biological techniques, known collectively as synthetic biology or synbio, in search of more environmentally friendly, less ethically fraught foods and other materials. The customer is “anyone who loves to eat but really cares. They care about animal cruelty, or they care about the future of our planet,” says Anne Gerow, a spokeswoman for Perfect Day, founded in 2014 by two self-described “struggling new vegans.” To make “animal-free dairy” products, Perfect Day genetically tweaks microflora so they excrete whey just like that found in milk.
These vat-grown products are different from the plant-based meat substitutes sold by companies such as Impossible Foods and Beyond Meat. Upside’s chicken is chicken; Perfect Day’s whey is whey. The cells or proteins are the same, just produced in a different way—through human ingenuity rather than natural growth. (Impossible Foods uses synthetic biology to produce heme, the molecule that gives beef its distinctive color and taste, but its meat alternatives are mostly made up of soy proteins and vegetable oils.)
Synbio executives talk like animal lovers and environmental activists. But synbio is still a form of engineering, a science of the artificial. As such, its ethical appeal represents a significant cultural shift. Since the first Earth Day in 1970, businesses large and small have emerged from the conviction that “natural” foods, fibers, cosmetics, and other products are better for people and the planet. It’s an attitude that harks back to the 18th- and 19th-century Romantics: The natural is safe and pure, authentic and virtuous. The artificial is tainted and deceptive, a dangerous fake. Gory details aside, the “factory” in factory farming makes it sound inherently bad.
Synthetic biology upends those assumptions, raising environmental and ethical standards by making them easier and more enjoyable to achieve. It could help reverse what the writer Brink Lindsey has dubbed “the anti-Promethean backlash” that began in the late 1960s, defined as “the broad-based cultural turn away from those forms of technological progress that extend and amplify human mastery over the physical world.” Synthetic biologists are manipulating atoms, not merely bits.
Anti-Promethean attitudes are still culturally potent, of course, with their own intellectual ecosystem of publications and advocacy groups. “Cell-cultured meats are imitation foods synthesized from animal cells, not meat or poultry that consumers know,” pronounces Jaydee Hanson, the policy director for the Center for Food Safety. The activist group is lobbying the U.S. government to require that lab-grown meat carry off-putting labels like “synthetic protein product made from beef cells.” A neutral term like “cultivated meat,” however, should satisfy most people; the industry could also push for the tendentious “cruelty-free” favored by cosmetics makers.
Typical consumers care mostly about taste and price, and early taste results are encouraging. I haven’t tried Upside Foods’ chicken, but I’ve sampled Wildtype’s sushi salmon, grown in a similar way, which is now awaiting FDA approval. I’ve also eaten ice cream and flavored cream cheese made with Perfect Day’s whey. All tasted good. Reviews of Upside’s chicken are positive: “The most surprising aspect was that there was no surprise—the chicken tasted just like chicken should, only more so,” wrote Time’s Aryn Baker, noting that supermarket chicken tends to be bland, “more a texture than a taste,” because breeders care more about quick growth than flavor.
Selling cultivated meat at a competitive price poses a tougher challenge. Wildtype’s salmon, which initially cost the equivalent of $400,000 a pound, is down to $20-25 for two pieces of nigiri, or about $250 a pound. That’s still pricey—high-quality salmon can run $150 a usable pound—but the trend is in the right direction. Knowing the difficulties, Wildtype deliberately picked an expensive product to compete with. Matching chicken prices will be much harder, which is surely a reason that Upside Foods is starting with high-end restaurants whose customers aren’t too price-sensitive.
Barring a new backlash, the long-term trajectory seems certain. Within a generation, vat-grown meat may be not merely common but normal. Within two, it could be morally imperative. Economics and technology can transform ethical expectations and practices. The lower the cost of virtue, the more willing people are to embrace it. Infanticide dwindled in Europe as condoms spread and living standards rose. By offering kinder alternatives that don’t sacrifice taste or tradition, synthetic biology enables more ethical living. It reinvigorates the ideals of technological progress in the material world. Bring on the slaughter-free kung pao.
For Americans, shopping is both a popular pastime and a defining social institution. Our propensity for admiring, coveting, pursuing and accumulating consumer goods has attracted righteous condemnation and knowing satire for well over a century. Shopping, quipped editor J.L. Harbison in 1899, is “an endless hunt for the unattainable, with [the] result of not wanting it when secured.” Yet Americans in his day remained “inveterate shoppers.” We still are.
Nearly 197 million Americans, about 60% of the population, went shopping in person or online during the five-day period beginning on Thanksgiving, according to the National Retail Federation. The number of in-store shoppers jumped 17% from last year, to 122.7 million. (Many people shopped both in person and online.) On average, shoppers spent more than $325 on holiday-related purchases.
The urban palaces of early department stores, the climate-controlled corridors of suburban malls, the endlessly scrolling pages of Etsy, the utilitarian aisles of Walmart and the chatty reveals of haul videos aren’t merely sites of envy or exchange. They’re places where Americans—both buyers and sellers—work out who we are and who we want to be. Since the mid-19th century, modern retailing has tested the practical meaning of equality and freedom.
When A.T. Stewart opened his multistory dry goods store in 1846, the Manhattan merchant introduced two revolutionary practices that we now take for granted. He let anyone come and browse freely, whether or not intending to buy, and he charged every customer the same price. Both policies changed the everyday meaning of social equality.
At Stewart’s, wrote a journalist in 1871, “you may gaze upon a million dollars’ worth of goods, and no man will interrupt either your meditation or your admiration.” The store and its many emulators established a new social norm. Any well-behaved patron, regardless of class or ethnicity, could freely examine the merchandise without being pestered or pressured to leave. When Black shoppers today object to being shadowed by clerks, as if suspected of shoplifting, they are voicing expectations of freedom and equality rooted in Stewart’s open-door policy.
Not every retailer embraced the equal right to shop, however, with consequences that reverberate to this day. Fifth Avenue merchants lobbied for New York’s landmark 1916 zoning law, which separated different building uses, because they feared the manufacturing lofts encroaching on their neighborhood. “Their specific concern,” says M. Nolan Gray, author of “Arbitrary Lines,” a history of U.S. zoning law, “is that poor Jewish factory girls are coming to window shop along the corridor on their lunch breaks and when they get off from work, and they’re scaring off their elite clientele.” Keeping out lofts was a stealthy way to limit the freedom of unwanted shoppers, undermining the equal right to admire the merchandise.
Stewart’s fixed-price policy, by making each sale of an item equally profitable, eliminated the assumption that a “good deal” meant a loss on one side of the transaction. With haggling eliminated, salespeople were no longer in the business of taking advantage of customers or vice versa. Stewart fired clerks who misrepresented merchandise quality, and he accepted returns. Sales were expected to be mutually beneficial.
If not adversaries, then, what was the relation between customer and clerk? Were they equals? Or mistress and servant? Countless articles in the early 20th century, from advice columns to first-person journalism, propounded the norm that the “woman behind the counter” deserved the same respect as the woman in front of it. Whatever the difference in social class, shoppers should identify with those who showed them goods. “Observe the same courtesy in asking a service of another as you would expect yourself,” advised etiquette expert Florence Kingsland. Like the Golden Rule it echoes, however, this mutual respect was often an ideal rather than a reality.
The high-handed customer now derided as “Karen” is nothing new, nor are the objections to her behavior. In his 1899 essay, Harbison called out the Karens of the day: “The woman with a Grand Duchess style and manner demands attention, which if not instantly forthcoming results in complaint to the head of the department and often very unjustly leads to the dismissal of a ladylike, obliging and competent assistant.”
Both the sales clerk and her customer represented a new kind of freedom. Urban shopping districts were where women claimed the right to dine outside their homes, walk unescorted and take public transportation without loss of reputation. Thousands of female sales clerks flowed out of stores in the evenings, when downtowns had previously been male territory. Department stores provided ladies’ rooms that gave women places to use the toilet and refresh their hair and clothing. They offered female-friendly tearooms. Directly and indirectly, modern shopping enlarged women’s public role.
It also made sexual harassment a more prominent issue. Men known as “mashers” gathered in shopping districts to ogle and chat up women. Some were no more than well-dressed flirts, violating Victorian norms in ways that few today would find objectionable. Many contented themselves with what an outraged clubwoman termed “merciless glances.” Others followed, catcalled and in some cases fondled women as they strolled between stores, paused to look in windows or waited for trams.
“No other feature of city life offers so many opportunities for making life a burden to the woman who for any reason must go about the city alone or with a woman companion,” opined the Chicago Tribune in 1907, leading a crusade against mashers. Outraged society ladies called for hard labor or public flogging as punishment. “Ogling is just as disgusting and offensive to a good woman as any other mode of attack,” declared the president of the Chicago Women’s Club.
When the Chicago police chief suggested that women avoid harassment by staying home and limiting their time in stores, he was roundly denounced by prominent women, business interests and civic leaders. A clergyman declared it “humiliating…that the authorities responsible for the maintenance of public order should feel themselves compelled to refuse the right of the road to any of the city’s citizens.” Americans increasingly assumed that women deserved the same freedom as men to move about in public—a freedom in which retailers and their suppliers had a large economic stake.
“That a city should be ‘livable’ for unescorted women, and not merely for men, was an unprecedented conception that flourished with Chicago’s retail district,” writes historian Emily Remus in “A Shopper’s Paradise.” “Yet just as novel was the notion that the state had an obligation to establish and maintain that environment.”
By the 1920s, American shoppers were affirming a new kind of equality—the democratization of luxury. Rich and poor couldn’t afford the same products, but they could enjoy versions of the same pleasures. While private transportation had historically been the privilege of the rich, a Model T offered a Sunday drive just as a Rolls-Royce did. By using cheaper materials, ready-to-wear manufacturers sold lower-income shoppers the luxury of owning many stylish garments. The salesgirl or office worker, wrote Samuel Strauss, a critic of the new consumerism, “does not want to look like the millionaire’s wife. She wants to have the pleasure the millionaire’s wife has, the pleasure of living luxuriously.” Installment credit, which became widespread in the 1920s, let customers pay for purchases over time, further expanding access to what were once luxuries.
Taking Stewart’s practices to the next level, F.W. Woolworth built a national chain that until 1934 charged no more than 10 cents for any item, using huge orders and rapid turnover to keep prices low. When Woolworth’s offered a gold ring for 10 cents, it sold 6 million of them in a single year. Instead of relying on sales clerks, Woolworth’s shoppers served themselves, taking their items to cashiers. The chain and its imitators appealed especially to new immigrants and Black Southerners migrating into cities. “Immigrants and Blacks formed a ready market for inexpensive products: china and glassware, ribbons and hair accessories, dolls and mechanical toys,” writes historian Regina Lee Blaszczyk in “American Consumer Society, 1865–2005.” “To these marginalized groups, simple luxuries from the five-and-ten symbolized having accomplished something in the new social order.”
Whether shopping at five-and-dimes, dodging segregation through mail-order purchases, or patronizing department stores, Black Americans found in shopping a measure of respite from social inequality. “Consumption offered material, physical, and psychological satisfaction and comfort. Simply put, it provided African Americans with a taste of the good life,” writes historian Traci Parker in “Department Stores and the Black Freedom Movement.” “Additionally, despite being mistreated and underserved in the retail industry, black consumers learned that their dollars were valued and powerful.”
Yet in the mid-20th century, the restrooms in Southern department stores were still segregated. Fitting rooms and returns were limited to whites only. Black customers couldn’t dine in department store restaurants or sit at dime-store lunch counters. Not everyone’s dollars were equal.
In February 1960, four freshmen at what is now North Carolina A&T University in Greensboro decided to challenge segregation at the downtown Woolworth’s. After buying school supplies, the four young men sat down at the lunch counter, ordered coffee and doughnuts, and declined to leave until they were served. They remained until the store closed.
The students chose Woolworth’s because they found its policies particularly galling. “They tell you to come in: ‘Yes, buy the toothpaste; yes, come in and buy the notebook paper…No, we don’t separate your money in this cash register,’” recalled Franklin McCain, one of the four. “The whole system, of course, was unjust, but that just seemed like insult added to injury.” A whites-only lunch counter violated the promise of modern shopping.
The Greensboro sit-in lasted almost six months, drawing hundreds of participants and counterprotesters and spreading to other downtown stores. Finally in July the store gave in, officially desegregating its lunch counter. Student-led sit-ins swept the South, usually with results that included an end to other anti-Black policies.
“Segregated eateries were emblematic of the profound contradictions of American consumer culture: although situated in the ‘democratic space’ of the department store, they were Jim Crow spaces that reinforced white supremacy and black inferiority,” writes Ms. Parker. “Indeed, such lunch counters may well have become targets for civil-rights protests because they rendered America’s racial contradictions so visible in everyday life.”
For shoppers in the 19th and 20th centuries, equality accompanied mass production, mass distribution and mass marketing—the same standardized goods for everyone. As Andy Warhol put it, “You can be watching TV and see Coca-Cola, and you know that the President drinks Coke, Liz Taylor drinks Coke, and just think, you can drink Coke, too. A Coke is a Coke, and no amount of money can get you a better Coke than the one the bum on the corner is drinking.”
Today Coke has fragmented into multiple versions, and restaurant fountains offer customers the chance to mix custom concoctions. To the uniformity of mass consumption, today’s shopping adds a respect for individual differences—a new kind of equality and freedom. Consumer choices embody political convictions, religious values and cultural affiliations. What Americans believe and what they are likely to buy are increasingly aligned, and we’ve only begun to wrestle with what this shift means for how we conceive of equality and freedom. “The endless hunt for the unattainable” isn’t merely a quest for customer satisfaction but for the realization of social ideals. The quest doesn’t end at the sales counter, but it often begins there.
Mr. Marx, a Tokyo-based writer on culture and fashion, opens “Status and Culture” with the story of the Beatles’ moptop haircut. Originally adopted by the “lost Beatle” Stu Sutcliffe, the haircut began as an imitation of German art students who were themselves imitating French styles. After first mocking their bandmate, the rest of the Beatles embraced the style to distinguish themselves from other aspiring British musicians. The subsequent hysteria—“They look like girls!” was a common objection—added to the band’s aura. Eventually, even folks with crew cuts got used to moptops, and the Beatles moved on to what Mr. Marx calls “full-length hippie locks.”
The story exemplifies what he calls “The Grand Mystery of Culture”: “Why,” he asks, “do humans collectively prefer certain practices, and then, years later, move on to alternatives for no practical reason?” Where does fashion—in clothes, art, or music—come from? Mr. Marx suggests that the master key for unlocking such mysteries is status.
“Status,” he writes, “denotes a position within a social hierarchy based on respect and perceived importance.” Our status depends on how others see us, and “status positions are best expressed as membership within tiers stacked up from high to low.” Most members of a given group, whether that’s a kindergarten class or a curling team, have “normal status, for which they receive common courtesies and basic privileges—but no special treatment.”
To maintain their positions, the normals conform to social conventions, even as they aspire to higher status. Individuals’ decisions about when to switch from one convention to a new one drive cultural changes. “Those who have either very high or very low status are more likely to try new things,” Mr. Marx writes. The innovators may explore new forms for intrinsic reasons. Then high-status people adopt the avant-garde to distinguish themselves, and the trend begins to spread. Take hip-hop music and style, which came out of ghettos in New York and Los Angeles. “New York’s downtown art scene supported Bronx hip-hop before many African American radio stations took rap seriously,” notes Mr. Marx. Eventually, cultural industries “locate high-status innovations with cachet and adapt their content to the existing tastes of mass audiences.” They dilute complexity to make new forms broadly palatable. MC Hammer turns hip-hop into pop music.
Mr. Marx’s status-based explanation is powerful but simplistic. It ignores the power of pleasure, including the joy of novelty itself. As a white working-class kid in rural Lancaster County, Pennsylvania, my colleague Sean Crockett was an unlikely early adopter of hip-hop. But one night in 1987 he was scrolling through the radio dial when he caught a Philadelphia station playing Newcleus’s “Jam On It.” It was like nothing else he’d ever heard, and he loved it. He stayed up all night listening to this new music. Rap has been his favorite ever since. Stories like that don’t fit easily into Mr. Marx’s single-variable explanation for trends.
Neither do individuals who may simply be more easily bored than other people. Just as not everyone falls into the same status tier, not everyone has the same craving for novelty—a variable studied in depth by psychologists. This is not only a matter of personality differences: film critics, for example, see too many movies to find many of them appealingly fresh. Jaded and easily bored, they reward newness—a mark of sophistication, perhaps, but not necessarily of status competition.
Early in the book, Mr. Marx makes the astonishing claim that “despite the importance of status, there has been a conspicuous lack of discussion about its influence on human behavior.” The rest of the book, however, including its impressive bibliography, demonstrates that this “conspicuous lack of discussion” is no such thing. Social scientists, journalists and critics have been obsessed with status for at least 250 years. Indeed, Mr. Marx ransacks the status stacks, pulling quotes from Thorstein Veblen, Georg Simmel, and Werner Sombart, as well as such important contemporary scholars as Grant McCracken, Cecilia Ridgeway, and Elizabeth Currid-Halkett alongside a raft of journalists. He is especially fond of the “Style Guy” Glenn O’Brien and relies heavily on Everett Rogers’s theory of how innovations diffuse. He quotes René Girard, whose views on imitation are all the rage among admirers of his admirer Peter Thiel. As a survey of the literature on status, the book is broad if not deep. It could serve as an introductory textbook.
“Status and Culture” is blessedly free of the moralizing that so often mars analyses of status. Mr. Marx recognizes that status and status-seeking are human universals: “All status symbols rely on objects and behaviors with practical or aesthetic value that enrich our lives,” he writes. But the book often feels anchored in the second half of the 20th century, when the Beatles, Pop Art, and preppy style were salient examples and mass media essential to cultural diffusion. It doesn’t reach back to, say, the Italian Renaissance to more fully test its theories. Only in the final chapter does it begin to explore our own “era of vast quantities, deep specificity, and breakneck speed, where few individual artifacts, artworks, or conventions leave a dent in society or bend the curve of history.”
In today’s sea of instantly available, constantly ranked cultural production, Mr. Marx argues, everything and nothing has cachet. The result, he worries, is to “debase cultural capital as an asset, which makes popularity and economic capital even more central in marking status.” In some ways, the world he describes sounds like the 1950s, with the culture of TikTok as the new mass media, and “keeping up with the Joneses” measured in likes.
Now, however, individuals with specific passions and tastes can find the things they value far more easily. “We live in a paradise of options, and the diminished power of gatekeepers has allowed more voices to flourish,” Mr. Marx acknowledges. “The question is simply whether internet content can fulfill our basic human needs for status distinction.”
But neither status hierarchies nor creative products have to be universal to flourish. Individuals can find meaning, esteem and new ways of seeing the world within specific communities online or off. Responding to a Getty Museum pandemic challenge, Peter Brathwaite, a successful British baritone locked out of his normal performances, began recreating historic artworks with materials from around his house. He chose depictions of black people, beginning with an unknown 18th-century artist’s portrait of an English servant. Clever, well-researched and provocative, the resulting Instagram selfies brought Mr. Brathwaite acclaim and highlighted largely forgotten historical images. Internet culture elevated not only his personal status but that of his ancestors and the black subjects hidden in European art.
Mr. Marx’s vision of a single-tiered status ranking for an entire society limits the power of his theory. It marks the book as a 20th-century artifact, despite its publication date. But he is a curious cultural observer, asking important questions. If he continues his 21st-century explorations in a sequel to “Status and Culture,” I’ll want to read that book as well.
An anonymous emailer sent links to the relevant documents and local press coverage to me and, apparently, to Jerusalem Demsas of the Atlantic. Her subsequent article concluded that Andreessen’s hypocrisy illustrates why housing reforms have to take place at the state level, where “officials are influenced by NIMBYs, but they have a much larger electorate to worry about and a mandate to address the cost of living and rising home prices, not just respond and implement local desires.”
The incident proves more than that. It demonstrates that California’s state-level housing reforms are working — not as fast as they ideally would, but working nonetheless.
Under a law passed in 1969, two years before Andreessen was born, California cities must, every eight years, project future demand for housing in several income tiers and specify where those homes might be built. The long, complicated and expensive ritual has produced many hearings and documents but not much housing. It offered too many loopholes.
Cities could lowball the numbers. They could identify theoretical sites in their plans but, when later faced with a real development proposal, impose delays and restrictions that required scaling down the project, increasing the sales prices or rents, or abandoning the whole thing.
“Housing element” plans didn’t have to make sure the owners of prospective sites were willing to sell. As long as cities went through the right motions, they faced no consequences for obstructing new housing.
California has since toughened its approval process for housing-element plans, and the state can review at any time whether a city is complying with its promises. If not, it can require streamlined development permissions to keep those commitments.
Cities that fail to meet their obligations face fines of up to $100,000 a month, which a court can multiply as much as sixfold for persistent noncompliance. They can lose state funding. The state can even suspend their power to regulate land use.
The process is still roundabout. It’s a long way from letting supply meet demand. But as housing advocate Nolan Gray said in an email, it’s “a useful kludge for bringing at least a little liberalizing state oversight into a very dysfunctional and restrictive system of local land-use regulation.”
The threat of state punishment gives city officials political cover to loosen housing restrictions — like it or not. “I don’t want Atherton to change,” city councilwoman Elizabeth Lewis said at a July 21 hearing. “It is just heartbreaking and sickening to think we’re facing this from the state.”
But Atherton has already changed dramatically over the past few decades. The growth of Silicon Valley has made a town just four miles from Stanford University and Facebook headquarters an extremely desirable place to live. The upper-middle-class people who bought houses there decades ago couldn’t afford them today. Loosening housing restrictions would make room for people like them.
That fact seems lost on some longtime residents. Take one couple who filed objections to letting local schools buy adjacent lots and build multiunit housing for their staff members. According to public records, they bought one of the area’s more modest homes, with 1,950 square feet, for $370,000 in 1987. It’s now worth 10 times that much.
Then there’s the woman who wrote that “if the mandate to build 345 units including multi-family low-cost units is implemented, Atherton would be dramatically changed forever.” She bought her place for $255,000 in 1976. Zillow estimates that the 3,820-square-foot house on 1.29 acres would sell for more than $10 million today.
Thanks to Proposition 13, these owners pay about $9,000 a year in property taxes, a fraction of what market-rate assessments would yield. A new $1 million townhouse would yield more property taxes than a single-family home that last sold 40 years ago.
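The gap is easy to sketch. A minimal Python illustration, assuming Proposition 13’s 1% base tax rate and 2% annual cap on assessed-value growth (the purchase figures come from the article; local add-on levies, which push real bills higher, are ignored here):

```python
# Illustrative Prop 13 arithmetic. Assumptions: 1% base rate,
# 2%/year cap on assessed-value growth; local add-ons omitted.

def prop13_assessed_value(purchase_price: float, years_held: int) -> float:
    """Assessed value can grow at most 2% per year from the purchase price."""
    return purchase_price * 1.02 ** years_held

def prop13_tax(purchase_price: float, years_held: int, rate: float = 0.01) -> float:
    return rate * prop13_assessed_value(purchase_price, years_held)

# The long-time owner described above: bought for $255,000 in 1976.
old_tax = prop13_tax(255_000, 2023 - 1976)  # roughly $6,500/yr before add-ons

# A newly built $1 million townhouse is assessed at its market price.
new_tax = 1_000_000 * 0.01  # $10,000/yr
```

Even on a house now worth $10 million, the capped assessment keeps the longtime owner’s bill below what a single new townhouse would generate.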
Atherton is “an island of state-enforced mansion zoning in the middle of one of the most productive regions on Earth,” said Gray, who is research director at California YIMBY. “There's enormous demand for housing.” The town is never going to be cheap.
But without zoning, he hypothesized, “you’d likely overwhelmingly get a lot of seven-figure condos. That would be OK — every new unit permitted in a wealthy place like Atherton eases price pressure on housing further down the market.”
In the more constrained reality, California requires Atherton to add 348 new housing units over the next eight years, up from 93 in the current cycle. Between 2015 and 2020, the city gained 90 new units, including 54 inexpensive “accessory dwelling units,” or ADUs, otherwise known as granny flats, guest houses or garage apartments. The town hit its numbers for “very low income” housing but not for any other income category.
Atherton has added places for live-in domestic staff, but not for young Stanford professors. If its new plan doesn’t do better, it will face serious penalties. Local officials feel the pressure.
Confronting constituents outraged by the prospect of sharing the neighborhood with townhouse-dwelling riffraff, however, the Atherton city council revised its proposal. It eliminated zones for multiunit housing, including 40 units on a lot whose owner emphatically refuses to sell (shades of the old housing-element flimflam). The new plan still counts on a few multiunit additions on school property.
Mostly, it relies on two recent reforms that encourage single-family homeowners to add to the housing stock. One is the 2016 law requiring cities to approve ADUs, which Atherton says will supply 280 new units. The other is a 2020 law giving single-family homeowners two complementary rights: to build two units on their property and to split the lot in two, for a possible total of four homes. The new construction is exempt from challenges under the California Environmental Quality Act, a tool frequently used to block new housing.
With its large lots, Atherton is well suited to this approach. The plan attributes 96 new units to lot splits, including some already in the works.
In the Atlantic’s version of the story, the NIMBYs won: “As a result of a few hundred ultra-wealthy people, the town will remain exclusively for the elite.” But that’s not exactly the case.
For starters, it’s by no means certain that the state will approve the new plan. It could deem the plan’s assumptions unrealistic and require revisions.
More important, Atherton still has to meet the goals. The plan is no longer mere ritual. If the projections don’t line up with reality, the town may have to allow some multiunit developments. When constituents complain, local politicians can blame the state.
And a major zoning transformation has already taken place. Making places like Atherton friendly to ADUs and especially to lot splits is a big deal. City planners identified more than 600 lots larger than an acre where the current house was built before 1970 — the prescription for a teardown. With splits, they could account for more than 1,800 new homes.
In a town with fewer than 7,500 residents, that represents enormous potential. As Gray pointed out about seven-figure condos, even expensive new homes free up supply elsewhere in the region.
California’s modest reforms have already pushed the country’s most expensive town into accepting smaller, cheaper homes. Letting NIMBYs vent is a civic ritual. Once it’s over, it’s time to build.
Virginia Postrel: The lack of affordable housing in major US cities has impeded social mobility, fueled inflation and worsened economic inequality. You’re the author of a new book, “Arbitrary Lines,” which looks at the history of zoning and the role it’s played in the housing crisis. You emphasize that zoning is just one aspect of city planning. So, what is zoning?
M. Nolan Gray, author, “Arbitrary Lines: How Zoning Broke the American City and How to Fix It”: Zoning is trying to do two basic things. The first is to segregate land use into categories: residential, commercial and industrial. And within each of those categories, there are going to be dozens of subcategories. So, for example, in Los Angeles, there are residential districts where you can only have single-family homes; or there are residential districts where you can have small apartment buildings or larger apartment buildings. Within commercial zones, there are areas where you can have offices, others where you can have retail.
The second piece of zoning is regulating density. Zoning places strict constraints on how much housing you can build even in places where housing is allowed, or how much commercial floor area you can have even where retail is allowed.
VP: Is zoning a specifically US phenomenon?
NG: Most developed countries have something resembling zoning. They will say industrial building is not allowed in certain quarters of the city, or certain portions of the metropolitan area are going to be reserved for agriculture. But US zoning is unique in at least two ways. The first is single-family zoning. No other zoning system in the developed world, to my knowledge, demarcates specific areas only for single-family housing.
The second way that US zoning is unique is the complete orientation around the car. It’s often illegal to build an apartment building without a parking garage, or it’s illegal to build a commercial strip without a large parking lot.
VP: What are the costs that we’re paying socially for the zoning regimes that we have?
NG: Zoning has four big costs. First, it increases housing prices. It does so in three ways: by allowing less housing to be built; requiring the housing that is built to be more expensive and generally larger than it might otherwise have been; and slowing down the whole process.
The second big cost of zoning is that it limits mobility into high-opportunity regions. The housing crisis is most advanced in affluent and extremely productive places like Los Angeles, New York, San Francisco, Boston. These are places that, historically, poor or working-class Americans could move to and find opportunity. But because the housing is so incredibly expensive, it’s hard for a normal person to move to, for example, the Bay Area. Now Americans move from rich places to poorer, less productive places. And we’re all poorer as a result.
The third piece is the segregation element. Segregation was a core objective of zoning.
VP: Racial segregation?
NG: Class-based segregation. Zoning says, “In this neighborhood, you can only have a home if it’s on a 10,000-square foot lot. If you can’t afford that, then you don’t get to live there.” Of course, in the US context, class maps onto race, particularly black-white segregation. Zoning to this day maintains a high degree of economic segregation that would not have existed otherwise.
The fourth is the sustainability piece. Not everybody wants to live in an apartment or take a train to work or ride a bike to work. But many millions of Americans do. And most local zoning codes simply don’t allow for this, by limiting residential housing near, say, grocery stores or office buildings. Car ownership is written into law by zoning.
VP: Pushing people out of high-productivity coastal cities often also increases their energy use and environmental impacts.
NG: When you live in L.A., you become very aware of just how pleasant the climate is. Even in mid-Atlantic cities like New York and Philadelphia, it’s still temperate. Energy consumption is most extreme where we’re both originally from: the South. We’ve basically forced millions of Americans to move to places where their energy consumption is going to go through the roof.
VP: You write about the origins of zoning in both New York and Berkeley, California. Can you explain what drove it?
NG: Both reflect the “Baptists and bootleggers” coalition that gets us zoning. The “Baptists and bootleggers” idea is that political coalitions will normally have someone who’s cynically invested in the policy — the bootlegger who supports prohibition because he can make money off of it — and then the Baptist who provides the political movement with moral cover.
Start with the “Baptists.” During the Progressive Era there was this notion that cities and markets are too scary and chaotic. Wouldn’t it be great if we got all the smartest people in the room to come up with a big master plan for what’s going to be allowed on every single lot in our city for the next 50 years? Most modern people look back and think that’s a little crazy. But that was the ethos.
The bootleggers were the landlords who — in the Manhattan context — think, “Way too much office supply is being built in lower Manhattan and it’s lowering the value of my assets.” In the Berkeley case, if you read the zoning promotional materials, one paragraph will say, “We need to adopt zoning so we can keep industry out of residential neighborhoods.” With modern eyes, you read that and think, Yeah, that makes sense. You don’t want an oil refinery next to your house. But then the next paragraph explains what industries they’re concerned about. It’s Chinese laundries. Or dance halls that are bringing African Americans into the neighborhood.
In New York City, shopkeepers on Fifth Avenue were worried about loft manufacturing moving closer to the shopping district. Again, you read that with modern eyes and think, OK, factories. There must have been smoke or noise or vibrations. But the shopkeepers’ specific concern was that poor Jewish factory girls are coming to window-shop along the corridor, and they’re scaring off our elite clientele. Zoning is much more of a social project than it is a good-government process.
VP: You repeatedly make the point that zoning “cannot build a building. It can only ever stop something from being built.” Why is that an important distinction?
NG: When Minneapolis abolished single-family zoning recently, some of the media coverage said that it was banning new single-family homes. But that’s not what they did. They got rid of single-family zoning, which was just a prohibition on apartments. They were getting rid of a prohibition.
In L.A., there are a lot of conversations about getting rid of minimum parking requirements. And people say, “Come on, you’ve got to have somewhere to park.” But getting rid of minimum parking requirements isn’t saying to developers that you’re not allowed to build any more parking. It’s saying that we’re not going to force you to build any parking. We’re not going to mandate things that you wouldn’t otherwise have done. It’s a really important difference.
VP: Sometimes Republicans portray zoning liberalization as an attack on homeownership or suburbs. But one of the points that you make in your book is that it’s happening in conservative places, such as Arkansas.
NG: Northwest Arkansas has done a lot of reforms. Fayetteville shows that these are important issues for midsize and small cities. You can get rid of minimum parking requirements for commercial properties and it’ll be easier to redevelop your main streets and fill some of those empty storefronts. If you allow accessory dwelling units on every residential property, your town is not going to turn into Kowloon Walled City. But a few more seniors will be able to stay in their homes.
VP: Houston is the great American un-zoned city. Why doesn’t Houston have zoning and why isn’t it a disaster, with tanneries next to bungalows?
NG: Houston made basically every planning mistake you could have made in the 20th century. They built the giant freeways. They did some ill-conceived urban renewal. They maybe weren’t sensitive enough to environmental planning. But they avoided one really, really big mistake: they were the only major US city that didn’t adopt zoning.
The reason is that they were also the only major city that actually put it to a referendum. They put it to a referendum three times, and voters in every case turned them down.
The sky is not falling in Houston. It’s America’s most affordable and most diverse city. It has an extremely low rate of homelessness, because when there are a lot of cheap apartments, even people who might be struggling with mental illness or drug addiction can still keep themselves housed.
There are a few things that make Houston work. One is that there are certain mechanisms that naturally separate the most incompatible uses. Take the leather tanning facility. Yeah, I don’t want one of those next to my home. And guess what? The leather tanning people don’t want to be next to my home either. Industrial facilities want to be near the port, near main rail lines, near freeway interchanges. They don’t want to be next to someone who’s going to complain and launch a nuisance suit and just generally be a headache for them. Commercial buildings generally want to be on major corridors. Homes generally want to be on quiet side streets. The big offices want to be at central locations. So there’s some natural self-sorting.
Houston also has a private system of land-use regulation. If a group of homeowners wants to have much stricter land-use regulation, they can opt into it. They have to convince their neighbors to sign on to a deed restriction. They can control development within their little tiny bubble. But it’s quite different from zoning. It leaves the vast majority of the city free to reinvent itself and adapt to changing needs.
VP: The title of your book is “Arbitrary Lines.” Why that emphasis on arbitrariness?
NG: Things like “floor area ratios” feel very scientific. Things like parking spaces-per-unit feel very scientific. And they’re presented in zoning ordinances, or by planners who are still drinking the Kool-Aid, as if these are authoritative measures that represent some reality intrinsic to the universe. But you scratch ever so slightly at the surface, you realize that these were completely arbitrary standards pulled out of a hat. And they dramatically limit where and how Americans can live their lives. Once you appreciate that, the idea that they should be abolished becomes significantly less extreme.
VP: I’m like you: I like dynamism in cities. But some of us tend to downplay the emotional attachment that people have to their homes, to the feel of their neighborhoods, to the landscapes that trigger their memories, to neighbors that they identify with. These concerns aren’t the exclusive territory of rich white people. How do you address them?
NG: People do have an intuitive conservatism about what happens around their community. I don’t think that’s worth mocking or dismissing. It’s appropriate to say there should be some preservation of outstanding monuments or culturally important places. People absolutely should have a say in what their community looks like. But we’ve spent the last 50 or so years really privileging people who have this extremely sentimental, extremely conservative view of their communities and their neighborhoods — this idea of, I want everything to stay the same, I like it the way it is. It’s weird when people are living in the heart of a city like New York City or Los Angeles, places that are only great because they can remain dynamic and because they can change. It’s a fundamental misunderstanding of what a city is: a living, dynamic thing that is maintained and kept great by the individual plans of the millions of people who inhabit it and engage with it. The more you try to put that in a straitjacket, the more dysfunctional cities will become.
When a 22-pound bag of dog food replaces a 30-pound bag, Buddy’s appetite doesn’t shrink accordingly. If the 12-pack of K-cups becomes a 10-pack, coffee drinkers have to replenish supplies more often to keep the caffeine coming. When tomato cans go from 32 ounces to 28 ounces, sauces no longer match up with one-pound pasta boxes. Even inattentive shoppers quickly notice they’re getting less for their money.
Smaller packages don’t fool the folks who compile inflation statistics, either.
If a 16-ounce box contracts to 14 ounces and the price stays the same, I asked Bureau of Labor Statistics economist Jonathan Church, how is that recorded? “Price increase,” he said quickly. You just divide the price by 14 instead of 16 and get the price per ounce. Correcting for shrinkflation is straightforward.
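The correction Church describes amounts to tracking price per unit rather than per package. A quick sketch in Python (the $3.99 shelf price is hypothetical):

```python
# Shrinkflation registers as inflation once you divide by quantity.
# The package price here is hypothetical.

def unit_price(package_price: float, ounces: float) -> float:
    return package_price / ounces

before = unit_price(3.99, 16)  # 16-oz box
after = unit_price(3.99, 14)   # shrunk to 14 oz, same shelf price

# Same sticker price, but the per-ounce price rose about 14%.
pct_increase = (after / before - 1) * 100
```

The shelf price never moves, yet the per-ounce price jumps by a seventh, which is exactly what the index records.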
New service charges for things that used to be included in the price, from rice at a Thai restaurant to delivery of topsoil, also rarely sneak past the inflation tallies any more than they fool consumers.
But a stealthier shrinkflation is plaguing today’s economy: declines in quality rather than quantity. Often intangible, the lost value is difficult to capture in price indexes.
Faced with labor shortages, for example, many hotels have eliminated daily housekeeping. For the same room price, guests get less service. It’s not conceptually different from shrinking a bag of potato chips. But would the consumer price index pick up the change?
Probably not, Church said.
Measuring inflation is hard. The goal is to capture changes in the overall price level — the numbers on every price tag — not swings in relative prices. You want to see how much the cost of living, represented by the price of a consistent basket of goods and services, has changed.
That’s different from how many hamburgers equal a gallon of gas or how many apples equal a movie ticket. Relative prices can and do shift all the time without constituting inflation. But for today’s cost-of-living basket to measure the same thing as yesterday’s, the goods and services themselves need to stay the same. And they, too, tend to change, especially when technology is involved.
BLS economists have numerous ways of correcting for those changes, but they err on the side of conservatism. They look for specific, quantifiable differences that customers can reasonably be expected to notice and care about — and that can be plugged into statistical models. A bigger TV screen with a higher-definition picture is probably more valuable to consumers. But what about a hotel room with nicer sheets? That’s trickier. The less measurable the change, the harder it is to correct for.
During the 2000s and 2010s, inflation was probably overstated because of unmeasured quality increases. Now there’s the opposite phenomenon. Quality reductions have become so pervasive that even today’s scary inflation numbers are almost certainly understated.
The paper napkins at a local smoothie shop, a friend points out, have gotten so thin and small that they are “almost useless.” But price trackers would look only at the smoothie itself without accounting for the lousy napkins.
A Starbucks latte may cost the same, but if the store now shuts at 4:00 p.m. instead of 6:00 p.m. — or varies its hours unpredictably depending on who’s available to work — customers are getting less for their money. Shorter, less predictable hours have become common in many service businesses.
Take my recent trip to Bank of America. Arriving around 3:15 to make an inquiry for my condo association, I found the metal security gate down and only the lobby ATMs available. A sign said the branch was open until 4:00 — already a reduction in hours from the old schedule — but it wasn’t. At a second branch, where I was turned away because closing time was approaching, I met a mother and adult daughter who were on their third branch visit, with no success. Coming back another day, they said, meant taking more time off from work. That cost wouldn’t make it into any official inflation measurements.
Similarly, airline ticket prices dropped slightly in June, but that doesn’t mean the cost of air travel is down. Bags are five times more likely to go astray than a year ago, and security lines are insanely long. On July 2, I spent four hours waiting with thousands of other passengers to get through security at Amsterdam’s Schiphol Airport. Whether or not the ticket price was greater than a year earlier, the intangible cost certainly was. While this example is extreme, longer waits for any kind of service, from doctors’ appointments to sandwich orders, are becoming common.
Even if incomes keep up with inflation, quality declines are aggravating and depressing. They make consumers feel powerless and taken for granted, which can lead to angry confrontations. They create a pervasive sense that the world is getting worse. They exact a toll that economic statistics can’t capture.
Many high-profile business leaders and academics now condemn the idea Milton Friedman laid out in a famous 1970 New York Times Magazine essay:
a corporate executive is an employe of the owners of the business. He has direct responsibility to his employers. That responsibility is to conduct the business in accordance with their desires, which generally will be to make as much money as possible while conforming to the basic rules of the society, both those embodied in law and those embodied in ethical custom.
If only this wicked notion of maximizing profits hadn’t come along, some seem to think, we’d live in a progressive utopia of high wages, racial harmony, economic equality and environmental purity.
In 2019, the Business Roundtable repudiated the Friedman doctrine, changing its “Principles of Corporate Governance” to embrace the goal of managing companies for the “benefit of all stakeholders — customers, employees, suppliers, communities and shareholders.” It didn’t specify how businesses would resolve conflicts between constituencies.
The usual response is that conflicts are illusory. Do good and you’ll do well.
“In today’s globally interconnected world, a company must create value for and be valued by its full range of stakeholders in order to deliver long-term value for its shareholders,” wrote BlackRock’s Larry Fink in his 2022 letter to CEOs, reiterating the message that made news in 2018: “To prosper over time, every company must not only deliver financial performance, but also show how it makes a positive contribution to society.”
Of course, if stakeholder-pleasing endeavors and the creation of economic value always went hand in hand, stakeholder and shareholder capitalism would be identical. But life isn’t that simple.
Contrary to what you may have heard, letting stakeholders take precedence over business objectives is anything but nice. Stakeholder capitalism isn’t just a temptation for managers to pursue their pet interests. It’s a prescription for culture wars, political backlash, managerial paralysis and human-resources nightmares.
Every effective enterprise has to consider the interests of employees, suppliers, customers and other stakeholders. But not everyone wants the same thing, and sometimes organizations have to make tradeoffs between goals everyone might agree are good. The question is what to do when faced with a conflict. Without an eye on value maximization, it’s too easy for managers to dissipate company resources by pursuing their personal interests.
The great value of the Friedman doctrine is that it establishes a coherent standard for making tradeoffs. Maximizing economic value tells you to “spend an additional dollar on any constituency provided the long-term value added to the firm from such expenditure is a dollar or more,” as Harvard Business School economist Michael Jensen put it in a 2010 article.
Stakeholder theory, by contrast, tells you nothing. It assumes you just make everybody happy. And, as Jensen wrote, “Without the clarity of mission provided by a single-valued objective function, companies embracing stakeholder theory will experience managerial confusion, conflict, inefficiency, and perhaps even competitive failure.” Jensen’s article is the best articulation of why what he calls “enlightened value maximization” is indispensable.
But neither he nor Friedman fully imagined the chaos that could ensue without it.
Look at the backlash when Walt Disney Co. tried to placate vocal employees by opposing Florida legislation to ban discussing sexual orientation with younger children in schools. The political pushback — like the original protests — reflected a sense of betrayal by a beloved company whose fans see their dreams and values reflected in its characters and stories.
Appeasing one group of stakeholders alienated others. It wasn’t hard for conservatives to find opposing voices within the company. Disney has nearly 200,000 employees and countless customers. All are stakeholders, and they represent every conceivable viewpoint.
By focusing on business goals, by contrast, Netflix, despite other challenges, has more successfully weathered its own controversies, from conservative uproar over the French film “Cuties” to more recent protests about Dave Chappelle’s jokes about trans women. To make expectations clear, the company revised its cultural guidelines for employees to explicitly say, “As employees we support the principle that Netflix offers a diversity of stories, even if we find some titles counter to our own personal values.”
Stakeholder capitalism implicitly assumes a cultural consensus identical to whatever its advocates believe. It harks back to the mid-20th century, when big US companies enjoyed little competition, mass media marginalized all but a narrow range of political, religious and social views, and hierarchy and security dominated worker expectations. It pretends social media, Slack channels and “bringing your whole self to work” don’t exist.
For a purer version of what stakeholder-oriented management can engender, forget profits and political disagreements. Look at the turmoil roiling all sorts of left-wing nonprofits. In a report in the Intercept, Ryan Grim details why Washington D.C.-based groups have spent the past few years engaged in “knock-down, drag-out fights between competing factions of their organizations, most often breaking down along staff-versus-management lines.”
He writes:
Instead of fueling a groundswell of public support to reinvigorate the [Democratic] party’s ambitious agenda, most of the foundation-backed organizations that make up the backbone of the party’s ideological infrastructure were still spending their time locked in virtual retreats, Slack wars, and healing sessions, grappling with tensions over hierarchy, patriarchy, race, gender, and power….
Grim quotes the executive director of one such group, anonymously:
“A lot of staff that work for me, they expect the organization to be all the things: a movement, OK, get out the vote, OK, healing, OK, take care of you when you’re sick, OK. It’s all the things,” said one executive director. “Can you get your love and healing at home, please? But I can’t say that, they would crucify me.”
Despite their commitments to making the world a better place — and general agreement on what that means — trying to please every vocal stakeholder is wreaking such widespread organizational havoc that one group’s head, also quoted anonymously, told Grim “you couldn’t conceive of a better right-wing plot to paralyze progressive leaders.”
The problem isn’t that the groups are leftist. It’s that their missions and decisions are constantly up for internal debate. New attitudes and forms of communication have destroyed the legitimacy of their managerial hierarchies. (For a deeper dive into this phenomenon, drawing on Mary Douglas’s anthropological studies, see my Substack essay “Purity, Sorcery, and Cancel Culture.”)
Compared with mission-oriented nonprofits, businesses that focus on creating economic value are fortunate to have a clear definition of success. If they want to make the world a nicer place, they should strive to defend that standard.
These are all perfectly normal reunion activities. They could happen anywhere. What makes Princeton reunions distinctive is their mastery of ritual, heritage and myth — all wrapped in ridiculous orange and black.
Princeton has no business, law or medical school. It is no football power. Yet it boasts the largest endowment per student of any US university. Its roughly $37 billion endowment benefits from the university’s longevity (founded in 1746), the affluence of its graduates and savvy long-term management. To turn those advantages into billions, however, you need alumni devotion.
An endowment connects past, present and future. It represents a financial commitment to institutional continuity despite institutional change. It requires faith, hope and love. Princeton is exceptionally adept at instilling those qualities in its alums by creating a feeling of family and tribe.
Take the costumes. Every class has its own, which changes every five years until the 25th reunion. For my fifth reunion, we wore a pith helmet (still in the front closet) and a shirt printed with palm trees and tigers. For the 10th, there was a kind of Flintstones theme, with a shirt of triangular orange stones outlined in black, accompanied by “Butt Fur” brand fleece shorts with tiger stripes. (I opted for black trousers.) For the 25th, each class chooses a distinctive sports coat to be worn at all future reunions. Ours is a harlequin print with diamonds in orange, black, white and gray. The lining lists classmates’ names.
The crazy clothes identify classmates and give the campus a festive air. The novelist Anne Rivers Siddons, who attended her husband’s 25th reunion in 1973, described the atmosphere they create as “absurdly like Disney World, moved lock, stock and barrel into the Cathedral of Notre Dame. Perhaps even more absurdly, it didn’t look absurd.” One of my classmates, who wore his reunion jacket to travel, reports that his husband didn’t even notice the oddity. He’d been conditioned by decades of reunions to see it as perfectly normal.
The ritual climax of reunions is the P-rade, held on Saturday afternoon. Beginning with the “Old Guard” who’ve passed their 65th reunions and ending with the new graduating class, classes line up along the route, joining the line as their immediate predecessor passes. Many of the Old Guard ride in golf carts. Retired psychiatrist Joe Schein ’37 typically walks, leaning on the engraved cane honoring the oldest returning alum. High school marching bands provide music, live tigers have been known to show up, and some classes feature vintage cars.
But the real attraction is the embodied history. “We saw 1865 march,” read a sign I saw at one reunion. (I don’t actually remember the precise year, but it was definitely well back in the 19th century.) At my early reunions, classes from at least as long ago as 1909 were represented. The sight of the Old Guard inevitably moves the attending crowd.
“As they turned the corner onto Prospect Street,” wrote Siddons, “the crowd rose spontaneously to its feet, sun hats off, and all along the street they rose till the entire street was lined with standing people as the Old Guard went marching by.” Reflecting on her essay, Marilyn Marks, editor of the alumni magazine, wrote in 2016, “After 15 P-rades, the Old Guard still makes me cry.”
But without the rest of the P-rade, the Old Guard would just be a bunch of aged men. The P-rade is a powerful ritual because it represents continuity amid change. The presence of the old reflects belief in the young. Whatever their disagreements or differences, in their orange and black they are all one tribe, from the good old boys’ network to the meritocratic strivers, the rowdy drinkers to the quiet nerds.
Princeton’s rituals enact the conviction that the Princeton of today descends from the Princeton of yesterday, that the many eras of Princeton belong to one another, and that, whatever their differences and flaws, they are all beloved and good. We are here because they were here first, and they take pride and pleasure in their successors. It’s the kind of myth easily discarded yet desperately needed in our divided culture.
My husband and I planned to be at Princeton this weekend, celebrating our 40th. Our harlequin sports coats are still spread out on the guest room bed, waiting to be packed. Life, in the form of a sudden injury, intervened. And maybe it’s just as well. We’re currently out of sorts with our alma mater, concerned that it has sacrificed its previously outstanding support of free speech to the political climate of the moment. We didn’t contribute to annual giving for that reason, despite the pressures of a major reunion campaign.
But we do still love the place. We will go back, again and again. And if we are lucky, we may someday join the Old Guard.
Before World War II, studio-era photographers like George Hurrell employed large-format cameras, dramatic lighting and heavy retouching to turn their subjects into otherworldly ideals. Stars were portrayed not as down-to-earth pals but as screen gods and goddesses: languid seductress Marlene Dietrich in a white tuxedo suit; blonde bombshell Jean Harlow with a fur slipping off her bare shoulders; Joan Crawford, her face emerging from deep shadows, gazing downward through impossibly long eyelashes.
The goal was glamour — a word that originally meant a literal magic spell — without pretense to realism. “The movie itself was only a passing story,” writes Hollywood historian Tom Zimmerman, “while the great studio portraits were romanticized ideals caught frozen in time: lasting objects of perfection to hold in your hands.”
After the war, the cameras, screens and stars got smaller. Using 35-millimeter film and minimal retouching, celebrity photographers portrayed their subjects as not that different from their fans, just richer and better looking. Photos depicted celebrities overseeing barbecues, helping kids with their homework or taking their families on Disneyland rides. It was image-making for the television age: friendly, cozy and domesticated.
This was the environment that Galella worked both within and against. Scorning staged photo ops, he stalked famous people to capture the candid moments he called “no appointment” portraits. “Expressions on the human face are much more infinite when the person is caught unawares,” Galella maintained. Instead of looking through the viewfinder, he held his pre-focused camera at chest level so he could make eye contact.
He photographed Woody Allen and Diane Keaton striding along a New York sidewalk, the Duke and Duchess of Windsor laughing at a gallery opening, and Sharon Tate adjusting her shoe in the back seat of a car. He poked through a hedge to catch Doris Day in her bikini. In his most famous photo, “Windblown Jackie,” his favorite subject turns toward the camera with a Mona Lisa smile veiled by her blowing hair. Galella was more interested in beauty than scandal, and his published shots were usually flattering.
But he was relentless, obnoxious and annoying. Marlon Brando punched him in the face, knocking out five teeth. Jackie took him to court, winning an order that he stay at least 25 feet away from her. “He didn’t see anything wrong with pursuing somebody, hounding somebody, not being respectful of a person’s privacy,” gallery owner Etheleen Staley, who showed his work, told Town and Country in 2020. “It just didn’t go into his head that you shouldn’t do that.”
For a brief period in the early 2000s, this heedless ethic dominated celebrity photography. Digital cameras and online gossip sites combined to produce a frenzy of invasive paparazzi activity, much of it by ambitious upstarts. In a 2008 Atlantic article about the photographers chasing Britney Spears, David Samuels pronounced paparazzi “one of the most powerful and lucrative forces driving the American news-gathering industry.”
Within a few years, their power had dissipated.
Photography and photo sharing turned out to be the smartphone’s killer app, ushering in a new era. “I think that communicating via images is one of these mediums that you’re going to see take off over the next few years because of a fundamental shift in the enabling technology,” Instagram founder Kevin Systrom said in 2010, the year his app launched. To say he was right is an understatement. Propelled by smartphone cameras and social media sharing, the number of photos taken worldwide soared. In 2015, it crossed the 1 trillion mark, and the sharp upward trend continues, with next year’s total projected to top 1.6 trillion.
Savvy celebrities quickly realized that what Systrom called a “life-sharing app” could change the relationship between photography and fame. By curating their own Instagram feeds, stars could offer fans previously unheard-of access to their private lives while simultaneously gaining greater control over their public images.
They began “inviting fans into their homes through the app, showing us their closets, bedrooms and how they were getting ready for a glamorous event hours before they were photographed on the red carpet or by the paparazzi,” writes ET Online’s Desiree Murphy, looking back on Instagram’s first decade. “No longer did we have to rely on the tabloids or the internet to get our celebrity fix; the stars were showing us themselves.” However intimate the photographs might appear, the celebrities are in charge. They, not photographers, editors, studio bosses or record labels, decide what we see.
Paparazzi are still part of the business, but their shots now tend to be “mediated candids” rather than ambushes. A star or publicist tells a trusted photographer where the subject will show up suitably styled, and the celebrity gives the photographer the eye contact or smile that will make the photo pop. Like an Instagram post, the result is calculated candor.
Two enduring facts inform these different eras in celebrity photography. The first is that fans crave images of familiar strangers. (Before photography, prints served the same purpose.) Photos provide connection, inspiration and validation. Whether fans identify with the stars, long to be like them, admire their talent, looks, or lifestyle — or enjoy pronouncing judgment on their bad behavior — the camera offers access to otherwise distant lives. They feel immediate and real.
But they aren’t. A photograph is always art, not life. “Ron would shoot a whole roll maybe to get one frame that he really liked and he could sell,” a friend told Town and Country. The public doesn’t get to see every shot — all the more so in the digital age, when there are no contact sheets to preserve.
And a photo always leaves things out. Even the most candid image captures only a single instant in a limited frame. “Windblown Jackie” is a perfect moment. Galella heightened its allure by cropping away a distracting pole, an expanse of sidewalk and the bottom of Jackie’s rumpled jeans. The editing was critical to the photo’s composition and effect. On the street, we might not notice the pole or the wrinkles. We perceive a still image differently from the way we see a living, moving person. Its flaws are more apparent. Hence the wisdom of Andy Warhol, a frequent and willing subject of Galella’s lens: “Always omit the blemishes — they’re not part of the good picture you want.”
Social media turns everyone into a celebrity, curating images for audiences of judgmental viewers, including ourselves. Food, home decor, travel destinations and human faces are deemed “Instagram worthy” or not. But you can’t live in an unchanging, two-dimensional space. Photos are not experiences, merely souvenirs.
For scientists and adventurers, it was an exciting time. But not everyone had the leisure, resources, or inclination to conduct experiments or invest in far-flung trading enterprises. Did the spirit of the age also fire the imagination of a London shopkeeper or a provincial clergyman?
A pastel portrait sold at auction in 2020 suggests it did. Dating to the 18th century, it shows a man clad in modest black, with a white shirt and shoulder-length powdered hair. He is perhaps a clergyman, middle class rather than aristocratic, likely a villager rather than an urbanite. Held lightly in the fingers of his left hand is an object that embodies the spirit of the age: a pocket globe.
From the Renaissance onward, globes often appear in paintings of rulers, diplomats, and scientists. Think of Elizabeth I’s Armada portrait, Hans Holbein the Younger’s The Ambassadors, or Vermeer’s The Geographer. Jan Verkolje’s portrait of Leeuwenhoek includes a globe.
But the globes in traditional portraits are big. Most require stands. Although it might serve merely as an expensive status symbol, a large globe could function as a useful scientific instrument. It was a piece of capital equipment. You couldn’t balance one on your finger tips or pull it out of your pocket to illustrate a point in a coffee house conversation.
A pocket globe was a different sort of object, “new and ingenious” but not primarily functional. “Its only Use was to keep in memory the situation of Countries, and order of the Constellations and particular Stars,” acknowledged Joseph Moxon, the London printer who popularized the pocket globe. Three inches in diameter, Moxon’s pocket globe came in a case lined with a map of the heavens, making it two globes in one.
Pocket globes exemplified what Adam Smith called “trinkets of frivolous utility,” otherwise known as consumer gadgets. Their appeal lay less in their function than in their cool factor. “What pleases these lovers of toys,” wrote Smith, “is not so much the utility, as the aptness of the machines which are fitted to promote it. All their pockets are stuffed with little conveniencies.” Like nutmeg grinders, étuis (“tweezer cases” to Smith), and tiny microscopes for examining flowers, pocket globes offered ingenuity you could carry around.
And they were affordable. A Moxon pocket globe with its case sold for 15 shillings, about ten days’ labor for a building craftsman. An eight-inch globe, by contrast, cost 2 pounds (40 shillings), and double that if the buyer wanted the complementary celestial globe. Moxon’s innovation brought geographical knowledge within reach of the aspiring class.
Owning a globe, even a miniature one, signaled an expansive view of the world. “Interest in other countries connected to a larger, more general engagement with natural philosophy to show refinement, learning, and politeness—in short, geography enabled one to be considered a respectable member of a commercial society,” writes historian Katherine Parker. As geographical knowledge changed, globe makers—and owners—had to keep up.
A Moxon pocket globe from around 1675, sold at auction last year, traces the 16th-century voyages of Sir Francis Drake and Thomas Cavendish, sources of English pride. It shows the western coast of Australia but suggests a much larger continent extending vaguely to the east. California appears as an island. A century later, a pocket globe with a cartouche proclaiming “A Correct Globe with the New Discoveries” incorporates Cook’s voyages to fill in Australia and New Zealand and adds some South Pacific islands. California has become a peninsula.
Although neither as expensive nor as flamboyantly crafted as a silver étui, a pocket globe was also a delightful piece of decorative art that required considerable skill to produce. The printer first had to design and engrave plates to create flat gores that could later be assembled over a sphere. The plates were the critical intellectual property, which could be sold or bequeathed to form the basis of a new business.
To construct the globe, the printer coated a wooden or copper mold with papier mâché, then sliced the resulting sphere in half at the equator to remove it. For larger globes, this was the stage where the axle and any interior supports would be added. The two hemispheres were rejoined and covered with a smooth surface of plaster. Only then would the gores be meticulously applied to the surface, making the map again three-dimensional. The final step, requiring an especially delicate touch on a pocket globe, was to hand paint color to highlight important features.
Moxon himself represented the class of consumers for whom such affordable symbols of curiosity and worldliness held particular appeal. A printer like his father, he was a successful tradesman, without higher education. While the elder Moxon was a passionate Puritan, Joseph inclined toward science. After a short stint in the family business, he set off for Amsterdam, where he learned how to make maps and globes. He returned with a newly published handbook on globes, which he translated into English and published in 1654 as the first work from his independent enterprise. A few years later, he wrote his own handbook. It proved a hit, continuing to sell in new editions through the end of the 17th century.
Unlike his father’s religious publishing business, Joseph’s enterprise focused on technical works: books of mathematical tables, handbooks on geography and astronomy, guides to architecture, and, of course, maps and globes. He even sold astronomical playing cards, promoted as “Very Useful, Pleasant, and Delightful for all Lovers of INGENIETY.” Robert Hooke, the pathbreaking polymath who served as the Royal Society’s curator of experiments, bought a set.
Moxon traveled in scientific circles. In 1662, with the backing of prominent mathematicians, he was appointed Royal Hydrographer, charged with making globes, maps, and sea charts. To further his observations, the Royal Society lent him a telescope. He hung out with Hooke, reading him drafts of his manuscripts and visiting his house to drink claret and watch unsuccessfully for the 1677 comet. He befriended the schoolboy Edmund Halley. In 1677, single-handedly and nearly a century before the famed Encyclopédie, Moxon began writing and publishing a serialized work called Mechanick exercises, or, The doctrine of handy-works. In it, he documented the techniques of major crafts, beginning with metalworking. In the preface, he defended the worthiness of its subject matter:
The Lord Bacon, in his Natural History, reckons that Philosophy would be improv'd, by having the Secrets of all Trades lye open; not only because much Experimental Philosophy, is Coutcht amongst them; but also that the Trades themselves might, by a Philosopher, be improv’d. Besides, I find, that one Trade may borrow many Eminent Helps in Work of another Trade.
In 1678, at the relatively advanced age of 51, Moxon was elected a fellow of the Royal Society, despite his status as a tradesman (to which the members who cast the four dissenting votes probably objected). He exemplified what economic historian Joel Mokyr calls the Industrial Enlightenment, in which scientific theorists and practical craftsmen informed one another’s understanding and made both kinds of knowledge more widely accessible, leading to greater inventiveness and economic growth.
Like their maker, Moxon’s pocket globes embodied that cooperation. They combined art and science in an innovation that appealed to a new market. Well into the 19th century, globe-makers carried on his legacy of little worlds “made portable for the pocket.”
As charming objects and reminders of an extraordinary scientific age, pocket globes maintain their allure to this day. In 2021, 330 years after Moxon’s death, a collection of 14 pocket globes went on display at Bonhams auction house in London. A dozen dated to the 18th century, a couple to the early 19th. The star of the show was one of Moxon’s. Once priced at 15 shillings, it sold for £187,750.
Until they didn’t. A decade later, polyester was the faux pas fiber. It pilled and snagged. It didn’t breathe. It stank from sweat. And it represented bad taste. ‘It became associated with people of low socioeconomic status who didn’t have any style’, an advertising executive told the Wall Street Journal in 1982.
That year, prices fell by more than 10 percent, as polyester fiber consumption dropped to its lowest level since 1974. Profits plummeted. Plants closed. Industry polls showed a quarter of Americans wouldn’t touch the stuff – with resistance fiercest among the young, the affluent, and the fashion-conscious. For polyester makers, the miracle threatened to become a disaster.
The industry tried to turn things around with marketing efforts. In the U.S., fiber makers pooled $1 million for a pro-polyester ad campaign – even as individual companies hid the p-word behind brand names like Dacron, Fortrel, and Trevira. When publicists pitched stories to papers like the Journal, the resulting articles inevitably included as many humorous gibes as designer names endorsing the synthetic’s value. Reporters always balanced the positives with quotes from haters. ‘We don’t use anything that isn’t natural’, a Ralph Lauren spokeswoman sniffed to the Associated Press. Marketing didn’t redeem polyester.
But something did.
Four decades later, polyester rules the textile world. It accounts for more than half of global fiber consumption, about twice that of second-place cotton. Output stands at nearly 58 million tons a year, more than 10 times what it was in the early ’80s. And nobody complains about polyester’s look and feel. If there’s a problem today, it’s that people like polyester too much. It’s everywhere, even at the bottom of the ocean.
By 1982 an innovation revolution was already underway that would change how consumers thought about polyester and how companies produced it. But neither journalists nor marketers noticed. They were still imagining synthetic fibers the old-fashioned way: as something chemists cooked up and marketers found a use for. That model wasn’t much different from the way wool or cotton had worked. The fiber existed and people figured out things to do with it. The technical challenges were equally ancient. How do you lower costs and speed up production? How do you keep fabrics colorful, clean, and in good repair? You pleased consumers by holding down prices, minimizing domestic labor, and staying abreast of fashion.
The new model turned the questions around. It started with a problem and asked textile makers to solve it. The problem wouldn’t be about the cloth but about the wearer’s body. The fabric had to be more than color-fast, clean, or cheap. It had to keep the user cool or warm or dry, undistracted by physical discomfort and the energy toll of weight. The imagined customer wasn’t a housewife tired of laundry or a fashionista looking for the next big thing. It was a skier, a jogger, or a basketball player. Polyester triumphed by becoming a performance textile. ‘It moved from being disco to sporty’, says Amanda Briggs, a designer and trend consultant who spent three decades at Nike. By answering the demands of outdoor enthusiasts and athletes, polyester developed attributes that pleased just about everyone.
Once again, the fiber rode cultural trends. By the 1980s, large numbers of baby boomers had started running, working out in gyms, climbing mountains, and hiking rugged trails. ‘Polyester gained acceptance because of its ideal application to performance textiles, when people were beginning to recognize that, if you’re going to climb a mountain, you don’t wear a wool jacket. Or if you’re going to run a marathon, you don’t wear a cotton T-shirt’, says David Parkes, whose New Jersey company, Concept III, sources materials for the outdoor industry.
As Parkes tells the story, the polyester revolution started with a failure. Around 1981, he was working in product development for a Massachusetts company called Malden Mills, which was big in the faux-fur business. It also made cushy baby bunting and sweatshirt fleece. Malden created its pile textiles by first knitting a fabric with loops like those on a terry-cloth towel. It then brushed the surface to break the loops, making the material fuzzy – a process called napping.
Foreseeing a fashion trend, Parkes asked the production team to apply the same process to make an imitation mohair-alpaca for women’s coats. After some experiments, they came up with a polyester fabric that had the right look but wasn’t quite good enough to bring to market. Malden Mills moved on to other products.
Within a few years, the failed experiment would evolve into a company- and industry-defining hit.
How exactly that happened depends on who you ask. Memories fade over 40 years and everyone, whether company or individual, is the hero of their own story. But Mary Ellen Smith’s name comes up repeatedly in other people’s accounts of polyester’s evolution. ‘She’s a treasure’, says one industry veteran. Smith joined the fledgling outdoor-apparel maker Patagonia around 1983. She set up its textile-testing lab, then established a materials research program. Unusual at the time for an apparel maker, the in-house departments heralded the new approach to textile innovation.
Smith remembers things this way: Patagonia founder Yvon Chouinard issued his first materials challenge to Smith by handing her a piece of ‘ugly, ugly’ brown fabric. The hideous cloth was what’s known as a sliver knit, in which untwisted fibers are pulled up through a knitted base. Sliver knits are warm but they shed like crazy, dripping what Patagonia employees irreverently dubbed ‘snot balls’. In this case, someone had brushed both sides of the fabric so that the fibers stood upright, enhancing the material’s heat-holding abilities but also its tendency to shed. Smith’s assignment was to find a manufacturer to make something similar.
She traveled from California to Malden Mills and told them what she was looking for. ‘This is what we want’, she recalls saying as she took out the sample. ‘But it can’t pill and it needs to be a denser construction’ to block the wind. After several iterations, the mill came through with a warm, lightweight double-velour fleece. Unbeknownst to Smith, it was derived from the failed fake alpaca. Made from polyester, the new material could keep mountaineers warm for long periods, and it dried quickly. Unlike wool or cotton, polyester resists rather than absorbs water. The fabric was also soft to the touch and easy to dye bright colors. ‘That was what I loved about materials research,’ says Smith. ‘You blend science and aesthetics together’.
Naming the fleece Synchilla, Patagonia enjoyed an exclusive license for the first two years. After that, Malden Mills sold the material to others in the outdoor apparel industry, branding it PolarFleece, a term that eventually became generic. It was amazing stuff – incredibly light and warm, with a great feel and a reasonable price. Fleece was soon everywhere. It made Patagonia’s name and popularized performance apparel beyond hardcore enthusiasts, largely replacing wool for winter wear. ‘Polyester really did emerge with polar fleece’, says Parkes. Around the same time, cheaper, more compact insulation made from crimped polyester was supplanting the duck and goose down that absorbed water and made coat wearers look like the Michelin Man.
But it took more than outerwear to make polyester ubiquitous, let alone admired. Its real triumph began when the fiber conquered the critical territory next to the skin. To do that, polyester had to adapt.
A polyester textile is the same PET material (polyethylene terephthalate) as a plastic soda bottle, only extruded into a filament rather than molded into a container. Like the bottle, the fiber repels water. It’s hydrophobic. That’s a nice quality in a fleece jacket but a sweat-trapping horror against the skin. To reach its performance potential, polyester needed not simply to keep out moisture but to move it.
‘The body is really fussy. It doesn’t like hot, humid conditions right at skin level. Move that humidity a millimeter away and it’s a whole different ballgame’, explains Randy Harward, who spent more than 40 years developing products and materials in the outdoor apparel industry. Once off the skin, moisture becomes a valuable buffer against wind and chill. In hot weather, it can evaporate and keep you cool. The trick is getting it to that sweet spot.
In the early 1980s, hikers, campers, and mountaineers had two main options for what the outdoor industry calls base layers. They could wear cotton (or sometimes silk) garments with a waffle-knit construction that lifted the fabric away from the skin, ameliorating the discomfort when they got soggy. Or they could buy synthetic underwear made of polypropylene, which was quick to dry – essential in cold environments – but absorbed oil and body odors. After a long expedition, polypropylene garments were disgusting. And they tended to melt in the dryer.
For Patagonia, Smith wanted a polyester alternative. To find it, she went to South Carolina, presenting the problem to scientists at Milliken & Co., a textile firm renowned for its research lab. She said she was looking for a version of polyester that would ‘move moisture but absorb nothing’. Researchers spent months attacking the problem, eventually developing a chemical treatment that made moisture move along the fiber’s surface.
With that technology in hand, Patagonia developed a line of base layers that Smith dubbed Capilene to suggest capillary action. In fall 1985, the same season Synchilla hit the market, Capilene completely replaced the company’s polypropylene underwear. ‘Those two innovations – base layer and fleece – completely changed the world’s opinion of polyester, not just the outdoor industry’, says Harward. ‘It became seen as the high-end performance comfort fiber.’ Over time, polyester’s success as a performance fiber allowed it to reclaim its fashion luster.
‘People began saying, “Oh, we can apply that to Haggar slacks and to shirts and athletic wear”’, he says. Performance-driven innovations made polyester better for everyday wear: softer, more comfortable, more durable, less likely to hold odor, and less obviously synthetic.
In 1986, DuPont, the original maker of polyester in the U.S., introduced its own moisture-wicking fiber, employing a strategy that would become increasingly important over the coming decades. It changed the shape of the filament. Instead of pushing viscous polyester through round holes to extrude cylindrical strands, it used cross-shaped holes to create filaments with channels. It called the resulting fiber Coolmax, promoting it for athletic attire.
From then on, ‘the world understood that, working on the shape of the filaments, it’s possible to achieve very high performance’, says Giovanni Pingani, owner of VB Soluzioni & Tecnologie, a textile-machinery maker near Milan. ‘You can make the fiber with three lobes, with five lobes, with eight lobes, with a hole inside each filament. Each of these fibers can allow to make the fabric lighter, or to allow the sweat to move away.’ His company creates the specialized spinnerets that give the filament its shape. Polyester cross-sections can look like stars or kidneys. They can mimic the ruffles of lotus leaves or flatten into ribbons. The malleable material can adopt just about any shape.
It can also get very, very thin. Textile fibers, including polyester filaments, are measured in denier (pronounced den-e-er in the U.K. and den-yer in the U.S.). The higher the denier, the coarser the fiber. Silk comes in around 1 denier per filament (about 10 microns, or a hundredth of a millimeter), while human hair is at least 20. The standard polyester used in early performance fabrics like polar fleece was a 2 or 3.
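Denier is a linear density: the mass in grams of 9,000 meters of fiber. That means the diameter implied by a given denier depends on the material’s density, which is why the figures above are approximate. A minimal sketch of the conversion (the function name and the handbook density values for PET and silk are illustrative assumptions, not from the text):

```python
import math

def filament_diameter_um(denier: float, density_g_cm3: float) -> float:
    """Approximate diameter in microns of a solid, round filament.

    Denier is the mass in grams of 9,000 meters of fiber, so the
    linear density is denier / 9e5 grams per centimeter. Dividing by
    the material's density gives the cross-sectional area, from which
    we recover the diameter of an equivalent circular cross-section.
    """
    area_cm2 = (denier / 9e5) / density_g_cm3
    diameter_cm = 2 * math.sqrt(area_cm2 / math.pi)
    return diameter_cm * 1e4  # centimeters -> microns

# Approximate densities in g/cm^3: silk ~1.3, PET ~1.38.
print(filament_diameter_um(1.0, 1.3))   # 1-denier silk comes out near 10 microns
print(filament_diameter_um(2.5, 1.38))  # early performance-fabric polyester
```

Running this gives roughly 10 microns for 1-denier silk, matching the hundredth-of-a-millimeter figure above, and about 16 microns for the 2-to-3-denier polyester of early fleece.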
But back in 1970, Miyoshi Okamoto, a scientist at Toray Industries in Japan, invented a way to produce polyester fibers that were finer than silk. Toray introduced the new microfibers in a smooth, luxurious fabric called Ultrasuede, which designer Halston made famous in a 1972 shirtwaist dress. Lighter than regular suede, it draped nicely and was machine washable. After its early fashion fame, Ultrasuede became a mainstay of upholstery and auto interiors. Toray popularized the term ‘microfiber’ among consumers when it introduced a cleaning cloth in 1987. Most people who bought it had no idea that the high-tech fibers were good old polyester.
The trick to making microfibers is a process known as ‘islands-in-the-sea’. Polyester and another polymer with a different viscosity go through the spinneret together. The polyester is carefully metered out so that it forms many separate strands – the islands – surrounded by the other polymer, the sea. Together they make up a single extruded filament, typically about one to three denier. ‘So each island can be very, very tiny’, says Arnold Wilkie, president of Hills, Inc., a Florida company that specializes in making the equipment. The sea is dissolved away, leaving the polyester microfibers. Although the process started with pretty nasty solvents, it now uses polymers designed to be washed away with benign chemicals, in some cases water, and then reused.
To create the spin packs that channel the polymers into place, Hills uses photochemical etching similar to making printed circuit boards. For technical applications like filters, its machines can make fibers smaller than a half micron. ‘We’ve made them with thousands of islands’, says Wilkie. But textiles aren’t as demanding. For fabrics, machines can turn out islands in the sea at about the same rate as pure polyester. ‘So we can make these fancy fibers and they’re relatively inexpensive – not commodity yet, but relatively inexpensive’, says Wilkie.
Whether silk, cashmere, or polyester, fine fibers make cloth soft and supple. Combined with polyester’s hydrophobic properties, the greater surface area created by microfibers helps keep the air just above the skin at a comfortable temperature. In cold weather, microfibers trap warm air from the body. One of their early uses was in a down substitute called Thinsulate, which 3M introduced in 1978. Microfibers can also be packed into a tighter knit to block wind. When it’s hot, on the other hand, they channel sweat and encourage evaporative cooling.
In 1989, Nike decided to get serious about apparel, replacing logo-plastered generic shirts and hoodies with apparel designed to let athletes perform without worrying about their clothes. ‘We talked a lot about zero distractions’, says John Notar, who headed the company’s apparel design and development. Materials, he believed, would be critical. So Nike set out to recruit Mary Ellen Smith. ‘We really had to convince her that Nike Apparel was serious about innovation’, he recalls. ‘She did not buy it’. With a lot of persuasion and a salary offer of twice what she was making at Patagonia, Nike finally lured her to Oregon.
Smith advised her new colleagues not to rely on DuPont’s popular performance fabrics but to try for something more effective and original. The key innovation turned out to be a knitted structure called differential denier, which used two sizes of polyester fiber. The larger fibers stayed next to the skin, where they pushed sweat outward. Then microfibres picked up the moisture, distributing it along their greater surface area so it quickly evaporated. Nike named the new apparel line F.I.T., for ‘Functional Innovative Technology’, emphasizing its performance aspirations.
Debuting in 1991, it quickly took off. Although the line included several specialized versions, the basic sweat-wicking apparel called Dri-Fit – Nike eventually dropped the periods – accounted for 80 percent of sales. It displaced cotton in hot weather sports like golf and tennis.
‘I’ll never forget seeing tennis star Andre Agassi rocking a royal blue long-sleeve zip polo at the US Open during a night match’, writes Drew Hammell, a New Jersey–based athletic-wear enthusiast. ‘I couldn’t believe it – it was 80 degrees and he was wearing long sleeves? What was he thinking? But that was the point. Dri-F.I.T. fabric was moisture-wicking, unlike your standard cotton tee shirts. It worked so well, you could stay cool like Andre on a hot, humid night in New York. I was hooked’.
For fans like Hammell who grew up with performance fabrics in the 1990s, athletic wear was cool everyday fashion. It was comfortable and your heroes wore it. You might not even know it was made of polyester and, if you did, you didn’t mind. The synthetic regained the public’s affection. Once mammoth Chinese factories started churning out the stuff in the 2000s, pushing prices way down, polyester spread everywhere. In 2002, it surpassed cotton in global sales.
Of course, saying anything positive about polyester immediately triggers pushback. For many people, including many whose closets are full of polyester garments, the polymer is the worst form of planetary pollution since oil spills. Some of that attitude is cultural snobbery, a marker of class allegiance not that different from Ralph Lauren eschewing synthetics in the early ’80s. But other environmental concerns are practical problems like keeping microplastic particles out of marine life and reducing greenhouse gas emissions. Outdoor enthusiasts in particular want to know that their purchases aren’t hurting the planet, and brands have to heed their customers.
In response, polyester innovators are working to solve the old performance problems with a new constraint: keeping environmental impacts to a minimum. About 15 percent of polyester fiber now comes from recycled rather than virgin material, and the proportion is rising. Reusing polyester reduces greenhouse emissions and makes the textile less dependent on new petroleum production.
Many who work with polyester see it as a potential resource, almost infinitely recyclable. They cringe at the thought of potentially valuable material going down the drain or into the landfill. To trap fibers coming off in the wash, inventors are working on laundry filtration systems, ideally made of PET and recyclable along with the polyester they catch.
But most recycled polyester today comes from bottles, not textiles, because PET containers are far easier to collect and the material doesn’t have other substances mixed in. A 100 percent polyester shirt can be chopped up, melted, and turned into new fibers. For a cotton-poly blend, recycling is cumbersome and likely not economical.
‘And if there is spandex inside, forget it’, says Pingani, whose company’s wares include systems for recycling polyester. Spandex, or elastane, ‘is the enemy for the recycling’. But customers love the stuff. If you want to keep polyester out of the landfill, you have to find ways to make 100 percent polyester clothing stretch and pop back into shape the way it does with a bit of spandex. Researchers are working on the problem. The worst possible approach is to ban or penalize 100 percent polyester in favor of blends.
On the microplastic front, the good news is that by its nature, polyester doesn’t shed much. It comes out of the spinneret as a continuous filament that’s hard to break. But it’s often chopped up and spun like (and often with) cotton, destroying that advantage. Eco-conscious designers are turning to yarn made from continuous filaments and reworking fabric structures to prevent shedding.
Some apparel brands are doing away with microfibers altogether. ‘There’s a zillion ways to make polyester soft’, maintains Harward. ‘And the cheap, easy way of doing it is with microfiber’. He hates the stuff and demands that designers working for him find alternative solutions. The constraint, he believes, leads to new constructions that perform better. ‘Constraints are great. Really they are’, he says. Polyester’s environmental challenges, he insists, are solvable. They just require creativity and effort.
Despite its detractors, polyester is unlikely to disappear, barring a major technological breakthrough like commercially viable protein polymers from bioengineered yeast. It’s simply too valuable. ‘There is no other fiber that has the same flexibility, the same potential, today in the market’, says Pingani. ‘You can do everything with polyester. You can imitate any other kind of fiber or filament’.
Polyester makes it possible to clothe a world population of nearly 8 billion people at a much lower toll on land and water than cotton or wool would exact. And it’s practically free – an important factor in places like India, where the per capita income is less than $2,000 a year. ‘We have also to remember’, says Pingani, ‘that sustainability means that we should allow the poor people to get to buy a shirt without spending a fortune’.
We aren’t going back to a world without polyester. The challenge is to find the best ways to go forward.
Here, Bloomberg Opinion columnists Virginia Postrel and Adam Minter discuss these implications. Both are longtime observers of the apparel trade, with Postrel focusing on the history of textiles and Minter on the secondhand trade.
Adam Minter: In the fast fashion world, a brand like Zara might ask manufacturers to turn around an order for 2,000 pieces in 30 days. To keep up with fast-changing demand created by TikTok and Instagram, Shein developed software that enables it to be profitable asking manufacturers for as few as 100 pieces in 10 days. When you look back at the history of textile technologies, where does this kind of technology fit in? Is it really something new?
Virginia Postrel: Cheaper materials and faster fashion cycles go back at least as far as “new draperies” that replaced heavy broadcloths and velvet brocades beginning in the late 15th century. These wools and silks drove a huge expansion of European commerce and funded the Renaissance, but they had their detractors. A Venetian ambassador complained in 1546 that these fabrics “cost little and last less.”
In the 19th century, the big disruptor was the department store, as chronicled in Émile Zola's novel “The Ladies’ Paradise.” Department stores used buying power to offer fabrics and ready-to-wear items in an unprecedented combination of abundance and affordability. Behind this organizational innovation were textile technologies including power looms, synthetic dyes and sewing machines — not to mention railroads and steamships.
But, as you suggest, Shein has done something altogether new. Its model isn't based on mass production but on smaller batches, still made the old-fashioned way in low-wage cut-and-sew operations. But it seems to have achieved the Holy Grail of apparel makers — consistently finding fashion trends before they're widely recognized. Thanks to social media, this doesn't require sending cool hunters out to the streets or buyers to runway shows. Shein does it with software.
What interests me about their model is how directly it challenges the conventional wisdom that young consumers care intensely about sustainability. Environmental concerns drive a huge amount of product development in today's textile industry and account for an overwhelming amount of fashion and apparel press coverage. But Shein’s instantly disposable fashion suggests that actual apparel consumers couldn't care less. What's your take?
AM: Apparel industry surveys are quite clear that young consumers care about sustainability. But I think they care even more about value.
For example, Patagonia has a very public commitment to sustainable sourcing, but it comes at a price: It sells a simple cotton T-shirt for $39. Shein sells T-shirts for as little as $6. That's one reason Shein sells $10 billion in product per year, and Patagonia, a niche retailer with a decades-long history, does around $1 billion.
Patagonia isn't the only way to shop sustainably. There's also thrifting. But a 16-year-old doesn't forsake a commitment to value when she shops at Goodwill. She's likely there in search of it. Goodwill executives tell me their stiffest competition comes from low-cost retailers of new apparel like Ross Stores Inc. and Walmart Inc. So at Goodwill, at least, they operate on the assumption that a consumer deciding between a $6 new T-shirt and a $3.99 used one will opt for the new one. Shein, with its up-to-the-minute fashion, will almost always win that battle.
Where does that leave sustainable fashion? This morning I found Shein on the racks at a suburban Minneapolis Goodwill. Most was in excellent condition, likely worn only once or twice, and then replaced. In that sense, it's “disposable fast fashion.”
Yet whoever donated Shein felt compelled to avoid the trash, so perhaps the sustainable fashion journalism and marketing had an impact. Will that consumer ever take the next step and avoid buying fast fashion altogether? Not until the price gap between sustainable fashion and Shein narrows considerably.
Nonetheless, as we both know, the history of apparel is filled with unexpected cost-saving innovations. And with Shein still reliant on traditional cut-and-sew operations, I wonder if it's vulnerable to some of the innovations you've explored elsewhere, from AI to recycled fibers.
VP: In other words, young consumers care about sustainability as long as they don't have to pay for it. It's cheap talk. So any environmental advances will come from production cost savings, niche markets like Patagonia's or government regulations, which would likely raise prices whether consumers like it or not.
As you suggest, the industry's pursuit of cost savings is driving some innovations that could be twofers, giving consumers more of the small-batch production Shein has shown they like while reducing resource use. My favorite, because it's already happening, is the development of increasingly sophisticated computer models for textile and fashion design.
We're starting to see simulations that start with specific yarns to show how a textile will behave once it's made into a garment. These systems allow apparel makers to eliminate samples that would otherwise consume materials and require shipping. There are some software compatibility issues, but it's also possible to go from the computer screen to seamless 3D knitting, whether for garments or sneakers. That works well for small batch or on-demand production.
The big barriers to major fiber recycling efforts are collection and mixed-fiber garments. Spandex is a particular problem, and it's everywhere. Stretchable polyester, which people are working on, would go a long way to making fiber recycling easier. Polyester, which accounts for more than half of global textile fiber, lends itself to a circular approach if — and it's a big if — you can collect and separate it at a low enough cost.
On another subject, I wondered whether Shein's business might be hit hard by container shipping delays, especially with Shanghai's port seriously restricted because of the Covid outbreak. But it turns out Shein’s shipments rely on postal services, both in China and customer destinations.
It reminds me of visiting Amazon in the early days, when their warehouse and headquarters were located next to a major postal depot and their book sales were big in Guam because postage everywhere in the U.S. cost the same. And, of course, subsidized postal deliveries, especially in rural areas, were critical to the growth of mail-order houses in the 19th and early 20th centuries.
You see complaints from brick-and-mortar retailers, especially small-town general stores, in turn-of-the-century issues of trade magazines like Dry Goods Reporter. Any thoughts on this angle?
AM: One of the more overlooked achievements of the Donald Trump administration was an agreement with the Universal Postal Union allowing the U.S. to boost international shipping fees, beginning in 2021. The goal was to level the playing field — or, at least, stop subsidizing Chinese e-commerce logistics to the U.S. As a result, the costs of exporting small parcels from China to the U.S. are expected to increase 164% between 2020 and 2025.
Shein's other costs are low enough that it can manage that 164% just fine. But I don't think they can be too comfortable, either. The pressure for the U.S. and Chinese economies to decouple is intensifying.
A future presidential administration might look at the continued success of Shein and other Chinese e-commerce players, and decide that postal rates — and other measures — still aren't high enough. A big enough hike could decelerate at least some of the speed associated with the newest fast-fashion champion.