CHM Live Archives - CHM
https://computerhistory.org/blog/category/chm-live/

Pixar’s True Story
https://computerhistory.org/blog/pixars-true-story/
Fri, 05 Dec 2025

The true story of Pixar’s IPO and the Silicon Valley investment bankers who took a chance on Steve Jobs’ passion project.

In a world where we’ve gotten more cynical about technology, there’s something pure about Pixar that people trust, says former CFO Lawrence Levy. With 29 films over 30 years, the company has never compromised in striving to entertain families in a wholesome way. But, in the early days, Pixar almost didn’t make it.

On stage for CHM Live on November 20, 2025, insiders told the behind-the-scenes story of how Silicon Valley investment bankers rallied around the struggling company next door. They wrangled founder Steve Jobs and manufactured an improbable IPO that rescued Pixar and delivered the first feature-length, computer-animated film—the beloved Toy Story. The program was made possible by the generous support of J.P. Morgan.

Moderator Paul Noglows, formerly of Hambrecht & Quist, is cowriting a book with JP Mark, formerly of Robertson Stephens, on the two companies, which, along with Cowen, were the investment banks behind the Pixar IPO (initial public offering). He opened the discussion by asking Levy what it was like at Pixar in the spring of 1995, less than a year before the IPO.

The Setting

Levy had arrived at Pixar in late 1994 and quickly realized the company was doomed. It was facing three major challenges. The first was Steve Jobs, who was at a low point in his career and as difficult as ever. The second was that Pixar had no business, profits, or money. Despite the company’s groundbreaking RenderMan graphics software, Jobs was covering payroll with personal checks. The third was that the company had signed a crippling contract with Disney.

Lawrence Levy unpacks Pixar’s contract with Disney.

Although Pixar was in dire straits, Steve Jobs had aspirations for it to go public, and he wanted Morgan Stanley and Goldman Sachs to underwrite the IPO. But the investment banking behemoths immediately saw that the company did not have “up and to the right” growth potential and declined to underwrite the offering.

So, with Jobs’ begrudging agreement, Levy took the deal to his “local heroes at Robertson Stephens.” Former President and CEO Michael McCaffery remembered that it was hard to figure out who on staff could check out a company that wasn’t like anything on their typical list of semiconductors, software, computing systems, and communications. They, too, realized the numbers weren’t there, but that didn’t scare them. And when they saw what Pixar was doing, they were excited.

Cristina Morgan, the head of technology investment banking at Hambrecht & Quist at the time, also went down to see Pixar. H&Q CEO Dan Case, a board member of Steve Jobs’ NeXT, had told Jobs that H&Q would play any role he wanted in an IPO. Like the Robertson Stephens bankers, Morgan was impressed with what she saw at the Pixar studio.

Cristina Morgan describes her first visit to Pixar.

The bankers knew they were taking a risk with Pixar, but they believed that Pixar’s first movie, Toy Story, was worth betting on.

The Plot

With the investment banks on board, the Pixar team had to finish Toy Story, and that was a nearly impossible task from a technical standpoint. Everything in the movie was set in rooms inside a house because the computer graphics of the day could render boxes. The team didn’t know if they could even make an outdoor scene. And they had only a matter of months before the film’s scheduled Thanksgiving release to figure it out.

Then there was the challenge of deciding when the IPO should happen. If they went public after a successful movie release, they could be accused of hyping the stock. If they went public before the release and the movie was a flop, they could be accused of duping investors. And, of course, if the movie flopped, Pixar was dead.

Plot Twist

They decided to move forward with the IPO, and Steve Jobs set out on a three-week “road show” to pitch the company to potential investors. Cristina Morgan and Mike McCaffery went along. Picky about everything—from the hotels to the food and every detail in between—Jobs created plenty of difficult moments.

In New York City, potential investors were invited to a rented theater on the Upper East Side and told to bring their families to view Toy Story. To sweeten the pot, the bankers offered free candy. The events were designed to, in Mike McCaffery’s words, “create the sugar high of all time.” After New York, the road show was supposed to go on to Boston for a breakfast meeting with investors. But there was trouble.

Mike McCaffery tackles a snowstorm for Steve Jobs.

While the investment bankers knew that Pixar’s future depended on Toy Story’s opening box office success, Levy says that he and Jobs worried about hitting the stock price that had been set. No one knew if investors would pay $22 per share, and if the offering wasn’t “oversubscribed,” the IPO could be deemed a failure.

And, of course, Jobs felt that Disney was not doing enough marketing and everything they did do was terrible. He was on the phone telling a company that had been releasing movies for 50 years how it should be done. The stress was getting to everyone.

Point of No Return

Toy Story opened on Wednesday, November 22, 1995, the night before Thanksgiving. It made $29 million its opening weekend and went on to become the #1 film in the US. It was the first non-Disney animated film to become a blockbuster.

The IPO happened a week later, and shares closed at $39, up 78% from the offering price. Jobs’s 80% stake was worth over $1 billion. Everyone involved could enjoy the success. Morgan recalled the incredible talent, and the artistry of the revolutionary graphics and technology. She said that it was striking how different and compelling Toy Story was and that without the movie’s magic there would have been no IPO.

Happy Ending

Although the stock price dropped to $12 within three weeks, Pixar’s IPO had been a success as well as something of a miracle. Morgan credits the investors for their long-term vision in seeing the company’s potential. And Toy Story’s success allowed Levy to renegotiate the terrible Disney contract.

Twelve years after Levy arrived at a company with negative retained earnings of $50 million, Pixar was sold to Disney for $7.4 billion. He recalled “walks and talks” with Jobs to make decisions and appreciated that Jobs was always more interested in getting to the right answer than in being right. After Pixar, Jobs returned to Apple in a remarkable comeback story that resulted in the revolutionary iPod and iPhone.

Watch the Full Conversation

To Infinity and Beyond | CHM Live, November 20, 2025

 

SUPPORT CHM’S MISSION

Free events like these would not be possible without the generous support of people like you who care deeply about decoding technology for everyone. Please consider making a donation.


Taiwan Rising
https://computerhistory.org/blog/taiwan-rising/
Thu, 13 Nov 2025

Honghong Tinn, author of Island Tinkerers, shares the fascinating history of how hobbyists and enthusiasts in Taiwan helped transform the country through innovative and creative computer use.

The Origins of a High-Tech Industry

In college, Honghong Tinn built her own computers, using parts from electronics stores at her local shopping mall. While pursuing a PhD, she decided to research other Taiwanese “tinkerers,” uncovering how in the 1960s, ‘70s, and ‘80s they gained the skills and laid the groundwork for global tech giants like Acer, Asus, Quanta, and TSMC.

On November 4, 2025, Tinn, an assistant professor at the University of Minnesota, was on stage at CHM Live to share insights from her book Island Tinkerers: Innovation and Transformation in the Making of Taiwan’s Computing Industry. CHM Curator Hansen Hsu moderated the discussion.

Foundations

Tinn first provided a helpful summary of Taiwanese history. After World War II and the Communist takeover of China, Nationalist leader Chiang Kai-shek moved to Taiwan with 1.2 million followers. One thousand were alumni of National Chiao-Tung University, an engineering school dubbed the “MIT of the Orient.” They worked together to lobby the government to reopen the university in Taiwan, arguing that electrical engineering was critical for both the economy and the military during the Cold War. They succeeded, and the university opened in 1958, enabling a new generation of engineers.

A United Nations technical aid program allowed National Chiao-Tung University to install the first two mainframe computers in Taiwan, an IBM 650 and an IBM 1620. Technicians, visiting professors, and other computer users had the opportunity to tinker with the machines. Soon, students in Taiwan began to build minicomputers and calculators from scratch. Many of the parts were not available, said Tinn, and they had to source recycled items, work with factories to custom-make some components, or else import expensive parts. Barry Lam, the future founder of Quanta Computer, was one of those students.

Honghong Tinn describes how tinkering inspired Barry Lam’s career.

Factories

Taiwan became an important components manufacturing center in the mid-1960s, when the government encouraged multinational corporations to set up factories with tax breaks and inexpensive labor. American, European, and Japanese companies like Wang Laboratories, Philips, General Instrument, and Philco-Ford signed on. Women factory workers soldered IC chips, assembled transistor radios and black-and-white TVs, and wove copper wire into magnetic core memory units, sometimes working under a microscope.

Honghong Tinn explores the experience of women factory workers in Taiwan.

In 1972, just US$200 was enough for a tinkerer to buy a microprocessor and build a calculator, creating many entrepreneurial opportunities, and by 1978, 20% of calculators on the global market were made by Taiwanese companies. Those companies often transitioned to building computers in the 1980s. Entrepreneurs could choose to build one-of-a-kind computers and find customers, create an Apple- or IBM-compatible computer, or make a counterfeit knockoff.

Companies that built compatible machines for the export market had to make sure they weren’t infringing copyright or risk being labeled counterfeiters. Apple, in particular, aggressively pushed back against compatible computers with lawsuits claiming unfair foreign trade practices, working with US Customs and Congress to bolster its position. Tinn related how Taiwanese products and entrepreneurs were often stereotyped as counterfeiters.

Honghong Tinn unpacks counterfeiting and stereotypes.

Tinn used CHM oral histories to explore the computer company Multitech (later renamed Acer), whose founder, Stan Shih, worked with engineers to ensure that his compatible computers did not copy Apple. As a franchisee for US companies like Texas Instruments, Zilog, and Intel, it was important that he not be seen as a counterfeiter. In fact, his computers had a unique feature missing from US computers: the ability to display Chinese characters.

Unlike Apple, IBM allowed compatible computers until 1987, when it began to charge royalties for patents and licensing; each company, including Compaq and Acer, negotiated its own rates. In the 1980s, those two companies, one American and the other Taiwanese, were the first in a global race to produce IBM PC-compatible computers using Intel’s new 32-bit 386 chip. Doing so was a great technical accomplishment, and the companies also demonstrated strong manufacturing capabilities and even marketing skills.

By around 2011, Taiwan had 90% of the global market share for laptops. Desktop market share was also growing, and if components made in Taiwan were counted, the numbers would be much higher. When a huge earthquake rocked Taiwan in 1999, CNN interviewed Steve Jobs, who noted that the whole industry sourced components from Taiwan and implied that the quake could cause significant supply chain delays for computer makers.

Foundries

Tinn believes that tinkering activities produced Taiwanese entrepreneurs and skilled workers who could advance computing technologies. For example, in addition to engineers, companies developed strong quality control and equipment maintenance roles and processes. In fact, an entire ecosystem of universities, factories, startups, and hobbyists was engaged with hardware and tinkering with technology.

This entrepreneurial ecosystem was evident in the case of global giant TSMC, founded by Morris Chang, who combined governmental and non-governmental support to create a company dedicated to fabricating chips for designers in a “foundry” model.

Honghong Tinn explores the origins of TSMC.

Founded in 1987, TSMC grew along with ASML, a Dutch spinoff of Philips that supplied the lithography machines for TSMC’s integrated circuit (IC) wafer manufacturing. By 1995–96, 60% of TSMC’s revenue came from IC design houses, and Nvidia began to work with TSMC around 1998. In 2014, the company reached a turning point when Apple became a client and TSMC began making chips for iPhones. Looking back on his long career, Morris Chang was most proud of his contribution to advancing the evolution of smartphones.

Gone were the days when Taiwanese tinkerers were seen as counterfeiters.

Watch the Full Conversation

Taiwan Rising | CHM Live, November 4, 2025


Is Today’s AI Boom Different?
https://computerhistory.org/blog/is-todays-ai-boom-different/
Fri, 17 Oct 2025

AI company founders from three different eras of artificial intelligence booms (and busts) share their experiences and insights into the future of AI technology.


What’s important is that the public begins to develop some AI literacy.

— Daniela Rus, Cofounder, Liquid AI

We’re living through a boom in artificial intelligence. But, many people may not realize that there have been AI booms—and busts—before. Is it different this time? CHM sought to find out by inviting three AI pioneers who have each navigated distinct eras of AI innovation for a discussion on October 7, 2025. The CHM Live event, “This Time It’s Different: AI Startups Across Three Generations,” was made possible by the generous support of Mark and Mary Stevens.

Marc Weber, a CHM curator and director of the Internet History Program, moderated the discussion. He noted that past AI busts made the term itself toxic even as startups and companies continued to use the technology; only with more recent hype has AI become popular again.

AI Company Founders

Jerry Kaplan (center) makes a point. Marc Weber is at left and Adam Cheyer at right.

Jerry Kaplan, cofounder of Teknowledge, an AI company that helped ignite the 1980s expert systems boom, was one of the first people to earn a PhD in artificial intelligence. The company he founded took the expert systems work he and others were doing at Stanford and sold it to corporations to solve problems. But, he notes, the business model was based on incorrect assumptions—just like the AI industry today.

Representing the next generation, Adam Cheyer, cofounder of Siri, explained that they didn’t pitch their company as AI. They called Siri a “do engine” to distinguish it from a search engine. It could combine knowledge and action to serve as a virtual assistant. With its “ecosystem” of business partners, for example, Siri could call you a cab or buy movie tickets. Cheyer credits Steve Jobs, who bought Siri for Apple in 2010 and publicly called it an AI company, with reinvigorating the field.

Director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) Daniela Rus is also the cofounder of the AI startup Liquid AI. The company’s AI model operates on physical laws rather than the statistical basis of LLMs (large language models). It’s inspired by nature—a worm whose relatively small number of neurons and synapses is remarkably powerful and energy efficient. Where a classical AI model needs about 100,000 neurons to keep an autonomous car in a lane, for example, Liquid AI needs only 19. The model can also explain how the vehicle makes decisions and gains skills, allowing for generalization and application to different environments, which classical AI transformers are not good at. Further, the energy savings are significant: the model is 1,000 times more efficient. It could, Rus said, democratize AI.

Daniela Rus explains how a new model can open AI tools to everyone. 

AI Problem Solvers

Jerry Kaplan believes that expert systems technology was not good enough to do the things it promised to do and that’s why it collapsed. He sees the ambitions of his era’s academics in AI as part of a continuum of people’s fascination with using technology to enhance or serve humanity, from Frankenstein to today’s dream of artificial general intelligence.

Cheyer was looking at the problem of how to make language practical. He says that Siri shocked the world when it first came out.

Adam Cheyer describes how Siri had to make sense of language.

One of the big innovations with Siri was doing language well in a conversational way and executing on actions—what we would call agentic AI today.

Boom Effects

Both Cheyer and Kaplan believe that the timing of a tech innovation is what makes or breaks it. Rus argued that technological entrepreneurship that brings new ideas and new capabilities to the world matters more than timing. But, recalling how hard it often is to get the rest of the world to back new ideas, she came around to agreeing with her co-panelists that timing is critical.

So, is today’s AI boom different from previous booms? Kaplan says, “It’s different just like last time.” He believes this bubble will burst, with dire consequences for society as a whole.

Jerry Kaplan predicts the AI bust.

What effect will today’s AI revolution have on jobs? Kaplan believes it will have the same effects as other technological advances that substitute capital for labor—it will change the nature of work. Some will lose their jobs, but new kinds of jobs will arise. Rus noted that her MIT colleague, economist David Autor, has done research showing that more than 60% of what people do today did not exist before 1948. No one foretold the rise of the service industry. She believes there will be a flurry of economic activity around AI that will draw all kinds of people with different skills and talents into the industry. It’s critical, she says, that the public begin to develop some AI literacy and empower themselves by understanding which aspects will affect their roles.

Adam Cheyer describes the power of AI.

Adam Cheyer notes that AI is already solving problems, including 50-year-old math problems and exponentially advancing our understanding of protein folding, critical for drug discovery. He acknowledges that AI will also cause problems, but overall he’s optimistic it will solve a broad range of challenges. It is, he believes, the most powerful tool humanity has ever created.

For young people aspiring to found new AI companies, Cheyer and Kaplan urge caution. Don’t do it, they say, just because you want to be an entrepreneur or to make money. Do it because you’re passionate about what you’re creating. Hopefully, aspiring AI founders will take Daniela Rus’ words to heart, recognize the extraordinary opportunity AI provides to enrich our lives and look after humanity and the planet, and create responsible AI tools that serve the greater good.

Watch the Full Conversation

This Time It’s Different | CHM Live, October 7, 2025

Main Image: From left to right, Marc Weber, Jerry Kaplan, Adam Cheyer, Daniela Rus.

 



Cold War Computing
https://computerhistory.org/blog/cold-war-computing/
Mon, 29 Sep 2025

Historian and author Victor Petrov explores the rise of Bulgaria's powerhouse computing industry during the Cold War and how it evolved.

Did you know that Bulgaria became an electronics powerhouse during the Cold War? In an illuminating lecture for CHM Live on September 18, 2025, historian Victor Petrov shared insights from his new book, Balkan Cyberia, a CHM Book Prize winner.

Petrov joked that few people can find Bulgaria on a map. But, in 1981, the small country in Southeastern Europe launched itself into the world of computing with a satellite. Commemorating the 1,300th anniversary of the Bulgarian state, the satellite sent a nationalist message that the country was one of the oldest in Europe and that its Communist leadership was controlling a highly technical society.

Apples to Electronics

In 1944, when the Communists took over, Bulgaria was an agricultural country with few cities and almost no heavy industry. But, by the 1970s and ‘80s, 47% of electronics exports from the Eastern Bloc were Bulgarian, and the industry employed about 13% of the country’s workforce—215,000 people out of a population of under 9 million.

By the late 1950s, Bulgaria’s fast-paced industrialization triggered a debt crisis, and the ruling Communist Party realized it needed a cash cow. For a country without many natural resources, an industry that needed only capital and labor was ideal. Electrical engineer Ivan Popov, who was pursuing a PhD in East Germany, convinced the Party leadership that mass producing computers was the answer.

The first Bulgarian computer was the Vitosha. Built in 1962 with vacuum tubes and lamps, the machine was rushed through development in order to be presented at an exhibition in Moscow. It apparently needed so much power that a Russian engineer had to be bribed with brandy to steal power from the Indian delegation’s pavilion. In any case, the Vitosha was a key step forward.

Victor Petrov explains early mass production of electronics in Bulgaria.

The success of Bulgaria’s ELKA calculators wasn’t enough for the ruling party, which wanted billions in profits. Through his connections with the head of Fujitsu in Japan, Ivan Popov secured the first license to mass-produce computers in the Eastern Bloc—a functional copy of the Fujitsu FACOM, called the ZIT in Bulgaria. From 1965 to 1969, hundreds of engineers trained in Japan to produce 20 of the machines. When they returned home, they brought with them ideas about the Japanese style of management and work that differed sharply from socialism.

Pirates and Pravetz

By the 1970s, the computing industry was bringing in billions. Bloc countries cooperated to build IBM 360-compatible computers, and Bulgaria produced the processors. Bulgarian industry also reverse-engineered minicomputers when they came on the scene, and then the personal computer, all with components from Eastern Bloc countries. Bulgaria’s Pravetz PC was introduced in 1979 and mass-produced and exported. At a cost of more than a year’s salary, it wasn’t accessible to the average Bulgarian.

Unique machines were made in addition to IBM copies, like the MIK-16 that operated on the Russian Mir space station. For Bulgaria, the most lucrative products were memory devices, bought in large quantities and at high prices by the Soviet military, among other customers. Electronic secrets became the focus of spy games during the Cold War.

Victor Petrov shares stories of tech espionage.

Exports and Impacts

Exploring the computing industry outside of the Cold War framework, Petrov found that Bulgaria exported to 54 countries and that its biggest market in Asia was India. That relationship, he believes, became a conduit for capitalist thinking.

Victor Petrov explains how Indian customers required new thinking.

Petrov also examined how computerization impacted Bulgarian society through the prism of socialism. He found that everything bad that happened in factories and on farms was blamed on workers rather than on the automated machines that had been introduced by the 1980s. Workers at the time experienced anxiety and physical strain similar to that of people in the industry today. There is evidence that they sabotaged machines.

Engineering and computing also became intertwined with creative pursuits and had cultural and gender dimensions. For example, the vast majority of factory workers were women, who, far from receiving the promised benefits of socialism, like three years off to care for a child, were instead expected to breastfeed while programming.

Bulgarian children had a chance to take computer classes, with the home-grown Pravetz computers provided to schools and computer clubs. The last socialist generation, steeped in sci-fi as well as computing, made a memorable contribution to tech history—in the early ‘90s, the vast majority of the world’s computer viruses came from Bulgaria.

But today, the computing industry that Petrov studied no longer exists.

Watch the Full Conversation

Cold War Computing | CHM Live, September 18, 2025

 

SUPPORT CHM’S MISSION

Free events like these would not be possible without the generous support of people like you who care deeply about decoding technology for everyone. Please consider making a donation.


Decoding Ancient History With AI
https://computerhistory.org/blog/decoding-ancient-history-with-ai/
Mon, 16 Jun 2025

Experts in ancient history, computer science, and technology team up to use artificial intelligence to virtually unroll and decipher papyrus scrolls burned in the same volcanic eruption that destroyed Pompeii.

The Herculaneum Scrolls

Innovations in artificial intelligence are not only changing the present and supercharging a whole new future, they’re also revolutionizing the study of history. On stage at CHM Live, an expert panel shared groundbreaking work deciphering the Herculaneum scrolls, fragile ancient Greek texts that were burned in the same volcanic eruption that destroyed nearby Pompeii and were thought to be lost forever.

A burnt scroll, still rolled up, from Herculaneum.

CHM Senior Producer and Manager of Programming Russell Ihrig moderated the fascinating discussion with investor and entrepreneur Nat Friedman, who co-launched the Vesuvius Challenge, Federica Nicolardi, assistant professor of papyrology at the University of Naples Federico II, and Brent Seales, the Stanley and Karen Pigman Chair of Heritage Science and professor of computer science at the University of Kentucky. The program was made possible by the generous support of the Patrick J. McGovern Foundation.

Preserved By Destruction

In 79 CE, Herculaneum was a vibrant Roman city, says Federica Nicolardi. It had shops and tavernas, homes, public buildings, and even ancient fast food. The populace was used to being shaken by frequent earthquakes. Everything changed on August 24, when Mount Vesuvius erupted. The devastating effects of the volcano were different in the neighboring cities of Pompeii and Herculaneum—and key to the survival of the scrolls.

Federica Nicolardi describes the eruption of Mount Vesuvius. 

Buried in 60 feet of thick mud, the city was lost for 1,700 years, until Italian farmers digging wells began to find ancient statues and marbles. An official excavation began in 1738, but it was not conducted in the top-down method used by modern archaeologists. Instead, exploration was done by tunneling, which was difficult and dangerous, as the tunnels could collapse at any time.

Unrolling the Scrolls

When the Herculaneum scrolls were discovered in the ruins of a villa, it wasn’t clear what the compact, irregular, black shapes were. Then, when pieces began to come off and ink became visible, people tried to open them. Seales noted that over a 50-year span, various methods were used to attempt to unroll the scrolls, including with a specialized machine. The results varied widely.

Friedman tried to replicate the process at home with papyrus he bought on Amazon and cooked in a Dutch oven. The result was a flaky, light, very delicate object. Trying to cut it with a knife, soaking it in water, and pouring mercury into it—all methods tried in the 1700s—did not work very well and gave him an appreciation for the challenge.

A scroll that was unrolled physically.

The advent of photography starting in the 1860s helped make the contrast of the writing—essentially black on black—more readable, but since the early 2000s, there’s been a moratorium on further attempts to unroll the scrolls to prevent damage.

Seales had the idea to virtually unwrap the scrolls that hadn’t been opened at all by scanning them. His team developed software to trace the surface of the scrolls and reconstruct where the glued sheets overlapped. Then they had to find the ink. They’d had a little success with machine learning models when Seales received a cold call from Friedman. The two hit it off, Friedman suggested they “open source” the ink challenge, and the Vesuvius Challenge was born.

Nat Friedman describes the Vesuvius Challenge.

The team hired a dozen people to look at cross-section X-rays of the papyrus and follow the spiral so they could provide flattened segments to the community, making the challenge of finding the ink easier. While some contestants ran machine learning models, one took an unusual approach—he simply looked at the X-rays for hours until he began to identify patterns of cracks and realized they might be dried ink. That revelation was used by another contestant to train an ink detection model. Seales explains how critical AI has been to the project.

Brent Seales explains AI’s role in deciphering the scrolls.

Making History

The first word deciphered was “porphyra,” which means purple in Greek. Nicolardi notes that it’s an interesting word and hard to understand without context. Soon, however, pieces of five or six columns were deciphered and progress was rapid. Today, around 15 of the 160 columns are readable, and two-thirds of the upper parts of those columns are decipherable. The Greek texts are likely from a specialized part of the Italian villa’s library and relate to Epicurean philosophy. Occurrences of the words “music” and “pleasure” are key.

A scroll that has been digitally “unscrolled.”

There are hundreds of scrolls still to be examined, and many more are likely buried in the vast unexcavated areas of the Herculaneum site. The chance to restore entire works of ancient Greek and Latin texts rather than the fragments scholars usually find is a compelling challenge. And it’s exciting to imagine how the tech of the future is bringing the past into the present and could help to solve the mysteries that remain. In fact, it’s enough to make anyone “scroll obsessed.”

Main image: From left to right, Russell Ihrig, Federica Nicolardi, Brent Seales, Nat Friedman.

 

Watch the Full Conversation

AI Decodes Ancient History | CHM Live, June 10, 2025


The post Decoding Ancient History With AI appeared first on CHM.

]]>
Encoding Language https://computerhistory.org/blog/encoding-language/ Fri, 23 May 2025 19:37:43 +0000 https://computerhistory.org/?p=32278 What do you do if your language is not available on devices you want to use to communicate, like computers and smart phones? Experts discuss how Unicode works to make our digital world inclusive.

The post Encoding Language appeared first on CHM.

]]>
How can we ensure that every language, and the communities that speak it, can fully participate in the digital world? That was the question explored at the CHM Live event Character Building: Bridging Code and Culture through Unicode. With over 7,000 modern languages in use today, it’s a difficult task, but the Unicode Consortium, a nonprofit organization that establishes and maintains standards for representing written language, is trying.

An expert panel decoded how Unicode works for the audience. It included Roy Boney, Jr., Cherokee language revitalization manager at Cherokee Film; Mark Davis, cofounder and CTO of the Unicode Consortium; and Anushah Hossain, research director of the Script Encoding Initiative. The moderator was Teresa Marshall, vice president of Globalization & Localization at Salesforce.

Equality

In a video clip from a recent CHM oral history interview with Unicode cofounders, Lee Collins and Mark Davis made the point that Unicode aims to enable people everywhere to communicate digitally in their own language. That means Unicode is always evolving. For example, new Chinese ideographs are often added, and additional levels of support like being able to read or type in a particular language are provided.

Hossain added that it’s hard to overstate how important it is that Unicode found a common way to treat the wide variety of writing systems we have in the world.

Anushah Hossain explains the difference between language and script.

Globally, there are close to 350 writing systems, and 170 are currently in Unicode. The CLDR (Common Locale Data Repository) project at Unicode deals with language-specific issues, with the goal of customizing everything so that the specifics of a language work, such as how dates, times, numbers, and currency formats are portrayed in a particular location. Unicode also produces code libraries that can be dropped into any product, so that programmers don’t have to manage all the data describing character properties themselves.
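As an illustration of the kind of character-property data those libraries carry, Python’s standard `unicodedata` module (shown here as a stand-in, not one of the consortium’s own libraries) exposes the properties Unicode publishes for every character:

```python
import unicodedata

# Every encoded character carries machine-readable properties:
# a formal name, a general category, numeric values, and more.
print(unicodedata.name("é"))        # LATIN SMALL LETTER E WITH ACUTE
print(unicodedata.category("é"))    # Ll (Letter, lowercase)
print(unicodedata.name("\u13A0"))   # CHEROKEE LETTER A

# Numeric properties let software read digits across scripts.
print(unicodedata.numeric("٤"))     # 4.0 (ARABIC-INDIC DIGIT FOUR)

# Normalization rules unify different byte sequences for the same text:
# 'e' + combining acute accent composes to the single character 'é'.
assert unicodedata.normalize("NFC", "e\u0301") == "é"
```

Keeping this data correct and consistent across hundreds of scripts is exactly the maintenance work the panel describes.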

Inclusivity

The first step in getting a new script into Unicode is to submit a proposal to a subcommittee called the Script Encoding Working Group, explained Hossain. The 15 or so experts on the committee have linguistic backgrounds or a deep interest in language as well as a programming background. They meet once a month to review all the proposals for new characters or scripts and discuss how the script works, if the proposal adequately explains all the characters, and the reach and legitimacy of the script.

Successful proposals often go back and forth with the authors two or three times before being approved, and then they advance to the Unicode Technical Committee, which meets once a quarter. The ISO (International Organization for Standardization) also has a specific working group dedicated to a universal character code, and it reviews the same proposals. It’s a complex, multistakeholder process. Davis added that Unicode also hopes to make it easier for individuals and organizations to contribute to fleshing out their own languages in Unicode.

Mark Davis explains how Unicode tries to be inclusive.

Like most indigenous languages in the US, Cherokee is endangered, says Roy Boney, so for the last 40 years the tribe has been trying to preserve and revitalize it. But getting people to shift away from the fonts they had created to the Unicode script has not been easy. There’s been a lot of education in the community about what the tools are and what they can be used for. Originally, the tribe needed a font, a keyboard, and operating systems that would support the language; they then began working with companies in Silicon Valley to make sure the language was supported on all their products.

Sometimes the process of adding a language to Unicode can become controversial if different groups disagree on what the script actually looks like, noted Hossain. Old Hungarian, for instance, went through 13 proposals because social and political tensions around a few characters stalled the process. Boney described how a team that included scholars, font designers, historians of the language, and community members worked together to research and craft a proposal that still required revisions.

Davis noted that occasionally characters are fast-tracked, like when a Japanese emperor died, requiring a new era character to be used in dates. Chinese ideographs are the largest part of Unicode, outnumbering all other characters. Tranches come out regularly and involve very large data sets. It’s an involved process because they have to verify that an ideograph is actually new and not a variation on an existing one.

Access

Many of us may take for granted that our language is supported on the devices we use every day, like our computer or smartphone. When it’s not, says Boney, you realize very quickly how limited you are in what you can do.

Roy Boney describes the impact of Unicode Cherokee.

Now that it’s common for people in the Cherokee community to have access to their language on digital devices, more and more people are making their own content. Access gives you confidence to do things in your language and pursue your dreams, says Boney, and he’s thankful for Unicode.

While most people’s languages are in Unicode, and it offers fairly full support for about 100, many languages don’t yet have enough support to reach the same level as Cherokee. And there are still a lot of historical works, like those in hieroglyphics, that cannot yet be represented digitally, notes Davis. And, as people find more things they want to do on computers, Unicode has to adapt to meet product requirements.

Hossain says it’s important for Unicode to maintain what’s already there and respond to reported bugs. Arabic is in Unicode, but support remains imperfect, and there’s still a lot to do to make it fully functional for people. If there is even a little friction, it’s easy for people to just switch to Latin script or come up with a hack. That’s a problem, because such text won’t be processed properly by search engines or anything else on the internet.

These are big challenges for a small organization that has more work than people. While everyone benefits from Unicode’s vital work, it’s easy to use their tools without contributing to help it survive. But perhaps telling real-life stories about the positive impacts and the challenges of language inclusivity can help inspire and motivate stakeholders to continue to invest in Unicode and our collective digital future.

Main image: From left to right, Teresa Marshall, Roy Boney, Jr., Mark Davis, Anushah Hossain.

Watch the Full Conversation

Character Building | CHM Live, May 13, 2025

 

SUPPORT CHM’S MISSION

Free events like these would not be possible without the generous support of people like you who care deeply about decoding technology for everyone. Please consider making a donation.



]]>
New News Tech https://computerhistory.org/blog/new-news-tech/ Fri, 25 Apr 2025 16:18:44 +0000 https://computerhistory.org/?p=32178 CHM convened experts to explain the state of journalism and the opportunities and challenges presented by new technologies and tools.

The post New News Tech appeared first on CHM.

]]>
Technology and Journalism Today

When there is risk, there is also opportunity.

— Marian Chia-Ming Liu

From AI to data analysis and visualization, technology is reshaping the news. CHM convened experts to explain the state of journalism and the opportunities and challenges presented by new technologies. Moderated by David Yarnold, Pulitzer Prize-winning former executive editor of the San Jose Mercury News, the panel included Marian Chia-Ming Liu, The Washington Post’s projects editor of special newsroom initiatives and partnerships; Jason Koebler, cofounder of 404 Media; and Alex Reed, lead data analyst for Mapping Black California.

Jason Koebler says that what happens on the internet impacts the real world. The audience seemed to find that lack of differentiation scary. A live poll asking “What word comes to mind when you think of the impacts of AI on the news?” yielded not just “Scary,” but also “Caution,” “Untrustworthy,” “Propaganda,” and “Fake.”

404 Media

How are the panelists navigating this new world? Koebler has become obsessed with AI, and how it is leading to what he terms a collapse of discoverability on the internet. It’s so easy to make effective spam designed to provoke a response that it’s crowding out journalists.

Jason Koebler explains how spam crowds out journalists.

Koebler described working for months on a story about how a popular large language model was trained using open-source material that included illegal child sexual abuse material. He and his colleagues were careful to handle the topic ethically and legally, but within minutes of publishing, the story had been scraped and republished by dozens of AI content farms. Now, those articles rank above the original on platforms like Google Search. The experience was eye-opening.

The Washington Post

Marian Chia-Ming Liu says you have to meet your audience where they are. Concerned that people might not find a new Washington Post article about January 6 published a year after the event, she checked the traffic sources for similar stories. People were not finding them through search, and she realized that meant the Post needed to be on all platforms.

Marian Chia-Ming Liu shares her audience strategy.

When Chia-Ming Liu was tasked with redoing the iconic Style section of the Post, in addition to testing taglines and colors, she brought researchers and readers together to create the new look. She also “curated” her ideal audience, reaching out to younger readers at universities, high schools, and film clubs and even creating a zine for them.

Mapping Black California

Alex Reed explained that Mapping Black California pairs spatial data from GIS (geographic information systems) with both quantitative and qualitative data to build maps and other types of visualizations that convey people’s experiences. This involves approaching the audience from both journalistic and storytelling aspects.

Alex Reed describes how data visualization works.

Providing visualizations of sterile information where people can see their own lived experience helps them feel directly connected to their communities, says Reed.

Working with Tech

The panelists described different experiences using technology in their work. Koebler says he and his colleagues treat themselves as guinea pigs for new tech tools, probing to see where the holes might be for the safety and security of users. Often, he says, when they bring these problems to the companies’ attention, they fix them. Chia-Ming Liu wants to connect with young people, so she needs to understand how the social platforms they spend time on work. For platforms that won’t allow links out from posts, her strategy is awareness, hoping to encourage new visitors to explore more deeply. Reed feels pressure to use AI tools and works to integrate them for efficiency in handling large datasets while also considering privacy and protection.

It can be exciting to use new technologies, as long as caution is built in. Koebler noted that if they make mistakes because they relied too much on faulty AI, they will lose hard-earned credibility fast. Chia-Ming Liu cautioned that it’s important to be careful about what information you’re feeding the AI, and to consider where it will be used. Reed is concerned that AI can make mistakes that people don’t notice and those can become feedback loops that snowball. She advises using AI to help you see your own mistakes and biases so you can create a better product.

Optimism and Action

Yarnold’s final question to the panelists was: What are you optimistic about? Their answers might be summarized as more informed and critical audiences.

The panelists describe why they’re optimistic.

What can individual people do to make things better for the future of news? The panelists had plenty of ideas. Read outside your echo chamber. Subscribe to legitimate and ethical news sites. Recommend those sites to your friends. Don’t just post or share an article on Facebook and walk away. Be aware of the sources where information originated in what you’re reading.

And, perhaps, they could have added: be optimistic in knowing that you are not alone if you want news you can trust.

Main image: From left to right, David Yarnold, Marian Chia-Ming Liu, Jason Koebler, Alex Reed.

Watch the Full Conversation

Making News With Data | CHM Live, April 16, 2025

 



]]>
Parrots vs. Sparks https://computerhistory.org/blog/parrots-vs-sparks/ Fri, 04 Apr 2025 15:20:49 +0000 https://computerhistory.org/?p=32116 Experts debate the question of whether or not the large language models that drive artificial intelligence chatbots really understand.

The post Parrots vs. Sparks appeared first on CHM.

]]>
The Great Chatbot Debate

Do chatbots really understand? Or are the large language models that power them, enabling them to answer sophisticated questions, analyze texts, and generate poems and computer programs, just a mass of data and calculations that simulates true understanding?

CHM aimed to find out, staging a debate between University of Washington computational linguist Emily M. Bender—who, with her coauthors, established the term “stochastic parrot” in a major 2021 paper and is coauthor of the forthcoming book The AI Con—and OpenAI’s Sébastien Bubeck, former VP for AI and distinguished scientist at Microsoft, and the lead author of an influential 2023 paper about LLMs, “Sparks of Artificial General Intelligence.” Coordinated in partnership with IEEE Spectrum, whose Senior Editor Eliza Strickland served as moderator, this event was made possible by the generous support of the Patrick J. McGovern Foundation.

To provide a baseline understanding for the audience, Strickland offered a brief history and description of AI neural networks and the large language models that help chatbots reproduce human-like text. She noted that they can generate relevant responses because they’ve basically “read the entire internet” and so can predict what likely comes next in a sentence.
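Next-word prediction of the kind Strickland describes can be sketched with a toy bigram model. Real LLMs use neural networks trained on vastly more text, but the core idea of predicting the likeliest continuation is the same (the miniature corpus below is invented for illustration):

```python
from collections import Counter, defaultdict

# A miniature stand-in for "the entire internet."
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the most frequently observed next word."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # 'cat' (follows 'the' twice; 'mat' and 'fish' once each)
```

Whether counting-and-predicting at enormous scale amounts to understanding is precisely what the debaters disagree about.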

Opening Statements

The debaters each offered opening remarks on the question of “Do LLMs really understand, or are they just mimicking training data?”

Emily Bender represented the position that “No, LLMs do not really understand.” She won’t use the term “artificial intelligence” but rather refers to what the systems do as “automation.” Bender explained that when humans use language, understanding includes much more than words, such as context and cues from the speaker. LLMs are trained only to look at the form of the words themselves, not how they are being used in a particular context. She argues that a chatbot only makes sense when we talk to it because we ourselves are making sense of it.

Emily Bender argues that LLMs do not understand.

Bender noted that it’s an extraordinary claim to say that LLMs understand, and extraordinary evidence is needed to prove that claim. The data that would help to verify it must not be hidden. Getting text out when you put text in may look like reasoning, but it is really only testing how closely the system is modeling training data.

For the “Yes” side of the debate, Sébastien Bubeck noted that “understanding is in the eye of the beholder.” In the world of AI, benchmarks are used to assess the rate of progress, which has been remarkable over the past couple of years as models advanced from solving high school level mathematics questions to grappling with problems that no human can solve alone. However, Bubeck believes that benchmarks do not show understanding, which can only be judged by interacting with the system and probing it to see how deeply it can go.

Sébastien Bubeck argues that LLMs can push understanding.

At the end of the day, says Bubeck, understanding is a human journey. So, perhaps ask yourself if the chatbot helped you to understand more things rather than asking if the chatbot itself understood them. We may see breakthroughs in math by LLMs, but they will not be accepted until humans can fully grasp what the chatbot has revealed.

Debate Questions

The debaters fielded a question from the moderator about whether or not the hype around artificial general intelligence (AGI) is justified by its current functionality. While Bubeck believes it’s plausible to reach AGI, Bender objects to the way people simply assume it will exist in the future.

A new benchmark called ARC, which stands for Abstraction and Reasoning Corpus, professes to measure AGI. Bender notes that benchmarks always include only a selection and there is a “whole wide world” outside of it. We don’t benchmark people, she says; we create licensing exams and academic exams to measure understanding, not how well a person has been trained to perform some task. Bubeck agreed … to some extent.

Bubeck and Bender disagree on deploying AGI.

Strickland asked the debaters if there’s a danger in letting people believe there’s a mind on the other side of chatbot technology. She used an example of an article that noted how chatbot therapists became “stressed” after hearing about humans’ trauma.

Bender said unequivocally that this is a problem that sets people up not to make good decisions. Bubeck remarked that anthropomorphizing chatbots is not great and that we need to do more work to develop the right vocabulary to talk about these things. For example, he doesn’t like the term “AI” because LLMs are intelligent but not in the same way as humans.

Closing Statements

After addressing audience questions, the debaters offered final remarks. Bender would like people to know that nothing is inevitable. Refusal is important, especially in systems that are already creaking, like education, healthcare, and the legal system. In all those places where synthetic text looks like a quick solution, we need to say “no,” she says, because it is worse than nothing.

Bubeck advises people to decide for themselves when to interact with these tools and to see if they provide value. These topics are complex and subtle, and no one knows how far AI is going to go. The growth rate is astonishing, and he’s excited to see what the next three years will bring.

Bubeck says that the answer to the question of LLM understanding is both parrot and spark. Understanding is a continuum, and the balance is shifting.

Watch the Full Conversation

The Great Chatbot Debate | CHM Live, March 25, 2025

 



]]>