Posts Tagged ‘artificial intelligence’

Photo: R.L. Easton, K. Knox, and W. Christens-Barry/Owner of the Archimedes Palimpsest.
AI helped researchers recover erased texts by Archimedes in this palimpsest.

In general, I am wary of artificial intelligence, which one of its first developers has warned is dangerous. I use it to ask Google questions, but it’s a real nuisance in the English as a Second Language classes where I volunteer. Some students are tempted by the ease of using AI to do the homework, but of course, they learn nothing if they do that.

There’s another kind of translation, however, that AI seems good for: otherwise unreadable ancient texts.

Raúl Limón writes at El País, “In 1229, the priest Johannes Myronas found no better medium for writing his prayers than a 300-year-old parchment filled with Greek texts and formulations that meant nothing to him. At the time, any writing material was a luxury. He erased the content — which had been written by an anonymous scribe in present-day Istanbul — trimmed the pages, folded them in half and added them to other parchments to write down his prayers.

“In the year 2000, a team of more than 80 experts from the Walters Art Museum in Baltimore set out to decipher what was originally inscribed on this palimpsest — an ancient manuscript with traces of writing that have been erased. And, after five years of effort, they revealed a copy of Archimedes’ treatises, including The Method of Mechanical Theorems, which is fundamental to classical and modern mathematics.

“A Spanish study — now published in the peer-reviewed journal Mathematics — provides a formula for reading altered original manuscripts by using artificial intelligence. …

“Science isn’t the only field to have felt the effects of this practice. The Vatican Library houses a text by a Christian theologian who erased biblical fragments — which were more than 1,500 years old — just to express his thoughts. Several Greek medical treatises have been deciphered behind the letters of a Byzantine liturgy. The list is extensive, but could be extended if the process of recovering these originals weren’t so complex.

“According to the authors of the research published in Mathematics — José Luis Salmerón and Eva Fernández Palop — the primary texts within the palimpsests exhibit mechanical, chemical and optical alterations. These require sophisticated techniques — such as multispectral imaging, computational analysis, X-ray fluorescence and tomography — so that the original writing can be recovered. But even these expensive techniques yield partial and limited results. …

“The researchers’ model allows for the generation of synthetic data to accurately model key degradation processes and overcome the scarcity of information contained in the cultural object. It also yields better results than traditional models, based on multispectral images, while enabling research with conventional digital images.

“Salmerón — a professor of AI at CUNEF University in Madrid, a researcher at the Autonomous University of Chile and director of Stealth AI Startup — explains that this research arose from a proposal by Eva Fernández Palop, who was working on a thesis about palimpsests. At the time, the researcher was considering the possibility of applying new computational techniques to manuscripts of this sort.

“ ‘The advantage of our system is that we can control every aspect [of it], such as the level of degradation, colors, languages… and this allows us to generate a tailored database, with all the possibilities [considered],’ Salmerón explains.
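
For the technically curious, here is a rough sketch of what generating that kind of tailored training data could look like. Everything in it is illustrative: plain arrays stand in for page images, and the paper’s real pipeline renders actual scripts and models degradation far more carefully.

```python
# Illustrative sketch of synthetic palimpsest generation, the idea behind
# the "tailored database" described above. Plain arrays stand in for page
# images; the paper's real pipeline is far more sophisticated.
import numpy as np

rng = np.random.default_rng(42)

def synth_palimpsest(undertext, overtext, fade=0.15, noise=0.05):
    """Composite a faded undertext beneath a darker overtext.

    fade  controls how visible the erased original remains
    noise simulates parchment texture, stains, and scanner grain
    """
    page = np.ones_like(undertext)               # blank parchment
    page -= fade * undertext                     # faint traces of erased text
    page -= 0.8 * overtext                       # dark later writing
    page += noise * rng.normal(size=page.shape)  # damage and noise
    return np.clip(page, 0.0, 1.0)

# Stand-ins for rendered text masks (1.0 where there is ink, 0.0 elsewhere).
under = (rng.random((64, 64)) > 0.9).astype(float)
over = (rng.random((64, 64)) > 0.9).astype(float)

x = synth_palimpsest(under, over, fade=0.1)  # network input
y = under                                    # training target: the undertext
```

The point of such pairs is that the ground truth is known exactly, which is just what real palimpsests cannot provide a network during training.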

“The team has worked with texts in Syriac, Caucasian Albanian and Latin, achieving results that are superior to those produced by classical systems. The findings also include the development of the algorithm, so that it can be used by any researcher.

“This development isn’t limited to historical documents. ‘This dual-network framework is especially well-suited for tasks involving [cluttered], partially visible, or overlapping data patterns,’ the researcher clarifies. These conditions are found in medical imaging, remote sensing, biological microscopy and industrial inspection systems, as well as in the forensic investigation of images and documents. …

“The researchers themselves admit that there are limitations to their proposed method for examining palimpsests: ‘The approach shows degraded performance when processing extremely faded texts with contrast levels below 5%, where essential stroke information becomes indistinguishable from crumbling parchment. Additionally, the model’s effectiveness depends on careful script balancing during the training phase, as unequal representation of writing systems can make the deep-learning features biased toward more frequent scripts.’ ”

More at El País, here. What is your view of AI? All good? Dangerous? OK sometimes? I can’t stop thinking about the warning from Geoffrey Hinton, the ‘godfather of AI,’ that it could wipe out humanity altogether. 

Read Full Post »

Today is my second online ESL (English as a Second Language) class for the ’25-’26 school year. I assist a more experienced teacher once a week — have been doing so for nearly ten years. One task she likes me to do is to go over the writing homework that students put on an edublog.

Lately, it feels like these otherwise highly motivated adults may not be learning much about writing English. Often they seem to have copied from Google Translate or another AI program. What I want to see is a few mistakes in their answers. At the same time, I am wary of accusing anyone of not doing their own work.

Today’s article didn’t give me a clear answer to my ESL situation, but I was intrigued to learn about programs that help identify who the real writer of a book was or whether AI was used in a journal article.

Roger J. Kreuz, associate dean and professor of psychology, University of Memphis, writes at the Conversation that although it’s common to use chatbots “to write computer code, summarize articles and books, or solicit advice … chatbots are also employed to quickly generate text from scratch, with some users passing off the words as their own.

“This has, not surprisingly, created headaches for teachers tasked with evaluating their students’ written work. It’s also created issues for people seeking advice on forums like Reddit, or consulting product reviews before making a purchase.

“Over the past few years, researchers have been exploring whether it’s even possible to distinguish human writing from artificial intelligence-generated text. … Research participants recruited for a 2021 online study, for example, were unable to distinguish between human- and ChatGPT-generated stories, news articles and recipes.

“Language experts fare no better. In a 2023 study, editorial board members for top linguistics journals were unable to determine which article abstracts had been written by humans and which were generated by ChatGPT. And a 2024 study found that 94% of undergraduate exams written by ChatGPT went undetected by graders at a British university. …

“A commonly held belief is that rare or unusual words can serve as ‘tells’ regarding authorship, just as a poker player might somehow give away that they hold a winning hand.

“Researchers have, in fact, documented a dramatic increase in relatively uncommon words, such as ‘delves’ or ‘crucial,’ in articles published in scientific journals over the past couple of years. This suggests that unusual terms could serve as tells that generative AI has been used. It also implies that some researchers are actively using bots to write or edit parts of their submissions to academic journals. …

“In another study, researchers asked people about characteristics they associate with chatbot-generated text. Many participants pointed to the excessive use of em dashes – an elongated dash used to set off text or serve as a break in thought – as one marker of computer-generated output. But even in this study, the participants’ rate of AI detection was only marginally better than chance.

“Given such poor performance, why do so many people believe that em dashes are a clear tell for chatbots? Perhaps it’s because this form of punctuation is primarily employed by experienced writers. In other words, people may believe that writing that is ‘too good’ must be artificially generated.

“But if people can’t intuitively tell the difference, perhaps there are other methods for determining human versus artificial authorship.

“Some answers may be found in the field of stylometry, in which researchers employ statistical methods to detect variations in the writing styles of authors.

“I’m a cognitive scientist who authored a book on the history of stylometric techniques. In it, I document how researchers developed methods to establish authorship in contested cases, or to determine who may have written anonymous texts.

“One tool for determining authorship was proposed by the Australian scholar John Burrows. He developed Burrows’ Delta, a computerized technique that examines the relative frequency of common words, as opposed to rare ones, that appear in different texts.

“It may seem counterintuitive to think that someone’s use of words like ‘the,’ ‘and’ or ‘to’ can determine authorship, but the technique has been impressively effective.
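
For readers who like to see the machinery, here is a bare-bones sketch of the Delta idea in Python. The snippets of text are placeholders; as the article notes below, real attribution work needs on the order of a thousand words per author.

```python
# Bare-bones sketch of Burrows' Delta. Inputs are placeholders; real
# studies use much larger samples and careful text preparation.
import statistics
from collections import Counter

def rel_freqs(text, vocab):
    words = text.lower().split()
    counts = Counter(words)
    return [counts[w] / len(words) for w in vocab]

def burrows_delta(disputed, candidates, n_words=30):
    # Marker words are the most common across all texts ("the", "and", "to").
    pooled = Counter(" ".join([disputed, *candidates.values()]).lower().split())
    vocab = [w for w, _ in pooled.most_common(n_words)]

    profiles = {name: rel_freqs(t, vocab) for name, t in candidates.items()}
    target = rel_freqs(disputed, vocab)

    # z-score each marker word across the candidates, then average the
    # absolute differences: the smallest Delta is the closest style.
    deltas = {}
    for name, prof in profiles.items():
        diffs = []
        for i in range(len(vocab)):
            col = [p[i] for p in profiles.values()]
            mu, sigma = statistics.mean(col), statistics.pstdev(col) or 1e-9
            diffs.append(abs((target[i] - mu) / sigma - (prof[i] - mu) / sigma))
        deltas[name] = statistics.mean(diffs)
    return min(deltas, key=deltas.get), deltas

author, scores = burrows_delta(
    "the letter and the map were sent to the general",
    {"Thompson": "the road to the city and the palace",
     "Baum": "a wizard of great and terrible power"},
)
```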

“Burrows’ Delta, for example, was used to establish that Ruth Plumly Thompson, L. Frank Baum’s successor, was the author of a disputed book in the Wizard of Oz series. It was also used to determine that love letters attributed to Confederate Gen. George Pickett were actually the inventions of his widow, LaSalle Corbell Pickett.

“A major drawback of Burrows’ Delta and similar techniques is that they require a fairly large amount of text to reliably distinguish between authors. A 2016 study found that at least 1,000 words from each author may be required. A relatively short student essay, therefore, wouldn’t provide enough input for a statistical technique to work its attribution magic.

“More recent work has made use of what are known as BERT language models, which are trained on large amounts of human- and chatbot-generated text. The models learn the patterns that are common in each type of writing, and they can be much more discriminating than people: The best ones are between 80% and 98% accurate.
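
One common way to run such a detector is through the Hugging Face transformers library. In the sketch below, the model name is a placeholder for whichever fine-tuned detector one trusts, and the labels returned vary by model.

```python
# Hedged sketch of BERT-style AI-text detection. The model name is a
# placeholder, not a real checkpoint; substitute a detector you trust.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="some-org/ai-text-detector",  # hypothetical fine-tuned detector
)

result = detector("The implications of this notable development are crucial.")
print(result)  # e.g. [{'label': 'AI', 'score': 0.93}]; labels vary by model
```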

“However, these machine-learning models are ‘black boxes’ – that is, we don’t really know which features of texts are responsible for their impressive abilities. Researchers are actively trying to find ways to make sense of them, but for now, it isn’t clear whether the models are detecting specific, reliable signals that humans can look for on their own.

“Another challenge for identifying bot-generated text is that the models themselves are constantly changing – sometimes in major ways.

“Early in 2025, for example, users began to express concerns that ChatGPT had become overly obsequious, with mundane queries deemed ‘amazing’ or ‘fantastic.’ OpenAI addressed the issue by rolling back some changes it had made.

“Of course, the writing style of a human author may change over time as well, but it typically does so more gradually.

“At some point, I wondered what the bots had to say for themselves. I asked ChatGPT-4o: ‘How can I tell if some prose was generated by ChatGPT? Does it have any “tells,” such as characteristic word choice or punctuation?’

“[It provided] me with a 10-item list, replete with examples. These included the use of hedges – words like ‘often’ and ‘generally’ – as well as redundancy, an overreliance on lists and a ‘polished, neutral tone.’ It did mention ‘predictable vocabulary,’ which included certain adjectives such as ‘significant’ and ‘notable,’ along with academic terms like ‘implication’ and ‘complexity.’ However, though it noted that these features of chatbot-generated text are common, it concluded that ‘none are definitive on their own.’ ” More at the Conversation, here.

If I were in the room with students, I could more or less stand over them and see how they go about writing. But these are adults, after all, and they want to learn, so the goal is to persuade them that real learning happens only when they do the writing themselves. Let me know if you have ideas that could help me.

Read Full Post »

Photo: The Metropolitan Museum of Art.
Google’s Aeneas AI program proposes words to fill the gaps in worn and damaged artifacts. 

Whenever I start to worry that Google has too much power, it does something useful. Today’s story is about its artificial intelligence program Aeneas, which can make a guess about half-obliterated letters in ancient inscriptions.

Ian Sample, science editor at the Guardian, writes, “In addition to sanitation, medicine, education, wine, public order, irrigation, roads, a freshwater system and public health, the Romans also produced a lot of inscriptions.

“Making sense of the ancient texts can be a slog for scholars, but a new artificial intelligence tool from Google DeepMind aims to ease the process. Named Aeneas after the mythical Trojan hero, the program predicts where and when inscriptions were made and makes suggestions where words are missing.

“Historians who put the program through its paces said it transformed their work by helping them identify similar inscriptions to those they were studying, a crucial step for setting the texts in context, and proposing words to fill the inevitable gaps in worn and damaged artifacts.

” ‘Aeneas helps historians interpret, attribute and restore fragmentary Latin texts,’ said Dr Thea Sommerschield, a historian at the University of Nottingham who developed Aeneas with the tech firm. …

“Inscriptions are among the most important records of life in the ancient world. The most elaborate can cover monument walls, but many more take the form of decrees from emperors, political graffiti, love poems, business records, epitaphs on tombs and writings on everyday life. Scholars estimate that about 1,500 new inscriptions are found every year. …

“But there is a problem. The texts are often broken into pieces or so ravaged by time that parts are illegible. And many inscribed objects have been scattered over the years, making their origins uncertain.

“The Google team led by Yannis Assael worked with historians to create an AI tool that would aid the research process. The program is trained on an enormous database of nearly 200,000 known inscriptions, amounting to 16m characters.

“Aeneas takes text, and in some cases images, from the inscription being studied and draws on its training to build a list of related inscriptions from 7th century BC to 8th century AD. Rather than merely searching for similar words, the AI identifies and links inscriptions through deeper historical connections. …

“The AI can assign study texts to one of 62 Roman provinces and estimate when it was written to within 13 years. It also provides potential words to fill in any gaps, though this has only been tested on known inscriptions where text is blocked out.
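
The Guardian doesn’t spell out the mechanics, but “deeper historical connections” points to embedding-based retrieval: each inscription is encoded as a vector of numbers, and the closest vectors are returned as parallels. Here is a toy illustration of that idea, with random vectors standing in for a trained encoder.

```python
# Toy sketch of embedding-based retrieval, the kind of "find related
# inscriptions" step described above. The random vectors are stand-ins:
# a system like Aeneas would use a trained neural encoder.
import numpy as np

rng = np.random.default_rng(0)
corpus = ["inscription A", "inscription B", "inscription C"]
corpus_vecs = rng.normal(size=(len(corpus), 128))  # pretend embeddings

def most_similar(query_vec, vecs, names, k=2):
    # Cosine similarity: higher means more closely related.
    sims = vecs @ query_vec / (
        np.linalg.norm(vecs, axis=1) * np.linalg.norm(query_vec)
    )
    order = np.argsort(-sims)[:k]
    return [(names[i], float(sims[i])) for i in order]

query = rng.normal(size=128)  # embedding of the inscription under study
print(most_similar(query, corpus_vecs, corpus))
```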

“In a test … Aeneas analyzed inscriptions on a votive altar from Mogontiacum, now Mainz in Germany, and revealed through subtle linguistic similarities how it had been influenced by an older votive altar in the region. ‘Those were jaw-dropping moments for us,’ said Sommerschield. Details are published in Nature. …

“In a collaboration, 23 historians used Aeneas to analyze Latin inscriptions. The context provided by the tool was helpful in 90% of cases. ‘It promises to be transformative,’ said Mary Beard, a professor of classics at the University of Cambridge.

“Jonathan Prag, a co-author and professor of ancient history at the University of Oxford, said Aeneas could be run on the existing corpus of inscriptions to see if the interpretations could be improved. He added that Aeneas would enable a wider range of people to work on the texts.

“ ‘The only way you can do it without a tool like this is by building up an enormous personal knowledge or having access to an enormous library,’ he said. ‘But you do need to be able to use it critically.’ “

More at the Guardian, here. Please remember that this free news outlet needs donations.

Read Full Post »

Photo: MinnPost.
Cynthia Tu of Sahan Journal is using ChatGPT to improve revenue streams.

A few times in the past, I’ve had reason to link to a story at Sahan Journal, a nonprofit newsroom serving immigrants and communities of color in Minnesota. Now NiemanLab, a website about journalism, links to an article on a surprising development at the small publisher.

Lev Gringauz, reporting at MinnPost via NiemanLab, writes, “As journalists around the world experiment with artificial intelligence, many newsrooms have common, often audience-facing, ideas for what to try.

“They range from letting readers talk to chatbots trained on reporting, to turning written stories into audio, creating story summaries and, infamously, generating entire articles using AI — a use case vehemently rejected by many journalists.

“But Sahan Journal, the nonprofit newsroom serving immigrants and communities of color in Minnesota, wanted to try something different.

“ ‘We’re less enthusiastic, more skeptical, about using AI to generate editorial content,’ said Cynthia Tu, Sahan Journal’s data journalist and AI specialist.

“Instead, the outlet has been working on ways to support internal workflows with AI. Now, it’s even testing a custom ChatGPT bot to help pitch Sahan Journal to prospective advertisers and sponsors. …

“While AI has plenty of ethical and technical issues, Tu’s work highlights another important aspect: The intended users — in this case, the Sahan Journal team.

“ ‘A lot of … this experiment is less of a technical challenge,’ Tu said. ‘It’s more like, how do you make [AI] fit in the human system more flawlessly? And how do you train the human to use this tool in a way that it was intended?’

“Sahan Journal’s AI experimentation, and Tu’s job, are supported by a partnership between the American Journalism Project, a national nonprofit helping local newsrooms, and ChatGPT creator OpenAI. …

“Liam Andrew, technology lead for the AJP’s Project & AI Studio, sees part of his job as helping newsrooms overcome hesitancy around AI. …

“Tu joined Sahan Journal fresh from a Columbia Journalism School master’s program in data journalism. She had played a little with chatbots, but otherwise didn’t have much experience working with AI. …

“For one investigation, Tu used a Google AI tool to process the financial data of charter schools in Minnesota. Thinking about how to save time on backend workflows, Tu then helped Sahan Journal generate story summaries, tailored for Instagram carousels, with ChatGPT. …

“ ‘You need to know what the workflow of the organization looks like…[and how] you push for change within a department when they’ve already been doing [something] for the past five years using a manual or human labor way.’

“That knowledge came in handy when finally tackling Tu’s core AI project: improving Sahan Journal’s revenue.

“The project stemmed from an anonymized database of audience insights, which included demographic information and interests. While an important resource, Sahan Journal’s small revenue team didn’t have the time to figure out how to leverage it. …

“ ‘What if AI could feed two birds with one scone?’ A custom ChatGPT bot could process the audience data and personalize a media kit for clients. But it needed to work without being an extra burden on the revenue staff. …

“The magic of AI chatbots like ChatGPT is that you don’t need to know how to code to use them. Just type in a prompt and get rolling. …

“Less magically, AI chatbots can be hard to keep in line for specific tasks. Designed to be eager helpers, they hallucinate false results and stubbornly twist instructions in an attempt to please.

“Troubleshooting those issues was no simple task for Tu.

“The custom revenue chatbot struggled to keep Tu’s preferred formatting, and hallucinated audience data. The bot would also intermix results from the internet that Tu had not asked for. None of that was ideal for a tool that should work reliably for the revenue team.

“ ‘I was kind of jumping through hoops and telling it multiple times, “Please do not reference anything else on the internet,” ‘ Tu said. …

“Working with chatbots is an exercise in prompt engineering — mostly a trial-and-error process of figuring out what specific instructions will get the preferred result. As Tu said, ‘lazy questions lead to lazy answers.’ … Eventually, Tu settled on a reliable set of prompts.
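
Tu’s actual prompts aren’t published, but the shape of the fix is easy to sketch with OpenAI’s Python API. The instructions, the data row, and the model choice below are all invented for illustration.

```python
# Hedged sketch of the kind of constrained prompting described above.
# The prompt text and data are invented; Sahan Journal's actual setup
# lives in a custom ChatGPT bot, not in API code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a media-kit assistant. Answer ONLY from the audience data "
    "provided in the user message. Do not reference anything else on the "
    "internet. If the data does not answer the question, say so."
)

# Stand-in for one row of the anonymized audience database.
audience_data = "38% of surveyed readers list public transportation as a top interest."

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user",
         "content": f"Data: {audience_data}\n\nSummarize for a transit-sector advertiser."},
    ],
)
print(response.choices[0].message.content)
```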

“The custom chatbot takes about 20 seconds to find relevant data from the audience database — for example, pulling up how much of Sahan Journal’s audience cares about public transportation. Then it creates a summary for a media kit tailored to potential clients.

“The chatbot also double-checks its work by referencing the database again, making sure its output matches reality. And part of the database is shown for users to manually see the chatbot isn’t hallucinating. …

“Earlier this year, Tu introduced the final version of the revenue bot to Sahan Journal’s team. …

“By mid-April, the Sahan Journal revenue team had used the custom chatbot on six sales pitches, with three successfully leading to ads placed on the site. …

“But there’s a larger question hanging over this work: Is it sustainable? In a way, newsroom experiments with AI exist in a bubble.

“ ‘Everything is kind of tied to a grant,’ Tu said, referencing the AJP-OpenAI partnership that supports her work. But grants come and go as donor interests (and financials) change.”

The other unknowns are weighed at NiemanLab, here.

Read Full Post »

Photo: Garcés de Seta Bonet Arquitectes/Marvel.
Barcelona is transforming its skyline’s biggest eyesore into a beautiful tech hub.

I have a dear friend who is so keen on the possibilities of artificial intelligence that she doesn’t seem to care how much energy it takes from other purposes — or whether the energy is clean. She says China uses coal; China is ahead.

I, on the other hand, rejoice to see coal going by the wayside and creative uses for the coal plants that once stained the landscape.

Jesus Diaz has a story about that at Fast Company.

“Tres Xemeneies (Three Chimneys) is a former coal-fired power plant in Sant Adrià de Besòs. … Barcelona’s plant is set to undergo a radical transformation into the new Catalunya Media City — a cutting-edge hub for digital arts, technology, and education. 

“The winning design is called E la nave va, a nod to Federico Fellini’s film of the same name, which translates to And the Ship Sails On, a reference to how this long-dead structure that resembles a three-mast ship will keep cruising history in a new era. According to its creators — Barcelona-based Garcés de Seta Bonet Arquitectes and New York-Barcelona firm Marvel — the project promises to honor the site’s industrial legacy while propelling it into a sustainable, community-centric future. The project is slated to break ground in late 2025 and be completed by 2028.

“Three Chimneys looks exactly how it sounds: a gigantic structure dominated by three 650-foot-tall chimneys. The brutalist plant was built in the 1970s and faced controversy even before its opening. Many of the residents of Badalona and Barcelona hated it both for the aesthetics and the environmental implications. Its problems continued in 1973, when workers building the station went on strike. … The company that ran the station was also sued because of the pollution it caused, and the plant eventually shuttered.

“The structure is imposing. Its giant concrete vaults, labyrinthine floors, and towering chimneys presented a unique challenge to preserving its industrial DNA while adapting it for the 21st century. … Rather than force modern elements onto the existing framework, the team used the building’s features to organize its function.

“For instance, the lower floors — with their enclosed, cavernous spaces — will host incubators and exhibition halls, while the airy upper levels with their panoramic coastal views will house vocational training classrooms and research labs.

“ ‘We kept the existing structure largely unaltered,’ [Guido Hartray, founding partner of Marvel] says, ‘retaining its experiential qualities and limiting modifications.’ This approach ensures that the power plant’s raw, industrial essence remains palpable, even as it accommodates immersive media studios and a modern, 5,600-square-meter exhibition hall likened to London’s Tate Modern Turbine Hall. …

“The architects leveraged the building’s robust concrete skeleton — a relic of its industrial past — as a sustainability asset. Barcelona’s mild climate allows the thermal mass of the concrete to passively regulate temperatures, reducing reliance on mechanical systems. Spaces requiring precise climate control, such as recording studios and laboratories, are nested in a ‘building within a building,’ insulated from external fluctuations, according to the studios.

“The rooftop will double as a public terrace and energy hub, with 4,500 square meters [~48,438 square feet] of solar panels generating renewable power. This dual function not only offsets the energy demands of lighting and HVAC systems but also creates a communal vantage point connecting Barcelona, Sant Adrià de Besòs, and Badalona. ‘The rooftop’s role as both infrastructure and gathering space embodies our vision of sustainability as a social and environmental practice,’ Hartray says.

“The project’s most striking intervention — the ‘transversal cuts’ that slice through the turbine hall — emerged from a meticulous study of the building’s anatomy. Marvel and Garcés de Seta Bonet identified natural breaks in the long, warehouse-like structure, using these to carve openings that link the interior to the outdoors. These cuts create fluid transitions between the industrial hall and the surrounding landscape. …

“The north facade’s new balcony, overlooking the Badalona coastline, epitomizes this connectivity. Jordi Garcés, cofounder of Garcés de Seta Bonet Arquitectes, tells me via email that they have designed a proposal that plays with connections and knots — temporal, landscape, and territorial. … ‘The architectural elements at different heights will offer new landscape perspectives, as if it were a land art piece.’ In this ‘shared communal space,’ he says, residents and visitors alike can engage with the Mediterranean horizon.

“The building is the core of Catalunya Media City, which is a project that the regional government says will democratize access to technology and creativity. It claims that it will house educational programs for more than 2,500 students annually, including vocational training; research incubators partnering with universities and corporations; immersive installations and performances in a monumental hall with 56-foot-tall ceilings; and production studios, including an auditorium, soundstages, and UX labs.”

More at Fast Company, here.

Read Full Post »

Photo: Everett Collection.
De-aged versions of actors Tom Hanks and Robin Wright, created by artificial intelligence for the 2024 film Here.

We are well into the age of AI. I certainly hope that doesn’t mean we’re going to fulfill the dire warnings of one of its pioneers, and that instead we’ll just use it in relatively harmless ways.

Today’s story is about using AI to “de-age” actors in a movie covering 60 years.

Benj Edwards writes at Wired, “Here, a $50 million Robert Zemeckis–directed film [used] real-time generative AI face transformation techniques to portray actors Tom Hanks and Robin Wright across a 60-year span, marking one of Hollywood’s first full-length features built around AI-powered visual effects.

“The film adapts a 2014 graphic novel set primarily in a New Jersey living room across multiple time periods. Rather than cast different actors for various ages, the production used AI to modify Hanks’s and Wright’s appearances throughout.

“The de-aging technology comes from Metaphysic, a visual effects company that creates real time face swapping and aging effects. During filming, the crew watched two monitors simultaneously: one showing the actors’ actual appearances and another displaying them at whatever age the scene required.

“Metaphysic developed the facial modification system by training custom machine-learning models on frames of Hanks’ and Wright’s previous films. This included a large dataset of facial movements, skin textures, and appearances under varied lighting conditions and camera angles. …

“Unlike previous aging effects that relied on frame-by-frame manipulation, Metaphysic’s approach generates transformations instantly by analyzing facial landmarks and mapping them to trained age variations. … Traditional visual effects for this level of face modification would reportedly require hundreds of artists and a substantially larger budget closer to standard Marvel movie costs.

“This isn’t the first film that has used AI techniques to de-age actors. ILM’s approach to de-aging Harrison Ford in 2023’s Indiana Jones and the Dial of Destiny used a proprietary system called Flux with infrared cameras to capture facial data during filming, then old images of Ford to de-age him in postproduction. By contrast, Metaphysic’s AI models process transformations without additional hardware and show results during filming. …

“Meanwhile, as we saw with the SAG-AFTRA union strike [in 2023], Hollywood studios and unions continue to hotly debate AI’s role in filmmaking. While the Screen Actors Guild and Writers Guild secured some AI limitations in recent contracts, many industry veterans see the technology as inevitable. …

“Even so, the New York Times says that Metaphysic’s technology has already found use in two other 2024 releases. Furiosa: A Mad Max Saga employed it to re-create deceased actor Richard Carter’s character, while Alien: Romulus brought back Ian Holm’s android character from the 1979 original. Both implementations required estate approval under new California legislation governing AI recreations of performers, often called deepfakes. …

“Robert Downey Jr. recently said in an interview that he would instruct his estate to sue anyone attempting to digitally bring him back from the dead for another film appearance. But even with controversies, Hollywood still seems to find a way to make death-defying (and age-defying) visual feats take place onscreen — especially if there is enough money involved.”

What could go wrong?

The first thing I think of is fewer job opportunities for actors who play younger versions of stars. Still, I’d love to see an AI child version of the actress who plays Astrid in the French crime show of the same name, because I think it would look more natural than the mimicking girl they’ve got. (Awesome tv, by the way. Check it out on PBS Passport.)

More at Wired, here. This story originally appeared on Ars Technica.

Read Full Post »

Photo: Instituto Universitario Yamagata de Nazca.
Some of the new geoglyphs found in Nazca. Though the lines have been eroded by the passage of time, AI achieved in months what used to take decades.

Let’s have a kind word for scary old artificial intelligence and how it has, for example, helped to uncover 303 new geoglyphs in the Nazca desert. (By which I don’t mean to say AI doesn’t have serious potential dangers.)

In an El País article on archaeology in Peru, Miguel Ángel Criado reports, “With the help of an artificial intelligence (AI) system, a group of archaeologists has uncovered in just a few months almost as many geoglyphs in the Nazca Desert (Peru) as those found in all of the last century. The large number of new figures has allowed the researchers to differentiate between two main types, and to offer an explanation of the possible reasons or functions that led their creators to draw them on the ground more than 2,000 years ago.

“The Nazca desert, with an area of about 1,900 square miles and an average altitude of 500 meters above sea level, has very special climatic conditions. It hardly ever rains, the hot air blocks the wind and the dry land has prevented the development of agriculture or livestock. Combined, all this has allowed a series of lines and figures, formed by stacking and aligning pebbles and stones, to be preserved for centuries.

“The first layer of soil is made up of a blanket of small reddish stones that, when lifted, reveal a second yellowish layer. This difference in color is the basis of the geoglyphs and is what was used to create them by the ancient Nazca civilization. Some are straight lines stretching several miles. Others are geometric shapes or rectilinear figures, also huge in size.

“The other major category includes the so-called relief-type geoglyphs, which are smaller. In the 1930s, Peruvian aviators discovered the first ones, and by the end of the century more than a hundred had been identified, such as the hummingbird, the frog and the whale. Since 2004, supported by high-resolution satellite images, Japanese archaeologists have discovered 318 more, almost all of them high-profile geoglyphs. The same team, led by Masato Sakai, a scientist from Yamagata University (Japan), has discovered 303 new geoglyphs in a single campaign, supported by artificial intelligence. …

“ ‘The Nazca Pampa is a vast area covering more than 400 square kilometres and no exhaustive study has been carried out,’ the Japanese scientist recalls. Only the northern part, where the large linear geoglyphs are concentrated, ‘has been studied relatively intensively.’ … But scattered throughout the rest of the desert are many relief-type figures that are smaller and that the passage of time has made more difficult to detect.

“Convinced that there were many more, Sakai and his team contacted IBM’s artificial intelligence division. … They had high-resolution images obtained from airplanes or satellites of all of Nazca, but with a resolution of up to a few centimeters per pixel, the human eye would have needed years, if not decades, to analyze all the data. They left that job to the AI system. Although it was not easy to train its artificial vision … with so few previous images and so different from each other, the machine proposed 1,309 candidates. The figure came from a previous selection also made by the AI with 36 images for each candidate. With this selection, the researchers carried out a field expedition between September 2022 and February 2023. The result, as reported in the scientific journal PNAS, is 303 new geoglyphs added to this cultural heritage of humanity. All are relief-type geoglyphs.

“The newly discovered shapes bring the total number found in Nazca to 50 line-type and 683 relief-type geoglyphs, some geometric and others forming figures. The large amount has allowed the authors of this work to detect patterns and differences. Almost all of the former (the monkey, the condor, the cactus…) represent wild animals or plants. However, among the latter, almost 82% show human elements or elements modified by humans. ‘[There] are scenes of human sacrifice,’ says Sakai. …

“The accumulation of data that has made this work possible brings to light a double connection. On the one hand, these relief-type forms are found a few meters from one of the many paths that cross the desert … paths created by the passage of people until a path is created. According to the authors of the study, these creations were made to be seen by travelers.

“On the other hand, the large linear figures appear very close, also meters away, from one of the many straight lines that cut through the pampas. Here, according to Sakai, the symbolic value rules: ‘The line-type geoglyphs are drawn at the start and end points of the pilgrimage route to the Cahuachi ceremonial center. They were ceremonial spaces with shapes of animals and other figures. Meanwhile, the relief-type geoglyphs can be observed when walking along the paths.’

“Cahuachi was the seat of spiritual power of the Nazca culture from around 100 BC to 500 AD and, for the authors, the large forms could be ceremonial stops on the pilgrimage to or from there.

“These explanations do not necessarily rule out, according to the authors, other possible functions that have been attributed to the Nazca lines and figures, such as being calendars, astronomical maps or even systems for capturing the little water that fell.”

Things do get fuzzy when we start to interpret ancient signs. Read more at El País, here. No paywall.

Read Full Post »

Photo: Sam Ogden via Chamber Music America.
Composer Tod Machover.

With all the furor about artificial intelligence, Rebecca Schmid decided to check in with MIT’s Tod Machover, “a pioneer of the connections between classical music and computers.” Their conversation about how AI applies to music appears on the Chamber Music America website.

“Sitting at his home in Waltham, Massachusetts, the composer Tod Machover speaks with the energy of someone half his 69 years as he reflects on the evolution of digital technology toward the current boom in artificial intelligence.

“ ‘I think the other time when things moved really quickly was 1984,’ he says — the year when the personal computer came out. Yet he sees this moment as distinct. ‘What’s going on in A.I. is like a major, major difference, conceptually, in how we think about music and who can make it.’

“Perhaps no other figure is better poised than Machover to analyze A.I.’s practical and ethical challenges. The son of a pianist and computer graphics pioneer, he has been probing the interface of classical music and computer programming since the 1970s.

“As the first Director of Musical Research at the then freshly opened Institut de Recherche et Coordination Acoustique/Musique (I.R.C.A.M.) in Paris, he was charged with exploring the possibilities of what became the first digital synthesizer while working closely alongside Pierre Boulez.

“In 1987, Machover introduced Hyperinstruments for the first time in his chamber opera VALIS, a commission from the Pompidou Center in Paris. This technology incorporates innovative sensors and A.I. software to analyze the expression of performers, allowing changes in articulation and phrasing to turn, in the case of VALIS, keyboard and percussion soloists into multiple layers of carefully controlled sound.

“Machover had helped to launch the M.I.T. Media Lab two years earlier in 1985, and now serves as both Muriel R. Cooper Professor of Music and Media and director of the Lab’s Opera of the Future group. …

“Machover emphasizes the need to blend the capabilities of [AI] technology with the human hand. For his new stage work, Overstory Overture, which premiered last March at Lincoln Center, he used A.I. as a multiplier of handmade recordings to recreate the sounds of forest trees ‘in underground communication with one another.’

“Machover’s ongoing series of ‘City Symphonies,’ for which he involves the citizens of a given location as he creates a sonic portrait of their hometown, also uses A.I. to organize sound samples. Another recent piece, Resolve Remote, for violin and electronics, deployed specially designed algorithms to create variations on acoustic violin. …

“Machover has long pursued his interest in using technology to involve amateurs in musical processes. His 2002 Toy Symphony allows children to shape a composition, among other things, by means of ‘beat bugs’ that generate rhythms. This work, in turn, spawned the Fisher-Price toy Symphony Painter and has been customized to help the disabled imagine their own compositions. …

“Rebecca Schmid: How is the use of A.I. a natural development from what you began back in the 1970s, and what is different?
“Tod Machover: There are lots of things that could only be done with physical instruments 30 years ago that are now done in software: you can create amazing things on a laptop. But what’s going on in A.I. is like a major, major difference, conceptually, in how we think about music and who can make it.

“One of my mentors and heroes is Marvin Minsky, who was one of the founders of A.I., and a kind of music prodigy. And his dream for A.I. was to really figure out how the mind works. He wrote a famous book called The Society of Mind in the mid-eighties based on an incredibly radical, really beautiful theory: that your mind is a group of committees that get together to solve simple problems, with a very precise description of how that works. He wanted a full explanation of how we feel, how we think, how we create — and to build computers modeled on that.

“Little by little, A.I. moved away from that dream, and instead of actually modeling what people do, started looking for techniques that create what people do without following the processes at all. A lot of systems in the 1980s and 1990s were based on pretty simple rules for a particular kind of problem, like medical diagnosis. You could do a pretty good job of finding out some similarities in pathology in order to diagnose something. But that system could never figure out how to walk across the street without getting hit by a car. It had no general knowledge of the world.

“We spent a lot of time in the seventies, eighties, and nineties trying to figure out how we listen — what goes on in the brain when you hear music, how you can have a machine listen to an instrument — to know how to respond. A lot of the systems which are coming out now don’t do that at all. They don’t pretend to be brains. Some of the most kind of powerful systems right now, especially ones generating really crazy and interesting stuff, look at pictures of the sound — a spectrogram, a kind of image processing. I think it’s going to reach a limit because it doesn’t have any real knowledge of what’s there. So, there’s a question of, what does it mean and how is it making these decisions?

“What systems have you used successfully in your work?
“One is R.A.V.E., which comes from I.R.C.A.M. and was originally developed to analyze audio, especially live audio, so that you can reconstruct and manipulate it. The voice is a really good example. Ever since the 1950s, people have been doing live processing of singing. The problem is that it’s really hard to analyze everything that’s in the voice: The pitch and spectrum are changing all the time.

“What you really want to do is be able to understand what’s in the voice, pull it apart and then have all the separate elements so that you can tune and tweak things differently on the other side. And that’s what R.A.V.E. was invented to do. It’s an A.I. analysis of an acoustic signal. It reconstructs it in some form, and then ideally it comes out the other side sounding exactly like it did originally, but now it’s got all these handles so that I can change the pitch without changing the timbre. And it works pretty well for that. You can have it as an accompanist, or your own voice can accompany you. It can change pitch and sing along. And it can sing things that you never sang because it understands your voice. …
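
R.A.V.E. is open source, and pretrained models are commonly distributed as TorchScript files exposing encode and decode steps; that detail, and the file name below, are my assumptions rather than anything in the interview. Here is a sketch of the encode, tweak, decode loop Machover describes.

```python
# Hedged sketch of the encode/tweak/decode idea behind R.A.V.E.
# The model file is hypothetical, and the latent "nudge" is purely
# illustrative, not a documented recipe.
import torch

model = torch.jit.load("vocal_rave_model.ts")  # hypothetical pretrained export
audio = torch.randn(1, 1, 48000)               # stand-in for a second of voice

with torch.no_grad():
    z = model.encode(audio)             # pull the signal apart into latent "handles"
    z = z + 0.5 * torch.randn_like(z)   # nudge the latents: vary what is played
    out = model.decode(z)               # reconstruct audio from the edited form
```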

“The great thing about A.I. models now is that you can use them not just to make a variation in the sound, but also a variation in what’s being played. So, if you think about early electronic music serving to kind of color a sound — or add a kind of texture around the sound, but being fairly static — with this, if you tweak it properly, it’s a kind of complex variation closely connected to what comes in but not exactly the same. And it changes all the time, because every second the A.I. is trying to figure out, How am I going to match this? How far am I going to go? Where in the space am I? You can think of it as a really rich way of transforming something or creating a kind of dialogue with the performer.” Lots more at Chamber Music America, here. No paywall.

I myself have posted about the composer a few times: for example, here (“Tod Machover,” 2012); here (“Stanford’s Laptop Orchestra,” 2018); and here (“Symphony of the Street,” 2017).

“AI Finished My Story. Does It Matter?” at Wired, here, offers additional insight.

Read Full Post »

Photo: Goban1.
The Chinese game called Go is more than 2,500 years old.

This past summer, I blogged about a new board game called Wingspan. It sounded wonderful, especially for bird lovers, like those in my family. I bought it.

Well, I think it is going to be wonderful, but the rules are really hard. Recommended for people over 14, it is still too “buch for be,” as Rudyard Kipling’s Elephant’s Child says.

Fortunately, it’s not too much for my 9-year-old grandson, who is gradually figuring it out and explaining it. Otherwise, I might have had to call on artificial intelligence experts, like those described in today’s story.

Samantha HuiQi Yow explains at Wired: “In 1901, on an excavation trip to Crete, British archaeologist Arthur Evans unearthed items he believed belonged to a royal game dating back millennia: a board fashioned out of ivory, gold, silver, and rock crystals, and four conical pieces nearby, assumed to be the tokens. Playing it, however, stumped Evans, and many others after him who took a stab at it. There was no rulebook, no hints, and no other copies have ever been found. Games need instructions for players to follow. Without any, the Greek board’s function remained unresolved—that is, until recently.

“Enter artificial intelligence and a group of researchers from Maastricht University in the Netherlands. Thanks to an algorithm the team used to analyze the playability of one suggested ruleset, the century-old guesswork could soon be taken out of the Knossos game. Today, not only can its recognition as a game be further assessed, with hopes of a clearer answer in future, a version of it is also playable online.  And for the first time, so are hundreds of other games thought to have been lost to history.

“Board games go back a long way. Centuries ago, before the chess we know today, there was Chaturanga in India, Shogi in Japan, and Xiangqi in China. And long before them was Senet, one of the earliest known games, which, along with others played in ancient Egypt, may have ultimately inspired backgammon. ‘Games are social lubricants,’ explains Cameron Browne, a computer scientist at the university who received his PhD in AI and game design. ‘Even if two cultures don’t speak the same language, they can exchange play. This happened throughout history. Wherever people spread to, wherever soldiers were stationed, wherever merchants were trading. Anyone who had time to kill would often teach those around them the games they knew.’ …

“[But] the rules were typically passed on by word of mouth instead of being written down. The little that is known is left open to modern interpretation.

“It’s these lapses in board game history that gave legs to the five-year Digital Ludeme Project, which Browne leads. ‘Games are a great cultural resource that’s been largely underutilized. We don’t even know how so many of them were played, especially when you go farther back in time,’ he says. ‘So the question for me was, can we use modern AI techniques to shed insight into how these ancient games were played and, together with the evidence available, help reconstruct them?’

“As it turns out, the answer is a resounding yes. It’s been three years since Browne and his colleagues set to work, and already they have brought nearly a thousand board games online, ranging across three time periods and nine regions. Thanks to them, games once popular in the second and first millennia BC, like 58 holes, are now just a few clicks away for anyone on the internet.

“Interestingly enough, this reconstruction process begins with the opposite. Games are first broken down into fundamental units of information called ludemes, which refers to elements of play such as the number of players, movement of pieces, or criteria to win. Once a game is codified in this manner, the team then fills in the missing pages of its rulebook with the help of relevant historical information, like when it or another game with similar ludemes was played and by whom.
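
To make the idea concrete, here is a sketch of a game broken into ludemes. The field names are my own invention, not the project’s actual schema (its Ludii system has a game description language of its own), but the principle is the same.

```python
# Illustrative sketch of breaking a game into ludemes. The field names
# are hypothetical, not the Digital Ludeme Project's real schema.
from dataclasses import dataclass, field

@dataclass
class Ludeme:
    name: str
    value: object

@dataclass
class GameDescription:
    name: str
    ludemes: list = field(default_factory=list)

senet = GameDescription(
    name="Senet (one proposed ruleset)",
    ludemes=[
        Ludeme("players", 2),
        Ludeme("board", {"shape": "grid", "rows": 3, "cols": 10}),
        Ludeme("movement", "race along a winding track"),
        Ludeme("chance", "throw of four two-sided casting sticks"),
        Ludeme("win", "bear all pieces off the board first"),
    ],
)

# A reconstruction can then test rule variants by swapping individual
# ludemes and letting AI agents play the result to judge playability.
```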

“The riddle, however, is only partly solved at this stage. Others who do similar work – manually – usually hit a dead end here. It’s because what looks good on paper might not translate as well in reality, Browne explains. ‘The rules might make sense when you read them, but you don’t know how well they actually work unless you play the game. Quite often, rules that make perfect sense play terribly as games.’ …

“But computers can have blind spots too, in that they only measure what’s measurable. Here’s where Walter Crist comes in.” More at Wired, here.

Read Full Post »

Photo: Remko de Waal/ANP/AFP via Getty Images.
Rembrandt’s restored ‘Night Watch’ at the Rijksmuseum in Amsterdam.

A project to restore a Rembrandt called “Night Watch” has received a lot of attention recently, but at the risk of repeating what you already know, I’d just like to point out that trimming a work of art can seriously affect its greatness.

How many times have building renovations cut paintings to fit or squashed them into too small a space to be properly appreciated? I think, for example, of the many special WPA paintings in US post offices that have been significantly altered over the years. I understand competing needs, but it’s a loss.

What was lost in Rembrandt’s ‘Night Watch,’ the New York Times says, was a sense of movement. The original was “asymmetrical: The large arch that stands behind the crowd was in the middle, and the group’s leaders were on the right. Rembrandt painted them this way to create a sense of movement through the canvas.

“Once the new pieces were restored, so was the balance, [said the Rijksmuseum’s director, Taco Dibbits]. ‘You really get the physical feeling that Banninck Cocq and his colleagues really walk towards you.’ “

The main focus of the recent news coverage, however, was on how experts used artificial intelligence (AI) — along with an early copy of the original painting — to reimagine Rembrandt’s intentions.

Nina Siegal reported at the Times, “Rembrandt’s ‘The Night Watch’ has been a national icon in the Netherlands ever since it was painted in 1642, but even that didn’t protect it.

“In 1715, the monumental canvas was cut down on all four sides to fit onto a wall between two doors in Amsterdam’s Town Hall. The snipped pieces were lost. Since the 19th century, the trimmed painting has been housed in the Rijksmuseum, where it is displayed as the museum’s centerpiece, at the focal point of its Gallery of Honor.

“[Now] for the first time in more than three centuries, it will be possible for the public to see the painting ‘nearly as it was intended,’ said the museum’s director, Taco Dibbits. …

“Rather than hiring a painter to reconstruct the missing pieces, the museum’s senior scientist, Robert Erdmann, trained a computer to recreate them pixel by pixel in Rembrandt’s style. A project of this complexity was possible thanks to a relatively new technology known as convolutional neural networks, a class of artificial-intelligence algorithms designed to help computers make sense of images, Erdmann said.”
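
As a tiny illustration of the building block involved, here is what a single convolution does: slide a small filter across an image and record how strongly each patch matches it. A real network stacks thousands of learned filters; this toy uses one hand-written edge detector.

```python
# A toy 2D convolution, the basic operation CNNs stack to "make sense of
# images." Illustrative only; the Rijksmuseum's model is far larger.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edge_filter = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]])  # responds to vertical edges
canvas = np.random.rand(8, 8)         # stand-in for a patch of the scan
print(conv2d(canvas, edge_filter).shape)  # (6, 6) feature map
```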

As amazing as AI is, the work would not have been possible if a less renowned painter hadn’t made an early copy of Rembrandt’s work.

“Indications already existed of how the original ‘Night Watch’ likely looked,” Siegal continues, “thanks to a copy made by Gerrit Lundens, another 17th-century Dutch painter. He made his replica within 12 years of the original, before it was trimmed.

“Lundens’s copy is less than one-fifth the size of Rembrandt’s monumental canvas, but it is thought to be mostly faithful to the original. It was useful as a model for the missing pieces, even if Lundens’s style was nowhere near as detailed as Rembrandt’s. Lundens’s composition is also much looser, with the figures spread out more haphazardly across the canvas, so it could not be used to make a one-to-one reconstruction.

“The Rijksmuseum recently made high-resolution scans of Rembrandt’s ‘Night Watch,’ as part of a multimillion-dollar, multiyear restoration project, initiated in 2019. Those scans provided Erdmann with precise information about the details and colors in Rembrandt’s original, which the algorithms used to recreate the missing sections using Lundens’s copy as a guide. The images were then printed on canvas, attached to metal plates for stability and varnished to look like a painting.” More at the Times, here.

The Guardian also covered the story, quoting Dibbits as saying, “With the addition especially on the left and the bottom, an empty space is created in the painting where they march towards. When the painting was cut [the lieutenants] were in the centre, but Rembrandt intended them to be off-centre marching towards that empty space, and that is the genius that Rembrandt understands: you create movement, a dynamic of the troops marching towards the left of the painting. …

“I am always hoping that somebody will call up one day to say that they have the missing pieces. I can understand that the bottom part and top might not be saved but on the left hand you have three figures, so it is surprising that they didn’t surface because at the time in 1715 Rembrandt was already much appreciated and an expensive artist.”

Update 8/11/21 — Michiel of Cook & Drink went to the exhibit, sending a picture and comment: “The AI-part adds a lot of value to the overall painting, but obviously it’s a reconstruction. This is clearly visible (the painting lies a bit deeper than the reconstruction) and that helps to appreciate both the original and the extended version. We’ve seen the painting many times, always in its original frame. To see it without a frame was also special. Very nice to see so many people interested in this project. It’s special to see the combination of very advanced IT, AI, art and history.”

Nice to see a line for art!

Read Full Post »

Photo: Oxia Palus and Lebenson Gallery London.
The Hidden Picture of Beatrice Hastings by Amedeo Modigliani was created by Oxia Palus using AI technology.

Nowadays, art and science work hand in hand. Consider this story about how artificial intelligence was used to reveal an unknown painting by a great master. It starts with the practice of “overpainting.”

Suzanne’s art professor overpainted because he wanted you to sense what was underneath. But at Hyperallergic, Lauren Moya Ford writes, “Artists paint over their finished canvases for many reasons — out of frustration at a failed design, because they lack the funds to buy more material, or even to spite whoever or whatever they’ve depicted.

“The latter was the case in Amedeo Modigliani’s ‘Portrait of a Girl’ (1917), an oil painting of a sullen, seated brunette now held in the collection of the Tate. X-ray studies of the canvas conducted by the museum in 2018 revealed that the piece was originally a full-length portrait of another woman, a slender blonde with angular, elongated features. A portion of this hidden painting — now on view at Lebenson Gallery in London — was uncovered and reconstructed by two scientists using a combination of stereoscopic imaging, artificial intelligence technology, and 3D printing.

“Neuroscientist Anthony Bourached and physicist George Cann joined forces in London in January 2019 to found Oxia Palus, a scientific project that uses machine learning to reconstruct what the duo calls ‘NeoMasters,’ or artworks that have been previously hidden from view under the layers of later paintings. Their past efforts have uncovered a Blue Period nude by Picasso, a Madonna by Leonardo da Vinci, and a landscape painting by Santiago Rusiñol that was later painted over by Picasso, the artist’s friend and mentee. To discover these ‘lost’ works, Bourached and Cann apply a neural style transfer algorithm to X-rays of paintings that are suspected to have another artwork hidden below their surfaces. The technology utilizes imagery from the scan, as well as information from the artist’s other works, to reproduce colors, brushstrokes, and other distinguishing features.
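
Oxia Palus hasn’t published its exact pipeline, but classic neural style transfer (Gatys et al., 2015) gives the flavor: optimize an image so its content matches one source, here the X-ray, while its feature statistics match the artist’s known style. Below is a compressed sketch in PyTorch, with random tensors standing in for real images.

```python
# Compressed neural-style-transfer sketch (after Gatys et al.). Random
# tensors stand in for the X-ray and a reference painting; Oxia Palus's
# actual method is more involved and not public.
import torch
import torchvision.models as models

vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def gram(feat):
    # Style lives in the correlations between feature channels.
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

xray = torch.rand(1, 3, 256, 256)       # content source (the X-ray scan)
style_ref = torch.rand(1, 3, 256, 256)  # style source (a known work)
canvas = xray.clone().requires_grad_(True)
opt = torch.optim.Adam([canvas], lr=0.02)

for _ in range(50):
    opt.zero_grad()
    content_loss = torch.nn.functional.mse_loss(vgg[:21](canvas), vgg[:21](xray))
    style_loss = torch.nn.functional.mse_loss(gram(vgg[:30](canvas)),
                                              gram(vgg[:30](style_ref)))
    (content_loss + 1e3 * style_loss).backward()
    opt.step()
```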

“Unlike conservators or other art specialists, Bourached and Cann bring uniquely non-art areas of expertise to the pieces they analyze.

“ ‘George’s inspiration comes from his research on the surface of Mars for the detection of life,’ Bourached explains in a recent email to Hyperallergic. …

“Who was the woman whose likeness has suddenly been unearthed more than 100 years later? She’s thought to be Modigliani’s ex-lover and muse, the English poet, writer, and literary critic Beatrice Hastings. … The two years that the couple shared an apartment in Montparnasse were creatively productive for both: Hastings published prolifically, and is known to have posed for at least 14 of Modigliani’s portraits. But their relationship was also plagued by alcohol addictions, explosive personalities, and violent confrontations. …

“It was perhaps to symbolically scorn his former lover that Modigliani painted over her portrait in 1917, but, thanks to the two London scientists, Hastings has found a way to see the light again. As she wrote in 1937, ‘Civilized woman wants something more than to be the means to a man’s life. She wants to live herself.’ ” More at Hyperallergic, here.

Who gets the last word about what an artist shows to the world? At some point, the work no longer belongs to the artist but to the public. The only way an artist gets final say, I suspect, is to have some acolyte like Jane Austen’s sister Cassandra, who burned all the novelist’s letters after her death. Cassandra thought that whatever her sister wanted done was more important than what posterity might want.

Read Full Post »

Photo: Ismoon.
The earliest recognized form of “written” communication may have been small bits of clay, called tokens. This charming one is from the Indus Valley.

Nowadays, one reads almost too much about artificial intelligence (AI). I myself have an ever-increasing list of things I’d rather not have AI managing for me. But using it to translate ancient texts is one application that seems to make perfect sense.

Ruth Schuster writes at Haaretz, “Understanding texts written using an unknown system in a tongue that’s been dead for thousands of years is quite the challenge. Reconstructing missing bits of the ancient text is even harder. …

“Filling in missing text starts with being able to read and understand the original text. That requires much donkey work. Now an Israeli team led by Shai Gordin at Ariel University in the West Bank has reinvented the donkey in digital form, harnessing artificial intelligence to help complete fragmented Akkadian cuneiform tablets.

“Their paper, ‘Restoration of Fragmentary Babylonian Texts Using Recurrent Neural Networks,’ was published in the Proceedings of the National Academy of Sciences in September.

“ ‘Neural networks’ … means software inspired by biological nervous systems. The concept dates back more than 70 years. … The base concept is to teach machines to learn, think and make decisions. In this case, the computer decides on the plausible completion of missing text. …

“Gordin and the team feed their machine transliterations of the extant Babylonian texts, i.e., what the text would have sounded like.

“Then what? When it comes to missing bits in a papyrus or tablet, humans can intuit that ‘…ow is your moth…’ isn’t a query into the well-being of your mothball.

“With machines, it’s all about mathematics and probabilities based on knowledge gained so far. …

“It may have been trading that inspired the earliest recognized form of communication: ‘pseudo-writing’ on small bits of clay in Mesopotamia around 7,000 years ago. The clay bits, called tokens, were shaped into simplistic imagery such as a cow or other ancient commodities. …

“Then we start seeing abstract signs; repetitive strokes or depressions are interpreted as numbers (price, perhaps); and possibly also personal names, using the first sounds of different imprints to put together words you can’t draw. …

“Anyway, after pseudo-writing came proto-writing: figurative proto-cuneiform inscribed on tablets, which arose about 5,500 years ago in the city of Uruk. … Within mere centuries, proto-cuneiform evolved to become increasingly schematic and Sumer was apparently where it happened, [Gordin] says. And figurative hieroglyphic script began to appear in ancient Egypt at about the same time, about 5,000 years ago. …

“By the time cuneiform became a thing, writing had passed the stage of ‘Sheep : four : Yerachmiel’ and reached the stage of official records, letters and formulaic recounts of the wondrousness of the ruler. …

“For cuneiform, we have the gargantuan multilingual text at Behistun, Iran. Darius the Great had his exploits described in three different cuneiform scripts. [The] Behistun text was monumental: 15 meters (49 feet) high by 25 meters wide, and 100 meters up a cliff on the road connecting Babylon and Ecbatana, all to describe how Darius vanquished Gaumata and other foes. …

“And over decades, linguists slowly interpreted the languages of Babylon and Assyria, thanks to Darius’ monumental ego. …

“Interpreting a dead language is a mathematical game, Gordin says. … Neural networks are a computerized model that can understand text. How? They turn each symbol or word into a number, he explains. …

“When humans reconstruct missing text, their interpretation may be subjective. To be human is to err with bias, and quantifying the likely accuracy of the completion is impossible. Enter the machine. …

“The machine proved capable of identifying sentence structures – and did better than expected in making semantic identifications on the basis of context-based statistical inference, Gordin says. Its talents were further deduced by designing a completion test, in which the machine-learning model had to answer a multiple-choice question: which word fits in the blank space of a given sentence.”
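
The completion test is easy to picture: given a sentence with a blank and a handful of candidate words, the model scores each candidate and keeps the most probable. Here is a toy version of that loop, with a stand-in scorer where the team’s trained recurrent network would go.

```python
# Toy version of the paper's multiple-choice completion test. score() is
# a stand-in; the real system uses a trained recurrent network to rate
# how plausible each completed sentence is.
def score(sentence: str) -> float:
    return -len(sentence)  # placeholder so the example runs

def complete(template: str, candidates: list[str]) -> str:
    scored = {c: score(template.replace("____", c)) for c in candidates}
    return max(scored, key=scored.get)

# Illustrative transliteration in the style of an Akkadian letter opening.
template = "a-na szarri be-li-ia ____"
print(complete(template, ["qi2-bi2-ma", "um-ma", "ar-hi-isz"]))
```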

Not sure how many readers are into that kind of thing, but I do find it intriguing. More at Haaretz, here.

Read Full Post »