The new digital representation of architecture, as a product of the survey, must today be supported not only by geometric knowledge but also by technical knowledge, related both to the acquisition phase and to the processing phase. In instrumental surveying, the acquired data are always three-dimensional and therefore constitute the metric base for the subsequent modeling operations, in which, according to the surveying method, one can distinguish models obtained from the use of images and models obtained from the processing of range maps. In both cases, the most recent lines of research fluctuate between enhancing data-acquisition capacity and the need to manage the data in order to produce models, as far as possible through automatic procedures. The fact remains that different kinds of representation correspond to the different technologies used to examine the morphological aspects of an architecture, and that the choice of a specific survey method can provide different interpretations of an object. That is why, in the case study presented here, photogrammetry and laser scanning are compared in order to evaluate, on the one hand, their potential and limits in meeting the two demands most frequently placed on modeling, namely increased automation and realistic representation, and, on the other, the actual possibilities of integrating the two methods. The analyzed structure, the transept-apse complex of San Francesco al Prato in Perugia, has an intricate geometry derived from the overlapping of architectural elements of different phases, from the irregularities caused by structural damage, and from the decay of the materials. Owing to the absence of unequivocally interpretable characteristic points and lines, this case of archaeological survey lends itself well to the joint use of different surveying techniques. The questions that arise involve both the metric aspects and the usability of the different representations obtained when the aim of intervening on the building must rely on a textual referent substituting for the real object.
A corpus of 963 images belonging to Near-Eastern seals of the Uruk/Jemdet Nasr period (late fourth millennium BC) was analysed and classified through multivariate analysis techniques, applied both to the presence/absence of iconographical elements and to a text describing each image. Methods and results are discussed and compared. The presence/absence analysis is the most effective in dividing the corpus into different groups of images (scenes with common animals, “special” animals such as hybrids, war, religious, complex handicraft and schematic handicraft scenes). The results of textual analysis are similar in many respects, though here common features between different groups of seals are underscored. Textual analysis also seems a promising approach for the study of the syntactical patterning of the seal images. The study of repeated segments (i.e. fixed sequences of lexical forms occurring in different texts) proved the existence of fixed sub-patterns, consisting of two or more elements and attitudes, which occur in images belonging both to the same group of seals and to different ones. Fixed sub-patterns tend, however, to occur more frequently on images characterized by a simple and repetitive structure, whereas they are only rarely used in the most complex seals. Finally, the results of both analyses effectively proved that the iconography of the seals is related to their origin and function. Religious scenes and representations of hybrids, snakes, birds and lions generally occur on seals or sealings found in temple contexts, often on sealings fastening movable containers or storeroom doors; war scenes are apparently found only in urban centres. Complex handicraft scenes tend to be found in storerooms or in domestic contexts, often on “clay balls” (a sort of primitive administrative document). Finally, schematic images generally occur in domestic, non-official contexts.
Schematic seals were apparently rarely used for sealing; most of their images derive from original seals and not from impressions. On the other hand, religious scenes seem especially typical of southern Mesopotamia, complex handicraft scenes of Iran and Syria, whereas identical schematic seals are found in all geographical areas. Further developments of the methods tested on the seal corpus (firstly through a deeper interaction with repeated segments analysis; secondly through development of methods for the analysis of the general image composition and finally through an integrated approach considering all aspects together) may lead to interesting results for the study not only of the seals themselves, but in general of structured images of different kinds.
In order to analyse a corpus of 963 Near-Eastern Uruk/Jemdet Nasr period sealings, three levels of image structure were identified, namely a) the presence of iconographic elements, b) the presence of sub-patterns, i.e. small images contributing as a whole to the total image, and c) the general image pattern, considered only from the syntactical point of view. This paper is based on second-level analyses, performed through exploratory textual analysis of a formalized text describing the sealing images. Two different textual correspondence analyses were performed: the first on textual forms and the second on repeated segments, i.e. repeated sequences of forms considered as a whole. In the paper, the quality of the results is discussed, in particular by comparing them to classical techniques based on manual coding and to a previous coding. In this case, a better distinction of the different sealing groups resulted from the analysis of forms, whereas the analysis of repeated segments, although reproducing the general pattern of the forms analysis, seems less satisfactory. Both results suggest modifying the automatic procedures used so far, limiting attention to the presence/absence of forms on the one hand, and manually selecting the repeated segments actually corresponding to a sub-pattern, rather than considering all of them, on the other.
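The correspondence analysis at the core of the two studies above can be sketched compactly: the code below runs a classical correspondence analysis, via SVD of the standardized residuals, on a toy presence/absence matrix of images × iconographic elements. This is a minimal illustration of the general technique only; the matrix, element names, and component count are invented, not the authors' actual data or procedure.

```python
import numpy as np

def correspondence_analysis(N, n_components=2):
    """Classical correspondence analysis of a presence/absence matrix
    (rows = seal images, columns = iconographic elements)."""
    P = N / N.sum()                      # correspondence matrix
    r = P.sum(axis=1)                    # row masses
    c = P.sum(axis=0)                    # column masses
    # standardized residuals of the independence model
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    # principal row coordinates
    rows = (U * s) / np.sqrt(r)[:, None]
    return rows[:, :n_components], s ** 2  # coordinates, axis inertias

# toy matrix: 4 images x 3 elements (e.g. lion, snake, vessel)
N = np.array([[1, 1, 0],
              [1, 0, 0],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)
coords, inertia = correspondence_analysis(N)
```

Plotting the first two row coordinates would give the usual CA map in which images sharing iconographic elements cluster together.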
Most studies on the use of punched cards and computers in archaeology seem to take for granted that scientific standards exist to express the data upon which algorithms are to be performed, for retrieval or classification purposes. The author's view is different; examples are given of descriptive codes which have been designed under his direction since 1955 for the storage of archaeological data (artifacts, abstract or figured representations, buildings, etc.) on punched cards of various kinds (marginal, peek-a-boo, IBM, etc.). In order to obviate the shortcomings of natural language, three categories of rules are required: orientation, segmentation, differentiation. The concluding remarks concern the relation of the descriptive languages which are thus obtained to scientific language in general; differences are stressed, as well as reasons for postulating a continuum from the former to the latter.
The literature provides a wide range of techniques to assess and improve the quality of data. Due to the diversity and complexity of these techniques, research has recently focused on defining methodologies that help the selection, customization, and application of data quality assessment and improvement techniques. The goal of this article is to provide a systematic and comparative description of such methodologies. Methodologies are compared along several dimensions, including the methodological phases and steps, the strategies and techniques, the data quality dimensions, the types of data, and, finally, the types of information systems addressed by each methodology. The article concludes with a summary description of each methodology.
This paper discusses whether the technological evolution of computer science in the nineties has resolved the methodological problems of archaeology known since the sixties. It concludes that the first two levels of cognitive methodology (recording and structuring) have been resolved, but that the third and last level (reconstitution) remains the subject of sophisticated but rare experiments.
After the initial enthusiasm for a hypothetical explosion of the metaverse phenomenon, which then waned, a careful analysis can reveal a possible dual model in the planning of this technology. On one hand, a closed, basically monopolistic, approach aimed at market concentration, and on the other a fragmented approach, starting from the bottom, consisting of small interoperating entities. This second model, in recent years, characterized in Italy a series of metaverse initiatives linked to the enhancement of Cultural Heritage and seems to be the most promising at the moment, provided that the longstanding issue of reproduction rights of the Heritage itself is addressed and resolved, preferably with an open approach: a crucial issue in the new digital scenarios.
We visited the first art exhibition entirely designed in the metaverse, “Meta Effect”, which opened on 20 December 2022 in Genoa. Organized by ETT, a digital creative industry of the SCAI Group, the exhibition explores the themes of art and creativity, focusing on the possibility that artificial intelligence can be considered creative. Meta Effect: between “real” authors and AI …
As part of H2IOSC WP2, the Rome UO (CNR-ISPC) contributed to mapping and understanding the Cultural Heritage (CH) and Heritage Science (HS) communities through an integrated strategy. Activities included an exploratory questionnaire, targeted interviews on digital practices, and the development of open access platforms such as DHeLO, BiDiAr, and the Open Digital Archaeology Hub. These initiatives aimed to observe and monitor digital outputs, identify gaps, and foster the aggregation of research projects, datasets, tools, and bibliographic resources. The work reflects a broader effort to build sustainable, community-driven digital infrastructures aligned with the evolving needs of the CH and HS research ecosystems.
The recognition of named entities in Spanish medieval texts is highly complex and involves specific challenges: first, the complex morphosyntactic characteristics of proper-noun use in medieval texts; second, the lack of strict orthographic standards; finally, diachronic and geographical variation in Spanish from the 12th to the 15th century. In this period, named entities usually appear as complex text structures: for example, it was frequent to add nicknames and information about the person's role in society and geographic origin. To tackle this complexity, a named entity recognition and classification system has been implemented. The system uses contextual cues based on semantics to detect entities and assign a type. Given the occurrence of entities with attached attributes, entity contexts are also parsed to determine entity-type-specific dependencies for these attributes. Moreover, it uses a variant generator to handle the diachronic evolution of Spanish medieval terms from a phonetic and morphosyntactic viewpoint. The tool iteratively enriches its own lexica, dictionaries, and gazetteers. The system was evaluated on a corpus of over 3,000 manually annotated entities of different types and periods, obtaining F1 scores between 0.74 and 0.87. Attribute annotation was evaluated for person-name and role-name attributes with an overall F1 of 0.75.
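The idea of contextual cues plus a spelling-variant generator can be illustrated with a deliberately tiny sketch. The cue lists, the variant rules, and the sample sentence below are hypothetical toy examples, far simpler than the lexica, gazetteers, and phonetic rules the system actually uses.

```python
# Hypothetical contextual cues: a title or place word preceding a
# capitalized token signals an entity of a given type.
PERSON_CUES = {"don", "rey", "conde"}   # medieval Spanish titles
PLACE_CUES = {"villa", "cibdad"}        # "town", "city" in medieval spelling

def spelling_variants(word):
    """Toy variant generator for diachronic spelling (f/h, v/b swaps)."""
    variants = {word}
    variants.add(word.replace("f", "h"))
    variants.add(word.replace("v", "b"))
    return variants

def tag_entities(tokens):
    """Assign an entity type to capitalized tokens following a cue word."""
    entities = []
    for i, tok in enumerate(tokens[:-1]):
        cues = spelling_variants(tok.lower())
        nxt = tokens[i + 1]
        if nxt[0].isupper():
            if cues & PERSON_CUES:
                entities.append((nxt, "PERSON"))
            elif cues & PLACE_CUES:
                entities.append((nxt, "PLACE"))
    return entities

tokens = "el rey Alfonso partio de la cibdad Toledo".split()
print(tag_entities(tokens))  # [('Alfonso', 'PERSON'), ('Toledo', 'PLACE')]
```

A real system would of course add attribute parsing (nicknames, roles) and iterative lexicon enrichment on top of this cue-matching core.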
Despite the recognized effectiveness of LiDAR in penetrating forest canopies, its capability for archaeological prospection, in particular the detection of subtle remains scattered over morphologically complex areas, can be strongly limited in areas covered by dense vegetation. In these cases, an important contribution to improving the identification of topographic variations of archaeological interest is provided by LiDAR-derived models (LDMs) based on relief visualization techniques. In this paper, diverse LDMs were applied to the medieval site of Torre Cisterna, north of Melfi (Southern Italy), selected for this study because it is located in a hilly area with complex topography and thick vegetation cover. These conditions are common in several places of the Apennines in Southern Italy and prevented investigations during the 20th century. Diverse LDMs were used to obtain maximum information and to compare the performance of both subjective (visual inspection) and objective (automatic classification) methods. To improve the discrimination/extraction of archaeological micro-relief, noise filtering was applied to the Digital Terrain Model (DTM) before deriving the LDMs. The automatic procedure allowed us to extract the most significant and typical features of a fortified settlement, such as the city walls and a tower castle. Other small, subtle features attributable to possible buried buildings of a habitation area were identified by visual inspection of the LDMs. Field surveys and in-situ inspections were carried out to verify the archaeological points of interest, microtopographical features, and landforms observed in the DTM-derived models, most of them automatically extracted. As a whole, the investigations allowed (i) the rediscovery of a fortified settlement from the 11th century and (ii) the detection of an unknown urban area abandoned in the Middle Ages.
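One of the simplest LiDAR-derived relief visualizations is the analytical hillshade, computed directly from DTM gradients. The sketch below applies the standard hillshade formula to a synthetic DTM with a low mound (a micro-relief analogue); the grid, cell size, and illumination angles are illustrative assumptions, and the paper itself used several, more sophisticated LDMs.

```python
import numpy as np

def hillshade(dtm, cellsize=1.0, azimuth=315.0, altitude=45.0):
    """Analytical hillshade of a DTM; returns values in [0, 1]."""
    az = np.radians(360.0 - azimuth + 90.0)   # to math convention
    alt = np.radians(altitude)
    dz_dy, dz_dx = np.gradient(dtm, cellsize)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(dz_dy, -dz_dx)
    shaded = (np.sin(alt) * np.cos(slope)
              + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0.0, 1.0)

# toy DTM: a 2 m high mound on a flat plain, 50 x 50 cells
y, x = np.mgrid[0:50, 0:50]
dtm = 2.0 * np.exp(-((x - 25) ** 2 + (y - 25) ** 2) / 60.0)
hs = hillshade(dtm)
```

Multi-directional variants, local relief models, and sky-view factor follow the same pattern: derive per-cell quantities from the DTM gradients and render them as an image for visual or automatic interpretation.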
The Federated Archaeological Information Management Systems (FAIMS) Project is an Australian, university-based initiative developing a generalized, open-source mobile data collection platform that can be customized for diverse archaeological activities. Three field directors report their experiences adapting FAIMS software to projects in Turkey, Malawi, and Peru, highlighting three themes: (1) the transition from paper to digital recording has upfront costs with backend pay-off, (2) the transition involves decisions and tradeoffs that archaeologists and technologists need to make together, and (3) digital recording has both short- and long-term benefits. In the short term, project directors reported efficient acquisition of richer, more accurate data. Longer term, they anticipated that the availability of comprehensive, born-digital datasets would support rigorous demonstration of field intuitions and faster publication of more complete datasets. We argue that cooperative development involving archaeologists and technologists can produce high-quality, fit-for-purpose software, representing the best chance of embedding new technology in established projects.
The availability of detailed environmental data, together with inexpensive and powerful computers, has fueled a rapid increase in predictive modeling of species environmental requirements and geographic distributions. For some species, detailed presence/absence occurrence data are available, allowing the use of a variety of standard statistical techniques. However, absence data are not available for most species. In this paper, we introduce the use of the maximum entropy method (Maxent) for modeling species geographic distributions with presence-only data. Maxent is a general-purpose machine learning method with a simple and precise mathematical formulation, and it has a number of aspects that make it well-suited for species distribution modeling. In order to investigate the efficacy of the method, here we perform a continental-scale case study using two Neotropical mammals: a lowland species of sloth, Bradypus variegatus, and a small montane murid rodent, Microryzomys minutus. We compared Maxent predictions with those of a commonly used presence-only modeling method, the Genetic Algorithm for Rule-Set Prediction (GARP). We made predictions on 10 random subsets of the occurrence records for both species, and then used the remaining localities for testing. Both algorithms provided reasonable estimates of the species’ range, far superior to the shaded outline maps available in field guides. All models were significantly better than random in both binomial tests of omission and receiver operating characteristic (ROC) analyses. The area under the ROC curve (AUC) was almost always higher for Maxent, indicating better discrimination of suitable versus unsuitable areas for the species. The Maxent modeling approach can be used in its present form for many applications with presence-only datasets, and merits further research and development.
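The ROC/AUC comparison described above has a simple rank-based reading: the AUC is the probability that a randomly chosen presence site receives a higher model score than a randomly chosen background site. The sketch below computes it directly from that definition; the score arrays are invented toy values, not outputs of Maxent or GARP.

```python
import numpy as np

def auc(scores_presence, scores_background):
    """Rank-based AUC: probability that a presence site scores higher
    than a background site (Mann-Whitney formulation, ties count 0.5)."""
    sp = np.asarray(scores_presence, dtype=float)
    sb = np.asarray(scores_background, dtype=float)
    wins = (sp[:, None] > sb[None, :]).sum()
    ties = (sp[:, None] == sb[None, :]).sum()
    return (wins + 0.5 * ties) / (sp.size * sb.size)

# toy suitability scores from a hypothetical presence-only model
presence = np.array([0.9, 0.8, 0.7, 0.4])
background = np.array([0.6, 0.3, 0.2, 0.1])
print(auc(presence, background))  # 0.9375
```

An AUC of 0.5 corresponds to random discrimination; values nearer 1.0, as Maxent generally achieved in the study, indicate better separation of suitable from unsuitable areas.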
Over the past decade a series of major revisions to the generation and use of knowledge in the context of natural resources management has started to undermine basic assumptions on which traditional approaches to water management were based. Limits to our ability to predict and control water systems have become evident and both complexity and human dimensions are receiving more prominent consideration. Many voices in science and policy have advocated a paradigm shift in water management—both from a normative (it should happen) and a descriptive (it happens, and how) perspective. This paper summarizes the major arguments that have been put forward to support the need for a paradigm shift and the direction it might take. Evidence from the fields of science, policy, and management is used to demonstrate a lacuna in the translation of political rhetoric into change at the operational level. We subsequently argue that learning processes and critical reflection on innovative management approaches is a central feature of paradigm change and that contributions from psychology which emphasise the roles of frames and mental models can be usefully applied to paradigm change processes. The paper concludes with recommendations to facilitate debate and test alternative approaches to scientific inquiry and water management practice leading to critical reflection and analysis.
The Geohm System belongs to the geo-electrical family of prospecting devices and is a system for substratum exploration that scans electrically along lines of preplanted sensors. The system generates a horizontal section of resistivity values in archaeological sites. The Geohm is an uncomplicated system comprising a portable computer, an analog-to-digital conversion device, a multi-relay switcher (software controlled), a solid-state current converter (also software controlled), and a set of mobile sensors inserted in the ground. The measured groups of returned signals are processed by complex algorithms before the data are validated: this technique makes it possible to obtain more reliable measurements and allows the user to reprogram the device. The speed of the system allows the user to survey a large area using several electrical devices.
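The abstract does not describe the Geohm's processing algorithms, but the basic quantity behind any geo-electrical section is the apparent resistivity derived from injected current and measured voltage. As an illustrative assumption only, the sketch below uses the standard Wenner-array formula, rho_a = 2·pi·a·V/I, with invented reading values.

```python
import math

def wenner_apparent_resistivity(spacing_m, voltage_v, current_a):
    """Apparent resistivity (ohm*m) for a Wenner electrode array
    with electrode spacing a: rho_a = 2 * pi * a * V / I."""
    return 2.0 * math.pi * spacing_m * voltage_v / current_a

# one hypothetical reading along a line of preplanted sensors
rho = wenner_apparent_resistivity(spacing_m=1.0, voltage_v=0.05, current_a=0.01)
print(round(rho, 2))  # 31.42
```

Repeating such readings while software-switched relays step the active electrode quadruple along the line is what yields the horizontal resistivity section the abstract describes.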
To fight the spread of the plague, early modern Mediterranean states commonly quarantined goods and people on the move inside complexes called lazzaretti. These institutions, managed by the health magistracies of different Mediterranean cities, formed a transnational plague-preventative system. Based on an exploration of both the early modern theory of contagion and the eighteenth-century quarantine procedures shared between Health Offices across the Mediterranean, this article demonstrates that goods were categorised into different levels of contagion depending on their materials. Different disinfection practices were followed according to the level of danger posed by different types of goods. Anxieties caused by the interaction between surfaces and the human body shaped the procedures, the architecture and the everyday routine inside lazzaretti. While analysing the materiality of quarantined goods, the regulations and the architecture of the lazzaretti, this article highlights the relevance of material culture to early modern medical preventative practices.
Mugello is a medium-to-high seismic risk area situated on the Italian Apennine mountain range, between Tuscany and Emilia Romagna. The territory is characterized by a large number of long-lived settlements with well-preserved historic buildings, most of which are religious architectural complexes. Between 2010 and 2014, an area of Mugello was investigated by the project “Archaeology of Buildings and seismic risk in Mugello”, a research effort focused on testing the informative potential of the archaeological analysis of buildings as a form of knowledge, prevention and protection of medieval settlements exposed to seismic risk. Among the results that emerged from the archaeoseismological investigation, a central role was played by the considerations pertaining to the supply and use of building materials for the construction and modification of architectural structures in the period between the late Middle Ages and the Modern Age.
Digital technologies are frequently considered as lacking material aspects. Today, it is evident that behind digital technologies lies a huge and complex material infrastructure in the form of fiber optic cables, servers, satellites, and screens. Postphenomenology has theorized the relations to material things as embodiment relations. Taking into account that technologies can also have hermeneutic aspects, this theory defines hermeneutic relations as those in which we read the world through technologies. The article opens with a review of some theoretical developments to hermeneutic relations with a special focus on digital technologies. The article suggests that in the digital world, material hermeneutics needs to be updated as it shifts from a scientific to an everyday technological context. Now, technologies not only “give voice” to things, they also produce new meanings to informational structures and direct users to certain meanings. When it comes to digital technologies, especially those involving artificial intelligence (AI), the technology actively mediates the world. In postphenomenological terms, it possesses a technological intentionality. The postphenomenological formula should be updated to reflect this type of technological intentionality, by reversing the arrow of intentionality so that it points to the user, rather than from the user.
How do archaeologists make effective use of physical traces and material culture as repositories of evidence? Material Evidence takes a resolutely case-based approach to this question, exploring instances of exemplary practice, key challenges, instructive failures, and innovative developments in the use of archaeological data as evidence. The goal is to bring to the surface the wisdom of practice, teasing out norms of archaeological reasoning from evidence. Archaeologists make compelling use of an enormously diverse range of material evidence, from garbage dumps to monuments, from finely crafted artifacts rich with cultural significance to the detritus of everyday life and the inadvertent transformation of landscapes over the long term. Each contributor to Material Evidence identifies a particular type of evidence with which they grapple and considers, with reference to concrete examples, how archaeologists construct evidential claims, critically assess them, and bring them to bear on pivotal questions about the cultural past. Historians, cultural anthropologists, philosophers, and science studies scholars are increasingly interested in working with material things as objects of inquiry and as evidence – and they acknowledge on all sides just how challenging this is. One of the central messages of the book is that close analysis of archaeological best practice can yield constructive guidelines for practice that have much to offer archaeologists and those in related fields.
Interest in mass spectrometry with an inductively coupled plasma as an ion source and its association with laser ablation as a sample introduction technique (LA-ICP-MS) has steadily increased during the past few years. After a description of the analytical procedure and the calculation method, we show the potential of this technique to characterize archaeological artefacts non-destructively. A comparison is made between the results obtained with LA-ICP-MS and those obtained on the same objects with other analytical methods. A large variety of archaeological materials such as obsidians, glasses, glazes and flints are studied.
Digital technologies in the last twenty years have offered cultural heritage (CH) new possibilities in conservation and promotion. 3D digitization in particular has become more and more affordable and efficient. This has led to massive digitization projects and an increasing amount of CH digital data. As an engineering team working on industrial techniques for reverse engineering, we are directly affected by this trend. In this paper we propose a way to combine semantic information on top of the acquisition and modeling steps in order to manage heterogeneous historical data. We illustrate our approach with a use case composed of three overlapping historical objects related to Nantes' harbour history.
During the fieldwork season in November 2021-March 2022, the ‘Missione Archeologica della Sapienza nella Penisola Arabica e nel Golfo’ (MASPAG), as part of the research activities supported and financed by the Great Excavations of Sapienza since 2019 and by MAECI since 2022, planned and launched a new landscape archaeology project in the Sultanate of Oman. The first survey was carried out in an area of the Al Batinah South Governorate previously unknown to archaeology, combining remote sensing and ground verification activities. This operation also yielded the first result of the collaboration between the MASPAG research group and adArte srl, developer of the pyArchInit open-source plugin for QGIS. The first season of the survey not only made it possible to estimate the archaeological potential of the study area, but also served as a workshop, opening a dialogue between universities and private companies to discuss open-source solutions in archaeology.