FOR THE PEOPLE
Do you live within the limits of the City of Turin? Take our survey to help us map citizens' social issues.
FOR THE INSTITUTIONS
The IAT (Implicit Association Test) for institutional employees
DATA
Maps, statistics, and analyses
BIBLIOGRAPHICAL REFERENCES
S. Akhtar et al. (2020). Modeling Annotator Perspective and Polarized Opinions to Improve Hate Speech Detection. Proceedings of the AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2020).
A. Bacciu, G. Trappolini, A. Santilli, E. Rodolà, & F. Silvestri (2023). Fauno: The Italian Large Language Model that will leave you senza parole! Italian Information Retrieval Workshop (IIR 2023), 9–17.
T. B. Brown et al. (2020). Language Models are Few-Shot Learners. Proceedings of the 34th International Conference on Neural Information Processing Systems (NeurIPS 2020), 1877–1901.
F. Cabitza et al. (2023). Toward a Perspectivist Turn in Ground Truthing for Predictive Computing. Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-23).
S. Casola et al. (2023). Confidence-based Ensembling of Perspective-aware Models. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023).
A. Conneau, K. Khandelwal, N. Goyal, V. Chaudhary, G. Wenzek, F. Guzmán, E. Grave, M. Ott, L. Zettlemoyer, & V. Stoyanov (2019). Unsupervised Cross-lingual Representation Learning at Scale. arXiv preprint arXiv:1911.02116.
E. Daga et al. (2022). Integrating Citizen Experiences in Cultural Heritage Archives: Requirements, State of the Art, and Challenges. ACM Journal on Computing and Cultural Heritage (JOCCH), 15(1), 1–35.
L. De Mattei, M. Cafagna, F. Dell'Orletta, M. Nissim, & M. Guerini (2021). GePpeTto Carves Italian into a Language Model. Proceedings of the Seventh Italian Conference on Computational Linguistics (CLiC-it 2020), Bologna, Italy. CEUR Workshop Proceedings, vol. 2769.
J. Devlin, M.-W. Chang, K. Lee, & K. Toutanova (2019). BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171–4186. Minneapolis, Minnesota. Association for Computational Linguistics.
C. Di Bonaventura, A. Muti, & M. A. Stranisci (2023). O-Dang at HODI and HaSpeeDe3: A Knowledge-Enhanced Approach to Homotransphobia and Hate Speech Detection in Italian. Proceedings of EVALITA 2023.
M. Diligenti et al. (2017). Integrating Prior Knowledge into Deep Learning. 16th IEEE International Conference on Machine Learning and Applications (ICMLA 2017).
European Commission (2021). Fostering a European Approach to Artificial Intelligence. Communication COM(2021) 205 final. Brussels, 21.4.2021.
S. Frenda et al. (2023). EPIC: Multi-Perspective Annotation of a Corpus of Irony. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023).
A. Gangemi & V. Presutti (2009). Ontology Design Patterns. In Handbook on Ontologies, 221–243. Berlin, Heidelberg: Springer.
P. He, J. Gao, & W. Chen (2021). DeBERTaV3: Improving DeBERTa Using ELECTRA-style Pre-training with Gradient-disentangled Embedding Sharing. arXiv preprint arXiv:2111.09543.
C. D. Hromei, D. Croce, V. Basile, & R. Basili (2023). ExtremITA at EVALITA 2023: Multi-Task Sustainable Scaling to Large Language Models at its Extreme. Proceedings of EVALITA 2023.
M. Lai, S. Vilella, F. Cena, V. Patti, & G. F. Ruffo (2023). United-and-Close: An Interactive Visual Platform for Assessing Urban Segregation within the 15-Minutes Paradigm. UMAP 2023 (Adjunct Publication), 115–120.
P. Lewis et al. (2020). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. Proceedings of the 34th International Conference on Neural Information Processing Systems (NeurIPS 2020).
L. Molinaro, R. Tatano, E. Busto, A. Fiandrotti, V. Basile, & V. Patti (2022). DelBERTo: A Deep Lightweight Transformer for Sentiment Analysis. AI*IA 2022, 443–456.
A. Palmero Aprosio, S. Menini, & S. Tonelli (2022). BERToldo, the Historical BERT for Italian. Proceedings of the Second Workshop on Language Technologies for Historical and Ancient Languages.
S. Pan et al. (2023). Unifying Large Language Models and Knowledge Graphs: A Roadmap. arXiv preprint arXiv:2306.08302.
L. Parisi, S. Francia, & P. Magnani (2020). UmBERTo: An Italian Language Model Trained with Whole Word Masking. GitHub: https://github.com/musixmatchresearch/umberto.
T. Pires, E. Schlinger, & D. Garrette (2019). How Multilingual is Multilingual BERT? Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 4996–5001. Florence, Italy. Association for Computational Linguistics.
M. Polignano, P. Basile, M. De Gemmis, G. Semeraro, & V. Basile (2019). AlBERTo: Italian BERT Language Understanding Model for NLP Challenging Tasks Based on Tweets. CEUR Workshop Proceedings, vol. 2481, 1–6.
A. Santilli (2023). Camoscio: An Italian Instruction-Tuned LLaMA. GitHub: https://github.com/teelinsan/camoscio.
G. Sarti & M. Nissim (2022). IT5: Large-Scale Text-to-Text Pretraining for Italian Language Understanding and Generation. arXiv preprint arXiv:2203.03759.
K. Shuster et al. (2021). Retrieval Augmentation Reduces Hallucination in Conversation. Findings of the Association for Computational Linguistics: EMNLP 2021.
M. A. Stranisci, E. Bernasconi, V. Patti, S. Ferilli, M. Ceriani, & R. Damiano (2023). The World Literature Knowledge Graph. International Semantic Web Conference (ISWC 2023), 435–452. Cham: Springer Nature Switzerland.
M. A. Stranisci, R. Damiano, E. Mensa, V. Patti, D. Radicioni, & T. Caselli (2023). WikiBio: A Semantic Resource for the Intersectional Analysis of Biographical Events. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 12370–12384. Toronto, Canada. Association for Computational Linguistics.
M. A. Stranisci, S. Frenda, M. Lai, O. Araque, A. T. Cignarella, V. Basile, C. Bosco, & V. Patti (2022). O-Dang! The Ontology of Dangerous Speech Messages. Proceedings of the 2nd Workshop on Sentiment Analysis and Linguistic Linked Data, Marseille, France. ELRA.
X. Wei et al. (2023). PolyLM: An Open Source Polyglot Large Language Model. arXiv preprint arXiv:2307.06018.
Y. Zhang et al. (2023). Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models. arXiv preprint arXiv:2309.01219.