How do we design products for people? It is easy to get caught up in feasibility constraints when developing a product, resulting in an avalanche of challenges that seem to need solving before we can achieve the desired result. It is important to remember that we don't need to solve them all. Putting yourself in your customers' shoes reminds you of the critical issue that is key to the product's success. This talk covers tools that help keep the primary user need in mind and build product experiences for people. The speaker will also share examples of the challenges they faced and how they used these tools to overcome them.
Dasha Moskalenko develops websites for people. She believes that a website is as good as its user experience. Dasha is interested in how people behave and what drives them. She enjoys challenging preconceptions about the norm to develop seamless product experiences that people want because they are practical and easy to use. Positive experiences inspire a positive mindset that transfers from one interaction to another. Dasha works at Europeana Foundation, where she manages a team that designs and develops the User Experience of Europeana.eu.
The Europeana Foundation is an independent, non-profit organisation operating the Europeana initiative. Our task is to empower the European cultural heritage sector in its digital transformation. We do this by sharing and promoting European cultural heritage material on Europeana.eu for learning, work, and fun. And we work with partners in the sector to develop expertise, tools and policies that embrace and facilitate digital change.
When discovery is the primary unique selling point of your business, it is expected that you understand how people actually use your product. In this talk we will discuss how you can create incentives for your business to make discovery worse for no good reason at all, how you can create an insular experience that treats every piece of information as fundamentally equally valuable (and therefore equally useless), and how to recognize the signs of categorizing yourself into a corner.
Finn and Blocket are both part of the Schibsted group and cover Norway and Sweden respectively. Their core business is providing a marketplace where businesses and consumers can buy and sell everything from earth movers to teaspoons. Both have been around for more than a decade and are well-established brands with very healthy margins. They also have some really odd ideas about the world.
The process of digitizing historical newspapers at the National Library of Sweden involves scanning physical copies of newspapers and storing them as images. In order to make the scanned contents machine readable and searchable, OCR (optical character recognition) procedures are applied. This results in a wealth of information being generated from different data modalities (images, text and OCR metadata). In this presentation we explore how information from multiple modalities can be integrated to improve searchability and enrich existing collections with useful metadata.
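One way to picture this kind of multimodal integration is as score fusion at query time: a textual match against the OCR'd text is weighted by how trustworthy the OCR is, then blended with a visual similarity score. The following is a minimal illustrative sketch; the function, field names, and weights are hypothetical and do not describe KBLab's actual pipeline.

```python
# Hypothetical sketch of multimodal score fusion for newspaper search.
# All names and weights here are illustrative assumptions.

def fuse_scores(text_score, image_score, ocr_confidence,
                w_text=0.6, w_image=0.4):
    """Blend a text-match score with a visual-similarity score.

    The text score is damped by OCR confidence: on pages where
    character recognition is noisy, textual matches are less
    trustworthy, so the visual signal carries more weight.
    """
    return w_text * text_score * ocr_confidence + w_image * image_score

# A page with poor OCR leans more on visual similarity:
low_ocr = fuse_scores(0.9, 0.5, ocr_confidence=0.4)
# The same match on a cleanly scanned page relies mostly on the text:
high_ocr = fuse_scores(0.9, 0.5, ocr_confidence=0.95)
```

The design point is simply that modalities compensate for one another: metadata about OCR quality turns a brittle text-only ranking into one that degrades gracefully on noisy scans.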
Faton Rekathati is a data scientist at KBLab, the National Library of Sweden's datalab. At the library his work is mainly focused on making image collections searchable and training Swedish language models. Faton graduated with a master's degree in Statistics and Machine Learning from Linköping University.
KBLab is a national research infrastructure for digital humanities and social science at the National Library of Sweden (Kungliga Biblioteket, KB). The library collects, preserves and gives access to almost everything that is published in Sweden. Through the lab we provide access to KB's collections in structured and quantitative form. This makes it possible for researchers both to seek new answers and pose new questions in their research. We also use the library's collections and data to develop and publish Swedish language/audio models and work with artificial intelligence.
Information retrieval has classically been framed in terms of searching and extracting information from static resources. Interactive information retrieval (IIR) has widened the scope, with interactive dialogues largely playing the role of clarifying (i.e. making explicit, and/or refining) the information search space. Informed by market research practices, we seek to reframe IIR as a process of eliciting novel information from human interlocutors, with a chatbot-inspired agent playing the role of an interviewer. This reframing flips conventional IIR into what we call an ‘inverse information seeking dialogue,’ which has largely been unexplored in the academic literature. While some standard methods from natural language processing can be repurposed, such as anaphora resolution and textual entailment, this problem presents unique challenges which invite creative and exploratory approaches. In this talk we aim to outline some of the key challenges and lessons learned through exploring this problem, and propose a novel method for eliciting consumer information through dynamically generated discourses.
Josh Seltzer is the CTO and co-founder of Nexxt Intelligence. With a background in computer science and cognitive science, in the past Josh has worked on applying machine learning techniques to ecological research, such as computer vision and bioacoustic machine listening for identifying animal species in the wild. In his current role, Josh has led a multidisciplinary team of machine learning engineers and data scientists with the goal of understanding consumers through natural language.
Nexxt Intelligence is a market research technology (ResTech) startup located in Toronto, Canada. Our flagship product, inca, is an insight platform powered by AI, built to understand consumers in-depth and at-scale. We pride ourselves in applying qualitative principles to quantitative research, engaging with consumers and deriving unique insights using natural language processing.
In recent years, podcasts have emerged as a novel medium for sharing and broadcasting information over the Internet. How can streaming platforms originally designed for music content help listeners enjoy and discover this new medium, and connect podcasters with their fans?
In this talk, we first introduce how we learned user goals and developed search metrics at Spotify using mixed methods research. We then introduce podcast and music search behaviors learned from a large-scale log analysis.
Building on the information needs we learned, we developed a simple yet effective transformer-based neural instant search model that retrieves items from a heterogeneous collection of music and podcast content. Our model takes advantage of multi-task learning to optimize for a ranking objective in addition to a query intent type identification objective. Our experiments on large-scale search logs show that the proposed model significantly outperforms strong baselines for both podcast and music queries.
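To make the multi-task idea concrete, here is a minimal sketch of how a ranking objective and an intent-classification objective can be combined into one training loss. This is an illustrative toy, not Spotify's implementation; the loss choices (pairwise logistic ranking loss, softmax cross-entropy over intent types) and the weighting hyperparameter are assumptions.

```python
# Illustrative multi-task training objective: ranking + query intent.
# Not Spotify's actual model; all formulas here are standard textbook
# losses chosen for the sketch.
import math

def ranking_loss(pos_score, neg_score):
    """Pairwise logistic loss: pushes the relevant item's score
    above the irrelevant item's score."""
    return math.log(1 + math.exp(-(pos_score - neg_score)))

def intent_loss(logits, true_intent):
    """Softmax cross-entropy over intent types (e.g. music vs. podcast),
    computed with the usual max-shift for numerical stability."""
    z = max(logits)
    log_sum = z + math.log(sum(math.exp(l - z) for l in logits))
    return log_sum - logits[true_intent]

def multitask_loss(pos_score, neg_score, intent_logits, intent_label,
                   alpha=0.5):
    """Weighted sum of the two task losses; alpha trades off how much
    the shared encoder is pulled toward intent identification."""
    return (ranking_loss(pos_score, neg_score)
            + alpha * intent_loss(intent_logits, intent_label))
```

In a real system both losses would be computed from the outputs of a shared transformer encoder, so gradients from the intent task regularize the representations used for ranking.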
Mi Tian is a Senior Data Scientist at Spotify. She is interested in user satisfaction understanding, evaluation of IR systems, metrics and experimentation from an applied perspective. She holds a PhD in audio-based Music Information Retrieval and has a passion for connecting research insights with consumer products.
Streaming has become increasingly central to the music and talk audio industry in the past two decades. As a world-leading streaming service, Spotify has over 406 million monthly active users and hundreds of millions of tracks and episodes in the catalog. At Spotify, we apply our curiosity and mastery to understand the needs and behaviors of our listeners and artists, and aim to deliver a first-class listening experience.
Academic researchers and data scientists use Nexis Data Lab to search academic sources and other high-quality content quickly, visualize data, and reproduce results for their academic papers and experiments.
Do you want to analyze large data sets with news texts for your research? Then our academic tool "Nexis Data Lab" is the right solution and we will show you how to use it.
This cloud-based self-service platform allows academic users to handle larger amounts of data, especially in the course of text and data mining projects.
Katrin Wagner is a LexisNexis Account Manager for the academic market in Germany, Austria and Switzerland, as well as for public authorities and organizations. After graduating from TH Köln with a degree in Library Science, she completed a Master's degree in Information and Knowledge Management at the University of Applied Sciences and Arts in Hanover.
LexisNexis is a leading global provider of data, content and technology solutions, covering the following sectors:
The Nexis Uni search tool was developed together with students. It is particularly suitable for students and staff at universities who need access to national and international news sources, company data and US legal information.
The Nexis Data Lab analysis tool can analyze large data sets of news texts for your research or text and data mining projects.
With this cloud-based self-service platform, global press sources can be searched, the results exported to a Jupyter Notebook environment and analyzed using the Python programming language. This approach allows academic users to handle larger data sets.
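The kind of analysis this workflow enables can be as simple as a term-frequency count over exported article texts. The snippet below is an illustrative example of what a user might run in the Jupyter environment; the data structure is hypothetical and does not reflect the actual Nexis Data Lab export schema.

```python
# Illustrative analysis of exported news texts in a Jupyter notebook.
# The list-of-dicts structure below is a hypothetical stand-in for
# real exported search results.
from collections import Counter

articles = [
    {"headline": "Markets rally on rate news",
     "body": "stocks rose as rates held"},
    {"headline": "Rates held steady",
     "body": "central bank held rates again"},
]

# Tokenize all article bodies and count term frequencies:
tokens = [word for article in articles
          for word in article["body"].lower().split()]
term_counts = Counter(tokens)

# The most frequent terms give a quick thematic overview:
term_counts.most_common(3)
```

From here the same notebook could move on to standard text-mining steps such as stop-word removal, named-entity extraction, or topic modelling over the full export.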
The U.S. legal system relies on precedent, where the law is written not only through codes and regulations, but also through court opinions. Legal research, the process of finding relevant law responsive to the facts of a case, is time-consuming, as lawyers often fear missing a key opinion. For over a century, Thomson Reuters has developed tools and methods to aid legal research. I will present a few chosen examples, ranging from a master classification system introduced over 100 years ago to more recent work on search and recommendation. There are still plenty of open questions, and I will conclude by presenting some remaining challenges.
Isabelle Moulinier is a VP, Applied AI Research at Thomson Reuters. She currently leads a team of applied scientists focused on bringing state of the art research in artificial intelligence to the next generation of legal and tax products. Isabelle and her team have built many features for Westlaw, a search engine for legal professionals over the years. Most recently, she and her team developed AI capabilities for Westlaw Edge, such as Quick Check, a citation recommendation approach, and finding potential answers to questions.
Isabelle holds a M.Sc. and Ph.D. from the Université Pierre et Marie Curie, Paris, France. Before her recent return to Thomson Reuters, Isabelle led a team of data scientists at Capital One working on speech recognition and NLP.
Thomson Reuters is one of the world’s most trusted providers of answers, helping professionals make confident decisions and run better businesses. Our customers operate in complex arenas that move society forward (law, tax, compliance, government, and media) and face increasing complexity as regulation and technology disrupt every industry. We help them reinvent the way they work. Our team of experts brings together information, innovation and authoritative insight to unravel complex situations, and our worldwide network of journalists and editors keeps customers up to speed on global developments that are relevant to them. We’re on a mission to help professionals advance their businesses and gain competitive advantage with the trusted answers only we can provide. More about Thomson Reuters Labs: http://tr.com/ai-jobs
How do you build a search and discovery system for financial news and research? In this talk, we will look at our decade-long investment in three specialized areas in the field of AI: natural language processing (the application of machine learning methods to text), information retrieval and search, and core machine learning (including deep learning). We will show how this investment enables us to apply autocompletion, query understanding, index enrichment (topics, people, sentiment), question answering, summarization, and relevance ranking, so that our clients can discover insightful information within the complexity of unstructured data.
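To give a flavour of the first capability mentioned, query autocompletion at its simplest ranks prefix-matching candidates by how often past users issued them. The toy sketch below is purely illustrative and is in no way Bloomberg's system; the query log and ranking rule are invented for the example.

```python
# Toy prefix-based query autocompletion ranked by past query frequency.
# The log and ranking heuristic are hypothetical examples.
from collections import Counter

query_log = [
    "apple earnings", "apple stock",
    "apple earnings", "amazon stock",
]
query_freq = Counter(query_log)

def autocomplete(prefix, k=2):
    """Return up to k past queries starting with `prefix`,
    most frequently issued first."""
    matches = [q for q in query_freq if q.startswith(prefix)]
    return sorted(matches, key=lambda q: -query_freq[q])[:k]

autocomplete("apple")  # frequent completions ranked first
```

Production systems replace the raw frequency with learned ranking signals (recency, user context, query understanding), but the prefix-candidate-then-rank shape is the same.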
Ivo Vigan is a Senior Machine Learning Engineer & Team Lead in Bloomberg’s AI Engineering group. For the past six years, he has been building products at Bloomberg that apply natural language processing and machine learning technologies to the financial domain. He holds a Ph.D. in theoretical computer science from the City University of New York.
Bloomberg is building the world's most trusted information network for financial professionals. Our 6,500+ technologists are dedicated to advancing and building new systems for the Bloomberg Terminal and other products to solve complex real-world problems. Bloomberg's AI Engineering group is a close-knit team of 200+ researchers and engineers focusing on projects related to AI, ML, NLP, NLU, IR, and QA. Learn more at TechAtBloomberg.com/DataScience.