Ian Mulvaney, Head, Product Innovation, Sage Publications, said that artificial intelligence (AI) is one way to deal with data at scale: it can train models on large amounts of data. The use of AI has spread as the cost of computation has come down radically. AI and machine learning can perform exploration, prediction, and generation, but sometimes the algorithms perform badly or can be tricked, and he showed an example.
Today, AI is handcrafted, but we are on the verge of AI-driven robots coming into production.
Peter Brantley, Director, Online Strategy, UC Davis Library, said that AI emphasizes deriving patterns from data and making inferences that allow for predictive services; it may even approach the level of insight. "Narrow AI" takes many application forms, some of them non-interactive or mechanical. We might make mistakes in building these AIs, but the impacts of those mistakes would be recoverable.
AI is increasingly invading areas of social interaction, where the manipulation of interpretations becomes fraught. The situation will get much worse as general AIs approach, and their impacts will become increasingly profound and persistent.
Ruth Pickering, Co-Founder and Chief Strategy Officer, Yewno, noted that AI is far from mainstream. Algorithms can read huge quantities of information; Yewno uses AI to extract meaning from text and build a neural network model, moving from data to algorithms to relationships to knowledge, and then uses AI to create products that uncover differences in information.
Some 93% of people stay on the first page of their search results, and 53% look at only the first three entries. We need to help people understand broader concepts, discover anything of interest to them, and understand why things are connected. As you read documents, you can categorize them and build an accurate picture. The technology is unbiased and applies the categorization accurately and consistently, so people spend less time being frustrated.
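Yewno's categorization technology is proprietary and was not described in detail; as a minimal sketch of what "consistently applied" categorization means in practice, the toy classifier below (hypothetical concept names and vocabularies, not Yewno's) labels each document by overlap with a fixed concept vocabulary, so identical text always receives the identical label:

```python
# Hypothetical concept vocabulary: each concept maps to indicative terms.
# These names and term sets are illustrative, not from any real product.
CONCEPTS = {
    "machine learning": {"model", "training", "neural", "algorithm"},
    "publishing": {"journal", "article", "peer", "review"},
}

def categorize(text):
    """Assign the concept whose vocabulary overlaps the text most.

    Because the rule is fixed, it is applied uniformly: the same
    document always gets the same label, run after run.
    """
    words = set(text.lower().split())
    scores = {c: len(vocab & words) for c, vocab in CONCEPTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "uncategorized"

print(categorize("Training a neural model with a new algorithm"))
# → machine learning
```

A production system would use a trained statistical model rather than keyword overlap, but the consistency property Pickering emphasized is the same: a deterministic function from text to category.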
Elizabeth Caley (Chan Zuckerberg Initiative) spoke about the applications of AI to the scientific literature and described the drivers behind them.
How do you take 27 million papers, build a knowledge graph that transforms them into meaningful connections that can be navigated, and then predict where a field is headed? The answer is machine learning.
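Caley did not detail the construction method, but the core idea of a navigable knowledge graph can be sketched in a few lines: treat concepts extracted from papers as nodes, and weight an edge between two concepts by how many papers mention both. The paper IDs and concepts below are invented for illustration:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical toy corpus: paper id -> concepts an ML model extracted from it.
papers = {
    "p1": {"CRISPR", "gene editing"},
    "p2": {"gene editing", "ethics"},
    "p3": {"CRISPR", "gene editing", "ethics"},
}

# Co-occurrence graph: concepts are nodes; an edge's weight is the
# number of papers in which both concepts appear together.
graph = defaultdict(lambda: defaultdict(int))
for concepts in papers.values():
    for a, b in combinations(sorted(concepts), 2):
        graph[a][b] += 1
        graph[b][a] += 1

def neighbors(concept):
    """Concepts connected to `concept`, strongest links first."""
    return sorted(graph[concept], key=graph[concept].get, reverse=True)

print(neighbors("gene editing"))
```

Navigation is then just walking `neighbors()` from concept to concept; at the scale of 27 million papers, the same structure (with learned rather than counted edge weights) also supports prediction, for example by watching which edges are strengthening over time.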
Don Hawkins blogs about conferences for Information Today and Against The Grain. He also maintains the Conference Calendar on the Information Today website and is the Editor of Personal Archiving: Preserving Our Digital Heritage, published by Information Today in 2013, and Co-Editor of Public Knowledge: Access and Benefits, published by Information Today in 2016. He received his Ph.D. degree from the University of California, Berkeley, and has worked in the information industry for over 45 years.