v32#2 Emerging Tech: To Be or Not to Be? — Introducing Hybrid-Build: The New Approach to Building Modern Content Platforms for 21st Century Publishing

May 11, 2020

Column Editors:  Deni Auclair (Media Growth Strategies, LLC)  www.mediagrowthstrategies.com

and John Corkery (Client Engagement Director, LibLynx)  www.liblynx.com

As typical consumers, we’ve come to expect intuitive, personalized, and seamless technology experiences, as in our interactions with Netflix or Google.  Along the same lines, publishers and libraries want simple, flexible, and advanced technology that meets their business and budgetary needs.  This raises the question:  Why has this type of experience been so elusive in our industry?

For one thing, there’s the puzzling pattern that organizations cycle through approximately every seven to ten years, which we call the “Buy vs. Build” cycle.  Companies vacillate as they look for the best solution, locked in a transitory cycle in which the pendulum swings from one extreme to the other, never truly finding its center.  There were valid reasons for this in the past, as technology advances leap-frogged from one side to the other.  But at this point that reasoning no longer holds.  In fact, a “hybrid-build” strategy that offers the benefits of both the build and buy approaches is what now seems to make sense.  Is that the center, that sweet spot?

The Buyers

Many favor outsourcing, letting external technology experts who have built, polished, and refined certain functionality handle it, enabling publishers to focus on publishing.  This has been the most popular route for the past several years, with a handful of companies providing platforms that publishers could not easily create themselves.  It is expensive in terms of installation, training, and customization, but it requires minimal internal technology staff and appears to incur little technical debt.

Less obvious in this scenario is that technical debt is actually present, in the form of cost and annoying platform inflexibility, given the one-size-fits-all approach (i.e., the needs of the many outweigh the needs of the few).  There is also the question of the content enrichment that takes place within the platforms, which is ultimately lost if the organization decides to move on … a technical Catch-22.  What most platform providers don’t broadcast, and most would probably deny, is that they are running what is now considered dated technology, developed a decade or more ago in most cases.  They add more modern front-end technology to compensate for the inflexibility of their platforms, but most do not allow customers to customize or interact flexibly with the underlying system.

Consider the vendors’ dilemma:  “How do we build a flexible system with the number of clients and the volume of data we already have?”  “It’s safer for us to make iterative improvements to the technology we are familiar with;  best to hold off on the investment of time and money that would force us through internal pain and capital losses.”  Most platform vendors are focused on the day-to-day servicing of current clients, not on R&D.  But is the pendulum gaining momentum in a different direction?

The Builders

For those who would design and create their own systems, there are no doubt many cautionary tales of past experiences, along with the whisperings of platform vendors:  “This really is rocket science;  best not to try it at home.”  Several factors have made building a platform complex and costly in the past:  hiring a team of developers, selecting technology, building something that meets the needs of the business and end users, and then maintaining, managing, servicing, and upgrading the technology.  Couple these with a financial amortization model that does not always keep pace with the exponential advance of technology, and “tech-fatigue” begins to set in.  Those who built systems amortized over multiple years have found themselves with “old” tech faster than expected.

Systems become more difficult to maintain and replace as time moves on, of course, and frequently require workarounds to keep them somewhat current, or just working.  They also often do not play well with other systems (interoperability wasn’t really a thing, even as few as five years ago).  The net result is a delicate mix of old, disparate, siloed systems that users pray will continue to work and that nobody dares to think of replacing, because the full cost of owning the technology has become an ongoing liability to the business.  Given all of this, many organizations find it prudent to take the route of buying from outside providers.

Are these realities of the past still the case?  Not really, not anymore.  Here’s why.

What Is a Hybrid-Build?

The current state of information dissemination is increasingly complex and competitive.  The number of digital platforms and systems organizations have to support seems to expand year over year.  Most publishing firms (should we say all publishing firms that expect to stay in business?) employ technical people, and those technical people are steadily deepening their expertise.  Part of the reason for this is that technology has vastly improved and simplified complex functions.  Modern technology, for example, is built to operate natively in the cloud.  These systems find common ground in IoT standards, strong development frameworks, and common back-end tools.  The developers who have made much of this possible are our first generation of digital natives, who experienced monolithic code and slow-moving applications first-hand.  They are now creating smart, flexible frameworks built to interact with other systems and to create opportunities to build business-changing assets that will age gracefully.  Thinking again about the experience most business people have with the Google suite of products, one asks:  Would that have even been imaginable in 2010?

The hybrid approach couples modular applications and tools built to play nicely with other systems.  These applications and tools usually focus on doing one thing well, allowing companies to focus on specific services and consistent pricing.  Vendors provide upgrades to the service in real time, with little or no disruption to the service or to the other systems with which it interacts.  Other features and applications can also be added to the architecture fairly easily.  The hybrid-build approach is meant to be flexible and to change as the needs of users change.  Gone are the days when a system had to be taken offline for maintenance, or when improvements caused a laundry list of bugs, scuttled standard features, or, worse, crashed the whole system.
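
To make the modularity concrete, here is a minimal, hypothetical sketch in TypeScript.  All of the names and interfaces below are invented for illustration (they do not come from any particular vendor); the point is that a “does one thing well” component, search in this case, sits behind a narrow contract so the implementation behind it can be upgraded or swapped without touching the rest of the platform.

```typescript
// Minimal, hypothetical sketch of a modular component behind a narrow contract.
// All names and interfaces here are invented for illustration.

interface SearchResult {
  id: string;
  title: string;
  score: number;
}

// The platform depends only on this contract, never on the vendor behind it,
// so the search implementation can be upgraded or swapped independently.
interface SearchComponent {
  query(text: string, limit?: number): Promise<SearchResult[]>;
}

// One interchangeable implementation; a hosted service or a different engine
// could satisfy the same interface tomorrow without changes elsewhere.
class InMemorySearch implements SearchComponent {
  constructor(private docs: { id: string; title: string; body: string }[]) {}

  async query(text: string, limit = 10): Promise<SearchResult[]> {
    const terms = text.toLowerCase().split(/\s+/).filter(Boolean);
    return this.docs
      .map((d) => ({
        id: d.id,
        title: d.title,
        score: terms.filter((t) => d.body.toLowerCase().includes(t)).length,
      }))
      .filter((r) => r.score > 0)
      .sort((a, b) => b.score - a.score)
      .slice(0, limit);
  }
}

// Usage: the calling code is identical whichever implementation is plugged in.
const search: SearchComponent = new InMemorySearch([
  { id: "a1", title: "Open access trends", body: "open access publishing trends" },
]);
search.query("open access").then((hits) => console.log(hits));
```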

Companies that propose hybrid-build focus on specific business issues and requirements and bring technical expertise to bear to solve them.  They give organizations the opportunity to build and own technology that conforms to their needs and grants them control of systems that become assets (rather than liabilities).  These solutions move things like content enrichment, metadata enhancement, and search/discovery into the production process itself, pushing advancements in machine learning and natural language processing upstream, where they are both contextually appropriate and more cost effective.  The enriched content becomes part of a more valuable legacy and is easier for the business to produce.  The new platform is client-facing, making the customer experience much richer and more intuitive and allowing staff to work more efficiently.
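
As a rough illustration of pushing enrichment upstream, here is a small, hypothetical sketch; the Article shape, the Enricher type, and the keyword heuristic are invented stand-ins, not anyone’s actual pipeline.  The idea is simply that metadata is attached at ingestion time, so it travels with the content to every downstream system instead of living only inside a delivery platform.

```typescript
// Hypothetical sketch of upstream enrichment: metadata is attached while content
// is ingested, so search, discovery, and export all receive the enriched record.
// The data shapes and the keyword heuristic are illustrative stand-ins only.

interface Article {
  id: string;
  title: string;
  body: string;
  metadata: Record<string, string[]>;
}

type Enricher = (article: Article) => Article;

// A trivial stand-in for a real ML/NLP step (entity extraction, classification, ...).
const keywordEnricher: Enricher = (article) => ({
  ...article,
  metadata: {
    ...article.metadata,
    keywords: Array.from(
      new Set(article.body.toLowerCase().match(/[a-z]{6,}/g) ?? [])
    ).slice(0, 5),
  },
});

// Enrichment runs inside the ingestion pipeline, upstream of delivery.
function ingest(raw: Article, enrichers: Enricher[]): Article {
  return enrichers.reduce((doc, step) => step(doc), raw);
}

const enriched = ingest(
  {
    id: "a1",
    title: "Sample",
    body: "Machine learning improves metadata quality",
    metadata: {},
  },
  [keywordEnricher]
);
console.log(enriched.metadata.keywords); // ["machine", "learning", "improves", "metadata", "quality"]
```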

Smaller organizations with limited technical chops and financial resources can now build and own something meaningful.  The approach also yields a wealth of data for strategic analysis and business improvement.  Things like personalized experiences for users, interactive features, and dashboards, which have been mostly lacking, or painfully manual, in the platform offerings on the market, are standard fare in this new hybrid-build tech world.

The resulting investment yields a product that is flexible and can expand appropriately to meet future needs (sans the baling wire and bubble gum).  Content enrichment, legacy data, and personalized architecture, to name just a few benefits, become part of a richer method of producing and replicating digital products when it is convenient for the business.  This is a truly different approach, one that will yield success for organizations disappointed in the past or tiring of the current platform vendor landscape in academic publishing.

Great technology, one brick at a time…

One company championing the hybrid-build model is 67 Bricks, a software development consultancy based in Oxford, UK.  We talked to its managing director and co-founder, Sam Herbert, to find out more about their work in this area and the wider technology challenges facing the scholarly communications arena.

How do you work with scholarly publishers?

SH:  Most of our work with publishers centers around helping them design and build modern, data-driven products for the delivery of digital content.  Publishers, and anyone looking to disseminate scholarly content, need to consider themselves in relation to the bigger picture — the publishers who are leading the way are transforming from straightforward “content providers” into digital product companies.  There has been an important digital transformation over the past 20 years, but for many publishers this has stalled with what we like to call “print-online” — print structures and formats with a digital interface.  This is simply not enough to survive and remain relevant.  Where publishers can gain significant value is from completing the transition to being a digital product company that delivers personalized, high-value insights and knowledge.  The products, platforms and systems we build enable publishers to do this, and deliver more value, improve efficiencies and develop new revenue streams.

We typically follow an agile approach and aim to get people to an MVP launch to improve performance, security, efficiency, and discoverability.  Often that’s when the real work starts — because that’s when users start to interact with the platform and you can begin iterating based on the feedback gathered to really meet their needs.  We offer a partnership model for full-service product development and we bring the processes, technology, and people to make that happen.  We are very proud to have partnered with some of the most respected names in scholarly publishing including Sage, Taylor & Francis, Emerald Publishing, Wiley, and The BMJ, to name a few.

How critical is data in becoming a digital product company?

SH:  It’s crucial.  Any company delivering online digital products is building them based on data.  It’s the backbone of any digital product because you need data to deliver better features like search and discovery.  You also need data about users, usage, and how people behave on a site so you can build better user experiences.  That’s what digital product companies are good at.  The current vendor platforms don’t deliver that.  Publishers have little ownership or control of their client data and it’s a huge problem.  They can’t innovate or move forward.

Despite needing to change, are publishers finding it hard to transform?

SH:  Yes, it is a big challenge.  Publishing is an industry built on traditional, functional processes, and it’s a huge culture shock to have to “rebirth” in this way.  Technology-wise it is a challenge, too;  for while many firms have expertise in areas like XML, they typically don’t have strong experience in other technology areas and are struggling to acquire it.  We provide scale and the capabilities to jumpstart an organization’s move towards becoming a digital product company, by providing product managers, technical architects, developers and software components to expedite that transition.  There’s a long learning curve to doing that and our role is to help publishers avoid the normal mistakes companies make in that process — we know where the pitfalls are. 

How have you worked with information professionals such as institutional librarians and their staff?

SH:  Librarians are important stakeholders in many of our projects — for example, we built a librarian portal in Emerald’s publishing platform, Emerald Insight.  We see a lot of similarities between the challenges facing them and those facing publishers.  They need to figure out what role they fill and what value they provide.

If the best researcher experience is to go through Google to find content, then what role are libraries playing? How are they going to respond?

SH:  They need to become more user-centric to understand how to deliver more value in the new digital environment.

It’s very important to understand that disruption — and libraries are being disrupted by the internet itself — is typically in the form of better user experiences rather than dramatically different user outcomes.  For example, Amazon came along and allowed users to buy a book.  They didn’t change the outcome for the end user;  the customer still got a book, but they changed the experience by not requiring customers to take the time to go to a bookstore.  Another example is Uber.  They haven’t changed the outcome for customers — you still go from point A to point B — but they have changed the user experience for getting to that outcome.  It is the same with Google.  You could once find a company by looking it up in the Yellow Pages;  now you find it, faster and more easily, via Google.  These changes in user experience have radically changed entire industries.

An example in our industry might be a researcher trying to decide what area of research to move into next;  once we understand this, we can deliver a better user experience.  In this case, you could use the content and data you have to tell them what areas of trending research are similar to their areas of expertise.  You could then tell them what research areas are being funded and who the top authors or institutions in that field are.  The end outcome is the same (the researcher decides what to research next), but getting to that point is dramatically improved.  Institutions have access to the content and data that can help them move towards this type of service.
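
As a rough sketch of what such a service might look like under the hood (the data shapes and the scoring below are invented for illustration, not a description of any existing product), one could rank trending research areas by their overlap with a researcher’s declared expertise and surface funding activity alongside them:

```typescript
// Hypothetical sketch: rank trending research areas by overlap with a
// researcher's declared expertise. Shapes and scoring are illustrative only.

interface TrendingArea {
  name: string;
  keywords: string[];
  fundingCalls: number; // e.g., open funding calls counted from funder data
}

function recommendAreas(expertise: string[], areas: TrendingArea[]): TrendingArea[] {
  const mine = new Set(expertise.map((k) => k.toLowerCase()));
  return areas
    .map((area) => ({
      area,
      // Simple overlap score; a real system might use embeddings or co-citation data.
      overlap: area.keywords.filter((k) => mine.has(k.toLowerCase())).length,
    }))
    .filter((x) => x.overlap > 0)
    .sort((a, b) => b.overlap - a.overlap || b.area.fundingCalls - a.area.fundingCalls)
    .map((x) => x.area);
}

console.log(
  recommendAreas(
    ["text mining", "metadata"],
    [
      { name: "Scholarly knowledge graphs", keywords: ["metadata", "linked data"], fundingCalls: 4 },
      { name: "Quantum sensing", keywords: ["photonics"], fundingCalls: 7 },
    ]
  ).map((a) => a.name) // ["Scholarly knowledge graphs"]
);
```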

If digital products are the way forward, what are the common challenges/issues organizations first face when making the ‘buy or build’ decision?  We are asking about both publishers and librarians.

SH:  For many of our clients, the main challenge is recognizing that this is Phase 2 of digital transformation: getting their heads around the limitations of Phase 1 and the different technologies and approaches needed to underpin each phase.  Phase 1 involved moving from print to print online, essentially replicating print business models and formats online.  The monolithic vendor platforms were very effective at supporting this transition, but they are now holding publishers back from the second phase of digital transformation: becoming digital product companies.  Buying a complete vendor platform isn’t suited to the “next phase” problems publishers are now facing.  To achieve the agility, flexibility, and control you need in the digital era, you must be able to control the roadmap and be able to say, “For our users, we want to implement x, y, or z and we need to do it quickly.”  Current vendor platforms don’t allow that;  even if you are the biggest customer of a platform provider, it takes a long time to get things built to specification.

What we see working and the model we provide is building with a partner.  We have components that have already been built that can be incorporated into a publisher’s roadmap.  We work in partnership and bring the capabilities onboard quickly, then it’s about working out the right model for the future.  We talk about skills they need to have and those they don’t.  We help define the product roadmap so the organization gets good at doing that, then we provide someone to work alongside the product owner.

The second major challenge to embracing digital transformation is the risk-averse nature of the industry.  Decisions are made at the top, which can present a huge challenge to the modern approach of iterating and working out how to deliver value.  It doesn’t sit comfortably with that culture.  Publishers would love to say, “We’ve gone through the digital transformation, can we now stop?”  Of course, the answer is no;  it is actually accelerating — the underlying technical innovation in processors, data storage, and the use of algorithms is increasing exponentially.  We are only going to see more and more change.  Most organizations are resistant to change and don’t have the ability to deal with it, so the buy model is attractive from that perspective but very limiting.

What are the main benefits/downsides of hybrid-build (building and buying)?

SH:  The key benefit is that it allows publishers to customize where needed, to deliver specific and additional value to users, while at the same time saving time and money on generic problem areas that already have a best-of-breed solution.  This means publishers get to develop a unique proposition without having to rebuild everything, and they can invest more money where it is critical — where they are delivering new value for customers.  A good example is user management and access control, which it often makes sense to implement via a standalone component;  a solution already exists, and it would be wasteful to rebuild it from scratch.  We often implement LibLynx to accelerate implementation while still delivering value to end users.
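
To illustrate the standalone-component idea, here is a minimal, hypothetical sketch: the interface and calls below are invented for illustration and are not LibLynx’s (or any vendor’s) actual API or integration model.  The platform simply asks an external identity/access service one question per request and depends only on that narrow contract.

```typescript
// Hypothetical sketch of access control as a standalone component: the platform
// asks an external identity/access service one question per request. This
// interface is invented for illustration and is not any vendor's actual API.

interface AccessDecision {
  granted: boolean;
  reason?: string;
}

interface AccessControl {
  authorize(userToken: string, contentId: string): Promise<AccessDecision>;
}

// A local stub; in practice this would call the hosted service over HTTPS.
class StubAccessControl implements AccessControl {
  constructor(private entitlements: Record<string, string[]>) {}

  async authorize(userToken: string, contentId: string): Promise<AccessDecision> {
    const allowed = this.entitlements[userToken]?.includes(contentId) ?? false;
    return allowed
      ? { granted: true }
      : { granted: false, reason: "no entitlement for this content" };
  }
}

// The rest of the platform depends only on the AccessControl interface,
// so the provider behind it can change without touching delivery code.
async function serveArticle(ac: AccessControl, token: string, contentId: string) {
  const decision = await ac.authorize(token, contentId);
  return decision.granted ? `full text of ${contentId}` : `access denied: ${decision.reason}`;
}

serveArticle(new StubAccessControl({ "user-1": ["doi-123"] }), "user-1", "doi-123")
  .then(console.log); // "full text of doi-123"
```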

Flexibility and agility are key.  A modern hybrid-build approach is component-based, allowing publishers to move components in and out as they decide where the value drivers are, and which parts of the system deliver unique value to their users.  This approach is about maximizing value to the customer and therefore maximizing value for the publisher.

Taking a hybrid-build approach does have its challenges.  To make it work you need to make sound decisions about which components to buy, which to build, and which to customize.  An excellent, trusted technology partner is key here: a company that knows the technology, has implemented the majority of components before, and can short-circuit this potentially time-consuming and costly part of the process. 

Integration is also critical.  You are looking to pull together different, best-fit components into a flexible and agile platform, and this relies on an experienced partner with the right skills.  The final challenge for hybrid builds is the management of different types of costs such as staffing, software component licenses, and technology partner costs.  

How do you see the scholarly information industry, in terms of technology, in the future?

SH:  Institutions and publishers will have developed their own digital products, or they’ll be pumping content into workflow tools that others develop.  The current business model of making content available as long-form content won’t survive.  Those that go up the value chain will be the ones who thrive and interact with users;  others will be producing massive amounts of content and won’t interact with end users at all.  The role of individual content items will have diminished in value — it is the data, knowledge, and personalized insights that can be extracted from content sets that will deliver the value of the future.  The challenge, therefore, is that content will diminish in value while its volume continues to increase.

The barriers to entry for being a producer of content will also continue to come down, and those publishers competing purely on the basis of quality and quantity of content will struggle.  Content will become a commodity and only those with huge scale will win.  Publishers have a unique set of assets — they must realize what they are and what the end consumer really cares about, put that together and create something new.

Researchers found that if you could turn DNA into a way to store data, a teaspoon of it could store all of the data in today’s world.  We may not be there in ten years, but we’re heading in that direction.  In four or five years, desktop computers will have the same processing power as the human brain and huge amounts of data will be immediately accessible, with the capacity to manipulate, access, and store information.  We expect to end up with a single desktop computer with the brain power of the entire human race.  We keep feeling we’re hitting a barrier to that, but then it gets removed and the pace of change continues to accelerate.  We need to be more comfortable with this concept of continuous change and put in place the technology, skills, and ways of working needed to be able to compete and remain relevant in such an environment.

The industry will realize that humans aren’t the only creators and consumers of content;  machines will soon create, analyze, and consume content.  That’s scary!  There has been some amazing work in human-readable content generation from AI.  Springer Nature launched an AI-generated book last year that came up with an appropriate structure, with chapters, key themes, and an introduction for each chapter, and then pulled out sections from research articles to support those points.  Plenty of journalism today is being done by AI as well, so we aren’t that far from humans realizing that to create text we just need to give the data to the AI and it creates the text.  In fact, Thomson Reuters has said that computers already consume more data than humans do.  There will come a day when humans won’t be consuming research articles;  it will be computers/software reading, analyzing, and feeding information from articles to humans in more personalized, useful, and impactful ways.
