Even the biggest and smartest publishers still have a lot to learn about digital marketing – The Shatzkin Files

Posted by Mike Shatzkin on March 26, 2014 at 10:45 am · Under General Trade Publishing, Marketing

Doing business development for my Logical Marketing partnership with Peter McCarthy (we are on the verge of formally announcing our new business) gives me repeated and continuing confirmation that Pete just knows more about digital marketing than anybody else in publishing. This is partly because he’s a damn smart geek with a marketing-oriented brain who grew up in publishing. It is also because he had the good fortune to be effectively running a marketing experiments lab for the world’s largest publisher for six years.

We recently had three examples from three different Very Big Publishers with Very Smart People of mistakes, or misunderstandings, or structural paralysis, that seem almost generic. All of them involve challenges that every publisher faces on a daily basis.

The first example comes from a publisher that is among the first to take a step all publishers must take eventually: optimizing the metadata for their backlist. (Backlist is a topic that has interested us for a long time.) This is a mammoth challenge for every publisher of considerable size. Such houses have tens of thousands of titles and, often, many of the ones selling best will have been published years — or even decades — ago, so that nobody among the editors or imprint marketers has read them or thought through the markets for them, except when traditional reissuing activities have occurred or an exceptionally sharp marketer saw an opportunity.

There really are two distinct problems to solve if you want to maximize backlist sales in the digital age. One is the one they are tackling: getting the foundational metadata — the book descriptions and their placement in the information chain — solid, so that the titles are called up in response to the searches suggesting a possible customer for them. The other is to build a mechanism to observe the news and social graph each day and identify the titles that can benefit from new developments. And then, of course, to couple the two in order to optimize a given title or series for the most appropriate semantics to drive both discoverability and conversion in different environments. SEO, yes. But really nuanced and real-time SEO which accounts for fundamental changes in how all the engines work and subtle differences inherent in each. We have our ideas about that engine (and have developed a proposal to address it) but, for now, like that publisher, let’s just worry about the first challenge: getting the backlist metadata foundations right.
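To make the second challenge a little more concrete, here is a rough sketch, in Python, of what the simplest version of that daily matching might look like: compare the day’s trending terms against the keywords already in each title’s metadata and flag the overlaps. The data structures, sample titles, and overlap scoring are purely illustrative assumptions, not Pete’s actual engine.

```python
# Minimal sketch: flag backlist titles whose metadata overlaps with the
# day's news topics. All names and numbers below are invented examples.

from dataclasses import dataclass

@dataclass
class Title:
    isbn: str
    name: str
    keywords: set[str]   # terms already present in the title's metadata

def flag_timely_titles(titles: list[Title], trending_terms: set[str],
                       min_overlap: int = 2) -> list[tuple[Title, set[str]]]:
    """Return titles sharing at least `min_overlap` terms with today's topics."""
    hits = []
    for title in titles:
        overlap = title.keywords & trending_terms
        if len(overlap) >= min_overlap:
            hits.append((title, overlap))
    # Surface the strongest matches first so a marketer can triage a shortlist.
    return sorted(hits, key=lambda pair: len(pair[1]), reverse=True)

if __name__ == "__main__":
    backlist = [
        Title("9780000000001", "A Baseball Life", {"baseball", "yankees", "1970s", "memoir"}),
        Title("9780000000002", "Quiet Gardens", {"gardening", "meditation", "design"}),
    ]
    today = {"yankees", "baseball", "hall of fame"}
    for title, terms in flag_timely_titles(backlist, today):
        print(title.name, "->", sorted(terms))
```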

The pioneering publisher we encountered is addressing the question across the many imprints in their large organization by asking each one to work on the metadata for their top-selling backlist. What this means, in practice, is that fuzzy-cheeked editorial or marketing assistants — most operating with little direction from senior people and, frankly, mostly working with senior people who wouldn’t really know exactly what to tell them to do (this stuff has gotten very technical in nature) — are the ones looking at what is there now (if anything) and fixing or updating it. This house will inevitably find that they get very uneven results and, because most of the work will be done by low-level people who turn over (or get promoted) quickly, it will be hard to generate training or processes that will show steady improvement of this work in the future.

Unless some great care is being exercised to introduce procedures most of these people would be unlikely to know about, this also runs afoul of Pete’s repeated mandate that research must be done for each and every title before a marketer can create optimized metadata. For backlist, in Pete’s methodology, this starts with finding out who the people are who have already read the book and commented on it, and what words they use when they describe it. LibraryThing tags and GoodReads reviews are key sources for that, but for us LT fits into an overall workflow and orchestrated use of tools that Pete has developed and trained our team to employ to thoroughly crack this particular problem of “how readers think about, talk about, and feel about” a given work or author or brand. LT and GoodReads provide critical insight to cement or alter the context in which a book sits.
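As a hypothetical illustration of that research step, the sketch below simply tallies the words readers themselves use, assuming the relevant reviews and tags have already been exported to plain text files. The file names and stopword list are placeholders, not a real LibraryThing or GoodReads integration.

```python
# Illustrative sketch: count the most frequent terms across exported reader
# reviews and tags for one title. File names below are placeholders.

import re
from collections import Counter
from pathlib import Path

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "this", "that", "book"}

def reader_vocabulary(paths: list[Path], top_n: int = 25) -> list[tuple[str, int]]:
    """Return the most frequent non-stopword terms across the given files."""
    counts: Counter[str] = Counter()
    for path in paths:
        text = path.read_text(encoding="utf-8").lower()
        for word in re.findall(r"[a-z']{3,}", text):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts.most_common(top_n)

if __name__ == "__main__":
    sources = [Path("reviews_export.txt"), Path("tags_export.txt")]  # placeholder exports
    for term, freq in reader_vocabulary(sources):
        print(f"{term:20s} {freq}")
```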

So, aside from the massive distraction created by asking each imprint to take on such a substantial additional chore, the chances are good, actually overwhelming, that the results — the new metadata foundations that will be created — will not be thorough and optimal even in the best cases, and that most imprints won’t be as good as the best. And, in a problem that repeatedly bedevils publishers in the age of digital media (as we will see again below), the staff time to do this exercise is not readily available. Everybody doing this work already has a full-time job, largely jobs managing author talent and frontlist, jobs which must be done.

In another case, the Chief Marketing Officer of a large publisher — one who has heard Pete speak, knows me, and is firmly committed to trying our services — talked with me about what he presumed would be Pete’s many ideas about how to apply a full-text search of a book to improve the marketing. I thought he would fall down when I told him, “Pete doesn’t believe in reading the book so I don’t think he’ll want to apply a full-text search.” My CMO friend was immediately skeptical so I told him what Pete has told me. “I’m marketing to people who haven’t read the book.”

There are definitely some situations in which you’ll want to pull things out of a book’s text, although I think they’re almost always about proper nouns. If you’ve got a biography of a baseball star and there are stories about teammates and opponents, you’ll want those names in the metadata to signal to searchers on those people that there is relevant material for them in the book. But, in general, full text doesn’t help.

You hone marketing metadata by looking hard at the markets, not at the text. The task begins by describing the audiences as precisely as you can: “teachers” is good, “high school science teachers” is better, and “high school science teachers with kids of their own” is better still. Then you find those people and learn where they cluster, how they talk, and how they seek information. You apply the knowledge of what semantics they use and bump that against the terms that are searched in different venues to hone your pitch. There are shortcuts to get this work done (and we try to use them) but the application of a full-text search of the book is not one of them.
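One simple way to picture that “bump against the terms that are searched” step: keep only the candidate phrases that both appear in how the target readers talk and show meaningful search volume in a given venue. The phrases and volume numbers below are invented for illustration; in practice they would come from keyword research tools.

```python
# Illustrative sketch: shortlist keywords the audience actually uses AND
# that people actually search for. All phrases and volumes are made up.

def shortlist_keywords(audience_phrases: set[str],
                       search_volume: dict[str, int],
                       min_volume: int = 100) -> list[str]:
    """Return audience phrases with real search demand, highest volume first."""
    candidates = [kw for kw in audience_phrases
                  if search_volume.get(kw, 0) >= min_volume]
    return sorted(candidates, key=lambda kw: search_volume[kw], reverse=True)

if __name__ == "__main__":
    audience = {"hands-on science experiments", "classroom demos",
                "STEM lesson plans", "lab safety"}
    volumes = {"hands-on science experiments": 880, "classroom demos": 40,
               "STEM lesson plans": 2400, "lab safety": 590}
    print(shortlist_keywords(audience, volumes))
```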

Experimenting with what is known as A/B testing, whenever you can do it, is critical. (It is admittedly difficult, though not impossible, to do it effectively in advance of publication.) Any media outreach campaign that is not utilizing A/B testing is doomed. Any digital marketing that is done without A/B testing is amateur hour.
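For readers who want to see what that means in practice, here is a minimal sketch of judging an A/B test on two versions of a book description: compare the conversion rates with a two-proportion z-test and only call a winner when the difference clears the noise. The traffic and conversion numbers are invented.

```python
# Minimal A/B test readout: did description B really convert better than A?
# Uses a standard two-proportion z-test; the sample numbers are invented.

from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Return (z statistic, two-sided p-value) for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

if __name__ == "__main__":
    z, p = two_proportion_z(conv_a=48, n_a=2000, conv_b=71, n_b=2000)
    print(f"z = {z:.2f}, p = {p:.3f}")  # p below 0.05 suggests B genuinely converts better
```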

And that brings us to the third time this month that we found with pretty near certainty that a big house was proceeding without knowledge Pete has.

We did an online audit for an author who is in the news. We found some circumstances which seemed to call for “paid media”: the purchase of search terms and phrases (ones on which major retailers had not bid and for which the book was not surfacing organically) or contextual display ads around news breaks to call attention to this author and his book in particular cases where we (really, Pete) believed it would do some real good.

So we made the suggestion to the top digital marketing thinker at this big publisher. He reported back that this kind of campaign had never worked for them, even when tried on a big scale. Once again, Pete’s experience suggested to him possible reasons why it hadn’t worked for them and how it might. Pete told me:

The houses (almost to a one) do not know how to run these promotions or track their efficacy properly (they do broad “last-click” attribution, which is likely to capture a third of the actual effect, or so), and they make them amazingly expensive for themselves by not optimizing for the goals or for the platform, which would at once benefit the platform and them.

Despite nearly infinite inventory, these places (the online venues) prefer that users click on ads when they see them. The better you are at it, the cheaper those clicks are. Then you track the clicks and augment your attribution model with the known amplification effect (a conservative percentage applied across outlets and formats, per studies or in-house knowledge).

Drive a nice conversion percentage at Amazon and the book begins to rise. By rising, it gets better “placement” on Amazon via algorithmic store optimization (e.g. merchandised in the cart). That’s a virtuous circle that has momentum of its own.
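The arithmetic behind that attribution point is easy to sketch: if last-click tracking captures only about a third of the sales an ad actually drives, the true cost per sale is roughly a third of what the dashboard reports. The spend, sales figure, and capture rate below are assumptions for illustration only.

```python
# Back-of-envelope attribution adjustment. All inputs are invented examples.

def effective_cost_per_sale(spend: float, last_click_sales: int,
                            capture_rate: float = 1 / 3) -> tuple[float, float]:
    """Compare naive last-click cost per sale with an amplification-adjusted estimate."""
    naive = spend / last_click_sales
    estimated_total_sales = last_click_sales / capture_rate
    adjusted = spend / estimated_total_sales
    return naive, adjusted

if __name__ == "__main__":
    naive, adjusted = effective_cost_per_sale(spend=600.0, last_click_sales=40)
    print(f"last-click cost per sale: ${naive:.2f}")    # $15.00
    print(f"adjusted cost per sale:   ${adjusted:.2f}")  # $5.00
```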

So, once again, the problem seems to be that the targeting is too generic, using broad matches in AdWords and not applying negative keywords and the like. There is an outright lack of — or too little — A/B testing and segmentation, and there is no constant adjustment of the media buy in reaction to the response. A further conversation with this publisher revealed that their particular problem was clearing the staff time to gain expertise and actively research and run these paid digital media campaigns, which, we agree, is not trivial.
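To illustrate the broad-match problem in the simplest possible terms, the sketch below shows how a negative-keyword list would have excluded queries a publisher never wanted to pay for. The queries and keyword lists are invented examples, not taken from any real campaign.

```python
# Illustrative sketch: which broad-matched queries would a negative-keyword
# list have blocked? The queries and negatives below are invented.

def wasted_queries(matched_queries: list[str], negative_keywords: set[str]) -> list[str]:
    """Return queries that the negative-keyword list would have excluded."""
    return [q for q in matched_queries
            if any(neg in q.lower() for neg in negative_keywords)]

if __name__ == "__main__":
    queries = [
        "biography of mickey mantle",
        "mickey mantle rookie card price",      # collector, not a book buyer
        "free mickey mantle ebook download",    # not a sale you want to pay for
        "best baseball biographies",
    ]
    negatives = {"card", "free", "download"}
    for q in wasted_queries(queries, negatives):
        print("would be excluded:", q)
```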

Digital media is cheap to buy but expensive to manage.

The net effect of this is that potential consumers are in essence seeking the book but publishers are not putting the book in front of them because it is too inefficient to do so. The answer would seem to be to make the process more efficient rather than missing the opportunity. That’s why we’re training capable but less-expensive help to deliver that service as well.

All of these examples are real and all of them are recent. All of the people laboring under what looks to Pete (and therefore “to us”) like erroneous understandings of how best to apply digital marketing are smart and sophisticated. And I wouldn’t have been able to provide better answers, or discern any problems with their answers, a year ago. But what Pete McCarthy knows was learned during six years working alongside many forward-looking colleagues (and a management team) who remain at the Random House part of PRH, and much of what we’re planning to deliver is almost certainly already baked into their workflow to a certain extent, although (like most big publishers) largely at the imprint level and therefore with varying levels of focus and capability.

Many of the obstacles to competing effectively with the World’s Largest Trade Publisher are obvious. But not all of them. It would appear that they also have the hidden advantage of already having incorporated much of what Pete McCarthy has learned into their digital marketing practices. This will not be evident to other publishers. It will be evident in PRH’s sales and in the (lower) cost of getting them.

And what I’ve described above are merely a few of Pete’s marketing principles, as we see their importance in specific examples. The ramifications of applying the broader set of capabilities implied by these principles, and what could be built or taught to scale, are massive. Just think of what a 5% increase in backlist sales can mean when a house’s backlist consists of tens of thousands of titles!

We are building the Logical Marketing web site right now and will formally announce the business, with services tailored to publishers, agents, authors, and brands very shortly. It will include a self-service portal for self-publishing authors. But we are already applying Pete’s knowledge on behalf of publishers large and small, two prominent literary agencies, and several independent authors. If you’re interested in getting more information about these services, an email sent to marketing@idealog.com is the way to let us know.
