In a comment left on my last post (which is not quite the hot topic it was, but is still simmering), Deb said,
I wonder whether the “research” web sites might reconsider their dichotomous listings of publishers. I.e., a house is either listed as “not recommended” or no warning of any kind is listed. Example: “a publisher”.
Perhaps a rating system would be more apropos. Pubs who have had legitimate, verifiable complaints against them, in a certain narrow range such as: breach of contract, nonpayment of royalties, failure to distribute, etc., might result in a “D” grade, whereas a publisher without complaints would merit an “A”.
A system such as this would certainly be more work. However, it would warn authors off from the well-intentioned non-scam pubs who haven’t performed as they’ve promised.
Scams, of course, would merit a big fat razzberry “F.”
Although Writer Beware doesn’t list or recommend publishers, the issue of a rating system is an interesting one that has come up before, so I thought I’d address it in a blog post.
I have a number of problems with the idea of grading or rating publishers (or agents), and I think it would be difficult, if not impossible, to come up with a reasonably objective rating system that would be helpful to writers–and wouldn’t become an insupportable headache for the rater.
– Documentation is straightforward, but complaints can be hard to assess. One might possibly be able to come up with an objective system of grading based on documented problems–nonpayment, poor contract terms, breach of contract. But coming up with a system to grade authors’ complaints would be a lot harder.
“My publisher owed me X and didn’t pay” is reasonably straightforward, but “My publisher sent me a nasty email when I asked a question on an email loop” is more subjective, even if many authors say the same thing. As we’ve seen again and again, nastiness and harassment are all too frequently the last refuge of a failing micropress publisher. But what if the publisher is nasty in private, but otherwise does a good job of getting its books out? I have some examples of this in my files. Would the publisher get a “D” for author relations, a “B-plus” for publishing, or some complicated grade in between? (My head is already starting to hurt.)
There’s also the question of context. Two serious complaints may indicate a problem publisher–or they may be a fluke. Certainly there are publishers in my files that I wouldn’t hesitate to slap an “F” rating on–but there are many more about which I have enough complaints and/or documentation to suggest that caution may be in order, but not enough information to feel confident about giving the publisher a rating or a grade.
Another issue–how do you rate author unhappiness, which may be a sign of real problems with the publisher, but also may reflect unrealistic expectations on the part of the author? Does the mere fact that complaints exist dictate a lower rating, or are there complaints that one can safely ignore? For instance, I regularly get emails from writers who are indignant that AuthorHouse or a similar self-publishing service did nothing to market their book. That’s not a problem with the self-pub service; it’s a problem with the author’s expectations. I’ve also heard from authors who are angry with otherwise problem-free micropresses for similar reasons. You all know my personal opinion of most micropresses–but is this really complaint-worthy? If you choose to publish with a micropress, you have to accept that it isn’t going to do what a commercial publisher will. Shouldn’t the author have done enough research at the outset to know what he or she was getting into?
In applying a rating system (if you did it right), you’d have to compare and contrast and weigh all these factors. Not only would this be difficult and time-consuming (and subjective–see below), it might not be especially helpful for writers, unless you explained the factors that went into each grade. Again, that’s time-consuming. I’m not going to whine about watchdogs being volunteers who do the work in their spare time–but I just can’t see this as the best use of limited volunteer hours.
– Complaint collection is serendipitous. The watchdog groups have to depend on authors who are having problems to come to us. We can’t be sure they will, even where there’s a really bad situation. (For instance, yet another micropress is currently in the process of imploding, but not one of its authors has contacted Writer Beware. My knowledge of the problems is second-hand, from blogs and message boards.)
So no complaints about a publisher might mean the publisher is great–or it might just mean we haven’t heard anything bad. A mere absence of complaints, therefore, doesn’t mean the publisher deserves a good grade.
Nor would a rating system eliminate the problem of publishers without notations. Even if we rate the publishers we do have information and documentation about, there will always be a large number of publishers about which we have no information or documentation at all, and thus can’t give a grade to.
– One size does not fit all. Different publishers have different specialties and focuses. They also have different cultures and different expectations of their authors. An “A” publisher for one author will not be an “A” publisher for another, even if both authors write in the same genre.
I’m also concerned that writers, who are always eager for a shortcut, might use ratings as an excuse not to do proper research. (I know, I know. Many are going to do that anyway. But why encourage it?)
– No matter how objective in their intent, rating systems are created and applied by human beings, and are thus, in the end, subjective. Nonpayment of royalties or contract breaches, when documented, are obviously problems deserving of a poor grade. But if I created the system and did the rating, I might give a publisher a “D” because it had no distribution beyond the Internet and its owner had no previous publishing experience–even if there were no author complaints and the publisher had a decent contract. Someone else doing the rating, however, might feel that inexperience and POD distribution only pushed the publisher down to a “B,” especially if the publisher demonstrated good intent and was trying hard. I disagree–but hey, that’s my bias. I know a lot of people feel differently.
Remember the currently imploding micropress that I mentioned above? (Don’t worry, I’ll blog about it soon.) I’d have given it an “F” from Day One, due to a combination of factors: the owner’s total lack of any relevant professional background, a seriously nonstandard contract, no distribution, horrid amateurish book covers, and various other evidence of nonprofessionalism. Nonetheless, other observers were willing to give this publisher a chance, based on its expressed willingness to learn and change, and its reported intent to develop distribution, make its books returnable, etc. These observers might have given the publisher a “C” or even a “B.”
Could the publisher have wised up, made changes, and succeeded? Sure, in which case my “F” would have been mistaken. I’ve definitely been wrong about these things before. But in this case, I wasn’t–which means that someone else’s “C” would have been less than helpful to writers. (Which, of course, raises the issue of competing rating systems. I don’t even want to think about how confusing that might be.)
Bottom line: a rating system is only as reliable as the biases of the rater.
– A rating system would be in constant dispute. Look at the shitstorm that has been stirred up by my previous post about something indisputably factual: Light Sword Publishing’s recent loss of an author lawsuit. Imagine the shitstorm that would result from publisher ratings. Raters would be bombarded not just by publishers that didn’t like their “Fs”, but by publishers wanting to argue that their “Cs” really ought to be “B-pluses.” I can imagine a situation in which a rater had to spend as much time defending his or her ratings as collecting information or disseminating warnings. Again, not a good use of volunteer time.
All other issues aside, a rating or grading system would just be too much of a headache to sustain.
For all these reasons, I think it’s more helpful for watchdog groups and research sites simply to collect and disseminate information, without attempting to rate it. Writers can then factor the information into their research, as part of the process of making up their own minds.
One thing you might consider posting, as it is straightforward: the length of time a publisher has existed, and whether it has revealed any distribution methods beyond its website and “available through Amazon,” etc.
I know of several publishers that look really good, but they’ve been around for years and have never tried to get beyond that stage. Yet other, more aggressive publishers seem to reach the goal within months. To an author who wants to sell beyond friends and family, that would reveal a lot without putting you in a “subjective” area.
There’s no substitute for doing the research.
Amen!
Absolutely.
I’m in a sort of interesting position, as a writer aspiring to publication who also works in (a totally different and unrelated area of) the publishing industry. I don’t know that much about the specific ins and outs of commercial fiction publishing, except what I’ve learned from doing research and talking to published writers, but I feel I started out with a significant advantage over many of my fellow writers-aspiring-to-publication just because I know certain basic facts, such as the following: (a) There’s not a lot of money in publishing, but if there is any flow of money between the publisher and the author, it should be from P to A, not from A to P. (b) Publishing anything takes a really long time, and publishing a whole book takes a really, really long time. (c) Being polite to authors is good, but getting their articles/stories/books out on schedule is better. (d) The publishing industry in real life is not like the publishing industry on TV. (e) Mistakes happen. All the time. There’s no such thing as a perfect book — at a certain point you just have to let it go on to the next stage of the process. (f) People can have the best intentions, and really sincerely mean to follow through on their promises, and still turn out to be total deadbeats.
So much authorial disappointment is clearly the result of unreasonable or just misplaced expectations, or lack of research about how the industry works, or both. And author-agent and author-editor relationships, while of course they are business relationships, are just so intensely personal and individual that you can’t possibly rely on someone else’s assessment of whether the agent, editor or publisher is “good” or not — s/he/it might be perfect for some other author but a disaster for you, and vice versa. The only really useful, objective data are the answers to questions like, What kinds of work does the agent/editor represent/publish? How many books (or books like yours) has s/he sold/bought in the past [time period], and for how much? Does the publisher pay its bills? pay its authors? Does the publisher publish books like yours? Does the contract you’re being asked to sign have anything sketchy in it? etc.
And then there are all the questions that different authors will answer differently, like, Do you like the covers of books published by this publisher? Do you like the person/people you will be working with, or do they drive you nuts in any way? Are you happy with how much marketing this publisher typically does? Does the editor’s or agent’s style of communication suit you? etc.
I just don’t see how any rating system could be objective, or comprehensive, or effective, or manageable. There’s no substitute for doing the research.
Great post. We just had a similar discussion on the EPIC loop. Some want EPIC to develop such a pub rating system, but it would be a nightmare for the reasons you posted.
A “grading” system would be especially difficult for the more ephemeral and/or judgment-call aspects of author/agent/pub interaction, as has been mentioned–especially in the area of “rudeness,” because “rude” is in the perception of the beholder.
I’ve learned this firsthand, in my move from the Chicago area to northeast Indiana. In this neck of the woods, if you relate to people in a quick, direct fashion–as I got used to from living in and around the city–the people think you’re being “rude” or “abrupt.”
So one of the prime examples many people cite–a scrawled “not for us” at the bottom of your query letter–would, to these people, be callously insulting. Me? I’m sitting there thinking, “Hey, they took the time to personally write a note, and they knew who they were responding to, because it was on my letter. Could be worse.”
Yup, a grading system would be a dream…problem is, too many “gray” areas would make it pretty well meaningless at best and a nightmare at worst.
(sigh)
Janny
I feel I must echo some of Victoria's comments because P&E sees the same thing. We receive correspondence complaining about tardiness, curt notes, rudeness, and more that just can't be properly quantified in light of how the publishing industry operates. Consequently, P&E's rating criteria for its recommendations tend to stick to things that can be documented.
Where publishers and agents are concerned, the problem with user ratings is that–not to put too fine a point on it–too many of the users are inclined to rate the wrong things.
I can’t tell you the number of writers I’ve heard from who feel that Inexperienced Agent X has to be “legit” because she was so polite and responsive, or that Amateur Publisher Y is great because it turned a query around in less than two weeks. Factors like these are very important to writers–but they don’t mean very much when evaluating the competence of an agent, or the reputation of a publisher.
Conversely, I often hear from writers who want to trash an agent or publisher because he/it took months to respond, or scrawled “not for us” on the original query letter, or never answered an e-query. In an ideal world these things wouldn’t happen–but this is not an ideal world, and such problems are, again, not particularly relevant when you’re evaluating competence and reputation.
I do think it’s helpful for writers to be able to comment on their experiences, as at Absolute Write and some of the agent-matching sites. Agent Z’s penchant for long response times may say nothing about his skill, but it’s good to know what to be prepared for when approaching him. But ratings, I still feel, are just too unreliable and subjective to be useful.
I wonder if it would be possible to do something like the travel sites do.
They list hotels, for example, and users can comment and give star ratings. The site itself is neutral.
I’ve read some of those reviews while looking for a hotel, and sometimes one person has a good experience and someone else a bad one. The thing is, they might be about two different issues, like cleanliness versus rude staff. The person reading can tune into whatever is important to them and then check further.
I’m sure the tech logistics could be huge, but then again, maybe a blog format would work somehow.
Just my two cents.
Cheryl Pickett
http://www.publishinganswers.com
Victoria, I see your points. Thanks for addressing this–I can see how it would become a migraine for anyone attempting to do this on a comprehensive basis, given the number of non-huge presses out there and the sheer volume of data to be massaged.
I wish, however, there were some way. It remains for writers to do their due diligence and see if they want to go with a publisher who excites such a whirlwind of negative remarks on P&E, HiPiers, Absolute Write, or any of the other valuable blogs/sites.
Absent an overt rating system, I think it’s possible to infer something from a publisher’s popularity. I’ll use literary journals as an example, because I’ve compiled a list of them on del.icio.us, admittedly without regard to quality or circulation numbers. However, I assume other people using del.icio.us are more likely to bookmark some of the same litmags because they like them for whatever reason. Maybe people bookmark certain literary journals’ websites because they’re easy to navigate, the editors happen to be close friends, the magazines have published their stories or poetry, their submission guidelines were user-friendly, or the extraordinary subject matter is of particular interest. I can tell at a glance how many other del.icio.us users have stored a literary journal’s URL for future reference, because the number is highlighted in pink. The higher the number, well, YOU decide what it means. (Speculative fiction aficionados will be pleased to know the number of del.icio.us bookmarks indicates Strange Horizons is rather popular.)
As an indicator of reputation, the volume of traffic a publisher’s website receives would not be as reliable as social bookmarking might eventually prove to be. Perhaps what has been suggested will evolve naturally on social cataloging sites like LibraryThing, Shelfari, WorldCat, Goodreads, and the like.
Your Light Sword post has provoked a shitstorm…the only people giving you grief are a) anonymous and b) employees or authors of Light Sword.
The anonymous posters could be just one or two people…and to any objective observer, it’s clear Light Sword doesn’t meet the minimum standards of professionalism in the publishing industry. It was not an imprint that carried with it any respect among booksellers, readers, agents or authors, even before a court determined it had engaged in fraud.
Lee
If the press you're hinting at is what I'm thinking about, it was once not recommended by P&E. Its rating was returned to neutral when the press made changes in its contract and policies. If the current situation deteriorates, P&E will not hesitate to give it a not recommended rating once more.
As someone who works in publishing, I hear a lot of complaints – valid ones – about standard, big house presses. We’re talking the big names of the publishing world.
Thing is, these are valid, good presses. Yet they sometimes screw with authors, mess up PR and don’t get around to paying until the author gets a lawyer. A professional SF writer with a number of books out with legit presses described publishers as having payment plans: 1) payment when work is published (best); 2) payment in segments if work sells (probably the best you’ll get); 3) payment upon author bringing a lawsuit (more common than you think).
This is why having a good agent with a good lawyer in tow can be helpful. I’ve a friend who published a good book with a big house. It got nice reviews, sold okay. But then he was screwed over on the second book. He didn’t have an agent to make the scary phone calls.
My impression is that, worried about layoffs and the bottom line, publishers across the board are being more careless with authors than they once were.