Improve your visibility – gain more support
November 19, 2016
I was recently part of a small group offering advice and counsel to a young nonprofit trying to improve its visibility and develop new sources of financial support, as the large grant that has carried them is expiring.
Here’s a summary of my advice to them:
- Broaden the population you serve. That means more stakeholders, and more people motivated to tell your story.
- Seek media coverage.
- Create tools to tell your story — articles, white papers, audio podcasts, short videos, etc.
- Become a thought leader. Offer authoritative opinions, clear professional advice, quotable quotes that the media can use.
- Encourage other thought leaders to use and recommend your program. They will then help tell your story.
- It will be hard for your program staff to focus on such public relations work at the same time that they are passionately involved in program activity. Seek earmarked funds to hire a full-time (or maybe part-time) professional publicist.
One of my major clients some time ago was Motherwear — maker of clothes that make it easier for mothers to nurse discreetly. I was practically their IT manager, and did much of their system design and programming.
This story concerns a time when they were beginning to use the web, and needed a way for customers to unsubscribe from the email list. I implemented a system where customers or prospects could send email to a particular address (e.g. unsubscribe@Motherwear.com), and that would cause the desired result.
However, it wouldn’t work if the mother was subscribed at her home address, and sent the message from a work email. All sorts of other things could result in no match. So I needed to print a report listing as much data as possible about the email received at the unsubscribe address. This would help a staff member manually match up each unsubscribe request with the proper entry in the database. And although most of the data was in various email headers, I assumed that I’d also print out any text the mother wrote.
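The matching-and-report process described above can be sketched roughly as follows. This is a minimal illustration, not Motherwear's actual code; the subscriber records and field names here are hypothetical:

```python
import email
from email.utils import parseaddr

# Hypothetical subscriber records; in practice these came from the
# customer database.
subscribers = [
    {"name": "Jane Doe", "email": "jane@home.example.com"},
    {"name": "Mary Smith", "email": "mary@work.example.com"},
]

def unsubscribe_report(raw_message):
    """Try to match an unsubscribe request to a subscriber record.

    Returns the matched record when the From address is found;
    otherwise returns a report of all header data, plus any text
    the sender wrote, so a staff member can match it by hand.
    """
    msg = email.message_from_string(raw_message)
    _, sender = parseaddr(msg.get("From", ""))
    for record in subscribers:
        if record["email"].lower() == sender.lower():
            return {"matched": record}
    # No automatic match -- report everything we know, including
    # any message the sender wrote.
    body = msg.get_payload() if not msg.is_multipart() else ""
    return {"matched": None, "headers": dict(msg.items()), "body": body}
```

When the From address matches a subscriber, the request can be handled automatically; when it doesn't (the work-address case above), the full report, including the sender's own words, goes to a person.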
The new junior staff person at Motherwear who was assigned to this task tried to talk me out of including any message the mother might have sent. “She should know that she’s writing to a robot that doesn’t understand”, this staffer told me. “It’s just an automated process, and she has no business using that medium to write to us”, she added.
I persisted. And, as a good friend of Motherwear’s owners, I felt comfortable proceeding on my own. I didn’t know what these women would have to say, but felt it was only right to listen to their message. If a woman had bothered to write something, then she didn’t regard the recipient as a robot, and that is what mattered.
So I did print out all those messages, and the vast majority of them told the same story — “I just lost my baby, so please stop sending me information about nursing.” Of course, Motherwear immediately set up a process by which a personal note was sent to each mother who shared this story. Technology could have made it easy to hide from the social reality. But in this case, technology gave us a way to communicate in a compassionate way with a set of people who were obviously in pain.
When asked what should be included in an Executive Director’s report to the board, I responded with this model of “OARS” to help the board be aware of the steering environment:
- O = Opportunities . . . that the organization can (and perhaps should) pursue
- A = Accomplishments . . . both little and big successes
- R = Risk factors . . . things that look like they might go wrong, including actions taken to mitigate them.
- S = Surprises . . . that the ED encountered. Yes — even in a well-run organization, with very professional staff, there are surprises.
This model was inspired by the “Significant events” report I had to file each week when I was a mid-level manager at General Electric. Each of my staff had to write such a report to me, and I distilled and condensed these, along with my own list, in my report to senior management. Our “significant events” were named differently, but functioned in the same way: to alert our managers about situations that would likely develop — either into more mature problems, or into inspiring successes.
The underlying value here is truth telling. I knew it was easy for my staff to report great successes, or opportunities that seemed to be developing. It was much harder to report those out of control situations that could get worse, those stakeholders whose dismay was escalating, those situations that seemed only to work against us. But my job was to know of such situations, and to organize appropriate responses. Were I kept in the dark, I couldn’t really do my job.
Asking the right questions is a key to successful donor engagement
September 11, 2013
Recently I was asked to comment about how best to engage donors to a nonprofit.
My answer — Ask the right questions. Decide what conversation you’d like to have, and figure out what QUESTIONS will start that off. Donors like to be listened to, like to be heard, and like to be treated as important. Asking the right questions, and then listening carefully will make a huge difference.
An example: “Why have you been so generous with us . . . with three very significant contributions in just the past year?” That’s not a question many fundraisers would ask. But prompting the key donor to review his or her satisfaction in supporting your work may be more effective than any words you might provide about why your work matters.
One caveat, however: You must care about the answer you will receive. Asking questions just for effect won’t work at all. The donor or prospect may surprise you, may confirm your understanding or expectations, may challenge you. But, whatever its effect, the answer will be important.
The importance of the answer – That’s a major part of what makes a great question.
They just lost an eager customer
December 12, 2012
I’d read about Three Buoys — the new fish restaurant that recently opened up a few blocks from my office. It sounded like a wonderful place for simple fish sandwiches, and for more sophisticated seafood preparations as well. So, I thought this morning, why not try it for lunch today?
I was warmly welcomed, but then presented with a dense page of typewritten text that must have contained at least 100 different menu items. All were priced well above what I wanted to spend for lunch, and most were not seafood.
Yes — I could have scanned the menu, checking out the seafood items. There probably were some that would have interested me, and at a price not too much above what I expected to spend on lunch. But the very fact that there were so many items on the menu was proof positive to me that this place couldn’t be doing anything very well.
My “proof positive” may have been completely wrong. I might have missed a taste thrill for lunch today. But that’s not the point. What’s interesting is that I entered the restaurant wanting to buy something, and I left feeling upset that this mission was so hard. They might have lost not just my one purchase today, but a loyal customer for years to come.
What might have helped turn me into a real customer?
- A menu sorted by kind of item — Perhaps fried seafood, broiled seafood, soups, salads, meat, poultry, etc.
- A menu that was just shorter.
- A server who had offered to help me pick out something from the long menu at this new place.
- A page of lunch specials — perhaps only slightly cheaper than the full menu, but much more approachable.
Also, as I left, somebody might have asked, “We’re sorry you’re leaving . . . What were you looking for that you didn’t see on the menu?”
I want this little place to succeed, and may go back to offer my feedback — for what it’s worth. But if they are not querying their customers (or would-be customers), there’s not a lot of hope that they will get it right.
Assessing web site usability
October 4, 2012
There are lots of tests to ensure that web sites have readable type, clearly delineated links, reasonable numbers of elements per page, etc. Web sites can be assessed according to various standards of accessibility, such as for people with motor or visual handicaps. All the information gathered from such tests can be useful – but doesn’t by itself answer the important question, “Does the web site work?”
A web site works when users feel comfortable navigating it, find themselves engaged in the experience, are able to find the information or understanding that they want. It works when users are spared those moments of fear during which they are not sure how to proceed, and are afraid that they will lose their place in some way. It works when the users’ experience is enjoyable, and doesn’t end when the most immediate goal is reached.
But – perhaps most important – a web site “works” when the user is engaged in the virtual conversation that the site owner has tried to create. This might be “Let us help you find the software you need to get your printer working”, or “. . . find the car you need, and can afford”, or “. . . sign up for the education program that will help you meet your life goals.” Of course, a specific client’s goal may be quite different from these examples.
What connects all of these conversations is that they have to do with more than just information – although information is important. They are about a user experience that promotes engagement, that cements a relationship with the vendor or provider, that instills confidence, and, often, that results in continued sales. A colleague of mine once said, “If you want to use social media, you need to be social”, and I find this dictum a very helpful guiding principle for all web development and evaluation.
Imagine a web page for a car manufacturer that offers three choices:
• Daisy models
• Tulip models
• Amaryllis models
While these names may be perfectly clear to those very familiar with this carmaker’s line, their presence would probably be intimidating to many users. “How do I know where to begin?”, they would ask themselves, and would then feel that they are just making a guess on one of these three.
Now consider an improvement on this:
• Our basic line – the Daisy series
• Adding features and elegance – our Tulip series
• The car you’ve dreamed of owning – the fine Amaryllis series
This removes the ambiguity for users not familiar with the car models. In that sense it’s probably “correct”. But what kind of relationship does it establish with the user? What’s the conversation? It’s simply, “We have these cars. You can learn about them here.” That’s not the conversation that will create eager buyers, or will sell many cars.
So, let’s imagine a stronger approach, designed to really engage the user:
• Configure your Daisy model – a basic car, for any budget.
• Configure your Tulip – offering you more comfort, style, and class.
• Configure your Amaryllis – and be so proud of the car you’ll be driving.
Here we have a strong invitation to the website user to really try out one of these cars, start looking at colors, options, etc. The language here may not be exactly right, but I expect most of us would still find this third option the most likely one to win friends and initiate sales. It invites a relationship that must, of course, be continued in the rest of the web site interaction.
In the examples above we can see at least three aspects of web site usability:
• Users can proceed with clarity and confidence (not made to feel foolish).
• Users learn relevant information about the product or service.
• Users are drawn into a conversation, engaging with the vendor.
How can we assess these in a systematic way? As a skilled practitioner, I can certainly review a web site, and offer much constructive feedback. Indeed, much of my role is in offering such expert critique or suggestion.
But such one-person theoretical review has strong limitations. The real test is how the web site works when actually used by typical users. (I may resemble the “typical” printer user or car buyer, but I’m certainly not the typical prospect for a vocational college.) My method is simple to understand, but logistically can be quite complex.
- Clearly identify the persona to be used in testing. (This should have happened during web site design, but often it does not.)
- Define a test script, which the subjects will be asked to perform. (This may be finding some information, assessing several institutions, learning a skill, etc.)
- Determine a performance test to be used afterward to see what the subject has learned, their inclination to proceed with the content, and (if there is a sales objective) their inclination to consider a purchase.
- Find the test subjects, using the criteria identified in (1) above. Typically subjects will be paid for their time.
- Conduct the test, simply watching each subject, but with no intervention. Sometimes we will video the test as well.
- Conduct the test again, but asking the subjects to annotate their behavior – at each step, say what they are doing, why, and what kind of response they are seeking.
Note that we are never correcting or guiding the subjects – with one exception: If they appear to be lost, we may inquire what they are seeking. We will not answer their question, but will record in detail the dilemma the user reported.
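The protocol above — persona, script, silent observation, then an annotated pass — can be sketched as a simple session record. This is only an illustrative data structure; the field names and `Session`/`Observation` classes are my own hypothetical shorthand, not a tool I used on these projects:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Observation:
    step: str           # what the subject was doing at this moment
    note: str           # what we saw (or, in the annotated pass, what they said)
    lost: bool = False  # did the subject appear lost here?

@dataclass
class Session:
    persona: str        # the persona defined before testing began
    script: str         # the task the subject was asked to perform
    annotated: bool     # False for the silent pass, True for the think-aloud pass
    observations: List[Observation] = field(default_factory=list)

    def dilemmas(self):
        """Moments where the subject appeared lost -- the raw material
        for the usability findings."""
        return [o for o in self.observations if o.lost]
```

A record like this keeps the two passes comparable, and makes the "lost" moments — the one case where we intervene, only to ask what the subject is seeking — easy to pull out when writing up findings.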
On occasion, we’re called upon to review not just a web site in isolation, but its performance relative to the sites of competing vendors. This might involve simply repeating the test on several sites, or we may devise particular performance tests that measure how subjects rate the various vendors based on the web site experiences.
What I’ve described here may seem quite different from the more analytical evaluation processes often used by other usability consultants. I prefer this holistic approach, in which web sites are evaluated primarily by their performance rather than by an enumeration of characteristics.
Only after going through the testing process might I want to review the statistical data offered by such tools as Google Analytics. These tools are particularly helpful for identifying how users arrive at the site and where on the web site they tend to go. But the tools offer little guidance about the user experience, motivation, relative ease or frustration, etc.
In summary, I recommend, and practice, a holistic evaluation of web sites, in which behavioral goals are clearly identified, and in which silent observers watch users during real interactions with the web site, or interact with the users only to identify more completely the user’s experience. Web sites work when they create and engage users in a productive conversation.
Postscript: Usability review is not design review. I’m a very visual person, and appreciate fine typography, uncluttered layout, elegant design. I’d like to believe that these are an important part of web site success. But data suggests that they may not be as important as I would like. In any case, the tests that I’m describing here evaluate how users behave when working with the site, and not how the site appears to its designers or critics.