A digital analysis that relies primarily on hunches rather than proven intelligence isn’t a reliable analysis at all. Analysts use data to make decisions. Going the extra mile to transform data into true marketing intelligence is key.
Digital Analysis is split into three core phases: Reporting, Data Mining and Statistical Analysis. A mediocre analyst might consider his or her job finished after Phase One, but a skilled analyst knows that each phase is an important step toward developing transformative insights.
Let’s take a closer look at each phase of a successful digital analysis:
Phase One: Reporting
In this step, an analyst uses a digital analytics platform such as Google Analytics or Adobe SiteCatalyst to find data trends, which are presented in a report to a specific audience. This information can be very important for an organization, but only if it answers the question “Why?”. Let’s say I run a report on display ads for a client, and I notice a few campaigns perform significantly better than the others, so I advise the client to end the display ads that performed poorly. Even though there is supporting data, my suggestion is based on a hunch because I haven’t pinpointed the reason why certain campaigns are more effective than others.
Phase Two: Data Mining
Data mining helps uncover more information about why the effective ads performed well. Continuing with the same example, let’s say I take a closer look at the ads and use every type of data point possible in a data visualization tool like Tableau. With this program, I am able to drill down into the data, which shows that square ads with a call to action get more clicks than the others. This extra knowledge adds more validity to the suggested marketing decision because it pinpoints the campaign’s shape/size and message as reasons for success.
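Tableau handles this kind of drill-down interactively, but the underlying operation is just grouping and aggregating. Here is a minimal sketch with hypothetical click data (the field names and numbers are invented for illustration):

```javascript
// Each record is one ad segment's log: shape, whether it carried a call to
// action (CTA), impressions served, and clicks received.
const ads = [
  { shape: "square",      cta: true,  impressions: 10000, clicks: 420 },
  { shape: "square",      cta: false, impressions: 10000, clicks: 180 },
  { shape: "leaderboard", cta: true,  impressions: 10000, clicks: 210 },
  { shape: "skyscraper",  cta: false, impressions: 10000, clicks: 150 },
];

// Group by (shape, CTA) and compute the click-through rate per segment.
function ctrBySegment(records) {
  const segments = {};
  for (const r of records) {
    const key = `${r.shape}|${r.cta ? "cta" : "no-cta"}`;
    segments[key] = segments[key] || { impressions: 0, clicks: 0 };
    segments[key].impressions += r.impressions;
    segments[key].clicks += r.clicks;
  }
  for (const key of Object.keys(segments)) {
    const s = segments[key];
    s.ctr = s.clicks / s.impressions;
  }
  return segments;
}

const result = ctrBySegment(ads);
console.log(result["square|cta"].ctr); // 0.042 for this sample data
```

With this hypothetical data, squares with a call to action clearly out-click every other segment, which is the kind of pattern the drill-down surfaces.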
Phase Three: Statistical Analysis
Running data through statistical analysis software like SPSS or SAS not only tests whether your findings are statistically significant, but can also narrow down more specific insights. In the example of the display ad campaigns, I used software to discover that the ad messaging has a statistically significant effect on success, while the size of the ad does not. This insight is sound in addition to being actionable, making it valuable marketing intelligence that can be used to tweak future campaigns.
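For a sense of what the software is doing under the hood, here is a hand-rolled sketch of a chi-square test of independence on a 2x2 table of hypothetical click counts, CTA messaging vs. no CTA. (SPSS and SAS do far more than this; the counts and the effect here are invented for illustration.)

```javascript
// 2x2 contingency table:
//            click   no click
// CTA          a        b
// no CTA       c        d
// Chi-square statistic for a 2x2 table (no continuity correction).
function chiSquare2x2(a, b, c, d) {
  const n = a + b + c + d;
  return (n * Math.pow(a * d - b * c, 2)) /
         ((a + b) * (c + d) * (a + c) * (b + d));
}

// Hypothetical counts: 10,000 impressions each for CTA and no-CTA ads.
const chi2 = chiSquare2x2(420, 9580, 180, 9820);

// With 1 degree of freedom, the 5% critical value is 3.841.
const significant = chi2 > 3.841;
console.log(chi2.toFixed(1), significant); // the messaging effect is significant here
```

If the same test on ad size produced a statistic below 3.841, you could not reject the hypothesis that size makes no difference, which is exactly the messaging-matters, size-doesn't conclusion described above.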
Looking for a deep-dive data analysis? MaassMedia can uncover transformative insights to help your organization produce better results. Contact us to find out how we can help.
After successfully passing the Adobe Certified Expert SiteCatalyst Implementation Exam, I thought I would help out future ACE professionals with this blog post documenting the method to my madness.
When I signed up for the ACE Implementation Exam, I first reviewed the list of topics that were to be included on the exam. The volume and breadth of topics covered on the test were initially pretty overwhelming. To prepare, I searched for blog posts and other resources. I found a few good ones, including this one written by my predecessor here at MaassMedia. Church Mojo and Apothes posted helpful articles, as well.
Everyone prefers a different study method. I like to print out my study materials and make notes on the pages. There are four main categories of topics in the list I mentioned, so I made a study guide for each category. To fill each study guide, I went into the knowledge base and searched each topic, then copied and pasted all the whitepapers, resources, and other materials into a Word document that I cleaned up to make my study guides. Some bloggers suggest watching videos and studying the implementation guide, which are also good ideas.
Below are some of the more important topics the exam focuses on.
Written Section: Know the Products String!
I can’t stress enough that you need to know the SiteCatalyst products variable and all of its syntaxes. The majority of written questions were based on the products string, and almost all of them asked you to omit the category in the string.
There were also a good number of multiple choice questions about product variable syntaxes. One question provided five versions of a product string with a different syntax and value in each, then asked which two of the five strings were syntactically correct.
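For reference, here is a sketch of the syntax in question. The structure per product is Category;Product;Quantity;Price;events;eVars, with products separated by commas; the product names and values below are hypothetical.

```javascript
// Minimal stand-in for the SiteCatalyst tracking object.
var s = {};

// Two products with the category omitted -- each entry still begins with a
// semicolon to hold the empty category's place.
s.products = ";Running Shoes;1;69.95,;Running Socks;10;29.99";

// Category omitted again, this time with a product-level numeric event and a
// merchandising eVar filling the events and eVars slots.
s.products = ";Backpack;1;34.99;event1=1;eVar2=level9";
```

If you can read each string above slot by slot and say what every semicolon is doing, the products-string questions become much easier.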
The exam will also ask you to do simpler tasks, like properly define a traffic variable with an event. This was one question that wasn’t worded clearly, so make sure to look out for buzzwords to avoid confusion. For example, a question might ask: Someone wants to know what city people are from that go to this page, and this data should be correlated with geo-region. Also, an event should be launched on this page. In this example, “this page” and “correlated” refer to s.props, so keep that in mind whenever you see those keywords.
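As a sketch of the example question above (the variable and event numbers here are arbitrary; the exam will tell you which ones to use):

```javascript
// Minimal stand-in for the SiteCatalyst tracking object.
var s = {};

// "This page" and "correlated" point to a traffic variable (s.prop), which
// can then be correlated with the geo-region report.
s.prop10 = "Philadelphia"; // the visitor's city

// The event launched on the same page.
s.events = "event5";
```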
Multiple Choice: Know the Details!
The multiple choice questions cover a variety of topics.
The most important advice I can give to help you pass the ACE SiteCatalyst Implementation Exam is study the details. You may be able to define all the major variables, but make sure you can distinguish traffic variables from conversion variables, and know their character limits. The multiple choice questions will test your knowledge of all the topics, so remember: DETAILS, DETAILS, DETAILS! Make that your mantra. Good luck!
Are you a Certified SiteCatalyst Implementation Expert? Tell us how you prepared for the test by leaving your tips in the comments.
For any online or offline company, collecting the opinions of customers is crucial to understanding which aspects of the business work well and which need improvement. Online surveying has become quite popular for e-commerce, content-based, and conversion-driven sites. Most surveys take the form of either a pop-up window served upon entrance to or exit from a site, or a questionnaire sent via e-mail; both methods are proven effective for gathering data.
There are several different types of surveys a company can implement on their site:
Panel surveys use carefully selected groups of people that are diverse yet representative of the “universe” of people being measured. For any survey, the vendor will segment their existing database of respondents into a sample that is applicable to the scope of the specific survey. For example, a survey focused on the attitudes of senior citizens towards public transportation will use only respondents that fall into the senior citizen demographic. The company Survey Sampling International will either create a customized panel to reach niche audiences or expand upon a company’s already existing panel by increasing its membership base. Global Market Insite (GMI) uses fraud detection, location verification, and de-duplication technology to ensure the highest quality of respondents from five different regions of the globe.
Customer satisfaction (C-Sat) surveys measure exactly that: the level of satisfaction a customer has with their experience on the site, a recently purchased product, or even an encounter with customer support. The questions are usually formatted with scaled response values (e.g., 1-10) and/or corresponding positive and negative descriptors (e.g., agree, strongly disagree, neutral). A company’s C-Sat score is a numeric value that can be compared with national industry averages as reported by the American Customer Satisfaction Index (ACSI). For instance, the June 2012 ACSI release for Transportation reported an industry average of 67 (out of 100) for airline customer satisfaction, with Southwest at a 77 and United at a 62. ForeSee and iPerceptions are two popular C-Sat survey vendors that offer similar services.
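As a simplified sketch of how scaled responses become a 0-100 score comparable to the ACSI figures above (ACSI's actual methodology is considerably more involved, and the ratings below are hypothetical):

```javascript
// Average the 1-to-scaleMax responses, then rescale the mean to 0-100.
function csatScore(responses, scaleMax) {
  const sum = responses.reduce((total, r) => total + r, 0);
  const mean = sum / responses.length;
  return ((mean - 1) / (scaleMax - 1)) * 100;
}

// Five hypothetical ratings on a 1-10 scale.
const score = csatScore([8, 9, 7, 10, 6], 10);
console.log(score.toFixed(1)); // roughly 78 for these ratings
```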
OpinionLab serves a “comment card” that asks users to enter suggestions, report a page error for a specific page, and rate the quality of the site on a numeric scale. The comment card is a non-intrusive pop-up that is either served at the end of a visit, or triggered manually via an internal site link.
Kampyle is another feedback survey vendor that allows customers to opt-in to fill out a short form. Visitors can provide feedback for sales, marketing, or support and include an optional screen capture of the specific issue.
While there are many advantages to using online surveys, companies must be aware of questionnaires’ inherent pitfalls. The main caveat of web-based surveys (except for panel surveys) is that the sample of visitors is rarely representative of the population because respondents are voluntary and not randomly sampled. In general, those who opt-in to surveys are more engaged with the website content and more inclined to take surveys. Therefore, any hypotheses formulated about the population based on the sample would not necessarily be accurate. The survey data may still be usable, but keep this drawback in mind when interpreting the results.
Another drawback of surveys is their tendency to contain biases. Nonresponse bias can occur in surveys with low response rates (according to market research by SurveyGizmo, average response rates for external surveys like C-Sat and feedback are around 10-15%).
Nonresponse bias refers, not only to the inability or unwillingness of chosen individuals to participate in the survey, but also to respondents who fail to answer one or more questions. Both factors contribute to the inaccuracy of the sample in statistically representing the population. To reduce this bias, increase the sample size to at least 10% of the population, and consider offering an incentive for participating. Also, make the survey as simple and straightforward as possible by reducing the number of questions and the time it takes to complete (the same SurveyGizmo study shows that surveys that take longer than 11 minutes to complete have significantly higher abandonment rates than those that take 10 minutes or less).
Survey questions’ wording and composition can also be biased. Oftentimes, questions can be phrased in such a way that they inadvertently (or purposefully) lead a respondent to a certain answer. Similarly, questions that contain vague words can confuse respondents, leading them to select answers that do not reflect their true feelings, or skip the question altogether. To avoid this, make sure the questions are clearly stated and use neutral, specific language.
For the advanced surveyor who has already mastered the basics, there are several ways to enhance simple surveys to gather more meaningful information. Overlaying survey data with behavioral data from a web analytics platform like Google Analytics or SiteCatalyst can help a company understand not only what people say, but also what they do. For example, someone who reports a low C-Sat score may not have mentioned a difficult experience using the shopping cart. Matching the respondent’s survey answers to web analytics data can reveal this hidden information.
A conditional structure is another option for enhancing basic surveys. For instance, respondents who say they purchased a product can be directed to additional questions about that product, while those who did not purchase may see different questions about consideration of the brand. Conditional questions can also arise when certain website actions are completed. If a visitor watches an embedded video, it is possible to immediately serve questions regarding the visitor’s experience with that video. Using this type of branched model will allow for deeper diving into areas of interest.
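A sketch of that branched flow, with hypothetical question IDs, where the next question shown depends on the previous answer:

```javascript
// Each question knows its text and a rule for choosing the next question.
const questions = {
  q1: {
    text: "Did you purchase a product today?",
    next: answer => (answer === "yes" ? "q2-product" : "q2-brand"),
  },
  "q2-product": {
    text: "How satisfied are you with the product you bought?",
    next: () => null, // end of this branch
  },
  "q2-brand": {
    text: "How likely are you to consider our brand in the future?",
    next: () => null, // end of this branch
  },
};

// Walk the branch for a respondent who answers "yes" to the purchase question.
let current = "q1";
const path = [];
while (current) {
  path.push(current);
  current = questions[current].next("yes");
}
console.log(path); // ["q1", "q2-product"]
```

The same structure extends naturally to behavioral triggers: watching an embedded video could simply enqueue a video-experience question ID as the next node.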
Whether you’d like to offer a survey to your customers, draw actionable analyses from your survey responses, or integrate web analytics with survey data, MaassMedia can help you use online surveys to allow your organization to reach transformative insights. Contact us for more information about our services.
Does your company look at survey data? We’d love to hear about the insights you have compiled from respondents. Leave a comment to share your thoughts.
If holiday seasons are representative of one another, we will likely see unique shopping behavior emerge over the 4th of July and subsequent days. This behavior will have significant impact on e-retailers’ conversion rates in the weeks to come.
Pew Internet (January 2012) tells us that, during holiday seasons, 25% of cell phone owners use their phones inside stores to gather price comparisons, and a similar percentage use mobile phones to look up online reviews. 19% (or 1 in 5) of those who searched for a better price on an in-store product eventually bought the product online.
Let’s do the math: 75 million people in the US (1/3 of the total internet population) buy goods online to be shipped to them over the course of a three-month period. At the end of 2011, 9% purchased through their mobile phones, a whopping six-percentage-point jump from the 3% of mobile e-commerce in 2010 (comScore, May 2012).
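Spelled out in code, using the figures above:

```javascript
const onlineShoppers = 75000000; // US online shoppers per quarter (comScore, May 2012)
const mobileShare = 0.09;        // share buying via mobile phone, end of 2011

// 9% of 75 million puts the mobile-shopper audience around 6.75 million.
const mobileShoppers = onlineShoppers * mobileShare;
console.log(mobileShoppers / 1e6); // roughly 6.75 (million)
```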
There are approximately 6.7 million mobile shoppers in a given quarter, of which a sizable portion will be comparison and product review shopping while in your stores over the upcoming (holi)days. This is a good audience to wow with big ticket items and back-to-school shopping.
How do you capitalize on this trend? By understanding your audience behaviors as they are interacting with your brand across multiple channels.
At MaassMedia, we specialize in combining and analyzing multi-channel data, including mobile, to uncover transformative insights. Using these insights, MaassMedia has built statistically accurate predictive models to segment the people who are ready to buy from the tire-kickers.
Will you use your analytics expertise and guide these mobile leads proactively with uniquely personalized and targeted messages to your sites? Or, will you wait until they organically flock to your online stores?
We would love to hear your thoughts! Happy 4th of July from all of us at MaassMedia, LLC.
1. comScore Online Shopping Customer Experience Study, commissioned by UPS, May 2012.
2. Pew Internet and American Life Project, a project of the Pew Research Center. The rise of in-store mobile commerce, January 2012.
Just about anything on a website can be tracked with web analytics. While it’s good to thoroughly tag your website, it is also easy to drown in a massive sea of data. That’s why it’s important to distinguish meaningful metrics from vanity metrics.
Vanity metrics is a phrase made popular by Eric Ries in his book The Lean Startup. These are metrics that might make your business look good (especially when they go up), but in reality don’t provide much insight to help your business grow.
Save time on your web analytics reporting by taking these 5 misleading metrics with a grain of salt:
1. Visits
Executives love a good spike in traffic. To the untrained eye, more visits means the company is doing something right. While that may be the case, number of visits alone doesn’t provide any indication of what the company is doing right. Often, a business will attribute a traffic spike to a recent project, like a new ad campaign or a change to the website. The problem is that the design team could be high fiving each other for the success of their new feature, while the marketing team celebrates their effective social media strategy. And when traffic plummets, no member of the team wants to take responsibility.
Causation and correlation are not one and the same. It is crucial to determine how and why visitors come to your website–or don’t come to your website–rather than make assumptions. Once you can pinpoint the actual cause, you will have a more sound idea of which techniques work and which don’t.
2. New vs. Returning Visits
In theory, it makes sense to monitor the ratio of new to repeat visits. Visitors who return to your site later are hopefully doing so because they had a positive experience with your site. Unfortunately, it is possible for repeat visitors to be counted as new visitors (or “unique” visitors) and vice versa. That is because a visitor’s prior visits to a site are recorded in a browser cookie set by that site. Clearing cookies from a browser deletes any record of a prior visit, so a returning visitor who has cleared his or her cookies will be counted as unique. Additionally, if a visitor switches browsers, IP addresses, or devices when returning to your website, he or she will be counted as a unique visitor instead of a returning visitor.
On the other hand, if a new visitor uses a shared computer to view your website, and if that website has been previously viewed on that device, the new visitor will be counted as a repeat visitor.
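Conceptually, the cookie check looks something like this (the cookie name is hypothetical; real analytics tools use their own). Note that an empty cookie jar is indistinguishable from a first visit, which is exactly the caveat above:

```javascript
// Classify a visitor from a raw cookie string (the "name=value; name=value"
// format a browser stores). An analytics tool sets a first-party cookie on
// the first visit and checks for it on later ones.
function classifyVisitor(cookieString) {
  const isReturning = cookieString
    .split("; ")
    .some(pair => pair.startsWith("first_visit="));
  return isReturning ? "returning" : "new";
}

console.log(classifyVisitor("first_visit=2012-07-01; session=abc")); // "returning"
console.log(classifyVisitor("")); // "new" -- cleared cookies look identical to a first visit
```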
3. Traffic Sources
At a glance, a breakdown of traffic sources can provide a general idea of how visitors find your website. Maybe they use search engines, click links from other sites to yours, or type the URL into their browser. Instead of providing answers, this information only raises more questions: What keywords do visitors use to find your website on search engines? Which traffic sources refer visitors who typically convert? Why do some visitors come directly to your site?
To find more meaningful insights, take the time to tag your marketing channels (display, paid search, e-mail, etc.) and focus on referrals from each source. If your business uses print ads, use a vanity URL unique to the ad to see which campaigns brought the most visitors.
4. Pageviews
This metric is directly influenced by your content and navigation, but doesn’t tell you if your website is any good. A visitor searching for something specific might visit several pages if the content is difficult to find, while another visitor might be presented with engaging content that keeps him or her clicking around your site. Both of these visits would increase your number of pageviews, but only the latter visitor has a positive experience. Therefore, deeper analysis would be necessary before determining the quality of your site’s content and navigation.
5. Time on Site/Time on Page
Like pageviews, time on site can go up or down without indicating why. A person who finds interesting content on your site could spend just as much time as a frustrated visitor who can’t find the content he or she is looking for.
Another reason to take these metrics with a grain of salt is their accuracy: time on site and time on page are inherently skewed. For exit pages, time on page is recorded as 0 seconds because there is no server request when a visitor leaves your site. So, if someone spends 2 minutes on your homepage, clicks an article that they spend 15 minutes reading, then clicks offsite or closes their browser window, their time on site will be recorded as 2 minutes.
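The skew falls out of how the metric is computed: time on page is the gap between consecutive hits, so the exit page has no "next hit" to measure against. A sketch with hypothetical timestamps:

```javascript
// Sum the gaps between consecutive hits in a visit. Timestamps are in
// seconds; the last page (the exit page) contributes nothing.
function timeOnSite(hitTimestamps) {
  let total = 0;
  for (let i = 0; i < hitTimestamps.length - 1; i++) {
    total += hitTimestamps[i + 1] - hitTimestamps[i];
  }
  return total;
}

// Homepage at t=0, article at t=120 (2 minutes in), then the visitor leaves.
// The 15 minutes spent reading the article is never recorded.
console.log(timeOnSite([0, 120])); // 120 seconds, i.e. 2 minutes
```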
Getting to the data that can positively influence your organization’s decision making process often requires custom tagging and drill-down reporting. Read Matt’s post about 5 website actions you should be tracking to learn what metrics you should be focused on instead of vanity metrics.
If you need help looking past meaningless metrics to uncover transformative insights from your data, MaassMedia can help. We’ll implement and deliver tailored reports with actionable analyses to help your business boost ROI and improve user experience. Get in touch with us for more information.