Role of Geo-data in Channel Marketing
From a business perspective, we operate in a highly competitive environment: we strategize, theorize and forecast the best methods to convince clients to purchase our products and services, while thwarting numerous competitors pursuing those same dollars.
This competitive circumstance is best viewed as a highly inter-correlated econometric ecosystem, a multitude of intertwined forces where a change in one force has a direct effect on another.
No longer can we simply look at one dimension of the problem, such as customer sales. Nor is it enough to augment that view with socio-geo-demographic data; we must also include spatial, econometric, competitive and cyber / social / internet data.
For our marketing efforts to be complete, we need to establish the links between these data sources: from the point on the earth's surface of a given address, to the socio-geo-demographic data frames available from the government, to the nebulous and ambiguous information framework of the internet. The goal is to integrate all these points so that we can create an actionable marketing framework across all marketing channels.
This article is the first in a series of four that investigates how we create this integrated data framework linking the world of "bricks and mortar" with that of the internet, cyberspace and social media. In this first article we investigate what we need to build, from a data perspective, in order to determine which econometric, socio-geo-demographic and competitive forces are responsible for affecting our business, and to what degree.
For a quick and simple definition of "geo-data", let's consider the notions or properties of "what", "when" and "where".
Geo-data has a "where" property: fields in a dataset that have a location, coordinate or reference position on the earth's surface. "Geo" implies that the dataset has a spatial component that allows us to geo-reference the described object to a specific location or region on the earth; in simple terms, a coordinate system like a latitude and longitude pair.
Geo-data is also thematic in nature; this is the "what" property. In the early discipline of geography this would have been simple zenith (height) values, but today it also includes census data collected by government agencies, covering themes like econometrics, population and household characteristics, employment, education and so on.
Geo-data may also have a temporal element that defines "when". "When" could be the time a bank account is opened, when a mortgage matures, or when a sales transaction occurred and was collected from a point of sale ("POS") system. We can also have various "states" of when, that is, different points in time, as would be the case in time series data or dates in a transactional data stream.
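To make the three properties concrete, here is a minimal sketch of what a single geo-data record might look like. The field names and values are purely illustrative, not a schema from any particular census product:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class GeoDataRecord:
    # "where": a coordinate pair referencing a point on the earth's surface
    latitude: float
    longitude: float
    # "what": the thematic value being measured (e.g., median household income)
    theme: str
    value: float
    # "when": the point in time the observation applies to
    observed_on: date

record = GeoDataRecord(38.556, -121.493, "median_household_income", 67500.0, date(2020, 4, 1))
```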
Geo-data vs. Spatial Data
Geo-data has over the years taken on a number of other labels, most notably "spatial data", "geographic data", "geographic data sets" or "GIS data". To be somewhat more exact, "geo-data" refers to an item of data tied to a "geo" feature (point, line or object) on the earth's surface at a point in time, whereas "spatial data" refers to an item of data tied to a feature (point, line or object) anywhere in space at a given point in time. By these terms, "spatial data" is closer to an "Einstein view of space and time", whereas "geo-data" is the ordinary view of location data here on earth that we are more accustomed to.
Visualizing Geo-data
When we traditionally work with data, we work with relational data structures, linking various data sets together on a common key, something like a customer ID. An example would be a client address file and a transaction history linked by a single common client identification number or code.
Geo-data is perhaps best understood as overlapping layers of data, where the nature of the overlaps determines the relationship between the data files. The relation depends on whether one data item is within, contains, touches, or is adjacent to another data item.
Hence geo-data is somewhat different: not only can it use the common events, methods and functions of our relational data world, it also has operational properties that are not available with traditional relational data sets.
To visualize these layers of data, consider figure 1, where we deal with points (addresses, for instance), line segments (roads and highways, for example) and polygons (areas like states, provinces, school districts, etc.), and we link these data sets together by how one "geo" data set "fits" (within, contains, crosses, or touches) in relation to the other.
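These "fit" relationships are exactly the predicates that spatial libraries expose. A minimal sketch using Python's shapely library, with made-up coordinates standing in for an address, a road and a district polygon:

```python
from shapely.geometry import Point, LineString, Polygon

# Illustrative layers: an address point, a road segment, and a district polygon.
address  = Point(-121.493, 38.556)                        # a geocoded client address
road     = LineString([(-121.52, 38.55), (-121.48, 38.56)])
district = Polygon([(-121.51, 38.54), (-121.47, 38.54),
                    (-121.47, 38.57), (-121.51, 38.57)])

# The spatial predicates act as the "join keys" of geo-data:
print(district.contains(address))  # True: the address falls within the district
print(address.within(district))    # True: the same relationship, seen from the point
print(road.crosses(district))      # True: the road starts outside and enters the polygon
print(road.touches(district))      # False: they share interior space, not just a boundary
```

These boolean relations play the role that key equality plays in a relational join: two layers are "linked" wherever the predicate holds.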
Geo-data is remarkable in its breadth and scope. Census data has been collected by the US, Canadian and UK governments for hundreds of years, and these agencies have developed well organized methodologies to not only collect but also administer and distribute this data. The data is critical for government since it provides vital information for public services like education, police, health and medical, water, hydro and so on; various aggregates of this information become important sources of data for the business and marketing communities as well.
Let's consider how the USA organizes geo-data, as shown in figure 2 below. Data is organized in a very structured hierarchical framework where the highest level of aggregation is at the top of the hierarchy (the United States of America) and the American population (the individual person) is at the bottom.
When we consider geo-data, however, the lowest level of aggregation available to us on a data distribution basis begins with the block, something which might be loosely considered equivalent to a street block face. Block groups are collections of blocks, and the best description of a block group is to consider it a neighborhood.
Census tracts are collections of block groups; counties are collections of census tracts, and so on. What is important to understand is that as you go up the hierarchy the breadth of information fields grows, due to data suppression at the lower levels, but you lose the granularity and detail of those lower levels. Consider a street block with only five houses: with data like income or investments it would be very easy to determine a particular person's details, and so this data could be suppressed. As you move up the hierarchy to the block group or higher, like census tracts, identification becomes less probable and suppression is less of an issue.
Naturally, the data we use for marketing must be as refined and as accurate as possible. Probably the best unit of analysis is the block group, since block groups are small aggregates of households, retaining the unique nature of the population while being less likely to have suppression issues. From a marketer's perspective, we believe that birds of a feather flock together: the forces driving people to purchase (or rent) homes in an area mean that their ability to afford the home, and the residual cost of living to support that home, are relatively equivalent. Grouping people at this neighborhood level is therefore very acceptable, since it retains the unique nature of the individuals of the neighborhood while protecting their personal privacy at the same time.
Visualizing the relationship between data layers
To illustrate how the inter-relationship between these data layers works, let's look at an example. Let's pretend that I live at XY01 19th St, Sacramento, CA (just between 3rd and 4th Avenues) in zip code 95818, a place I call 'myHome'. What would be the links between myHome and the available census hierarchy?
In order to get a feel for where exactly I am geographically, I rendered a quick view in MapPoint (now Bing Maps), where the inset shows my position relative to California and the map itself illustrates my position relative to 19th St, between 3rd and 4th Avenues.
My home position can also be visualized in a different format, where we see postal geography as zip+4 points such as "95818-3010". Note that the "plus 4's" are not static and change often, since they reflect the USPS letter carrier's delivery route. In figure 4 we see the 'myHome' zip+4 surrounded by a blue square, and we see that this zip+4 exists within a block group, an area of geography bounded here by red lines. Note also that there are many zip+4's within a block group, and that the zip+4 point represents the center of the street segment.
This relationship between the 10 byte (including the hyphen) zip+4 and the block group is equivalent to that between blocks and block groups, in that blocks exist within block groups. For our example, however, we are more familiar with zip+4's, and therefore we illustrate with zip+4's rather than blocks; the idea is the same. One could conclude that there is an inherent relationship between blocks and plus 4's: the difference comes down to usage.
Blocks and block groups are government generated administrative objects used for data collection, whereas the 5 byte zip and the 4 byte "plus 4" are generated by the USPS for mail distribution. In terms of longevity, a block code is static (it doesn't change until the next census, where it may or may not change), whereas the "plus 4" has a high probability of change given it represents a letter carrier's walk route. Both are driven by population density: blocks are essentially street fronts between intersection points, which is operationally what the zip+4 is.
By zooming out somewhat we see better the relationship between zip+4's and block groups; specifically, that there are many zip+4's within a block group, and that block groups themselves are small in nature, essentially like neighborhoods.
In our example of figure 5, the block group where our postal geography lives is "06 067 002300 2" (shown by the yellow box), and we also see that there are many block groups, each bounded by either man made or natural boundaries (roads, rivers, etc.).
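That block group label encodes the whole hierarchy. A census block group code is a 12-digit FIPS string: 2 digits of state, 3 of county, 6 of tract and 1 of block group. A small sketch that splits the code from our example:

```python
def parse_block_group_code(code: str) -> dict:
    """Split a 12-digit census block group code into its hierarchy levels."""
    code = code.replace(" ", "")
    return {
        "state":       code[0:2],    # 06  -> California
        "county":      code[2:5],    # 067 -> Sacramento County
        "tract":       code[5:11],   # 002300
        "block_group": code[11:12],  # 2
    }

print(parse_block_group_code("06 067 002300 2"))
# {'state': '06', 'county': '067', 'tract': '002300', 'block_group': '2'}
```

Because the code is built by concatenation, truncating it rolls the record up the hierarchy: the first 5 digits identify the county, the first 11 the tract.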
As we continue to zoom out (to 7.5 miles) we can begin to appreciate how well the government has organized the hierarchy. Our zip+4 is not visible, although we do see many of the dots that represent those points, and we can still see the yellow box bounding our block group. There are many block groups, which are themselves bounded within counties (green lines).
Figure 6 illustrates the hierarchy depicted in figure 2: counties are aggregates of census tracts, which are aggregates of block groups; block groups are aggregates of blocks or zip+4's; and blocks and/or zip+4's are aggregates of individual people.
Figure 7 (zoomed to 25 miles) is a good illustration of how census boundaries mix and work with administrative boundaries. It is clearly evident, first, that many block groups exist within counties, and second, that the basic 5 byte zip code is a rather large administrative area that itself exists within counties.
What should come to light is that the 5 byte zip code can itself be a rather large area, a collection of block groups: an important detail from a marketing perspective.
While zip codes (5 byte) are well defined areas, their geographic spread is large; this is good for public administrators but probably not so good for marketers who are looking for definitive differences (or similarities) between populations. It becomes virtually impossible to run targeted marketing campaigns when your unit of analysis contains roughly 10,000 people, as is the case for a zip code, contrasted with perhaps 500 people within a block group. In short, blocks and block groups are the best units of analysis, not 5 byte zips.
As we move up another zoom level, to 150 miles in figure 8, only the more remote, sparsely populated 5 byte zip codes are visible, but the counties are recognizable as the unit of collection. If we were to zoom out another level, to say 250 miles, we would see the boundary of the state of California, with the counties as smaller areas within it.
In the simplest sense, data enhancement is "the process where you improve the condition of your database by correcting invalid or out of date information, filling in gaps where data is missing, or adding new fields of data". Data enhancement is also the creation of data through marketing science methods, where our analytical techniques create new data elements such as predicted sales, segment membership and so on. Under this definition of enhancement, the addition of socio-geo-demographic data is certainly a powerful way to enhance client data. So let's review a few of these methods.
Enhancement by merging Socio-Geo-Demographic Attributes
The beauty of managing data in North America is that government agencies provide us with a broad range of thematic data files, with thousands of data fields within these tables. For example, table 1 below illustrates the basic data that marketers may use to better understand their client circumstance; this data is available as current year estimates (CYE) as well as historical (previous census period) figures for trending purposes.
In terms of understanding how we merge these thematic data sources with our client file, it is best to recall our discussion under "Visualizing Geo-data". We have two "sides" of data, "left" and "right", and we will describe how the various sides of geo-data tables can be linked to one another by their joint relationships at a geographic level. On the left side we have our client tables of name, address and postal code; and since a postal code is a point on the earth's surface, this means we will have latitude and longitude information. On the right side we have the entire breadth and scope of our socio-geo-demographic data framework, just as visualized in figure 1.
The Left Side—A Simple Client Table
Let's consider a table of retail sales information. As shown in Table 2, all that I know about these clients is where they live and how much they have purchased; otherwise I have no details on who they are, their life cycle, their jobs, and so on.
Geo-coding is the process by which we take postal service supplied postal code information (USPS in the USA, CPC in Canada, Royal Mail in the UK, for instance) and combine it with the government census bureau's geography database. Table 3 illustrates the result of geo-coding, where the zip+4 is converted to a latitude / longitude pair which is then matched to its associated census unit.
Let's quickly review the concept presented in the earlier definition of geo-data, where our address occupies a point on the earth's surface coded with coordinates of latitude and longitude. Our census geography is a collection of area polygons, and when we take our postal code points and overlay them onto the geography area file, they fit inside these predefined areas; hence they take on the value of the area they fit into. As shown in figure 9, our address in a spatial sense occupies a given point on the earth's surface (point A1) and takes on the census geography value, like that of the blocks shown in the "zoning" layer (point A2). That is, point A1 occupies the same space as the indicated zoning square (see arrow), so the point takes on the value of that zoning square A2.
In this example, when we geocoded the table we merged on the block group; however, you can merge on any census area, including county codes, subdivisions, output areas [UK], state [US] or province [CDA] codes, to name a few, or indeed any city administrative area (like the zoning block shown). The point is that based on our client's address we can take that single point on the earth's surface and use it to connect into our socio-geo-demographic data frame.
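For readers who want to try the point-in-polygon overlay themselves, here is a minimal sketch using geopandas. The client points and the shapefile name (a TIGER/Line-style block group file) are illustrative assumptions, not the actual JSA data:

```python
import geopandas as gpd
from shapely.geometry import Point

# Hypothetical client points already geocoded to latitude/longitude (as in table 3).
clients = gpd.GeoDataFrame(
    {"client_id": [101, 102]},
    geometry=[Point(-121.493, 38.556), Point(-121.471, 38.542)],
    crs="EPSG:4326",
)

# Census block group polygons, e.g. from a TIGER/Line shapefile (path is illustrative).
block_groups = gpd.read_file("tl_2020_06_bg.shp").to_crs("EPSG:4326")

# Point-in-polygon overlay: each client inherits the GEOID of the polygon it falls in.
coded = gpd.sjoin(clients, block_groups[["GEOID", "geometry"]],
                  how="left", predicate="within")
print(coded[["client_id", "GEOID"]])
```

The spatial join does exactly what figure 9 depicts: point A1 takes on the code of the area A2 that contains it.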
The Right Side—The Socio-Geo-Demographic Data Frame
As revealed in Table 1, there is a plethora of socio-geo-demographic data available, and this data, regardless of the unit of measurement, is structured the same way: the rows comprise the "area of measurement" or "geocode", and the columns define the thematic data. This is shown in table 4, illustrated by the average yearly expenditure of an average person for a given block. In this case we show the main category (i.e., Housekeeping Supplies) and the different sub layers of that category. For example, under housekeeping supplies we show the sub-categories "laundry and cleaning supplies", "other household products" and "postage and stationery" (there are more, but for the sake of space we show just these three).
For a given sub-category like "housekeeping supplies: laundry and cleaning supplies" we have further sub-categories, in this case "soaps and detergents" and "other laundry cleaning products". Through this method we are able to build up a detailed framework of consumer behavior from the data collected by the government and other third party agencies.
Combining the Left and the Right — Building our analytical framework
So what does the data table look like after we have merged our left side and our right side together? Table 5 shows how the postal code / geocode overlay resulted in the addition of the latitude, longitude and block group. Through the block group code we are able to link to any number of expenditure categories. Hence in our example we now have a proxy of our client's expenditure for dishwasher and lawn care products where we had none before. Perhaps even more exciting are those cases where we have actual sales data for the categories we are merging: we can then generate the gap between the client's true sales behavior and the norm.
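The mechanics of that left-right merge and the resulting GAP statistic are straightforward. A minimal pandas sketch, with hypothetical column names and invented values echoing tables 2 through 5:

```python
import pandas as pd

# Left side: geocoded client table (hypothetical columns echoing tables 2 and 3).
left = pd.DataFrame({
    "client_id":   [101, 102],
    "block_group": ["060670023002", "060670024011"],
    "actual_lawn_care_sales": [180.0, 40.0],
})

# Right side: thematic averages per block group (hypothetical values, per table 4).
right = pd.DataFrame({
    "block_group": ["060670023002", "060670024011"],
    "avg_lawn_care_spend": [120.0, 95.0],
})

merged = left.merge(right, on="block_group", how="left")

# GAP statistic: actual client behavior relative to the neighborhood norm.
merged["lawn_care_gap"]   = merged["actual_lawn_care_sales"] - merged["avg_lawn_care_spend"]
merged["lawn_care_index"] = 100 * merged["actual_lawn_care_sales"] / merged["avg_lawn_care_spend"]
print(merged)
```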
JSA is a nonprofit that prepares students for college involvement. From a marketing execution perspective, JSA markets across a wide number of channels, including direct mail, social media, email and traditional web services.
The problem JSA faced was rising mail volumes, rising costs, flat response and a decreasing budget. To deal with this circumstance, JSA partnered with a number of vendors, including SGA, to create a new framework with the goal of significantly lowering outbound volume through highly targeted, personalized communications across all marketing channels.
Clearly targeting was an ongoing issue; at the start of the project the direct challenges were:
- 69% decrease in DM budget
- 700% increase in online advertising
- 33% decline in key conversion of application starts
- 42% decline in online channel conversion.
To deal with this issue, SGA developed a plan to collect all mailing data from the past two years, as this creates our program universe. Next we match this table against the student enrollment table, as this defines success (success = presence in the student enrollment table). Upon building this collection of records, we then go through the process of geocoding and augmenting the client data as outlined in this document.
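As a sketch of that universe-building step, here is one way to flag success with a pandas merge. The file and column names are assumptions for illustration only:

```python
import pandas as pd

mailings = pd.read_csv("mailing_history.csv")      # all mail contacts, last two years
enrolled = pd.read_csv("student_enrollment.csv")   # the enrollment table (defines success)

# The program universe is every mailed record; success = presence in enrollment.
universe = mailings.merge(enrolled[["student_id"]].drop_duplicates(),
                          on="student_id", how="left", indicator=True)
universe["success"] = (universe["_merge"] == "both").astype(int)
universe = universe.drop(columns="_merge")
```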
Problem Definition & Objective
Through segmentation profiles and predictive models we would gain a unique understanding of the marketing segments and enable the team to define the required marketing strategy.
Our key goals for this project are:
- Stabilize the precipitous year over year decline in enrolled students
- Bring key conversion rates back to historical norms or better
- Create a framework for data-driven analysis and decision making
- Increase response rates by a factor of 3 to 5 (a 300% to 500% improvement); that is, rates better than historical rates.
The Process
The SGA framework starts with the client data: upon ratifying the zip codes, we geocoded the file and merged approximately 2,500 fields of socio-geo-demographic and econometric data. Upon this augmentation of the client data, the framework would:
- create generic segments based on the enhanced client base: sales and product information from the client side (left side) and the enhanced socio-geo-demographic data (right side)
- create descriptive models of success (and failure) of the programs that had been implemented to date
- build out the structure of success and translate this into predictive models of success
- roll the model out over an acquisition framework in order to select the best potential set of new target prospects.
Creation of the Marketing Segments
To develop these marketing groups, we use a process called cluster analysis. Cluster analysis seeks to identify homogeneous subgroups of cases in a population. The goal is to generate a set of groups which both minimize within-group variation and maximize between-group variation; said another way, we want to make the members of a group as similar as possible and the groups as different from each other as possible.
After we create our marketing segments we use a process called NCA (New Customer Allocation), which creates a set of (N-1) linear equations, where essentially each segment becomes a single algebraic equation. NCA ratifies the discoveries learned from the original segmentation process, and the equations we generate enable us to code the prospect file with our current marketing segments.
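The article does not specify the exact algorithms, so here is a hedged sketch of the general pattern: k-means clustering to form the segments, and linear discriminant equations standing in for NCA's segment-allocation equations. The input file and its columns are hypothetical:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# X: one row per client, columns = the merged socio-geo-demographic attributes.
X = pd.read_csv("enhanced_clients.csv").select_dtypes("number")

scaler = StandardScaler().fit(X)
scaled = scaler.transform(X)

# Step 1: find homogeneous subgroups; ten clusters, as in the JSA example.
segments = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(scaled)

# Step 2: a rough stand-in for NCA. Fit linear (discriminant) equations that
# reproduce the segments, so any new prospect record can be coded with a segment.
allocator = LinearDiscriminantAnalysis().fit(scaled, segments)

# To allocate prospects (same attributes, same scaling):
# prospect_segments = allocator.predict(scaler.transform(prospect_frame))
```

The key design point mirrors the text: the clusters are discovered once, then converted into equations so that scoring a prospect file is cheap and repeatable.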
In the case of JSA, our clustering process generated the ten segments shown below:
- Upper Class Families
- Successful Self Employed
- Struggling Renter
- Wealthy Upper Class
- Urbanite
- Mainstream USA
- Very Wealthy Professional
- Disadvantaged
- Upper Middle Class and House Poor
- Successful Married with Kids
What makes each segment so unique is illustrated by our segment profiles. These profiles are charts and tables that highlight extraordinary characteristics, calculated by comparing each attribute's segment mean against the norm. For example, if the average income for one segment is $100,000 when the average income for the universe is $50,000, then that segment has an index of 2x the norm, or 200%. If we think of the norm as being like an IQ test where 100 is normal, then 200 is twice the norm: same idea.
In the case of our profiles, this is exactly what we look for: the extraordinary characteristics of our segments when we compare them against each other. This is where the power of data enhancement and augmentation is most evident.
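The index calculation itself is a one-liner. A minimal sketch with invented numbers:

```python
import pandas as pd

df = pd.DataFrame({"segment": [1, 1, 2, 2],
                   "income":  [95000, 105000, 48000, 52000]})

norm = df["income"].mean()                        # universe average
seg_means = df.groupby("segment")["income"].mean()
index = 100 * seg_means / norm                    # 100 = the norm, 200 = twice the norm
print(index)   # segment 1 ~ 133, segment 2 ~ 67 with these toy values
```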
SGA has developed fact sheets: a comprehensive way to detail the unique characteristics of a segment, showing just the highlights using bullet points, charts and simple imagery. To help display what a fact sheet might look like, let's compare two of the developed segments, the "Very Wealthy Professional" and the "Struggling Renter", as shown in Illustration 1:
From the sampled fact sheets (Illustration 1) we can very quickly identify the importance of income for both segments: the "Very Wealthy Professional" income is 4 times the norm, whereas the income of the "Struggling Renter" is 25% below the norm. Using our IQ analogy, the "Very Wealthy Professional" would have an income IQ of 400, whereas our "Struggling Renter" would have an income IQ of 75. Very actionable information for creative staff and lead marketers.
Fact sheets also help identify other important qualities such as employment, occupation, family structure and so on. The details found in such fact sheets enable marketers to zero in on the most actionable characteristics of a segment. That is, communications to the "Very Wealthy Professional" segment would carry content and imagery of affluence, wealth, a mature life cycle and private schooling, whereas the "Struggling Renter" segment would instead see imagery of public schooling and a younger population living in rental conditions, with a hint of financial hardship.
Ultimately, fact sheets help marketers by putting all the key details and highlights of a segment onto a single page or screen: a single page detailing how to write the script, create specific imagery, determine the channel mix, and set performance and ROI measures. Essentially all the key marketing and execution details in a single comprehensive statement.
The importance of marketing segments is that we vary our creative message by the essence of the segment: our imagery, text message and offer will vary from one segment to the next. We generate our specific offer mix based on the unique qualities and aspects of the segment.
Cluster analysis (the process of segmentation) is generic, meaning we are not looking specifically at sales, response or conversion of the products at hand, but rather at all attributes and how they inter-relate to generate the segments; we then study whether our segments show any variation on the econometrics at hand. This is of critical importance, for we could have created distinct groups of customers, but if they don't discriminate on sales, response, conversion, lifetime value or ROI, then who cares! We create marketing segments to maximize our marketing potential, so that we can better understand how our products, services and metrics relate to the complex nature of our clients.
In the case of JSA we find that there is significant variation, as shown in table 6: the wealthier segments perform well (9, 7, 4), especially when we focus on the index relative to application starts, which represents the demand or interest in the product(s). In fact, the "Upper Middle Class" segment is virtually twice the norm, and the "Very Wealthy Professional" and "Wealthy Upper Class" are both 70% over the norm.
A pleasant surprise is the group of students labelled "Disadvantaged" (segment 8), who you would expect not to perform as well but who delivered 30% over the norm. If we believed only wealth drives performance, this segment would be a surprise; but our cluster analysis shows that segments are formed on more than just wealth, and this is the beauty of socio-geo-demographic data enhancement. There are so many different forces at play generating the segments!
The beauty of this type of analysis is that it illustrates that response, conversion and interest are multi-dimensional matters: wealth alone does not drive demand, and there are social and family status elements driving demand as well.
The "Analysis of Enrolled Students" helps us understand each segment's conversion potential. Here we see that for the "Disadvantaged" there is clearly interest, but when the economics of the program become real there is a definite drop in success. Conversely, the "Upper Middle Class" and "Very Wealthy Professional" convert at almost twice the norm (an index of almost 200%), hence wealth and affluence do appear to influence conversion. Given the program is supposed to be available equally to all regardless of economics, this suggests a bias toward wealth, and perhaps that greater support should be provided to the disadvantaged groups, if possible, in order to level out the playing field.
Predictive Models
At this point we have developed marketing segments that have distinct socio-geo-demographic characteristics and we have discovered that these segments have varying and actionable potential relative to the suite of services offered by JSA.
The next step is to find specifically what is driving interest and to model it, in order to then apply this logic against prospect list geography.
For the purposes of understanding the drivers of interest, we used a technique called Multivariate Interaction Detection. Here we see that significantly different forces drive interest for each segment. For marketing purposes this allows us to combine the unique characteristics developed by the segmentation process with the highly targeted methods of prediction in a single approach.
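The article does not detail the implementation of Multivariate Interaction Detection; decision-tree methods in the AID/CHAID family are the classic tools for this kind of analysis, so a plain decision tree serves as a rough stand-in here. File, feature and target names are hypothetical:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

data = pd.read_csv("universe_enhanced.csv")   # hypothetical augmented universe
features = ["segment", "pct_self_employed", "median_rent", "pct_rural"]

# Shallow tree with large leaves, so splits capture broad, actionable interactions.
tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=200, random_state=0)
tree.fit(data[features], data["application_start"])

# The printed tree reads like figure 10: each branch is an interaction of drivers.
print(export_text(tree, feature_names=features))
```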
Discoveries from the modeling process
In the case of interest we discovered that membership in the more affluent segments was enough to generate a favorable level of interest (Wealthy Upper Class, Very Wealthy Professional, Upper Middle Class).
However, in the case of the "Successful Self-Employed" and the "Struggling Renter", the key drivers were the level to which they worked at home on one hand, and the level of 'rural' on the other. Specifically, higher rent combined with high self-employment were the economic drivers of interest for the self-employed segment, whereas lower rent and a higher rural population were the drivers for the struggling renters.
Relative to the "Disadvantaged" segment, the results suggested great opportunity, driven by the level of Caucasian population in one circumstance and the level of employment in the Armed Forces in the other. Although the total size of the "Disadvantaged" group was larger, we were able to 'cherry pick' the best opportunities in this segment, choosing only the conditions that support the best possible targets.
In the remaining segments, the drivers of interest were the level of Caucasian population and income (higher income being favorable), combined with higher levels of successful graduation from high school.
When we deal with list brokers, the question becomes whom to select, and which socio-geo-demographic and econometric conditions to focus on for the purposes of selection. We are able to be rather precise, given that the predictive models detail exactly which attributes are key.
Figure 10 is a visual representation of the predictive model describing the key forces driving success in the program. Attributes such as marketing segment, degree of self-employment, measures of urban / rural / suburban lifestyle, monthly rent, service in the military, academic success in high school and ethnicity were found pivotal in determining success and failure. Figure 10 nicely depicts how we identify the attributes, and how many people fit each criterion. Given we are dealing with people, this also means we can isolate geographies (people have to live somewhere...).
At the conclusion of our modeling processes we had successfully generated a generic segmentation schema that effectively helped us understand the nuances of performance (Illustration 1), and we had determined the probability and levels of success in the program (Figure 10). The segmentation schema was ratified when we generated a series of regression equations, one for each segment: essentially converting nuance to mathematics. Likewise, when we created our predictive models we ultimately generated the SQL logic detailing the best performing clients. Thus we operationalized the mathematics and selection logic representing our target audience and geography.
Operationalization of the prospect pull
Figure 11 illustrates the net result of applying the performance scoring and selection logic at the block and block group levels. It is important to note that when generating such equations everything is made relative to the distribution: we are looking at distribution ratios, not tallies. This enables us to take models based on people and households and apply them to geography, and vice versa.
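To make the "ratios, not tallies" point concrete, here is a minimal sketch of scoring block groups. The thematic columns and the regression coefficients are placeholders, not the actual JSA equations:

```python
import pandas as pd

bg = pd.read_csv("block_groups_thematic.csv")   # hypothetical, one row per block group

# Everything is made relative to the distribution: ratios, not tallies, so the
# same equation can score a person, a household, or a whole block group.
bg["pct_self_employed"] = bg["self_employed"] / bg["labor_force"]
bg["pct_rural"]         = bg["rural_pop"] / bg["total_pop"]

# Illustrative segment regression equation (placeholder coefficients).
bg["score"] = 0.9 + 2.1 * bg["pct_self_employed"] - 0.6 * bg["pct_rural"]

top_geography = bg.sort_values("score", ascending=False).head(500)
```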
Earlier in this document, in the section "Visualizing Geo-data", we described how your postal code links you to a broad framework of thematic data; tables 4 and 5 illustrate what the augmented data record might look like, showing for a given person or household the wide range of rich thematic data sources available.
When we are dealing with prospect selection, the process is reversed. That is, we begin at the level of the econometric and socio-geo-demographic data frames, and the application of our predictive models refines the list of geographies that exhibit the conditions of success.
In essence, our models score the geography virtually exactly as we scored the individuals (see figure 11); but in this case the models technically score the blocks and streets where people live, which allows us to provide a list of target streets, blocks and block groups to the broker. The broker then supplies us with names for the streets (or zones, blocks) that our models highlighted as the best targets.
The relationship between data augmentation & list selection
Figure 12 is a visual representation of the data augmentation and prospect generation processes. The two processes are virtually identical, except that one is simply the reverse of the other.
Let's first examine the case of data augmentation. Take a given address in the USA and call it point A1; this point can be any address, as shown in tables 2 or 3. When we geocode the address A1 and overlay the street network, we bind that point with its latitude, longitude and FIPS information.
Binding this point with the lat/long pair and FIPS codes makes available the remarkable data sources noted in Table 1 ("Sample of 'Data Themes' available from Government Sources"). A2 represents the block and block group zones that hold the summary thematic data we would like to merge based on the spatial overlay.
When we create client based models, the models are focused on the socio-geo-demographic data: thus we generate socio-geo-demographic equations. Given the common reference framework, we can use these equations to score the geography. In essence, each item of zoned data (B1: block, block group, county, subdivision, state…) is scored by the model so that we can prioritize the geography. By rolling up the geographic layers, the summary zone will have a street that intersects it (B2); given our prospects live on that street, the process selects those prospects, since they lie in the overlap of the original zone (B3).
This level of performance increase became possible because of an iterative and intuitive analysis.
- The primary phase was to understand what was driving success and failure at the client level. This was achieved by first augmenting the client data file; creating actionable generic market segments; building the predictive models around success; then generating regression equations of success for each segment.
- The secondary phase was to score the prospect universe based on our learning around success. This required taking the segment based regression equations and scoring the prospect geography. The result was an understanding of the type of neighborhood of the prospect, coupled with the probability of success.
- The third phase scored the prospect geographies from 1 to 100, with 100 being the highest probability (a minimal ranking sketch follows this list). The marketing was based on the predicted segment membership, and the final selection was a combination of targeted streets and preferred zip+4's.
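A one-line way to produce that 1-to-100 ranking, assuming a hypothetical file of scored block groups from the previous step:

```python
import pandas as pd

scores = pd.read_csv("scored_block_groups.csv")   # hypothetical scoring output

# Convert raw model scores into a 1-100 rank; 100 = highest probability of success.
scores["rank_1_to_100"] = (
    scores["score"].rank(pct=True).mul(100).clip(1, 100).round().astype(int)
)
best = scores[scores["rank_1_to_100"] >= 90]      # e.g., keep the top decile
```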
This targeted campaign, the first of its kind for JSA, was a favorable investment that ultimately yielded $1,302,000 at a return ratio of 5.2:1. For every $1 JSA spent marketing this campaign, the project returned roughly $5. This is a remarkable performance ratio and was seen as a significant success.
Notable highlights of this campaign's execution:
- In terms of cost per application, the rate improved remarkably: about one third (1/3) of the cost of T-1 and 10% less than T0.
- In terms of cost of conversion, costs dropped significantly: approximately one third of T-1 and about half of T0.
- The mailing universe was significantly smaller: just 20% the size of the T-1 universe, yet it yielded the same return of $1.3M!
- The Return on marketing dollar investment (ROI) was 5.2 : 1.
- The mailing of 205K prospects yielded $1.3M, virtually the same revenue as T-1, when the client mailed 1,017K prospects. Hence we mailed 80% less but generated the same revenue: a remarkable improvement over the previous two marketing periods.
Today our businesses function in the world of multi-touch marketing, operating in the paradigm of Ultra Precision Marketing Sciences. This new paradigm is adaptive, interactive, relevant and highly personalized, and it drives us to develop the tools, workflows and technologies required to manage digital elements (data, imagery, graphics, social media connectivity, etc.) across a suite of integrated systems, relational databases and back office solutions. The excitement of this paradigm is the ability to receive critical data in real time, enabling program modification mid-execution, and the ability to output exact content and quantities to the specified channel, whether it is print, email, SMS or web service.
In this first article we have shown the world of rich socio-geo-demographic and econometric data, and how each and every one of our marketing campaigns can be enhanced using this elaborate data frame. For businesses that have rich data on their clients, this enhanced data provides the means to create powerful GAP statistics along with the most accurate predictive models of client behavior; in fact, data rich businesses can not only generate the most accurate client models but can also create models of the market itself.
In the second article we tackle the world of cyberspace (the basic internet, SMS and social media): what we need to collect in order to understand who visits our web site; whether there is a relationship between ISPs, domains and email; and whether there is evidence to suggest that certain ISPs, domains and email providers have a propensity towards a given product purchase. Furthermore, we tackle the issue of how to merge data from cyberspace with that of the bricks and mortar world, something not thought possible.
Our third article looks at competition and the effect of distance relative to marketing. One might think that in the world of cyber based marketing, distance is irrelevant; however, we will illustrate how this is not the case. What we experience in our daily journeys to and from work or school, relative to the competitors to your product that we see along the way, actually affects behavior on the web. Given we can decompose an IP address once it hits our web server, we can get an idea of which competitors your clients see on a given day; this means you could build dynamic content to counter this circumstance, both in cyberspace and likewise in bricks and mortar.
Our final article of the series combines all these aspects together, with a focus on ROI and lifetime value (LTV). In this work we explore the range of analytics needed to properly implement LTV, with a view on channel management. Given we can track human behavior and channel preference, it becomes easy to determine which channels to execute, at what time, leveraging the appropriate content.
Additional Material
For more information regarding tools used in the segmentation process such as index development or fact sheet development, take the time to visit https://smartguysanalytics.com/sgademo/startpage.sga and request our worksheets for download.
About Smart Guys Analytics
Smart Guys Analytics is a full service marketing analytics firm specializing in multi-touch omni-channel programs. SGA understands that developing omni-channel campaigns requires expertise in a broad set of disciplines, including network and IT infrastructure, marketing sciences, domain management, data management, lead management, contact and sales management, data processing, marketing analytics and post web analytics. Smart Guys Analytics is your choice for innovative, reliable, out of the box proactive experts you can count on.