In the ever-evolving landscape of fundraising, accurately forecasting income is crucial for effective planning and decision-making. However, the inherent uncertainties and fluctuations in economic conditions, donor behavior, and global events make this task challenging. In this blog post, we explore various forecasting methods and highlight how they can enhance your fundraising strategy, even in the most unpredictable times.

Understanding Unpredictability

No two years are alike when it comes to fundraising. Economic shifts, social changes, and unforeseen events like natural disasters or global pandemics can significantly impact donor behavior. Unlike daily sales in retail, forecasting daily income for fundraising may be neither practical nor necessary. Instead, fundraisers should focus on broader trends and patterns to make informed decisions. Of course, the specific approach will also depend on the goal of the forecasting: whether you are looking to understand long-term trends or planning for a single year.

The Importance of Preprocessing

Effective forecasting begins with robust preprocessing of historical data. Preprocessing involves cleaning the data, removing anomalies, and normalizing trends to reflect a more accurate picture of typical fundraising patterns. For instance, income spikes or drops due to catastrophic events should be flattened out, as they do not represent normal fundraising conditions and are not predictable anyway. To handle such anomalies, it might be necessary to predict a baseline first, using data from previous "normal" years to estimate what a typical year would have looked like without the impact of catastrophic events. This approach helps account for shifted income distributions and provides a clearer picture of underlying trends. However, experience in fundraising is essential to ensure that data cleaning is done correctly. It is important to strike a balance: data cleaning should not eliminate all uncertainty, as this could create a misleading sense of certainty in the prediction.

Time Series Forecasting with ARIMA

One of the most powerful tools for forecasting income is the ARIMA (AutoRegressive Integrated Moving Average) model. ARIMA analyzes time series data to identify patterns and project future income based on past trends. This method relies solely on the internal trend and seasonality present in your historical data, making it a straightforward choice for many fundraisers. However, it is important to acknowledge that ARIMA's predictions may carry significant uncertainty due to external factors influencing donor behavior, which the model does not account for.
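To give a feel for how little code this requires, here is a minimal sketch using Python and the statsmodels library; the file name, column names, and the chosen model order are placeholder assumptions, not a recommendation:

```python
# A minimal ARIMA sketch with pandas and statsmodels.
# Assumes a CSV with one row per month and columns "month" and "income"
# (file name and column names are hypothetical).
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Load monthly fundraising income as a time series.
df = pd.read_csv("monthly_income.csv", parse_dates=["month"], index_col="month")
series = df["income"].asfreq("MS")  # explicit monthly frequency

# Fit an ARIMA model; the (p, d, q) orders here are illustrative only and
# should be chosen via diagnostics (ACF/PACF plots) or an AIC-based search.
model = ARIMA(series, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))
fitted = model.fit()

# Forecast the next 12 months, including a confidence interval so the
# uncertainty of the prediction stays visible.
forecast = fitted.get_forecast(steps=12)
print(forecast.predicted_mean)
print(forecast.conf_int())
```

Printing the confidence interval rather than just the point forecast mirrors the point above: the goal is informed planning, not a false sense of certainty.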
Incorporating External Factors

Fundraising income is rarely independent of external influences. Economic conditions, donor sentiment, and the relevance of your cause can all impact the effectiveness of your campaigns. To enhance the accuracy of your forecasts, consider incorporating external data such as economic indicators or social media trends. While we cannot predict the future development of these variables precisely, scenario planning can help create a range of possible outcomes, offering a more comprehensive view of potential income trajectories.

Simple Linear Forecasting

In some cases, simplicity can be highly effective. A straightforward linear yearly forecast over a longer timeframe, such as several years, can capture the overall trend in fundraising income without delving into complex relationships. For fundraising, two crucial factors to consider are the amount per donation and the number of donations. Since these factors often follow different trends, such as a slight increase in donation amounts and a decrease in the number of donations, they can be forecasted separately and then multiplied to estimate the total donation income, as sketched in the example below. This approach can also be applied to different donor groups, such as various generations, to reveal shifts in age patterns. However, the ultimate goal here is to keep the forecasting process simple and straightforward, so that this method provides a clear, big-picture view of where your fundraising efforts are headed, making it easier to set realistic goals and allocate resources efficiently.
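Here is a minimal Python sketch of that idea; all numbers are invented for illustration, and the trends are fitted with a simple least-squares line:

```python
import numpy as np

# Illustrative yearly history (placeholder numbers): average gift size
# drifts up slightly while the number of donations declines.
years = np.array([2019, 2020, 2021, 2022, 2023, 2024])
avg_amount = np.array([52.0, 53.1, 54.0, 55.2, 56.0, 57.1])    # EUR per donation
n_donations = np.array([10500, 10200, 9900, 9700, 9400, 9200])

# Fit a simple linear trend to each factor separately...
amount_trend = np.polyfit(years, avg_amount, deg=1)
count_trend = np.polyfit(years, n_donations, deg=1)

# ...and multiply the extrapolated factors to estimate total income.
future = np.arange(2025, 2028)
forecast_income = np.polyval(amount_trend, future) * np.polyval(count_trend, future)
for year, value in zip(future, forecast_income):
    print(f"{year}: ~{value:,.0f} EUR expected")
```

The same two-line fit can be repeated per donor group (e.g. per generation) to surface diverging trends that a single aggregate forecast would hide.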
Bringing It All Together

To summarize, effective forecasting in fundraising involves a combination of methods tailored to your specific needs and data availability: careful preprocessing of historical data, time series models such as ARIMA for pattern-based projections, scenario planning around external factors, and simple linear forecasts for the big-picture trend.
Conclusion

Forecasting fundraising income during challenging times is not an exact science, but with the right tools and techniques, you can significantly improve your planning and decision-making. By preprocessing your data and comparing and combining methods such as time series models (ARIMA), simple linear forecasts, and the incorporation of external factors, you can create a more accurate and comprehensive view of your fundraising landscape. This approach not only helps you set realistic goals but also prepares you to navigate uncertainties with greater confidence. Embrace these forecasting methods to enhance your fundraising strategy and ensure your organization's continued success, even in the most unpredictable times.

Next Steps?

In case you are interested in learning more about these approaches, talking to an expert, or even discussing whether and how forecasting could be conducted with your data, please do not hesitate to get in touch with us.
By Carolina Pelegrin, Data Scientist at joint systems

In today's data-driven marketing landscape, understanding the effectiveness of marketing activities is crucial for optimizing marketing strategies and maximizing return on investments. Two powerful methodologies that can help decision-makers achieve this are Marketing Attribution and Marketing Mix Modelling. These state-of-the-art approaches are versatile and complement each other to a large extent, which is why we decided to delve into them in this blog post.

Marketing Attribution (MA) methods are used to determine how each marketing interaction (touchpoint) contributes towards reaching a desired output, such as a donation. They aim to determine which channels, campaigns, or interactions are most effective in driving donations and, therefore, what revenue is expected to come from the different channels. This can help us allocate resources more efficiently, prioritizing those channels and campaigns that are expected to return the highest revenues. In 2022, our data science team applied marketing attribution methods to a dataset of website visits and donations. We were able to conclude that different online marketing channels did have different effects on donations and donation revenues, with the best results obtained for branded paid search and organic search. You can find an introduction to the topic and the main results we obtained in our previous post on the topic; just follow this link if you are interested.

Marketing Mix Modelling (MMM), on the other hand, comprises methods that assess the impact of various marketing activities on overall business performance. This technique tries to identify the relationship between donations and different marketing elements such as media campaigns, external variables like macroeconomic factors, internal variables like new products or new pricing, seasonal trends, and so on. In general, MMM involves collecting and analyzing historical data to identify patterns and relationships between marketing activities and business outcomes. This analysis typically employs regression techniques and other statistical methods to isolate the effects of individual marketing components, allowing us to forecast the impact of different strategies and make informed decisions about resource allocation. The main characteristics and differences between MA and MMM methods are outlined in the sections below.
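To make the attribution idea concrete before turning to MMM, here is a hypothetical Python sketch of one of the simplest attribution rules, linear attribution; the journeys, channel names, and donation values are invented, and production attribution models (e.g. data-driven or Markov-chain approaches) are considerably more sophisticated:

```python
from collections import defaultdict

# Hypothetical converted journeys: the ordered touchpoints before a donation.
journeys = [
    ["organic_search", "newsletter", "branded_paid_search"],
    ["social", "branded_paid_search"],
    ["organic_search", "organic_search", "newsletter"],
]
donation_values = [50.0, 120.0, 80.0]

# Linear attribution: split each donation's value equally across all
# touchpoints in its journey. A last-touch rule would instead assign
# the full value to journey[-1].
credit = defaultdict(float)
for journey, value in zip(journeys, donation_values):
    share = value / len(journey)
    for touchpoint in journey:
        credit[touchpoint] += share

# Channels ranked by attributed revenue, the basis for budget reallocation.
for channel, revenue in sorted(credit.items(), key=lambda kv: -kv[1]):
    print(f"{channel}: {revenue:.2f}")
```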
Marketing Mix Modelling – data requirements and techniques

From our previous blog post on attribution modelling, we know that marketing attribution traditionally focuses on the analysis of online data during a specific, short period of time. Also, results from marketing attribution are based solely on the touchpoints or channels that a donor has used to "land" on conversion sites. No other information is needed to build the models, although the total budget/investment per channel is an interesting feature, since we can reallocate that budget depending on the results obtained. Attribution models are also not limited to online channels and can, in principle, be applied across all communication channels.

Marketing Mix Modelling, on the other hand, tends to use a wider variety of variables. Data requirements include historical donation data, marketing costs across different channels, data on media metrics (if available) including reach, frequency, and engagement levels, as well as data on external factors, such as economic indicators, seasonality data, or any other relevant and available external factors. Other kinds of data, like promotions and competitor pricing, are typically included in MMM. As for the techniques widely used in MMM to uncover data insights, they mostly depend on the goal of the specific analysis, although regression analysis, machine learning algorithms, and time series analysis are the most widely used.
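As an illustration of the regression core of MMM, the following minimal Python sketch fits an ordinary least squares model of weekly donation revenue on channel spend; the file and column names are placeholder assumptions, and real MMM setups typically add adstock (carry-over) and saturation transformations:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical dataset: one row per week with donation revenue,
# spend per channel, and a simple seasonal indicator.
df = pd.read_csv("weekly_marketing_data.csv")

# Explanatory variables: media spend per channel plus seasonality.
X = df[["search_spend", "social_spend", "mailing_spend", "is_christmas_season"]]
X = sm.add_constant(X)  # intercept = baseline donations without marketing
y = df["donation_revenue"]

# Ordinary least squares as the simplest MMM variant; the fitted
# coefficients approximate incremental revenue per unit of spend.
model = sm.OLS(y, X).fit()
print(model.summary())
```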
To give you an idea of example deliverables MMM may provide, the web holds a plethora of interesting resources, such as this well-summarized article on LinkedIn.

Conclusion

In summary, understanding the effectiveness of marketing activities is essential for optimizing strategies and maximizing ROI. While Marketing Attribution and Marketing Mix Modelling each have their own unique strengths, leveraging the insights from both methodologies lets us optimize our marketing performance, enable more informed and strategic resource allocation, and achieve better overall results. Are you also interested in Marketing Attribution and Marketing Mix Modelling? Let's stay in touch! 😊
Why should we segment Donor Data?

Segmentation is the process of dividing a market of individuals or organizations into subgroups based on common characteristics. This process allows organizations to align their products, services, and communication strategies with the specific needs of different segments. In the case of fundraising, by understanding what drives donor groups and what choices are available to them, organizations can tailor their approaches for more effective engagement.

Key Dimensions for Donor Segmentation

Donor segmentation can be approached from various dimensions, including psychographic, geographical, behavioral, and demographic factors. Each dimension offers unique insights: demographic data (e.g. age or generation) shows who your donors are, geographical data shows where they are, behavioral data (e.g. giving history) shows what they actually do, and psychographic data (e.g. values and motivations) hints at why they give.
Of course, not all data necessary for the above-mentioned dimensions is available right away. Some of it is hard to obtain, only indirectly available, or not obtainable at all. In a stylized way, the possibilities range from data that is directly available in-house to data that can only be inferred or acquired externally.

Behavior-Based Segmentation with RFM

One of the most effective methods for behavior-based segmentation is the RFM model, which evaluates donors based on Recency, Frequency, and Monetary value (a scoring sketch in code follows the list):
- Recency: how recently a donor made their last donation
- Frequency: how often the donor has donated
- Monetary value: how much the donor has given in total
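A minimal RFM scoring sketch in Python with pandas could look as follows; the file and column names are assumptions, and the quartile-based 1-to-4 scoring is just one common convention:

```python
import pandas as pd

# Hypothetical donation log: one row per donation with donor id, date, amount.
donations = pd.read_csv("donations.csv", parse_dates=["date"])
today = donations["date"].max()

# Aggregate per donor: recency in days, number of donations, total amount.
rfm = donations.groupby("donor_id").agg(
    recency=("date", lambda d: (today - d.max()).days),
    frequency=("date", "count"),
    monetary=("amount", "sum"),
)

# Score each dimension from 1 (weak) to 4 (strong) using quartiles.
# For recency, fewer days since the last gift is better, hence reversed labels;
# ranking frequency first avoids duplicate quantile edges on tied counts.
rfm["R"] = pd.qcut(rfm["recency"], 4, labels=[4, 3, 2, 1])
rfm["F"] = pd.qcut(rfm["frequency"].rank(method="first"), 4, labels=[1, 2, 3, 4])
rfm["M"] = pd.qcut(rfm["monetary"], 4, labels=[1, 2, 3, 4])
rfm["segment"] = rfm["R"].astype(str) + rfm["F"].astype(str) + rfm["M"].astype(str)
print(rfm.head())
```

The resulting three-digit segment codes (e.g. "444" for recent, frequent, high-value donors) are transparent and easy to act on, which is the main appeal of RFM.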
Unsupervised Learning for Donor Segmentation

In addition, and not necessarily as a replacement, unsupervised learning offers a more advanced technique for donor segmentation. Unlike RFM, which relies on predefined categories, unsupervised learning models detect patterns or groups within the data without prior labels. This method is highly flexible and can uncover hidden (sub-)segments that traditional methods might overlook. A simplified "cooking recipe" for a clustering approach like k-means looks as follows (a code sketch follows the list):
1. Select and prepare the donor attributes that should drive the segmentation.
2. Scale the features so that no single variable dominates the distance metric.
3. Choose the number of clusters k, e.g. via the elbow method or silhouette scores.
4. Fit the k-means model and assign each donor to a cluster.
5. Profile and interpret the clusters to turn them into actionable segments.
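Translated into code, the recipe might look like this minimal scikit-learn sketch; the feature file, the number of clusters, and the preprocessing choices are illustrative assumptions:

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical donor feature table: e.g. the RFM columns from above plus
# any other numeric attributes the clustering should consider.
features = pd.read_csv("donor_features.csv", index_col="donor_id")

# Steps 1-2: scale features so no single variable dominates the distances.
X = StandardScaler().fit_transform(features)

# Steps 3-4: fit k-means; k=5 is illustrative and should be compared
# against other values using the elbow method or silhouette scores.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=42)
features["cluster"] = kmeans.fit_predict(X)

# Step 5: profile the clusters to turn them into actionable donor segments.
print(features.groupby("cluster").mean())
```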
Comparing RFM vs. Unsupervised Learning

Both RFM and unsupervised learning have their advantages and limitations: RFM is simple, transparent, and easy to act on, but it is restricted to the predefined recency, frequency, and monetary dimensions. Unsupervised learning can incorporate many more data types and uncover hidden (sub-)segments, but the resulting groups are harder to trace and may not remain stable over time.
So What? A Straightforward Conclusion

Unsupervised methods of donor segmentation are designed to incorporate various data types in an unbiased way, offering a data-driven approach to understanding donor behavior. While these methods provide valuable insights, they also come with limitations, particularly regarding traceability and group stability over time. Ultimately, a combination of RFM and unsupervised learning techniques can yield the most comprehensive and actionable insights for donor-centered fundraising. Inspired? Interested? In need of a chat? Or are there experiences you can share? Please go ahead and do not hesitate to reach out. All the best and have a great summer!

Recommendation Engines in a Nutshell

Recommendation engines are advanced data filtering systems that use behavioral data, machine learning, and statistical modelling to predict which content, products, or services a customer is likely to consume or engage with. Recommendation engines therefore not only give us a better picture of our users' interests and preferences; they can also enhance user experience and engagement. Recommendation engines, like any other data-driven method, need data. While no specific amount of data is required, the data does need to contain high-quality interactions, as well as contextual information about users and items, to enable good predictions. Examples of high-quality signals are all kinds of data that clearly state the user's preference, like explicit ratings, reviews, or likes for a specific product. While high-quality signals are preferred, implicit signals like browsing history, clicks, time spent reviewing a product, or purchase history can also be used; those are more abundant but can be noisy and hard to interpret, since they do not necessarily indicate a clear preference. In order to extract the relevant information from a dataset, data filtering methods are used. Different filtering methods exist, including collaborative filtering, content-based filtering, and hybrid methods.
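As a small illustration of collaborative filtering, the first of these approaches, here is a hypothetical Python sketch on an invented donor-by-topic interaction matrix; real systems work on far larger, sparser data and use dedicated libraries:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical implicit-feedback matrix: rows = donors, columns = campaign
# topics, entries = interaction strength (e.g. donations to that topic).
interactions = np.array([
    [3, 0, 1, 0],   # donor A: mostly topic 0
    [2, 0, 2, 0],   # donor B: topics 0 and 2
    [0, 4, 0, 1],   # donor C: mostly topic 1
])

# User-based collaborative filtering: find donors with similar behavior...
similarity = cosine_similarity(interactions)

# ...and score topics by the weighted interactions of similar donors,
# masking topics the donor has already engaged with.
scores = (similarity @ interactions).astype(float)
scores[interactions > 0] = -np.inf
recommended_topic = scores.argmax(axis=1)
print(recommended_topic)  # one suggested next topic per donor
```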
Why do recommendation engines matter for NGOs? - Possible Use Cases
NGOs, like many other organizations, are rapidly undergoing digitalization processes. They are increasing their online presence and communication, using online platforms like websites, social media channels, or emails to raise awareness, inform donors about their projects and news, and run online fundraising campaigns. Traditional NGOs also rely heavily on offline communication campaigns like letters, postcards, or even booklets that contain the latest news and ongoing projects of the organization. Even though NGO campaigns may be slightly adapted to different audience groups, we are far from using donors' interests in our communication campaigns. Using recommendation engines could help us make data-driven decisions about whom to contact, with what content, and even what kind of donation to ask for, all based on previous donation behavior. This can potentially improve our fundraising results while keeping donors engaged and supportive of our projects.

Challenges

While we know that recommendation engines may help us better address our donors, there are some challenges that we need to take into consideration, like the potential lack of high-quality data and the presence of noise and bias in the datasets.
Conclusion

Although some challenges exist (mostly related to the quality and availability of data), we still believe that exploring recommendation engine methods and applying them to our data would help us better understand our donors' interests, which ultimately could keep donors engaged and supportive of our projects. Are you interested in recommendation engines? Do you think they can help you better understand your donors' interests? Contact us and let us explore your data together!

In a volatile, uncertain, complex, and ambiguous environment, organizations need to constantly adapt and evolve. This is especially true for fundraising nonprofits, as their sector is increasingly embracing digital transformation. This transformation is not just about adopting new technologies but about reshaping how organizations operate and create value as well as impact for their stakeholders. Digital transformation can be a driving force that propels organizations into the future, enabling them to be more agile, customer-centric, and efficient. To a significant extent, the history of digital transformation has been shaped by the evolution of data and its use.

The Evolution of Data

From the 1950s to the 2000s, businesses relied mainly on descriptive analyses. Reports gave an ex-post view of processes and their results. These were relatively "simple" times, with a focus on internal, structured data from databases and spreadsheets. Around the turn of the century, there was a shift. The 2000s saw the rise of digital data. Innovative, data-driven business models began to emerge. Although there was an ongoing focus on descriptive analyses, the scope widened to include unstructured and external data, for instance from the web and social media platforms. Fast forward to today, and we are witnessing another paradigm shift. Organizations, both from traditional industries and those built on digital business models, are leveraging data-driven decision-making. Predictive and prescriptive analyses are not just buzzwords but are becoming imperatives. It is clear that both structured and unstructured data hold equal relevance, positioning analytics as a core function in any organization.

Data is a pivotal resource in digital transformation, and the sheer volume of data generated today is mind-boggling. However, data essentially is no more than a "raw material" like oil or wood, which needs to be cleaned, refined, and processed. Using the right data in the right way for the right purposes can be a key success factor for modern organizations. This is where data strategies come into play.

Dimensions of a Data Strategy

Navigating an ocean of data requires a compass: a robust data strategy. A data strategy can be defined as a comprehensive plan to identify, store, integrate, provision, and govern data within an organization. While a data strategy is often perceived primarily as an "IT exercise", a modern data strategy should encompass people, processes, and technology, reflecting the interrelated nature of these components in data management. A data strategy is not an end in itself. Ideally, it should align with the overarching strategy of the organization, as well as with the fundraising and IT strategies. A closely interlinked area is the analytics strategy. It is crucial to ensure synergy between these strategies for the successful exploitation of data insights and value creation. At least six dimensions of a data strategy can be named.
Crafting a Data Strategy

In a nutshell, the crafting of a data strategy can be achieved by following four generic steps.
Four Commandments for your Data Strategy

One should consider at least four recommendations when starting to develop a data strategy.
So what?
According to experts like Bernard Marr, an influential author, speaker, futurist, and consultant, organizations that view data as a strategic asset are the ones that will survive and thrive. It does not matter how much data you have; what matters is whether you use it in a value-creating and impact-generating way. Without a data strategy, it is unlikely that you will get the most out of an organization's data resources. In case you need a sparring partner or somebody to accompany you on your data strategy journey, please do not hesitate to get in touch with us. All the best and have a great third quarter! Johannes

What is Next Best Action?

Who does not sometimes wish for a mentor who always has appropriate advice? In general, humans are very good at assessing known situations and making the right decisions, but in the case of new circumstances or a huge number of aspects to consider, good advice is priceless. In marketing, and similarly in fundraising, one-on-one support is very effective but not possible in most cases. As companies and organizations grew, mass marketing became the state of the art, trying to sell as many items to as many customers as possible. Customers received overwhelming amounts of advertisements and offers, which can be described as a "content shock". Since the rise of machine learning and the possibility of sophisticated data analysis, companies try to stand out by putting the customer at the center. This is done by trying to predict what the customer wants and needs. New technologies should advise marketing specialists on what to do next in order to satisfy every customer individually, supported by enormous amounts of data. This concept is called Next Best Action (NBA) or Next Best Offer (NBO) and can evolve into a useful fortuneteller if implemented effectively. The benefits of the concept are reduced advertising costs and increased customer loyalty.

How can NBA systems be developed?

The goal, structure, and architecture of an NBA system strongly depend on the environment and purpose it is used for. Questions to answer before development include, for example, what the overarching goal of the system is and which actions or offers should be considered.
After goals and actions are defined, the general structure of an NBA system can be summarized in three steps: (1) filter the set of possible actions for a given customer, (2) rate the remaining actions by their expected benefit, and (3) select the action or actions to be taken. A minimal code sketch of this structure follows below.

The first step, filtering, is rather rule-based, depending on which actions or offers make sense for a customer. For instance, there might be products which cannot be offered more than once, and their offer will therefore be excluded from the action list after purchase. The same applies to the third step: typically, the actions with the highest benefit should be chosen, but it depends on the user of the system whether more than one action is taken or the actions need to be filtered again. A reason for a second filtering step could be that only one single action of a certain category should be chosen, even though two actions from the same category show very high benefit values. As it turns out, most NBA systems are based on proven business rules and require a well-defined framework before the most complex component, the rating of the actions, can be developed.
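The following minimal Python sketch illustrates this filter-rate-select structure; all actions, rules, and scores are invented placeholders:

```python
# A toy filter -> rate -> select pipeline; everything here is hypothetical.

def filter_actions(actions, customer):
    # Step 1: rule-based filtering, e.g. drop one-time offers already purchased.
    return [a for a in actions if a not in customer["already_purchased"]]

def rate_actions(actions, customer):
    # Step 2: assign a benefit score per action. In practice this is the
    # complex part: business rules, propensity models, or deep learning.
    return {a: customer["affinity"].get(a, 0.0) for a in actions}

def select_action(scores):
    # Step 3: pick the highest-rated action; a second filtering step could
    # additionally enforce rules like "at most one action per category".
    return max(scores, key=scores.get) if scores else None

customer = {
    "already_purchased": {"premium_upgrade"},
    "affinity": {"newsletter": 0.3, "donation_ask": 0.7, "event_invite": 0.5},
}
actions = ["newsletter", "donation_ask", "event_invite", "premium_upgrade"]

candidates = filter_actions(actions, customer)
scores = rate_actions(candidates, customer)
print(select_action(scores))  # -> "donation_ask"
```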
A rule-based system requires a lot of domain experience and takes time to develop, because everything is done by hand. A rule like "rate action xy high if the customer already bought from category ab in the last two years" can be data-driven if it is based on previous findings. Most companies already use such business rules for marketing. The scoring does not need to be metric (for instance, categories like low, medium, or high can be used instead of numbers). The system is not adaptive and needs to be adjusted if the circumstances change. Rule-based systems are characterized by extreme simplification, since the real, very complex interrelationships cannot be completely cast into rules. The needs and reactions of a customer will change depending on the previous action, which is very difficult to represent in a rule-based way.

If a low or intermediate level of machine learning is considered, relatively simple scores and predictions can be incorporated into the system. The simplest approach would be to calculate the probability of each customer reacting positively to an action (propensity). Predicting only which action might provoke a positive reaction may not be suitable for predicting long-term customer loyalty; several scores and probabilities can be combined to achieve this goal. For example, the propensity can be combined with a prediction of long-term customer engagement. The reason is that, on the one hand, no action is useful if the customer does not like it (propensity), but on the other hand, the actions also need to have a benefit for the organization (prediction of customer engagement/success for the organization). The scores can be weighted according to their importance, and any number of further adjustments can be made; a small code sketch of such a weighted combination follows below. Typically, such a system will still be embedded in several predefined rules. Similar to weighting scores with multiplicative factors, a higher importance of a score can also be expressed by adding meaningful numbers to it. Disadvantages are that a lot of experience in the field of use is still needed, scores are required for all actions, and those scores need to indicate benefit or propensity on a comparable scale. The system can automatically adjust to a certain extent if the scores themselves are linked to current data and regularly monitored.

Full deep learning models require more technical abilities, experience with deep learning, and an appropriate technical environment. The difference to the previous systems is that these are black-box models, without the need for users to tweak and define the rules and scores themselves. While intermediate levels of machine learning require a programmer to intervene and make adjustments, in deep learning the algorithms themselves determine whether their decisions are right or wrong. Internally, the algorithm will calculate scores for each action but will determine the best calculation and weighting itself. Rating systems in this category can range from a deep neural network for predicting the benefits of actions to reinforcement learning models, which do not need a lot of training data in advance but can learn implicit mechanisms by trial and error. As long as these systems are fed with current data, they are adaptable. Disadvantages of this approach are the complex development and reduced interpretability due to the fully automated process.
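To make the weighted score combination from the intermediate-level approach tangible, here is a tiny Python sketch with invented scores and weights:

```python
import numpy as np

# Illustrative scores per action for one customer (placeholder values):
# propensity = predicted probability of a positive reaction,
# engagement = predicted long-term benefit of the action for the organization.
propensity = np.array([0.60, 0.25, 0.45])   # actions: A, B, C
engagement = np.array([0.20, 0.90, 0.50])

# Weighted combination; the weights express the relative importance of
# short-term reaction vs. long-term engagement and must be tuned per use case.
w_propensity, w_engagement = 0.6, 0.4
benefit = w_propensity * propensity + w_engagement * engagement

best = benefit.argmax()
print(f"Best action index: {best}, combined benefit: {benefit[best]:.2f}")
```

Note that such a combination only works if both scores are expressed on a comparable scale, which echoes the caveat above.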
So What?

In our view, George Box's good old quote "All models are wrong, but some are useful" is also worth considering in the context of NBA models. Whether and how these approaches suit the needs of respective (nonprofit) organizations should be evaluated holistically. If you are interested in getting to know more about the concept of NBA, or are even thinking about implementing it in your SOS association, please do not hesitate to get in touch with us.

Sources

https://www.altexsoft.com/blog/next-best-action-marketing-machine-learning
https://databasedmarketing.medium.com/next-best-action-framework-47dca47873a3
https://medium.com/ft-product-technology/how-we-calculate-the-next-best-action-for-ft-readers-30e059d94aba
https://www.mycustomer.com/marketing/strategy/opinion-five-rules-for-the-next-best-offer-in-marketing