Signals of Market Momentum: Understanding Consumer Sentiment and Behaviors

Executive Summary
Understanding the health of the U.S. economy requires a view of consumer mindsets and behaviors, both across industries for the market overall and within individual industries. That simple expectation is rarely met by a single data asset. In most cases, business decision-makers rely on disparate data and research that were never designed to tell a connected story, either because they run on different time frames (continuous versus periodic data collection) or because they do not cover all categories in the market (most research focuses on a single industry). That data reality makes it difficult to tell a holistic, coherent story of the market.
At Morning Consult, our syndicated intelligence platform solves this problem by continuously measuring consumer mindsets and behaviors at high frequency across all audiences, brands and industries: at a total level for the overall U.S., within specific industries of interest, and at the brand level within those industries, all in the same data asset.
This data design provides the raw materials for us to curate clear signals of market momentum. In this memo, we dive into the thinking and methodology behind our two primary measures of momentum: the "psychology" of markets (consumer sentiment among users) and market behaviors (per capita consumption).
Together, these two metrics offer a powerful, holistic view of the health of each industry, and when viewed across industries, offer the optimal explanation for the state of the economy overall.
Introduction
Industry-level Index of Consumer Sentiment (ICS) and per capita consumption trends provide a clear, standardized way of keeping a pulse on market momentum. These metrics reveal whether a market is expanding or contracting, and whether consumer attitudes are becoming more or less favorable. ICS can be viewed as a bellwether for the "psychology" of a market, revealing how confident consumers are and how they feel about the economy. Meanwhile, per capita consumption provides the "behavioral data," quantifying how much they are actually using various products and services. Together, these two metrics offer a powerful, holistic view of a category's momentum.
Measuring industry-level ICS is straightforward, as it simply involves averaging the sentiment scores among a category's users. However, a bigger challenge lies in measuring per capita consumption, especially when relying on a standardized survey that asks the same questions across vastly different industries.
Indeed, our primary data source (Morning Consult’s syndicated survey) uses a standardized questionnaire. This means the same usage frequency question is asked regardless of the category being investigated. This creates an analytical difficulty: a single "use" of one product is not comparable to a "use" of another. How do we translate a single usage question like "How often do you use this service?" into a meaningful, comparable metric when the nature of that "use" differs so much? For example, a person who flies "once a week" is a high-frequency traveler, but a person who buys coffee "once a week" is a low-frequency consumer. Using the same value for these very different behaviors would produce wildly inaccurate per capita numbers.
To overcome this, we developed custom usage profiles for each category. These profiles assign a unique, weighted value to each survey response (for example, "about once a week" is worth 4 for ridesharing, but 0.10 for airlines). These values are carefully adjusted so that the final per capita number aligns with an external, reliable industry benchmark for that category. This transforms “subjective” consumption data into a consistent, plausible per capita number, allowing us to compare trends across categories over time and identify potential shifts as they develop.
Description of Our Approach:
We calculate a category-specific ICS by first identifying and isolating all responses to the "Index of Consumer Sentiment" question for users of a given brand. By taking the average across all brands that define an industry, we can measure the overall consumer sentiment for that industry. We report a rolling three-week average of the ICS score for each industry, providing us with a stable trend line that allows us to see how sentiment is shifting over time.
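As a rough illustration, the sketch below (in Python, using pandas) shows how such an industry-level, three-week rolling ICS could be computed from respondent-level data. The column names and data layout here are hypothetical assumptions for the example, not our production schema.

```python
import pandas as pd

def industry_ics(responses: pd.DataFrame, industry_brands: list[str]) -> pd.Series:
    """Average ICS among users of the brands that define an industry,
    reported as a rolling three-week mean.

    Assumes one row per respondent-brand with hypothetical columns
    "week", "brand" and "ics".
    """
    users = responses[responses["brand"].isin(industry_brands)]
    per_brand = users.groupby(["week", "brand"])["ics"].mean()   # brand-level ICS by week
    weekly = per_brand.groupby(level="week").mean().sort_index() # average across brands
    return weekly.rolling(window=3).mean()                       # 3-week rolling average
```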
The per capita calculation is designed to translate subjective survey responses (such as "once a week" or "about once a day") into a consistent weekly rate: a quantifiable, reliable measure of consumption across an entire population. Because we make assumptions to transform high-level survey responses into weekly rates, the per capita measure we derive should not be interpreted as a precise count of, for example, how many rides or flights the average American takes in a week. Instead, to capture and compare trends across categories, we focus attention on the percentage change over time (on a 3-week rolling basis) within each industry we track, not on the absolute per capita values.
This metric is a clean, normalized way to compare momentum across different categories because the same methodology and assumptions are applied to every data point in the time series. Therefore, any change in the rolled number is a direct reflection of a change in underlying consumer behavior, and not a change in our method.
By focusing on week-over-week (WoW) changes in the rolled periods, we can confidently compare trends across different categories and identify shifts in market momentum. We use rolling periods to smooth out noise and short-term volatility in the data. Survey data can be noisy from week to week, with random spikes or dips that don't reflect a true change in consumer behavior. By averaging the per capita number over a rolling period (in our case, 3 weeks), we create a more stable and reliable trend line. This allows us to focus on the signal (the actual trend) rather than the noise (random weekly fluctuations).
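A minimal sketch of that smoothing-and-comparison step, assuming a simple weekly per capita series indexed by week, might look like this:

```python
import pandas as pd

def momentum_signal(weekly_per_capita: pd.Series) -> pd.Series:
    """3-week rolling average of a weekly per capita series, followed by the
    week-over-week percentage change of that rolled series."""
    rolled = weekly_per_capita.sort_index().rolling(window=3).mean()
    return rolled.pct_change()  # WoW % change in the rolled number
```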
Process and Calculations
As mentioned earlier, we model per capita consumption by assigning a numerical value to each survey response in each category. A person who uses a rideshare app "a few times a week," for example, is assigned a higher value (such as 5 uses) than someone who uses it "about once a month" (such as 3 uses). We also assign a value of zero to non-users, ensuring they are properly factored into the total. We take a weighted average of these numbers across all brands and respondents to produce a raw per capita figure. To make this number more useful, we anchor it to an externally validated industry benchmark, creating a final scaled metric that reflects real-world usage.
The entire process is described below.
1. Build Usage Profiles: Assign Numerical Values to Usage Frequency
The core of the per capita measurement is the profiles dictionary we built, which assigns a numerical value to each survey response, such as "Several times a day" or "About once a month or less often." These numbers represent an assumed average weekly frequency. In essence, this dictionary maps a survey response (e.g., "About once a week") to a specific numerical value representing the number of uses per week.
Let’s take the rideshare category as an example:
- "Several times a day": 10 uses per week
- "About once a day": 7 uses per week
- "A few times a week": 5 uses per week
- "About once a week": 4 uses per week
- "About once a month or less often": 3 uses per week
- "I do not have an account or do not use": 0.0 uses per week
Within that rubric, ten uses per week ("several times a day") assumes two rides a day for the average round-trip daily commuter over the course of five days. While that assumption may run high for some respondents, the effect is offset by the small percentage of respondents who select this option.
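For readers who prefer code, the same rideshare profile can be written as a simple lookup table. The values are taken from the list above; other categories would carry different weights.

```python
# Rideshare usage profile: maps a survey response to an assumed number of uses per week.
RIDESHARE_PROFILE = {
    "Several times a day": 10.0,
    "About once a day": 7.0,
    "A few times a week": 5.0,
    "About once a week": 4.0,
    "About once a month or less often": 3.0,
    "I do not have an account or do not use": 0.0,
}
```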
2. Calculate Per Capita Usage for Each Brand
Per capita usage or consumption is the total usage divided by the total number of people. It naturally includes non-users, as they contribute zero to the total usage.
We aggregate across the brands we identified a priori as defining the category, and non-users are included in the denominator, so their lack of usage pulls the average down. The brands we selected to represent each category make up more than 80% of the total market in the sectors in which they operate.
For each brand within a given industry sector (e.g., Uber, Lyft), we multiply the proportion of respondents who chose a particular frequency by the numerical value assigned to that frequency. For instance, if 20% of respondents said they use a brand "About once a week" and that response is valued at 4 uses per week, this segment contributes 0.20 * 4 = 0.8 to the brand's per capita usage. This is done for all possible responses, including "I do not have an account or do not use," which is assigned a 0. The 0 value for the non-user group is crucial; for instance, it ensures that the majority of the population who don't fly often, or at all, are properly accounted for, which significantly lowers the final per capita number.
We then sum these contributions from all frequency responses to get the total per capita usage for that brand.
Note that for some industry sectors, the frequency numbers are calibrated to reflect the penetration of a particular category, which could be high or low. For example, the airlines profile assigns a very low value to "About once a week" because, when averaged across an entire population where most people don't fly, a single weekly flight has a tiny impact on the overall per capita number.
By multiplying the proportion of respondents for each frequency by its assigned value, we calculate a raw, unscaled per capita number for each brand that makes up the category.
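The sketch below illustrates this step with made-up response shares for a single hypothetical brand. It is not our production code, but it reproduces the arithmetic described above (0.20 * 4 = 0.8, with non-users contributing zero).

```python
def brand_per_capita(response_shares: dict[str, float], profile: dict[str, float]) -> float:
    """Raw, unscaled per capita uses per week for a single brand: the share of
    respondents in each frequency band weighted by the profile value for that band."""
    return sum(share * profile[response] for response, share in response_shares.items())

# Hypothetical response shares for one brand; non-users are included and contribute 0.
profile = {"About once a week": 4.0, "I do not have an account or do not use": 0.0}
shares = {"About once a week": 0.20, "I do not have an account or do not use": 0.80}
print(brand_per_capita(shares, profile))  # 0.20 * 4 + 0.80 * 0 = 0.8
```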
3. Aggregate to Category-Level Per Capita
Finally, we calculate a weighted average of the per capita usage of all brands in the category. The "weight" for each brand is its sample size (total n). This ensures that brands with more survey data have a greater influence on the final category metric. This weighted average correctly accounts for the fact that a large portion of the population doesn't use the brand at all, as the 0.0 value for "I do not have an account or do not use" correctly pulls the average down.
When calculating per capita consumption across multiple brands within a single category, some brands may have a much larger sample size (total n) than others. A simple average of the per capita usage for each brand would give equal weight to a brand with thousands of respondents and a smaller brand with only a hundred. This could skew the final category metric and make it less representative of the broader population.
By using a weighted average, we multiply each brand's per capita contribution by its corresponding sample size. This gives the larger brands more influence on the final result, ensuring the calculated category metric is more robust and statistically sound.
This process gives you the final per capita uses per week for the entire category. Note that the per capita calculation is not based solely on users. It is a metric that considers all respondents in the survey, including non-users.
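Illustratively, the category roll-up amounts to a sample-size-weighted average. The brand names and numbers below are hypothetical.

```python
def category_per_capita(brand_values: dict[str, float], brand_n: dict[str, int]) -> float:
    """Weighted average of brand-level per capita figures, weighted by sample size (total n)."""
    total_n = sum(brand_n.values())
    return sum(brand_values[b] * brand_n[b] for b in brand_values) / total_n

# The larger brand pulls the category metric toward its own value.
print(category_per_capita({"Brand A": 0.8, "Brand B": 1.2}, {"Brand A": 4000, "Brand B": 500}))
```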
4. Assign a Target to Each Category
The raw per capita number is not particularly useful on its own because of the varied assumptions built into the usage profiles. This is where the category targets dictionary we built comes in: it lets us derive a plausible per capita consumption number every week.
We use specific, externally validated industry estimates (the targets) to anchor the raw per capita calculations so they reflect actual per capita consumption for each industry. For each category, we look at a recent period of its raw data (8 weeks), calculate a scaling factor by comparing the raw average to the target, and then apply that factor to the entire time series. This process effectively normalizes the data, making the absolute numbers more plausible and, most importantly, ensuring that the trends over time are reliable and comparable across different categories. The target serves as an external check, validating the final numbers and ensuring they reflect real-world consumption patterns. We chose a target window of 8 weeks to create a stable scaling factor: a single week might have an unusual spike or dip, but averaging over two months smooths out short-term fluctuations and noise in the survey data and gives a more reliable representation of the current trend to use for scaling. An even longer window would be more stable still, but 8 weeks strikes a good balance between stability and responsiveness to recent changes.
Important: The presence of a target doesn't force the raw number to be that target; instead, it normalizes the raw number to the target. The raw number is still calculated based on the actual survey responses for that week.
Here’s a simple breakdown of how the scaling factor is integrated to the calculation:
- The raw per capita number changes: The raw per capita number for a given week is driven by the percentage of people in each frequency band. If a higher percentage of people report using rideshare "About once a week" in a given week, the raw per capita number will naturally increase.
- A scaling factor is calculated: We take the average of the last 8 weeks of these raw per capita numbers and divide the category target by that average. For example, if the target for Ridesharing is 0.22, and the 8-week raw average is 1.1, the scaling factor is 0.22 / 1.1 = 0.2.
- The scaling factor is applied to all historical data: We then apply that factor (0.2) to every raw data point in the time series.
This means if the raw number in a given week was 1.5, the final scaled number becomes 1.5 * 0.2 = 0.3. The target doesn't force the number to be 0.22; it ensures that the relative change in the raw number is accurately preserved in the scaled metric. The per capita number for a given week is still a direct reflection of the survey data for that week. For example, if consumer behavior shifts and more people start using rideshare apps "a few times a week," the raw per capita number we calculate will increase. When we scale this new raw number to our target of 0.22, the trend will accurately show an increase, even if the absolute number was an estimate to begin with.
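A compact sketch of this anchoring step, using the ridesharing figures from the example above (target 0.22, trailing 8-week raw average 1.1), could look like this:

```python
import pandas as pd

def scale_to_target(raw_weekly: pd.Series, target: float, window: int = 8) -> pd.Series:
    """Anchor a raw per capita series to an external target: the scaling factor is
    the target divided by the trailing window's average, applied to every point."""
    recent_avg = raw_weekly.sort_index().iloc[-window:].mean()
    factor = target / recent_avg          # e.g., 0.22 / 1.1 = 0.2
    return raw_weekly * factor            # a raw 1.5 becomes 1.5 * 0.2 = 0.3
```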
The Approach in Action: Sample Industry Outlooks and Commentaries
Now that you understand our thinking and methodology, we want to bring this to life by applying our approach to two industries that are very familiar to most consumers: social media and department stores. This is meant to be a snapshot of consumer momentum in these two industries in a given week, defined as both the "psychology" of markets (consumer sentiment among users in the industry) and market behaviors (per capita consumption). We provide updates of these trends every week in our consumer industries outlook; the summary below is limited to toplines for these industries to introduce these concepts.

Social Media: ICS analysis
In the week ending 9/24/2025, consumer sentiment amongst users of social media platforms like Facebook and TikTok edged down compared with the previous 3-week rolling period. In fact, our data over the last four years indicates that the outlook of social media users towards the economy has become universally more negative. A key insight is how this trend aligns with platform demographics: users on platforms like TikTok, which tend to have a larger lower- and middle-income user base, have been shown to be disproportionately affected by inflation and other economic pressures. Interestingly, users on platforms like Reddit and Pinterest show a slightly more positive outlook, a nuance that may be linked to their unique user bases and platform use cases.

Department Stores: Per capita consumption analysis
While it may seem counterintuitive given the long-term trends commonly reported across various media outlets, our data appears to align with the retail industry's recent pivot towards "experiential retail" to compete with online shopping and to capitalize on a consumer desire for in-person experiences. However, our brand-level ICS data amongst users of some of the main players suggests stark heterogeneity among those players (see figure below), reinforcing the idea that some of the strategies being used to bring consumers back are likely more effective at tapping into the current psychology of the market.


Franck brings over two decades of global leadership in market research and consumer insights. Originally from France, he moved to the U.S. to expand Synovate’s North American Censydiam business (now part of Ipsos) before leading its Brand and Communications practice. He later served as Global Director and Head of the Americas for Kantar TNS’s Brand & Communication division, then joined Qualtrics as Principal XM Scientist, helping build its Brand Experience product by blending modern technologies with traditional methods. Passionate about the intersection of people, technology, and evidence-based strategy, Franck now joins Morning Consult as Lead Solutions Architect for Brand and Campaign Effectiveness—driven by a belief that brands must anticipate shifts in demand with precision and context. He holds dual citizenship in France and the U.S. and earned his Master’s and Ph.D. in Engineering Physics in the U.K.

Bill has been at Morning Consult for 4 years and leads the building, scaling and activation of our solutions across brand measurement, corporate reputation, campaign effectiveness, crisis management, segmentation and pricing strategies. Previously, Bill spent 19+ years at Kantar, where he served as Head of Global Analytic Leads and chaired the Global Marketing Science Council at Millward Brown. Bill also led the quantitative analytics of IBM’s global tracking programs in the late '90s, and holds a Ph.D. and M.A. from NYU.