Article

10min read

The Ethical Use of First-Party Data for Personalization

With the end of third-party cookies in sight, first-party data has moved to the forefront of digital marketing.

First-party data is a powerful tool for personalizing your customers’ buying journey. It’s generally more reliable and offers deeper customer insights than third-party data, helping you gain that competitive edge. But these benefits also bring responsibility. It’s essential from both a compliance and customer experience perspective that you practice ethical data collection when it comes to first-party data.

In this article, we take a closer look at first-party data—what it is, how you can collect and use it ethically and the benefits first-party data offers both your customers and your business.

What is first-party data?

First-party data is information about your customers that you collect directly from them via channels you own.

Potential sources of first-party data include your website, social media accounts, subscriptions, online chat or call center transcripts and customer surveys. Importantly, the first-party data you collect is yours, and you have complete control over how it's used.

Examples of first-party data include a customer’s

  • name, location and email address
  • survey responses
  • purchase history
  • loyalty status
  • search history
  • email open, click or bounce rates
  • interest profile
  • website or app navigational behavior, including the pages they visit and the time they spend on them
  • interactions with paid ads
  • feedback

As it comes straight from the customer, first-party data provides you with deep and accurate insights into your audience, their buying behavior and preferences.

These insights are essential for guiding the development of digital marketing strategies that prioritize the human experience, such as personalization. They can also help you create customer personas that connect with new audiences and inform key business decisions, including new products or services.

How to collect first-party data

Customers may voluntarily provide first-party data. For example, customers submit their email addresses when signing up for a newsletter, offer their responses when completing a survey or leave comments on a social media post. This is often referred to as declarative data—personal information about your customers that comes from them.

Alternatively, first-party data can be collected via tracking pixels or first-party cookies that record customers’ interactions with your site. This produces behavioral data about your customers.
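As a simple illustration of consent-gated collection, the sketch below (hypothetical names, Python used purely for illustration) only issues a first-party tracking cookie after a visitor has opted in, and keeps a timestamped record of that consent:

```python
# Minimal sketch of consent-gated first-party tracking.
# All names here are hypothetical; a real site would use its web
# framework's cookie and consent-management facilities.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional
import uuid

@dataclass
class ConsentLog:
    records: dict = field(default_factory=dict)

    def record_opt_in(self, visitor_id: str) -> None:
        # Keep a timestamped record of consent, as regulations often require.
        self.records[visitor_id] = datetime.now(timezone.utc).isoformat()

    def has_consent(self, visitor_id: str) -> bool:
        return visitor_id in self.records

def set_cookie_header(consents: ConsentLog, visitor_id: str) -> Optional[str]:
    """Return a Set-Cookie header for a first-party cookie, or None without consent."""
    if not consents.has_consent(visitor_id):
        return None  # no tracking without an opt-in
    token = uuid.uuid4().hex
    return f"Set-Cookie: fp_id={token}; Path=/; Secure; HttpOnly; SameSite=Lax"

consents = ConsentLog()
print(set_cookie_header(consents, "visitor-1"))  # None: no consent yet
consents.record_opt_in("visitor-1")
print(set_cookie_header(consents, "visitor-1"))  # cookie issued after opt-in
```

The key design point is that the consent check happens before any identifier is created, so no behavioral data is attached to a visitor who hasn't agreed.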

First-party data is typically stored in a Customer Data Platform (CDP) or Customer Relationship Management (CRM) platform. From this, you can build a database of information that you can later use to generate customer personas and personalize your marketing efforts.

What is third-party data?

Third-party data removes the direct relationship between your business and your customers during the data collection process. While first-party data comes straight from your customers, third-party data is collected by a separate entity that has no connection to your audience or your business.

Unlike first-party data, which is free to collect, third-party data is typically aggregated from various sources and then sold to businesses for marketing purposes.

From a marketing perspective, third-party data is further removed from your customers and therefore offers less accurate insights. You often don't know its original source, and it likely comes from audiences that have never used or don't know your business, limiting its utility.

For many years, marketers relied on third-party cookies to provide the data needed to develop digital marketing strategies and campaigns. But over time, concerns around the ethics of third-party data collection grew, especially in relation to data privacy and users’ lack of control over their data. As a result, most of the major web browsers have blocked—or will soon block, in the case of Google Chrome—third-party cookies.

Is first-party data ethical?

First-party data is ethical if it’s collected, stored and used according to data privacy laws, regulations and best practices that require responsible and transparent data handling.

The move away from third-party cookies highlights how first-party data is preferable when it comes to ethical considerations. With full control over the data you collect, you can ensure your first-party data strategy protects the data privacy rights of your customers. You can clearly explain to your customers how you handle their data so they can decide whether they agree to it when using your site or service.

Unfortunately, unethical first-party data collection can and does happen. Businesses that collect data from their customers without informed consent or who use the data in a way the customer didn’t agree to—such as selling it to a third party—violate their data privacy. Not only does this carry potential legal consequences, but it also significantly undermines the relationship of trust between a business and its customers.

How do you collect first-party data ethically?

The first step towards ethical data handling is compliance. There is a range of data privacy laws protecting customer rights and placing obligations on businesses in terms of how they collect, store and use personal data, including first-party data.

Confirming which laws apply to your business and understanding your legal obligations under them is not only essential for compliance but also informs your data architecture. The application of data privacy laws depends on your business or activities meeting certain criteria. It’s worth noting that some data privacy laws apply based on where your customer is located, not your business.

Data privacy legislation in Europe

European customers’ data privacy is protected by the General Data Protection Regulation (GDPR). The GDPR requires businesses to demonstrate ethical data collection and use.

This often means customers must provide informed consent, or opt-in, to their data being collected and used. Businesses must also keep records of this consent. Customers can withdraw their consent at any time and request their data be deleted in certain cases. You must implement reasonable security measures to ensure data is stored securely, according to the level of risk. One option is to use air-gap backups to protect data from cyber threats by isolating it from the network. In certain circumstances, you also need to nominate a data protection officer.

Data privacy legislation in the UK

If you have UK-based customers, you need to comply with the provisions of the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018. These include providing a lawful basis for collecting personal data, such as consumer consent via a positive opt-in.

Consumers have the right to request the use of their data be restricted or their data erased, in certain circumstances. Relevant to first-party data, consumers can object to their data being used for profiling, including for direct marketing purposes.

Data privacy legislation in the US

The US doesn’t have a federal data privacy law. Instead, an increasing number of states have introduced their own. The first state to do so was California.

Under the California Consumer Privacy Act (CCPA)*, you can only collect customer data by informed consent—customers need to know how data, including first-party data, is collected and used. Customers also have the right to opt-out of the sale of their personal data and to request their data be deleted. If a data breach occurs where you have failed to use reasonable security measures to store the data, customers have a right of action.

2023 looks to be a big year for the data privacy landscape in America. In Virginia, the Consumer Data Protection Act (VCDPA) is due to commence on January 1. The VCDPA includes a provision for customers to opt out of data collection for profiling or targeted advertising. Colorado, Connecticut and Utah have introduced similar laws, also due to take effect next year.

Beyond compliance

As you can see, some general principles emerge across the different pieces of data privacy legislation:

  • Customer consent — customers should consent to the collection and use of their data
  • Transparency — you should explain to customers what data you collect, how you collect it and what you do with it, typically via a privacy policy or statement
  • Control — customers should be able to control the use of their data, including requesting its deletion.

From a consumer perspective, compliance is the bare minimum. While the design of your data architecture should be guided by the above principles and comply with any relevant data privacy laws, you can also take extra steps to demonstrate your business’s commitment to ethical data handling. This may include appointing a data protection officer to oversee compliance and provide a point of contact for complaints, or providing your employees with training, even where it isn’t required by law.

How to use first-party data

In a crowded online marketplace, it’s hard to make yourself heard over the noise. Arming yourself with accurate and reliable first-party data, however, helps you stand out from the crowd and communicate your message to both current and potential customers.

Firstly, you can use the first-party data you collect to create an exceptional customer journey through personas—fictional representations of your customers’ broad wants and needs. Building a series of personas can help you tailor your product or service and business practices to better serve your general customer base.

First-party data is also a crucial ingredient for more specific 1:1 personalization. With it, you can craft a unique user experience for your customers by delivering individual recommendations, messages, ads, content and offers to improve their purchasing journey.

Beyond broad campaigns, first-party data is also essential for retargeting customers, for example by sending abandoned cart emails. It can also help you identify and address gaps in your customers’ buying experience or your current offerings.

Want to get started with 1:1 personalization or personal recommendations?

AB Tasty and Epoq together form a complete platform for experimentation, content personalization and AI-powered recommendations, equipped with the tools you need to create a richer digital experience for your customers — fast. With embedded AI and automation, this platform can help you achieve omnichannel personalization and revolutionize your brand and product experiences.

Benefits of first-party data

Personalization

First-party data provides deeper insights than second- or third-party data, allowing you to incorporate a higher degree of personalization into your marketing. In turn, this improves the buying experience for your customers and earns their loyalty.

Reduces costs

Engaging a third party to aggregate data costs money. First-party data, on the other hand, doesn’t cost you anything to collect.

Increases accuracy

Collecting data from your specific customer base and their interactions with your company produces tailored insights, rather than generic information. First-party data comes directly from the source, increasing its reliability.

Gives you control over data

You own first-party data collected from your customers. This puts you in full control of how it is collected, stored and used.

Transparency

As you have full control over how you collect and use first-party data, you can clearly explain this to your customers to obtain their informed consent. This transparency builds trust and loyalty with your customer base.

Strengthens customer relationships

In a recent Ipsos poll, 84% of Americans report being at least somewhat concerned about the safety of the personal data they provide on the internet. At the same time, Salesforce found that 61% of consumers are comfortable with businesses using their personal data in a beneficial and transparent way. First-party data builds better customer relationships by balancing customers’ desire for data privacy with their preference for personalized advertising.

Compliance with regional privacy laws

Most countries are strengthening their legislative framework around data privacy and prioritizing users’ rights. With first-party data, you can design your data architecture to ensure it complies with any relevant laws.

Ethical first-party data handling benefits both you and your customers

First-party data is the key to accurate and sharp customer insights that help you shape effective, targeted marketing strategies. But with the demand for ethical data collection at an all-time high, it’s important you treat your customers’ first-party data with care.

First-party data should be collected responsibly and transparently, with the customer’s fully informed consent. Your first-party data strategy also needs to comply with any relevant data privacy laws, regulations and best practices. This approach strikes a happy medium between customers’ data privacy concerns and their desire for personalization during the purchasing journey. It also helps you optimize your customers’ experience with your business and, in turn, your profits.

Interested in learning more about how you can use first-party data to benefit your business? Check out our customer-centric data series for more insights from the experts.

*Amendments to the CCPA are due to be introduced in 2023, via the California Privacy Rights Act. Many of the related regulations are still being updated.



Measure your DevOps Performance: DORA Metrics

Nowadays, as software development processes become more decentralized and the number of teams working on different projects (often in different places) increases, it becomes that much harder to set and track metrics that measure performance across those teams.

And yet data is now more important than ever. It is a company’s most valuable asset for measuring how efficiently teams perform over time and for delivering the best products and user experience to customers.

This is especially relevant for DevOps teams where there’s a need for a clear framework to measure their performance.

This is where DORA metrics come in.

What are DORA metrics?

DORA metrics provide a standard framework to help leaders who are implementing a DevOps methodology in their organization to measure the performance of their teams.

This framework was the result of a six-year research program conducted by Google Cloud’s DevOps Research and Assessment (DORA) team after analyzing survey responses from over 32,000 professionals worldwide. Their goal was to determine the most effective ways to develop and deliver software.

Through the use of behavioral science, the research identified four key metrics that would indicate the performance of a software development team. 

With these metrics, teams can measure their software delivery performance, monitor it over a period of time and be able to easily identify areas of improvement to optimize performance. In that sense, they shed light on the capabilities that drive high performance in technology delivery.

Therefore, DORA metrics are especially relevant for DevOps teams as they provide them with concrete data to measure performance and improve the effectiveness of their DevOps operations. They also allow teams to assess whether they are building and delivering software that meets customer requirements, as well as gain insights into how to improve and provide more value for customers.

The four DORA metrics

In this section, we will list the four main metrics that the DORA team identified for DevOps teams to measure their performance. 

The following chart, from the 2022 State of DevOps report (updated each year), shows the ranges for each metric according to the different categories of performers:

The four key metrics used are:

  1. Deployment frequency

Deployment frequency measures velocity. In this case, the goal is to measure how often an organization successfully deploys code to production or releases it to end users.

This is an important metric particularly for DevOps teams whose ultimate goal is to release software quickly and frequently. It helps teams to measure their productivity and agility as well as uncover issues and bottlenecks in their workflow that may be slowing things down.

High performing teams deploy on-demand, multiple times a day. Thus, this metric stresses the importance of continuous development and deployment, which is one of the principles of a DevOps methodology.

Each organization will need to consider what constitutes a “successful” deployment for its teams such as taking into account what level of traffic is sufficient to represent a successful deployment.
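Once you've settled on a definition, the calculation itself is straightforward. As a sketch with made-up sample data, deployment frequency can be computed from a log of successful production deployments:

```python
# Sketch: deployment frequency from a log of successful production
# deployments over a one-week reporting window (assumed sample data).
from datetime import date

deployments = [
    date(2022, 11, 1), date(2022, 11, 1), date(2022, 11, 2),
    date(2022, 11, 4), date(2022, 11, 4), date(2022, 11, 4),
]

days_in_period = 7  # length of the reporting window
per_day = len(deployments) / days_in_period
print(f"{per_day:.2f} deployments/day")  # 0.86
```

In practice this data would come from your CI/CD system's deployment log rather than a hard-coded list.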

How to improve this metric:

To enhance this metric, it’s usually best to ship code in small batches on a frequent basis. This reduces the risk of deploying bugs and increases the speed of delivery. Implementing an automated CI/CD pipeline will also enable you to increase deployment speed.

  2. Lead time for changes

Lead time for changes is the amount of time it takes a commit to get into production. Therefore, this metric also seeks to measure velocity and gives an indication of a team’s cycle time. The lower the lead time for changes, the more efficient the team is at deploying code.

This metric requires looking at two pieces of data: when the commit happened and when it was deployed. The goal is to track the time from when development starts until the committed code is deployed, to uncover any inefficiencies in a team’s processes. The average time can then be used to analyze overall performance.

In other words, the purpose of this metric is to indicate the waiting time between the initial implementation of a change and its deployment. A high lead time may suggest inefficiencies in the CI/CD pipeline and insufficient automation, especially if every change has to go through manual testing, which significantly slows things down.

How to improve this metric:

Again, here it’s best to work with smaller changes. This allows for faster feedback so developers can immediately fix any issues. Teams should also eliminate bottlenecks and integrate automated testing at every stage of the CI/CD pipeline to detect issues early on. 

Feature flags are also a great tool to lower lead time as any unfinished changes can be hidden behind a flag while other changes can be deployed.

  3. Change failure rate

This represents the number of deployments causing a failure in production. In other words, it measures any changes to code that resulted in incidents, rollbacks or any other failures. This depends on the number of deployments attempted and how many of those resulted in failures in production.

As a result, this metric measures stability and quality, while the previous two focus mainly on the speed of software delivery.

This metric is calculated by dividing the number of deployments that resulted in failures by the total number of deployments. The resulting percentage gives insight into how much time is dedicated to fixing errors as opposed to delivering new code.

The lower the rate, the better. High performing teams have a change failure rate of 0-15%.

A low change failure rate is a sign that a team has an efficient deployment process in place, which can mainly be achieved by automating every step of the process to avoid common manual errors.

It’s important to note, however, that this metric can be hard to quantify as the definition of failure can vary widely. Therefore, it’s best for each organization to set goals for its teams according to their unique business objectives. 
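Under one common definition (any deployment that triggered an incident or rollback counts as a failure), the calculation can be sketched as follows, with assumed sample data:

```python
# Sketch: change failure rate over a reporting window.
# The deployment log is assumed data; what counts as "caused_incident"
# depends on your organization's own definition of failure.
deployments = [
    {"id": 1, "caused_incident": False},
    {"id": 2, "caused_incident": True},   # rolled back after deploy
    {"id": 3, "caused_incident": False},
    {"id": 4, "caused_incident": False},
]

failures = sum(d["caused_incident"] for d in deployments)
rate = failures / len(deployments) * 100
print(f"change failure rate: {rate:.0f}%")  # change failure rate: 25%
```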

How to improve this metric:

Automation is also crucial to improving this metric. Automated tests can evaluate code at every stage of its development. This way, issues are caught and fixed early on, so they’re less likely to make it to production. Creating feedback loops around failures is also necessary for a low change failure rate, so that similar incidents can be prevented in the future.

  4. Time to restore service

Also referred to as “mean time to recovery” (MTTR), this indicates how long it takes an organization to recover from a failure in production that impacts the user experience.

This metric, like change failure rate, is meant to determine the stability of a system or application when unplanned outages occur. Thus, information about when the incident occurred and when it was resolved then deployed will be needed to measure the time to restore service.

Therefore, the “time to restore service” metric is important as it encourages teams to build more stable systems and create action plans so they can respond immediately to any failures. High performing teams deploy in small batches to reduce risk while increasing the speed of delivery.

This is particularly applicable to DevOps teams as they place high emphasis on the idea of continuous monitoring, which will in turn help them to improve their performance when it comes to this metric.
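Given incident timestamps (assumed sample data below, as would normally come from an incident management tool), MTTR is simply the average of the restore durations:

```python
# Sketch: mean time to restore service from incident timestamps.
# Each pair is (failure detected, service restored) — assumed data.
from datetime import datetime, timedelta

incidents = [
    (datetime(2022, 11, 1, 14, 0), datetime(2022, 11, 1, 14, 45)),  # 45 min
    (datetime(2022, 11, 5, 9, 0),  datetime(2022, 11, 5, 11, 15)),  # 2h 15min
]

total = sum((restored - detected for detected, restored in incidents),
            timedelta())  # start value needed to sum timedeltas
mttr = total / len(incidents)
print("MTTR:", mttr)  # MTTR: 1:30:00
```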

How to improve this metric: 

Consider using feature flags. Feature flags act as switches that enable you to turn a change on or off in production. If an issue occurs with a change in production, you can toggle it off with minimal disruption while it’s being resolved. This helps reduce your MTTR.
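A minimal sketch of that kill-switch pattern is shown below. The flag names and in-memory flag store are hypothetical; a real setup would read flags from a remote flag service so they can be toggled without redeploying:

```python
# Sketch of the feature-flag kill-switch pattern (hypothetical names).
flags = {"new_checkout": True}  # would normally live in a remote flag service

def checkout(cart_total: float) -> str:
    if flags.get("new_checkout", False):
        return f"new flow: {cart_total:.2f}"   # the change under test
    return f"legacy flow: {cart_total:.2f}"    # stable fallback path

print(checkout(42.0))          # served by the new flow while the flag is on
flags["new_checkout"] = False  # incident: flip the flag, no redeploy needed
print(checkout(42.0))          # traffic instantly falls back to the legacy flow
```

Because disabling the flag takes effect immediately, the time to restore service is decoupled from the time it takes to diagnose and fix the underlying bug.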

The DORA metrics can then be compiled into a dashboard. To help with this, the DORA team created the Four Keys dashboard template to generate data based on the metrics and visualize the results. See the example of this dashboard below:

The dashboard gives a higher-level view for senior stakeholders of their organization’s DORA metrics to understand how their teams are performing and what corrections can be done to remedy any problems.

Why are DORA metrics important?

As we’ve already mentioned, DORA metrics are a great way to keep track of the performance of DevOps teams and identify areas of improvement.

They help organizations assess their delivery process and encourage teams to streamline their processes, increasing the speed of delivery while maintaining quality.

As a result, the main benefits of these metrics are:

  • More effective decision-making: with the data acquired from these metrics, teams know which aspects need improvement. They can easily detect issues and bottlenecks within the software development process and devise an action plan to address them. Decisions will be based on data rather than opinions or gut feelings, which may be misleading.
  • Better value: DORA metrics give teams an indication of whether they’re delivering value to customers by evaluating the efficiency of their value stream and finding areas within the delivery process to improve in order to build higher quality products.
  • Continuous improvement: this is particularly important as it is one of the main pillars of a DevOps methodology. Using DORA metrics, teams get insight into their performance and can set goals to improve the quality and delivery of software.

Challenges of DORA metrics

DORA metrics have a lot of advantages, but they do come with their own challenges as well.

One of the main challenges when faced with these metrics is that they will vary across organizations and teams as, often, they have different definitions and processes in place. In other words, no products or teams are the same and may operate at their own level of complexity. As a result, it’s important to put this data into context before making decisions.

DORA metrics give a good overall picture of how teams are performing in certain categories. It’s important to have a valid way to keep track of this data, but don’t rely solely on it.

Teams may be facing issues beyond what these metrics account for. DORA metrics focus mainly on outcomes rather than the inputs and processes that lead to the outputs being measured. Sometimes there’s more to the story than what DORA metrics capture, so tread carefully.

Ultimately, enhancing performance will be unique to each organization. Work on shifting your attention to your team and goals to give context to the story all these metrics are telling. Focus on building the right culture for your team and providing them with the tools they need to enhance performance. This, in turn, will help them deliver business value faster. 

DORA metrics: The key to unlocking more value

DORA metrics are a great starting point, especially to help teams make informed decisions about what can be improved and the steps to take to achieve that. 

They give a good indication of a team’s progress along their DevOps journey and encourage the implementation of the key principles of DevOps including shipping in small batches more frequently.

In particular, they enable teams to assess and analyze the efficiency of their development and delivery processes by offering a framework for measuring performance across two important variables in DevOps: speed (deployment frequency and lead time for changes) and stability (change failure rate and time to restore service).

Teams will then be able to create more value for their customers faster. Above all, DORA metrics are a way for teams to shift their focus to maximizing velocity and stability.

It’s important to note that tracking these metrics should be in line with your organizational goals and customers’ needs to give context to these metrics, make sense of them and improve them.

Feature flags are another effective way to improve performance across these metrics. They allow you to ship new changes in small batches and hide any that are not yet ready, speeding up deployment while reducing the risk of big bang releases and making problems easier to detect and resolve.

Get a demo of AB Tasty to unlock the value feature flags can bring to your teams today.