Video - The New Reliability

Featuring Emily Arnott

What is reliability? When you try to nail it down, it's surprisingly nebulous.

But with our new definition, you'll see reliability as something clear, measurable, and concrete. We'll equip you to present reliability to your org in a way that's true to your needs and resonates across the business overall.

This talk reintroduces humanity into the reliability equation. It's not just about product health; it's about the humans using your app and the humans behind it.

Takeaways

  • The new definition of reliability is based on a combination of product health, customer happiness, and socio-technical resilience.
  • This definition works for you because it aligns your whole organization, motivates impactful changes, and prioritizes where changes are needed.
  • In essence, reliability is your system’s health, contextualized by user expectations and prioritized by your team’s socio-technical resilience and readiness.

Description

In this talk, Blameless’s Emily Arnott discusses The New Reliability, which is based on product health, customer happiness, and socio-technical resilience.

Speakers

Emily Arnott

Community Relations Manager, Blameless
Emily is the Community Relations Manager at Blameless, where she fosters a place for discussing the latest in SRE. She has also presented talks at SREcon, Conf42, and Chaos Carnival.

Table of Contents

1. What is reliability?
2. Why do you need to align?
3. Reliability in the “real world”
4. A technical example
5. Measuring the new reliability

Video Contents

0:00 - 1:18 - Introduction

Emily Arnott, Community Relations Manager at Blameless, proposes a new definition of reliability. What definition of reliability are you currently working with, and is it really sufficient for your team?

1:19 - 3:22 - What is reliability?

You might be surprised at the diversity of answers you get when you ask different engineers to define reliability. Isn’t reliability just uptime? Google says to consider the customer’s expectations, which invites further questions: which customers are we looking at, and how do we determine what those levels are? Whatever definition you land on, we know an absence of reliability incurs major costs to an organization.
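
One way to make “the customer’s expectations” concrete is to express them as a service level objective and track compliance against it. The following is a minimal sketch of that idea, assuming a hypothetical request log, latency threshold, and SLO target chosen purely for illustration.

```python
# Minimal sketch: turning "customer expectations" into a measurable target.
# The request data and thresholds below are hypothetical illustrations.

requests = [
    {"ok": True,  "latency_ms": 120},
    {"ok": True,  "latency_ms": 480},
    {"ok": False, "latency_ms": 900},
    {"ok": True,  "latency_ms": 210},
]

LATENCY_TARGET_MS = 500   # "fast enough", as defined by user expectations
SLO_TARGET = 0.95         # fraction of requests that must be good

# A request is "good" if it succeeded and met the latency expectation.
good = sum(1 for r in requests if r["ok"] and r["latency_ms"] <= LATENCY_TARGET_MS)
compliance = good / len(requests)

print(f"SLO compliance: {compliance:.0%} (target {SLO_TARGET:.0%})")
print("Within expectations" if compliance >= SLO_TARGET else "Reliability gap")
```

Framing the question this way turns “is it fast enough?” into a number you can track over time.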

3:23 - 4:27 - Our thesis: The New Reliability

We have spoken to many engineers about reliability from many organizations. It boils down to: the health of your product, the happiness of your customers, and the socio-technical resilience of your team.

4:28 - 12:00 - The reliability of flying

Emily provides a real-world example. When you fly, you assume that the airline is prioritizing your safety and your needs (customer happiness). You also assume that the airline systems are working properly and the airplane is properly stocked (product health). You also assume that the pilot knows how to fly, that the crew will show up on time, and that the airport is properly staffed (socio-technical resilience). The second real-world example is holiday flight disasters. Heightened demand and terrible weather led to many travelers being unable to arrive at their desired destinations on time. The airline systems hit their limit and flights were cancelled (product health), poor communication led to travelers being frustrated (customer happiness), and the staff was not trained to handle this level of strain (socio-technical resilience). There are countless other examples - poor cell phone service, cars breaking down, and apartment buildings needing maintenance. This type of unreliability is everywhere.

12:00 - 17:35 - Let’s take a tech example

We evaluate three services (A, B, and C) based on system health, user expectations, and socio-technical resilience. You may think one service should be improved based on bad system health, but if user expectations of that service are not high, it may not need to be prioritized over other services. You must consider all three buckets of the New Reliability framework in order to decide which services to prioritize when assigning resources.
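
As a rough illustration of how the three buckets can be weighed together, the sketch below assigns each service a hypothetical 0-10 score per bucket and combines them into a simple attention score. The scores and the weighting formula are assumptions for the sake of example, not figures from the talk.

```python
# Hypothetical scores for the service A/B/C comparison:
# higher = healthier / higher user expectations / more resilient team.

services = {
    "A": {"health": 9, "expectations": 5, "resilience": 3},  # new feature, green health, unpracticed team
    "B": {"health": 6, "expectations": 9, "resilience": 8},  # login: everyone depends on it, team knows it well
    "C": {"health": 3, "expectations": 2, "resilience": 8},  # flaky but rarely used, well understood
}

def attention_score(s):
    # A service needs attention when health and team readiness are low
    # relative to how much users expect from it.
    risk = (10 - s["health"]) + (10 - s["resilience"])
    return risk * s["expectations"] / 10

for name, s in sorted(services.items(), key=lambda kv: attention_score(kv[1]), reverse=True):
    print(f"Service {name}: attention score {attention_score(s):.1f}")
```

With these toy numbers, service C falls to the bottom of the list even though its raw health is the worst, which is the kind of reprioritization the example describes.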

17:35 - 22:13 - Let’s break it down a bit further

What exactly do product health, customer happiness, and socio-technical resilience entail? Product health includes observability, telemetry, and the four golden signals. Customer happiness includes the user experience, what matters most to users, what their expectations are, and their confidence in your service. Socio-technical resilience includes how effective your team is during incident response, whether there is clear service ownership, and whether teams are aligned on their priorities and responsibilities.
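
As one possible way to picture what feeds each bucket, here is a hypothetical per-service snapshot. The field names and values are illustrative assumptions, not a Blameless schema.

```python
# Sketch of the concrete signals that could feed each bucket for one service.
# Everything here is invented for illustration.

service_snapshot = {
    "product_health": {              # from monitoring and telemetry
        "latency_p95_ms": 340,       # the four golden signals
        "error_rate": 0.004,
        "traffic_rps": 1200,
        "saturation": 0.71,
    },
    "customer_happiness": {          # from surveys, support tickets, SLO reviews
        "csat": 4.2,
        "slo_compliance": 0.982,
        "open_complaints": 3,
    },
    "sociotechnical_resilience": {   # from incident and on-call records
        "runbook_coverage": 0.6,
        "clear_owner": True,
        "oncall_incident_hours_30d": 14,
    },
}

# Example check: flag the service if the resilience bucket looks weak.
resilience = service_snapshot["sociotechnical_resilience"]
needs_resilience_work = resilience["runbook_coverage"] < 0.8 or not resilience["clear_owner"]
print("Flag for resilience investment:", needs_resilience_work)
```

A record like this makes it easier to spot which bucket is dragging a given service down.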

22:13 - 24:11 - Why this definition works for you

This definition is all-encompassing and holistic. More important than this particular definition, though, is that your entire team is aligned on a single definition. This framework also motivates impactful changes. Lastly, it helps your team prioritize where changes are most needed.

24:11 - 27:27 - How to measure the new reliability

Ask yourself some questions. What are the sources of manual labour for each type of incident? What is toilsome, tedious and repetitive? How many incident hours has each engineer spent on-call? How much time has your team spent fixing each service?
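
These questions turn into trackable numbers once incidents are logged with a few consistent fields. The sketch below aggregates a small, invented incident log; the record format, names, and values are assumptions for illustration.

```python
# Sketch: answering the measurement questions from a simple incident log.
from collections import defaultdict

incidents = [
    {"type": "db_failover",  "service": "B", "responder": "ana", "toil_min": 35, "total_min": 90},
    {"type": "db_failover",  "service": "B", "responder": "raj", "toil_min": 40, "total_min": 110},
    {"type": "cache_outage", "service": "A", "responder": "ana", "toil_min": 10, "total_min": 45},
]

toil_by_type = defaultdict(int)          # what is toilsome for each type of incident?
hours_by_responder = defaultdict(float)  # incident hours each engineer has spent on-call
time_by_service = defaultdict(float)     # time the team has spent fixing each service

for i in incidents:
    toil_by_type[i["type"]] += i["toil_min"]
    hours_by_responder[i["responder"]] += i["total_min"] / 60
    time_by_service[i["service"]] += i["total_min"] / 60

print(dict(toil_by_type))
print(dict(hours_by_responder))
print(dict(time_by_service))
```

From there, trends and outliers (an incident type with outsized toil, one engineer absorbing most incident hours) become visible and can motivate the discussions described in the talk.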

27:27 - 29:01 - Conclusion

The New Reliability takes your system’s health, contextualizes it based on your users’ expectations, and prioritizes it based on your engineers’ sociotechnical resilience.

Video Transcript

Emily Arnott (00:04):

Hello, my name is Emily Arnott and I'm the Community Manager here at Blameless, and today I'd like to propose something sort of bold: a new definition of reliability. Now, reliability is obviously a very important concept. If you're in the SRE space it lives right in the center: site reliability engineering. In the world of chaos engineering, when you're doing these experiments with failure, when you're seeing what makes your system tick, when you're trying to proactively prepare for the biggest possible problems, what is it that you're trying to improve? In a word, it's reliability. But what is reliability really? And whatever definition you're working with, is it sufficient? That's the first thing we're going to cover in this talk. Then we're going to cover the need for alignment on a singular definition of reliability across your organization. We're going to take a step back and think about our new understanding of reliability in the real world and how that can kind of motivate a more holistic view of what the concept really is.

(01:07):

Then we're going to zoom back into the world of tech with a technical example and talk about the all important measurability of this new reliability. So if you asked some of your peers, other engineers, what exactly is reliability? You might be surprised at the diversity of answers you get. At first some people might say, well isn't reliability just uptime? That kind of makes sense, right? If the site is up people can access it, therefore it's reliable. But then somebody might say, well uptime isn't everything. You have to think about how often is the site giving you errors? How often is the response actually the correct response? Somebody else might chime in and go, well, it's not just about is it correct or not but is it fast enough? Then you're thinking, well, what is enough in this case? What is fast enough? Well, the popular definition with SREs is to consider your customer's expectations. You have like a service level objective. Are we hitting that? That means we're reliable. But then that kind of just invites further questions. What customers are we looking at?

(02:15):

How do we determine what those levels are? You'll find that the more you talk about reliability, the more nebulous of a concept it becomes. Despite this nebulousness, and despite the fact that you can talk all day trying to pin it down, companies are really eager to figure out exactly what reliability is and how to improve it, because although we all have different definitions for reliability in the positive, it's all too clear what happens when there's an absence of reliability, and that's major costs. So if we look at some famous newsworthy outages in the last few years, you'll see that there's no easier way to wipe out tons of your revenue, tons of your operating costs, than to suffer major outages. So these are major corporations of course, but for organizations of all sizes, major downtime, incidents, and unacceptable rates of failure can be extremely debilitating to their reputation and their customer happiness. So what is it that we're trying to convince you of today? What is our definition that we think is a little more holistic and a little more helpful?

(03:31):

We're not here to just kind of freak you out with all of these terrible statistics, but we want to inspire some confidence. We want you to really trust in your understanding of this concept and feel motivated in improving it across your organization. We've been speaking to a lot of engineers about this from all sizes of organizations, and based on those conversations we realized that it kind of boils down to three major facets: first, the health of your product; second, the happiness of your customers; and third, and this is the aspect that we feel is very underrepresented in these conversations, the sociotechnical resilience of your team. These may sound just as nebulous conceptually as reliability itself, but we promise that by the end of this talk you'll be able to confidently measure and work towards these goals. So let's think about the real world, the world outside of tech, for a little bit. Everybody has lots of services and products that they rely on, that they depend on, but what's going into that decision? So think about taking a flight somewhere.

(04:44):

When you book a ticket with an airline, you have a lot of assumptions that you trust in that give you confidence that you'll be able to get to your destination on time. Let's take a look at our three major buckets and where these different assumptions fall into them. First off, you assume that the airline and you are aligned on this idea that they understand you have certain needs. That you're entitled to a certain level of service by purchasing your ticket and that they're working their utmost best to meet those needs.

(05:18):

You also assume that all the systems are working properly, that the ticket you bought is actually for the correct flight, that it actually will get you a seat on that airplane. You also assume that all of the systems in place to keep the airplane properly fueled, properly stocked, up to date with the latest hardware and software are all functioning smoothly and that there won't be any gaps. Also, when you're thinking about the way that they stock and prepare the airplane. You're assuming that they're working with your wants and needs in mind. That they're prioritizing the customer happiness when making those decisions. But there's also a lot of sociotechnical assumptions that you make that you may not even be aware of. Namely, you assume that the pilot knows how to fly. It goes without saying that you would make this assumption, but it's actually a very critical point that you assume the systems in place to train and license pilots are functioning correctly.

(06:17):

You assume that the crew will show up on time and that the airport is properly staffed. That the crew is ready to do their job to their best, that they're in good spirits, that they're cooperative. That they're able to make these choices that will help you, and that even all across the airport, the staff that is handling your bags, checking you in, working you through customs and security - that all these people are properly trained, that they know what they're doing and that they're going to be able to get you to your flight on time. It's also very illustrative to think in the negative, although maybe this can be a little triggering for people who maybe had some holiday plans go awry last Christmas. Over the holiday season all sorts of airlines had flights rerouted, delayed, outright canceled. These were exacerbated by a number of factors such as heightened demand and terrible weather. You can see on this terrifying chart that the number of flights last holiday season that were canceled vastly outpaces most in the last decade.

(07:36):

When we think of what went wrong, it's also very helpful to sort them back into these buckets. So certain things are sort of out of your control and negatively impact the health of your overall system and product, such as the bad weather or the high demand. We can think of these in terms of the demands that the customer is inducing and just sort of external demands that are kind of outside of anyone's control. As a result the systems hit their limit, they weren't designed to take this much strain. Flights start being canceled. The guarantee that you have that a ticket purchased is equivalent to a seat on the plane no longer applies. Customers feel that they're dealing with poor communication, they're not proactively warned about these risks. They're not managed properly in terms of getting them onto alternative flights. The messaging can be inconsistent. There were so many horror stories of people being told, "Oh, your flight is back on. Oh, it's delayed again. Oh, it's canceled. Show up. Don't show up." All over the board.

(08:44):

Then also in the sociotechnical resilience bucket, we see a lot of problems emerge as a result of these other systems strained to breaking. So for example, when these systems go down and the automatic booking systems can't handle this number of requests, a lot of airline staff were needed to manually reorganize customers into new flights, which is something that people aren't really trained on. It's a crisis scenario that they had never fully anticipated, and these people weren't ready to take on all of this manual work. In general, the staff was just not trained for what to do in this sort of crisis situation. The communication was lacking because there wasn't a system put in place for the staff to communicate with each other across teams, leading to this inconsistent messaging. In general, airlines were often understaffed as a result of covid and a lot of fluctuating markets around tourism. Many airports were just simply not able to accommodate the amount of traffic they got over the holiday season, and this understaffing just exacerbated every other problem.

(09:54):

When you think about this three-pronged approach to reliability, you'll find that it comes up anywhere and everywhere. When you think about a company that you complain about, a service that you're unhappy with, or conversely a service that you are extremely happy with and feel like you can really trust and depend on, you'll find that the factors motivating these feelings can be sorted into these three areas. So think about poor cell phone service. Yeah, there's aspects of maybe the network is spotty, you don't get good connection in certain buildings. Maybe there's aspects of obviously they're thinking more about the bottom line than the customer happiness. They're not reinvesting based on customer priorities. They're not making the effort to keep you on as a customer. But then a lot of the problems also emerge in the sociotechnical space, where maybe the support staff that you reach out to when you're having a problem isn't properly trained. Or if there's a larger outage, there's inconsistent messaging around when things will be fixed because the staff isn't so unified and there isn't consistent messaging internally.

(11:03):

Same with a car. When you try to make a purchase of a reliable car, you're not just thinking about does this get good mileage? Is it robust? Does it have a lot of breakdowns? You're not just thinking about, does this company seem to understand the needs of the consumer? Are they innovative? But you're also thinking about how easy is this to fix? Are there mechanics I would trust to fix this car? Are there good service stations provided by the dealership? Same if you live in an apartment building and say an elevator goes down. Well, sure, that's a system failure. But you want to trust that the apartment building is investing in your happiness and understands the urgency and need to get this fixed as soon as possible, and that the people that they've contracted to come out and fix it are trained, are technically competent, and will know how to resolve it. So now that we've kind of seen how ubiquitous this is across all sorts of industries and services, let's zoom back in on tech.

(12:08):

Let's think about how do we apply this sort of thinking to software development and software operations, where often this area of sociotechnical resilience gets overlooked. So this will be a very simplified example, and obviously in the real world the numbers don't pan out quite so easily and there's a lot more nuance involved. But hopefully it can be illustrative of how this line of thinking can make you reevaluate where your priorities are. So we have three services, A, B, and C, and we're going to consider them under these three factors: their system health, their user expectations and their sociotechnical resilience. So service A, it's in quite good health. It rarely experiences outages, it's able to deliver fast consistent results. Service B, it's okay. Sometimes it runs a little slowly, sometimes it goes down, but generally speaking it delivers what people are expecting. Service C though, not so good. We've had a lot of outages with it recently. Sometimes it goes down completely overnight and generally speaking, it's not going to deliver anything that people request from it.

(13:26):

So let's think first, if you had some time to run experiments on this, some time to develop resources to help facilitate problem solving with this, if you had some time to refactor code bases. Obviously you're looking at service C first, right? That's the one that seems to be screaming out for the most attention. But let's contextualize this a little bit with the user's expectations for each service. With service A, it's fairly popular, maybe about half of your users make use of it when they're experiencing your products. It's something like an auxiliary feature that provides more options when they're surfing through your e-commerce catalog, let's say. Not something that's integral to the actual usage of the server, but a nice add-on. Service B, that one is something everyone uses. Let's say it's your login services. There's no way that a user is going to be able to make use of your offerings if they're not able to log in. Service C, let's say that that's actually in very low usage. Maybe less than 5% of your customers make use of it on any sort of regular basis.

(14:43):

It could be in this e-commerce example, some sort of feedback system where they can live chat with someone on the site and it's just not a very popular feature. So now you're looking at it again and you're thinking service B now kind of sticks out like a sore thumb. There's a lot of demand for it. There's a lot of expectations that it'll work consistently, and right now its health is only okay, we should be able to get that up to something in the green. But there's one more factor to consider, and that's the sociotechnical resilience of your teams. Let's say that service A is actually quite new, it's a new feature that's been introduced. So far it's been getting medium popularity, people are kind of playing around with these expanded search features. But more crucially, the internal operations teams that would be tasked with responding to outages for it or other incidents haven't had very much experience at all.

(15:47):

This is a new type of feature that uses a whole new code base and operates quite differently than anything else you've stood up before. So when something goes wrong, there could easily be a lot more panicking. It's a lot more reactive. They're scrambling to try to figure out what's wrong, how do we fix it? None of the things they do for other services might apply. There aren't things set up already like run books or guidance or past incidents to draw on. Whereas service B and service C, they kind of fall in line with existing patterns. They've been around for a while, they've had their fair share of outages and engineers feel generally quite comfortable in dealing with any problems that arise. So now you're rethinking things again and you're thinking, well, maybe service A is actually the one we should proactively focus on. Let's say you're running a chaos engineering experiment. You're going to simulate the outages of certain features and just see how well your team can react. Well, with service B and service C, you've already gotten quite a lot of data from previous actual outages or previous experiments.

(16:54):

There's already a lot of training behind those services. You may not actually need to worry about putting additional effort into strengthening those services. Whereas service A, any sort of outage, any sort of incident could be quite severe if the teams aren't prepared and motivated and confident to resolve it. So this isn't to say for sure work on service A, any more than it is to say service B or service C. But rather, you should just keep all of these things in consideration when you're making these strategic choices. All three buckets should weigh equally in order to make your choice. So let's break it down a little bit further. What exactly lives inside of each of these three buckets in a technical context? Let's start with product health. This is generally the easiest one, it's the one that most organizations are already set up to monitor and capture. This is all of the things that you get from observability of your code, of telemetry, of monitoring tools. How stable is your code base? All of the data that's flowing through your system that you're able to skim and observe and capture.

(18:09):

Whether that's embedded in the actual code, whether that's probing your code in an actual production environment, using tools to simulate usage and gaining statistics from that. Your four golden signals of latency, error rate, traffic and saturation. This is stuff that generally is familiar to most engineers. It's the stuff that's easiest to quantify, easiest to capture and easiest to work from. Customer happiness is a little more complex, but more and more organizations are setting themselves up to be able to monitor this. To be able to get good qualitative and quantitative data on how customers are enjoying the service, what their expectations are from it, and what areas you are exceeding or lacking in. So generally speaking, how happy are your customers and what does the user experience look like? What steps does each typical group of users take? What buttons are they clicking? You can go as discrete as that, and at what stages is it most crucial for it to work consistently? And what stages are more extraneous? If they fail occasionally or they run slowly, the user isn't going to be too upset.

(19:29):

So yeah, what is most important to them and what are their expectations of each of those steps? Overall, does the customer feel confident that they can trust in your product and your business? That they can start relying on the service day-to-day? Does the customer feel supported and informed? If things go wrong, do you have good outreach to these stakeholders? Do they have confidence that you've learned from each outage and that there's not going to be repeat incidents? Do your customers feel connected to you? Are there channels through which they can provide feedback and have questions answered? Finally, let's look at the sociotechnical resilience bucket. What sort of questions can we ask to start measuring this? So we can think about, when an incident happens, how effective is our team in dealing with it? Are there a lot of tedious manual steps? Is there a lot of time wasted in coordination and getting your feet under you and figuring out what to do? Is there clear service ownership? That when something goes wrong, your team immediately knows who ought to be called and how things should escalate?

(20:39):

Who's the subject matter expert that's going to have the best chance at resolving? Are teams aligned on their priorities and responsibilities? Do you have a consistent definition of severity? Do you have a consistent understanding of how severity is defined and how novel incidents can be sorted and be given an appropriate triage and response? Are on-call loads balanced? Do you have certain teams that are super overworked and burnt out? Is there a more fair way to distribute services and time slots such that nobody is carrying by far the hardest shifts, the biggest loads? Are people burnt out? Can you get a sense of how much your engineers are actually working reactively on incidents? How much is that unplanned work adding on to their planned work? Are you accounting for that? Are people equipped with the tools and knowledge they need? This is a very important one. If a given service breaks, is there something that's been proactively prepared to teach people how to fix it? Are there guides? Are there run books?

(21:45):

Are there examples they can look at? Do they know who to contact? Have they learned from other people? Or is there still a lot of siloed tribal knowledge that hasn't been circulated? Does your team still function if somebody is suddenly away? Is there somebody that is relied upon heavily for a given type of incident that needs to share that knowledge and experience for the benefit of the whole team? So we can see this definition is quite all-encompassing, it's quite holistic, and there's a lot of value in that, I think, inherently: there's never going to be something that slips under your radar. But even more important than the specific definition, I think, is to just have a definition that your entire organization can agree on. There are some benefits to that. First off, this alignment. As we talked about before, it can be very troublesome when teams have conflicting priorities, when they don't understand severity consistently.

(22:44):

So just having a singular definition that can serve as the foundation for all of this questioning and all of this definition just gets everyone on the same page, and you know that you're going to be moving in the same direction. It motivates impactful changes by encompassing the health of your product, the happiness of your customers and the readiness, confidence and happiness of your engineers. With this one definition, you know that if you're able to satisfy it, you're accounting for all of those three things, and really, what could be more important than those three pillars of your organization? Finally, it prioritizes where changes are most needed. So as we looked at in the technical example, analyzing your services through this three-bucketed approach can really highlight areas where improvement is most needed, and it can highlight them in a way that you know represents this greater organizational change. So it allows you to pick out services that require additional run books, that require additional training, that require perhaps chaos engineering experiments to test their failures. It kind of steers you on that right path consistently.

(24:06):

So we've mentioned before that one of the goals of any good definition of reliability is that it's something that should be measurable. You should be able to track it over time, you should be able to see how your changes are affecting it and know that you're heading in the right direction. So one way that you can start to measure these things, moving from kind of a qualitative quantity to something that actually is a trackable number on paper is by asking yourself some questions. So think about each type of incident. When you're working through the different steps of resolving that incident which ones are manual? Which ones actually take engineers devoting time that could be better spent elsewhere? What's toilsome? What's tedious and repetitive? What are their sources and where do they manifest most? You can actually kind of come up with a number for each type of incident. An outage to this server typically requires 10 minutes of this toilsome overhead.

(25:08):

How many incident hours has each engineer spent on call? This is different than just how many shifts do they have or how many hours do they have on call per shift. It's actually looking at how much time is spent nose down to the keyboard doing the active and often stressful work of resolving incidents. This will give you a better sense of your on-call load and balance. How much time has your team spent fixing each service? This is something that we talked about in the technical example. Have they had a lot of experience with this type of service when it goes down? Do they have processes in place from those past experiences? Are there examples that they can look at from previous incidents or is this something brand new to them? As you ask these questions and kind of find numbers that answer them, solutions will start to naturally appear. The goal isn't some sort of magic number that can just tell you everything, but that numbers can start motivating the story. They can start pointing out trends and outliers that can then lead to the more meaningful discussions and improvements.

(26:17):

So for example, you might notice for this type of service there's a lot of toil; we should start trying to automate some of these parts. We should maybe write some scripts or some more specific or automated run books to move people through this without so much toil. You might go, oh geez, this one team is really always busy and we should maybe get some more people on it. This type of service is frequently having very severe outages and maybe the best solution for that right now is just to support the on-call team we have as we think about more long-term strategy. You might go, oh geez, this type of service hasn't really gone down much before, and that may sound great. But it just should make you a little concerned that when an outage inevitably occurs, the team might not be as ready for it as you'd hope. So we should proactively practice. We should run some chaos engineering drills, simulate a few outages and see how quickly we can get through them. So as you can see, it's not that the number alone tells the full story.

(27:20):

But the numbers can start motivating these conversations to give you good strategic alignment. So in conclusion, our new reliability is taking your system's health, everything that lives within your servers and your code base, contextualizing that based on your user's expectations to figure out what that acceptable level of health is, what is meeting their needs, what is keeping them happy, and then all of that prioritized based on your engineer's sociotechnical resilience. What areas are they most equipped to deal with and what areas are still leaving them feeling a little shook? A little under-prepared? We hope that this gives you a little bit of clarity on what improving reliability really means. That it's not just about refactoring code bases, it's not just about polling customers. But it's about making sure each individual on your incident response teams is ready to go no matter what goes wrong. Thank you so much for coming to my talk. These are some of the sites that I cited for some of the examples that motivated this. This topic is just endlessly exciting to me.

(28:39):

It feels like an area where we're only scratching the surface in terms of the most effective, holistic, and empathetic ways of thinking. I'm always excited to talk about these topics. You can find me on Twitter @EmilyArnott8, or reach out to me through Blameless. Thank you so much and have a wonderful day.