Troubleshooting Azure Functions cold start issues

Azure Functions is a serverless computing service on Microsoft Azure that lets you build and deploy event-driven applications and services. With a serverless approach, you don’t have to manage infrastructure or servers, so Azure Functions can save you time, money, and resources.

However, serverless platforms across cloud providers share one major drawback: cold starts, which can add significant latency to a function’s response.

In this article, we’ll look at the conditions that cause cold starts for serverless functions and discuss how to detect and resolve them.

Eliminating cold starts in Azure Functions

“Cold start” refers to the time it takes for an instance to spin up from a shut-down state. Serverless functions are event-driven: an instance runs when its trigger fires and shuts down when the function is idle, then spins up again the next time the trigger fires.

A lengthy cold start extends the function’s execution time, which increases response latency. Long cold starts make it difficult or impossible to use applications or services that expect fast API responses, like webhooks.

Identifying cold start conditions in Azure Functions

Installing API tracing tools in your serverless functions enables you to monitor cold start times. Azure Monitor is an all-inclusive tool that collects, monitors, and visualizes all the metrics you need to identify cold starts in Azure Functions. Azure Monitor’s Application Insights feature provides an application performance management (APM) service that you can install in your Azure Function App.

The Application Insights feature is enabled by default when you create a Function App, as shown below:

Fig. 1: Creating a Function App

Once a function has been deployed, Invocation Traces can tell you when your function experiences a cold start. Click the Monitor sidebar item within the dashboard page to view the traces of your function invocations.

The image below shows the invocation trace for a boilerplate function using the HTTP trigger. As you can see, the first API request took 342 ms, while the following invocations took 5 to 9 ms because the instance was already running:

Fig. 2: Invocation traces

Through Application Insights, you can view more details of the function invocations and set up alerts for when the invocation duration crosses the specified threshold.

Click Run Query in Application Insights to retrieve more details about the invocations using the Kusto Query Language (KQL):

Fig. 3: Running a query in Application Insights

Paste the following code into the query editor to retrieve specific details about the function’s invocation over the past day:


requests | project timestamp, id, operation_Name, resultCode, duration, cloud_RoleName
| where timestamp > ago(1d)
| where cloud_RoleName =~ 'function-app-cold-start' and operation_Name =~ 'HttpTrigger1'
| order by timestamp desc
| take 20

Here are the query results, showing durations from 4.5512 up to 341.9981 ms:

Fig. 4: Query results

The duration of 341.9981 ms likely indicates a cold start.

Azure hosting plans to reduce cold start times

Azure offers three hosting plans, two of which have features to help you reduce cold start times. The Premium plan “prewarms” instances to ensure your functions are always ready. The Dedicated plan has an always-on configuration option to run your functions continuously.

Azure lets you change your hosting plan via the Azure command-line interface (CLI) or the Azure portal.

Change your hosting plan via the Azure CLI

First, let’s look at how to change your function’s hosting plan via the Azure CLI using az functionapp plan create and az functionapp update.

Launch the terminal and execute the following command to create a Premium (EP1) hosting plan:


az functionapp plan create --location eastus --name PremiumPlan --number-of-workers 1 --resource-group function-app-cold-start_group --sku EP1

Here is the JSON response from this command. The SKU object outlined in red contains information about the plan, like capacity, name, and tier:

Fig. 5: The JSON response from creating a Premium service plan

Next, execute the following command (replacing --name and --resource-group with your function details) to update the function to use the Premium service plan you just created:


az functionapp update --name function-app-cold-start --resource-group function-app-cold-start_group --plan PremiumPlan

And here’s the JSON response from updating your function app to use the Premium plan:

Fig. 6: The JSON response from updating the Function App to use the Premium plan

Change your hosting plan via the Azure portal

Open your web browser and go to the Function App dashboard in the Azure web portal.

Click Change App Service Plan on the side navigation menu of the Function App.

Click the Plan Type dropdown menu and select Function Premium from the list:

Fig. 7: Changing the app service plan

Next, click App Service Plan and select an existing Premium plan from the dropdown list or create a new plan.

The screengrab below shows the App Service Plan dropdown with the PremiumPlan option that you created with the Azure CLI:

Fig. 8: Changing the app service plan again

Finally, click OK to save and upgrade the function.

How dependencies affect cold starts

During a cold start, Azure must download the deployment package to the instance executing the function. The download time depends on the package size, which is largely determined by the number of dependencies the function has. To limit cold start time, keep the deployment package as small as possible.

Optimizing dependencies

One way to keep your dependencies small is to use the built-in modules available to your function’s runtime. For example, the Node.js runtime provides the built-in https module for establishing a secure connection to an external service over HTTPS, rather than installing another dependency like Axios for the same purpose.
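As a sketch of that approach, here is how a function might fetch a resource with the built-in https module instead of Axios. The httpsGet helper and the URL it is called with are illustrative, not part of any Azure SDK:

```typescript
import * as https from "https";

// Minimal sketch: fetch a resource over HTTPS with Node's built-in https
// module, avoiding an extra dependency like Axios in the deployment package.
function httpsGet(url: string): Promise<string> {
  return new Promise((resolve, reject) => {
    https
      .get(url, (res) => {
        let body = "";
        res.on("data", (chunk) => (body += chunk)); // accumulate response chunks
        res.on("end", () => resolve(body)); // resolve with the full body
      })
      .on("error", reject); // surface network errors to the caller
  });
}
```

A function handler would then simply `await httpsGet("https://...")` where it previously called Axios.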

The Application Insights SDK for .NET also ships with the DependencyTrackingTelemetryModule. This module is part of the Microsoft.ApplicationInsights.DependencyCollector NuGet package, which collects metrics on your dependencies so you can monitor them via Application Insights. Static code analysis tools, such as NDepend, can also display your dependencies in a graph so you can inspect and refactor your code to remove unused dependencies.

Leveraging Dependency Injection

Dependency injection (DI) is a programming design pattern that allows you to write highly reusable and maintainable code components. Although Azure Functions has first-class support for dependency injection in the .NET runtime, built on ASP.NET Core’s dependency injection features, you can imitate DI in other runtimes using similar patterns.

Azure Functions offers three service lifetimes that define how the service objects created by the DI container will behave. These service lifetimes are scoped, transient, and singleton:

  • With the scoped service lifetime, the service’s lifetime matches a single function execution; the container creates a new service object for each invocation.
  • With the transient service lifetime, the container creates a new service object every time the service is resolved.
  • With the singleton service lifetime, the container creates the service object once and reuses it across function invocations for the lifetime of the host.

Because a singleton service, such as a database connection, is created once and then reused across invocations, it avoids repeated setup work like reconnecting. So, to reduce the impact of cold starts while using DI, opt for the singleton service lifetime for expensive-to-create services.
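Outside .NET, you can imitate the singleton lifetime with module-level state, since a warm instance keeps module scope alive between invocations. Here is a minimal TypeScript sketch, where ExpensiveClient is a hypothetical stand-in for something like a database client:

```typescript
// Sketch: imitating a singleton service lifetime in a Node.js function.
// Module-level state survives between invocations on a warm instance, so the
// client is created once per host rather than once per request.
class ExpensiveClient {
  static instancesCreated = 0;
  constructor() {
    ExpensiveClient.instancesCreated++; // e.g. open a connection pool here
  }
  query(sql: string): string {
    return `result of ${sql}`;
  }
}

// Created at most once per function host (module scope).
let singletonClient: ExpensiveClient | null = null;

function getClient(): ExpensiveClient {
  if (singletonClient === null) {
    singletonClient = new ExpensiveClient();
  }
  return singletonClient;
}

// Simulated function handler: every invocation reuses the same client.
function handler(): string {
  return getClient().query("SELECT 1");
}
```

Calling handler repeatedly creates only one ExpensiveClient, mirroring what the singleton lifetime gives you in the .NET DI container.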

Function code

Cold-start optimization aims to speed up execution startup by reducing the operations the code performs at startup and, if possible, finding operations that can be reused across multiple executions.

Having unused modules, or modules that are only used when certain conditions are met, increases the time it takes to start your function. By using the lazy evaluation technique, you can optimize the code to only perform certain operations when needed or to load only the specific parts of the modules that are needed. In .NET, you can perform lazy evaluation using the lazy initialization feature.
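As an illustration of lazy initialization outside .NET, here is a minimal wrapper in TypeScript, analogous in spirit to .NET’s Lazy&lt;T&gt;; the Lazy class and heavyConfig value are illustrative:

```typescript
// Sketch: lazy initialization. The expensive value is computed on first
// access, not at startup, so it adds nothing to the cold start path.
class Lazy<T> {
  private value?: T;
  private initialized = false;
  constructor(private factory: () => T) {}
  get(): T {
    if (!this.initialized) {
      this.value = this.factory(); // runs only on first access
      this.initialized = true;
    }
    return this.value as T;
  }
}

let factoryCalls = 0;
const heavyConfig = new Lazy(() => {
  factoryCalls++; // e.g. parse a large file or build a lookup table here
  return { retries: 3 };
});
// Nothing runs at startup; the factory executes on the first get() only.
```

Invocations that never touch heavyConfig pay no cost for it, and those that do pay it once per host rather than at every cold start.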

When writing the code for your functions, it’s a good idea to split operations across multiple functions to keep them lightweight. With Azure Durable Functions, you can take advantage of various chaining patterns when executing related functions.
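To illustrate the chaining pattern itself (not the actual durable-functions API, which uses an orchestrator and context.df.callActivity), here is a plain async pipeline in TypeScript where each small function’s output feeds the next; the activity names are hypothetical:

```typescript
// Sketch: the function-chaining pattern behind a Durable Functions
// orchestration, imitated with a plain async pipeline.
type Activity = (input: number) => Promise<number>;

// Two small, single-purpose "activities" (hypothetical examples).
const double: Activity = async (n) => n * 2;
const addTen: Activity = async (n) => n + 10;

// Run the activities in sequence, feeding each result to the next step.
async function chain(input: number, steps: Activity[]): Promise<number> {
  let result = input;
  for (const step of steps) {
    result = await step(result);
  }
  return result;
}
```

In a real orchestration, each step would be a separate deployed activity function, keeping every unit lightweight.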

Conclusion

This article gave you an overview of cold start issues in Azure Functions.

First, we looked at the conditions that cause cold starts in serverless functions and how to detect them in Azure Functions using Application Insights.

Next, we looked at how to eliminate or shorten cold starts. Switching to Azure’s Premium or Dedicated hosting plans can reduce or eliminate cold starts, while optimizing dependencies, keeping the deployment package as small as possible, and using the built-in modules available to your function’s runtime all shorten cold start times.

By taking these steps, you can significantly reduce cold start times and keep the performance of your serverless Azure Functions optimal.
