Wednesday, June 11, 2025

When the code works but has no tests: now what?

It has probably happened to you at some point. You're given access to the repository of a new project. You open it with curiosity, expecting to find a well-organized structure, maybe even a tests folder. You start going through the files and, to your surprise, there isn't a single test. No unit tests. No integration tests. Not even one forgotten test in some corner of the codebase. Nothing. The curious part is that the system has been in production for three years. It works. It's used every day. And yet everyone on the team is afraid of it. Nobody wants to touch anything, because the smallest change could break something important, and nobody would know for sure what it was.

Faced with this scenario, the question is inevitable: where do I start? How can I start testing without losing my mind or breaking what already works? This doubt isn't limited to developers who just joined a project. Many companies face this dilemma at some point in their technological life. And although every context has its particularities, the underlying answer is usually the same: if you can't test, it's probably because the code wasn't designed to be testable.

This is more common than it seems. When a system grows without a clear testing strategy, it tends to accumulate what is known as accidental complexity. There's too much coupling between classes and modules, little internal cohesion, too many responsibilities mixed into the same components. The code becomes rigid, hard to break apart, and almost impossible to test in small units. And here comes an uncomfortable truth: there are no magic shortcuts that let you do testing if the system's design doesn't allow it. You can try, of course, but it's like trying to measure temperature with a ruler: it simply won't work.

So what can you do in these cases? The best way to start is with end-to-end (E2E) tests. These tests focus on validating that the system behaves correctly from the user's point of view, regardless of how it's built internally. Think of them as a black box: you don't need to know how the classes are organized, which methods get called, or which libraries the system uses. All that matters is that, given certain inputs through the interface, the result is the expected one.

End-to-end tests give you a huge advantage at the start. Although they tend to be slower and more expensive to maintain, in this context they serve a key purpose: creating a safety net. With them you can detect whether a change broke something important. They give you the confidence to start refactoring, to move internal pieces of the system without fear of leaving it unusable.

Once you have that base of automated tests in place, the next step is to start improving the system's design. You don't need to rewrite everything from scratch. In fact, that would be a mistake. What you can do is apply small improvements: split responsibilities, separate logic into different classes, introduce interfaces, remove direct dependencies, apply design patterns where they make sense. These changes gradually make the code more modular, cleaner and, most importantly, more testable.
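One of those small improvements, introducing an interface to break a direct dependency, might look like this sketch (`InvoiceService` and its store are invented for illustration; the service used to build its own database connection and now receives an abstraction instead):

```python
from typing import Protocol

class InvoiceStore(Protocol):
    """Abstraction over persistence; production code binds a real database."""
    def save(self, invoice: dict) -> None: ...

class InvoiceService:
    def __init__(self, store: InvoiceStore):
        self.store = store  # dependency injected, not created internally

    def register(self, customer: str, amount: float) -> dict:
        if amount <= 0:
            raise ValueError("amount must be positive")
        invoice = {"customer": customer, "amount": amount}
        self.store.save(invoice)
        return invoice

# A fake store is now enough to exercise the service in isolation.
class InMemoryStore:
    def __init__(self):
        self.saved = []
    def save(self, invoice):
        self.saved.append(invoice)

store = InMemoryStore()
service = InvoiceService(store)
service.register("ACME", 100.0)
print(len(store.saved))  # 1
```

The behavior is unchanged, but the seam between the business rule and the database is now explicit, which is exactly what makes the next step (smaller, faster tests) possible.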

When you reach a point where you can isolate behaviors or components, you can start writing unit and integration tests. These kinds of tests are faster, cheaper to maintain, and give you almost immediate feedback. You no longer need to spin up the whole application to test something; running a few tests is enough to see whether everything is still in order. This is the moment when the project's culture truly starts to change: you go from being afraid to touch the code to having the confidence to improve it continuously.
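Once logic lives in small, isolated units, unit tests become trivial and run in milliseconds. A hypothetical example (`apply_discount` stands in for a piece of logic extracted during refactoring):

```python
import unittest

def apply_discount(total: float, loyalty_years: int) -> float:
    """5% off per loyalty year, capped at 25% (invented business rule)."""
    rate = min(loyalty_years * 0.05, 0.25)
    return round(total * (1 - rate), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_no_discount_for_new_customers(self):
        self.assertEqual(apply_discount(100.0, 0), 100.0)

    def test_discount_grows_with_loyalty(self):
        self.assertEqual(apply_discount(100.0, 2), 90.0)

    def test_discount_is_capped(self):
        self.assertEqual(apply_discount(100.0, 10), 75.0)

if __name__ == "__main__":
    # exit=False so the test run doesn't terminate the hosting process
    unittest.main(argv=["prog"], exit=False)
```

No server, no database, no deployed environment: just the rule and its expected outputs.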

In short, there's no more science to it than this: start where you can, build a safety net with end-to-end tests, refactor carefully toward a cleaner design, and then introduce smaller, more specific tests. It's a gradual path, but a very powerful one. It's not about testing to tick a box, but about transforming how you develop and maintain software.

So if you're facing a project with no tests and years of history behind it, don't get frustrated. You're not alone. Many of us have been there. Start small, be patient, and you'll see the system become more and more maintainable, and yourself more and more at ease working on it.

Monday, April 14, 2025

Week #2: Azure App Service

Requirements:

  • Create an Azure App Service Web App
  • Configure and implement diagnostics and logging
  • Deploy code and containerized solutions
  • Configure settings including Transport Layer Security (TLS), API settings, and service connections
  • Implement autoscaling
  • Configure deployment slots

I. Explore Azure App Service

1.1. Examine Azure App Service

🚀 What Is Azure App Service?

Azure App Service is a fully managed platform for building, deploying, and scaling web apps, RESTful APIs, and mobile backends. It supports multiple programming languages and frameworks, and runs on both Windows and Linux environments.


🔑 Key Features

  • Auto Scaling: Automatically adjusts resources based on demand, allowing you to scale up/down (change resource size) or out/in (change the number of instances) as needed.

  • Container Support: Deploy and run containerized web apps using images from Azure Container Registry or Docker Hub. Supports multi-container apps, Windows containers, and Docker Compose.

  • Continuous Integration/Deployment (CI/CD): Integrates with Azure DevOps, GitHub, Bitbucket, FTP, or local Git repositories for automated code deployment and synchronization.

  • Deployment Slots: Create multiple deployment environments (e.g., staging, production) to test changes before swapping them into production.

  • App Service on Linux: Host web apps natively on Linux using built-in images for languages like .NET Core, Java, Node.js, Python, and PHP, or deploy custom Linux containers.


🛡️ App Service Environment (ASE)

ASE provides a fully isolated and dedicated environment for securely running App Service apps at high scale. Unlike the shared infrastructure of standard App Service, ASE offers dedicated compute resources for a single customer, enhancing security and scalability.


1.2. Examine Azure App Service plans

🧩 What Is an Azure App Service Plan?

An App Service plan defines the set of compute resources for your web app to run. When you create an App Service plan in a specific region, Azure allocates a set of compute resources for that plan in that region. All apps assigned to that plan run on those resources.

Each plan specifies:

  • Operating System: Windows or Linux

  • Region: e.g., West US, East US

  • VM Instances: Number and size (Small, Medium, Large)

  • Pricing Tier: Free, Shared, Basic, Standard, Premium, PremiumV2, PremiumV3, Isolated, IsolatedV2


💰 Pricing Tiers Overview

App Service plans come in various tiers, each offering different features and capabilities:

  • Free & Shared: These are basic tiers that run apps on the same Azure VM as other customers' apps. They have CPU quotas and are intended for development and testing purposes.

  • Dedicated Compute (Basic, Standard, Premium, PremiumV2, PremiumV3): These tiers run apps on dedicated Azure VMs. Only apps within the same App Service plan share these resources. Higher tiers offer more VM instances for scaling.

  • Isolated (Isolated, IsolatedV2): These tiers run apps on dedicated Azure VMs within dedicated Azure Virtual Networks, providing network isolation and maximum scale-out capabilities.


🔁 Scaling and Resource Sharing

In the Free and Shared tiers, apps receive CPU minutes on a shared VM instance and cannot scale out. In other tiers:

  • Apps run on all VM instances configured in the App Service plan.

  • Multiple apps in the same plan share the same VM instances.

  • Features like diagnostic logs, backups, and WebJobs consume resources on these VMs.

The App Service plan acts as the scale unit for the apps. If the plan is set to run five VM instances, all apps in the plan run on all five instances. Autoscaling settings apply to all apps within the plan.


🔄 Adjusting Plans for App Needs

You can scale your App Service plan up or down at any time by changing its pricing tier. Consider isolating your app into a separate App Service plan if:

  • The app is resource-intensive.

  • You want to scale the app independently from others.

  • The app requires resources in a different geographical region.

This approach allows for better resource allocation and control over your apps.



1.3. Deploy to App Service

🚀 Deployment Methods in Azure App Service

Azure App Service offers both automated and manual deployment options to suit various development workflows.

🔄 Automated Deployment (Continuous Deployment)

Automated deployment allows for rapid and consistent delivery of updates with minimal user disruption. Azure supports continuous deployment from several sources:

  • Azure DevOps Services: Integrate your code repository, build processes, testing, and release pipelines to automatically deploy to Azure Web Apps.

  • GitHub: Connect your GitHub repository to Azure so that changes pushed to the production branch are automatically deployed.

  • Bitbucket: Similar to GitHub integration, enabling automated deployments from your Bitbucket repositories.

🛠️ Manual Deployment

For more control or simpler applications, manual deployment methods include:

  • Git: Configure your App Service with a Git URL to push code directly from your local repository.

  • Azure CLI: Use the az webapp up command to package and deploy your app, with the option to create a new App Service web app if needed.

  • Zip Deploy: Utilize tools like curl to upload a ZIP file containing your application files to App Service.

  • FTP/S: Employ traditional FTP or FTPS protocols to transfer your application files to Azure.
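As a sketch of the Zip Deploy option above, the packaging step can be done with the standard library; the upload then goes to the Kudu zip deploy endpoint (the one used by `az webapp deployment source config-zip`). The file names below are placeholders, and `<app-name>` stands for your app:

```python
import io
import zipfile

def package_app(files) -> bytes:
    """Zip a mapping of relative paths -> file contents, ready to upload."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, content in files.items():
            zf.writestr(name, content)
    return buf.getvalue()

payload = package_app({"index.html": b"<h1>Hello</h1>", "app.py": b"print('hi')"})

# Upload step (requires deployment credentials; shown for illustration only):
# import urllib.request
# req = urllib.request.Request(
#     "https://<app-name>.scm.azurewebsites.net/api/zipdeploy",
#     data=payload, method="POST")
# req.add_header("Authorization", "Basic <deployment-credentials>")
# urllib.request.urlopen(req)

print(zipfile.ZipFile(io.BytesIO(payload)).namelist())
```

In practice you would zip a folder from disk; building the archive in memory just keeps the sketch self-contained.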


🎯 Deployment Slots

Deployment slots are live apps with their own hostnames, allowing for staged deployments and testing before moving to production.

  • Staging Environment: Deploy your app to a staging slot to validate changes without affecting the production environment.

  • Swap Operation: Once validated, swap the staging slot with the production slot to seamlessly transition without downtime.

  • Continuous Deployment to Slots: Assign different branches (e.g., testing, QA, staging) to specific slots for streamlined testing and approvals.


🐳 Container Deployments

For applications packaged as containers, Azure App Service supports deploying custom containers:

  1. Build and Tag: Create and tag your container image, preferably avoiding the "latest" tag to ensure traceability.

  2. Push to Registry: Upload the tagged image to a container registry like Azure Container Registry.

  3. Deploy to Slot: Configure your App Service slot to pull the specific image tag, facilitating controlled rollouts and rollbacks.


🧩 Sidecar Containers

Azure App Service allows adding up to nine sidecar containers to your custom container app. These sidecars can provide auxiliary services such as monitoring, logging, or configuration without tightly coupling them to your main application container.


1.4. Explore authentication and authorization in App Service

🔐 Built-in Authentication and Authorization

Azure App Service offers integrated authentication and authorization features, often referred to as "Easy Auth." This allows you to secure your web apps, APIs, mobile backends, and Azure Functions with minimal or no code changes.


🌐 Supported Identity Providers

App Service supports federated identity, enabling integration with various third-party identity providers. The default providers include:

  • Microsoft Entra ID

  • Facebook

  • Google

  • X (formerly Twitter)

  • GitHub

  • Apple (preview)

Each provider has a specific sign-in endpoint, such as /.auth/login/aad for Microsoft Entra ID.


⚙️ How It Works

When authentication is enabled:

  • An authentication and authorization middleware component intercepts incoming HTTP requests before they reach your application.

  • This middleware handles user authentication, token validation, session management, and injects identity information into HTTP request headers.

  • The module operates separately from your application code and can be configured via Azure Resource Manager settings or configuration files.

On Linux and containerized environments, this module runs in a separate container, isolated from your application code.
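Once the middleware has authenticated a request, your code can read the injected identity headers. A minimal sketch of decoding the `X-MS-CLIENT-PRINCIPAL` header, which App Service populates with a base64-encoded JSON claims document (the sample value below is fabricated):

```python
import base64
import json

def decode_client_principal(header_value: str) -> dict:
    """Decode the base64 JSON claims document injected by Easy Auth."""
    return json.loads(base64.b64decode(header_value))

# Fabricated header value, shaped like what the middleware injects:
sample = base64.b64encode(json.dumps({
    "auth_typ": "aad",
    "claims": [{"typ": "name", "val": "Jane Doe"}],
}).encode()).decode()

principal = decode_client_principal(sample)
names = [c["val"] for c in principal["claims"] if c["typ"] == "name"]
print(names[0])  # Jane Doe
```

In a real app the header value comes from the incoming request, and you should treat it as trusted only when the request arrived through the App Service front end.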


🔄 Authentication Flows

There are two primary authentication flows:

  • Server-Directed Flow (Without Provider SDK): Your application delegates sign-in to App Service, which redirects users to the provider's login page. This is common for browser-based applications.

  • Client-Directed Flow (With Provider SDK): Your application handles sign-in using the provider's SDK, obtains an authentication token, and submits it to App Service for validation. This approach suits REST APIs, Azure Functions, JavaScript clients, and native mobile apps.


By leveraging Azure App Service's built-in authentication and authorization, you can streamline the process of securing your applications, reduce development overhead, and focus more on delivering core functionality.


1.5. Discover App Service networking features

🌐 Azure App Service Networking Overview

Azure App Service provides a range of networking features to control inbound and outbound traffic for your applications. These features vary based on the deployment type:

  • Multitenant App Service: Hosts applications in shared environments across various pricing tiers (Free, Shared, Basic, Standard, Premium, PremiumV2, PremiumV3).

  • App Service Environment (ASE): Offers a single-tenant, isolated environment for applications requiring enhanced security and scalability.


🔒 Inbound Networking Features

These features help manage and secure incoming traffic to your applications:

  • App-assigned address: Provides a dedicated IP address for your app, useful for IP-based SSL requirements.

  • Access restrictions: Allows you to define rules to permit or deny traffic based on IP addresses or address ranges.

  • Service endpoints: Enables secure access to Azure services by extending your virtual network's private address space.

  • Private endpoints: Assigns a private IP address from your virtual network to your app, ensuring traffic remains within the Azure backbone network.


🔁 Outbound Networking Features

These features control how your app communicates with external resources:

  • Hybrid Connections: Facilitates secure outbound connections from your app to on-premises resources or other networks.

  • Virtual Network Integration: Allows your app to access resources within an Azure Virtual Network, supporting scenarios like accessing databases or services in a VNet.

  • Gateway-required Virtual Network Integration: Connects your app to a virtual network using a VPN gateway, suitable for accessing resources in different regions or classic VNets.


📌 Deployment Considerations

  • Multitenant App Service: While offering cost-effective hosting, it requires additional configuration for advanced networking features.

  • App Service Environment (ASE): Provides enhanced network isolation and is ideal for applications with strict compliance or security requirements.



II. Configure web app settings

2.1. Configure application settings

🔧 What Are Application Settings?

In Azure App Service, application settings are essentially environment variables that your application can access at runtime. These settings allow you to:

  • Store configuration values like database connection strings, API keys, or feature flags.

  • Override settings defined in your application's code or configuration files, enabling different configurations for development, staging, and production environments.

  • Manage settings securely, as they are encrypted at rest.
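Because settings surface as environment variables at runtime, reading them from code is straightforward. A small sketch, with a safe default for local development (the setting name is hypothetical):

```python
import os
from typing import Optional

def get_setting(name: str, default: Optional[str] = None) -> Optional[str]:
    """Read an App Service application setting (an env var at runtime)."""
    return os.environ.get(name, default)

# Simulating a setting that would normally be defined in the portal:
os.environ["FEATURE_FLAG_BETA"] = "true"

print(get_setting("FEATURE_FLAG_BETA"))          # true
print(get_setting("MISSING_SETTING", "fallback"))  # fallback
```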


🛠️ How to Add or Edit Application Settings

You can manage application settings through the Azure Portal:

  1. Navigate to your App Service in the Azure Portal.

  2. Go to "Settings" > "Configuration".

  3. Under the "Application settings" tab, you can:

    • Add a new setting by clicking "+ New application setting".

    • Edit an existing setting by clicking the edit icon next to the setting.

    • Delete a setting by clicking the delete icon.

When adding or editing settings:

  • Names can include letters, numbers, periods (.), and underscores (_).

  • Values are strings and can include special characters, but be mindful of escaping characters as needed.

  • Slot setting: You can mark a setting as "slot-specific" if you want it to stick to a particular deployment slot (e.g., staging or production).

Remember to click "Save" after making changes. Note that updating application settings will cause your app to restart to apply the new configurations.


🧰 Advanced Editing

For bulk editing:

  • Click on "Advanced edit" in the "Application settings" tab.

  • You'll see a JSON representation of your settings:

    json
    [
      { "name": "Setting1", "value": "Value1", "slotSetting": false },
      { "name": "Setting2", "value": "Value2", "slotSetting": true }
    ]
  • Modify the JSON as needed and click "OK", then "Save" to apply changes.


🔗 Connection Strings

Connection strings can also be managed similarly:

  • Under the "Connection strings" tab, you can add, edit, or delete connection strings.

  • Each connection string has:

    • Name: Identifier for the connection string.

    • Value: The actual connection string.

    • Type: Specifies the type of database (e.g., SQLServer, MySQL, PostgreSQL).

    • Slot setting: Determines if the connection string is slot-specific.

At runtime, connection strings are available as environment variables with specific prefixes:

  • SQLCONNSTR_ for SQL Server

  • MYSQLCONNSTR_ for MySQL

  • SQLAZURECONNSTR_ for Azure SQL

  • POSTGRESQLCONNSTR_ for PostgreSQL

  • CUSTOMCONNSTR_ for custom types

For example, a connection string named MyDb of type SQLServer would be accessible via the environment variable SQLCONNSTR_MyDb.
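That prefix convention can be captured in a small helper, sketched here (the connection string value is fake):

```python
import os

# Environment-variable prefixes App Service uses per connection string type:
PREFIXES = {
    "SQLServer": "SQLCONNSTR_",
    "MySQL": "MYSQLCONNSTR_",
    "SQLAzure": "SQLAZURECONNSTR_",
    "PostgreSQL": "POSTGRESQLCONNSTR_",
    "Custom": "CUSTOMCONNSTR_",
}

def get_connection_string(name: str, db_type: str) -> str:
    """Resolve a connection string the way App Service exposes it at runtime."""
    var = PREFIXES[db_type] + name
    value = os.environ.get(var)
    if value is None:
        raise KeyError(f"connection string {var} not configured")
    return value

# Simulating what App Service would inject for a SQLServer string "MyDb":
os.environ["SQLCONNSTR_MyDb"] = "Server=tcp:example;Database=mydb"
print(get_connection_string("MyDb", "SQLServer"))
```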


📝 Tips and Best Practices

  • Use slot-specific settings for configurations that differ between deployment slots.

  • Avoid hardcoding sensitive information in your code; use application settings or Azure Key Vault references instead.

  • Be cautious with app restarts: Changing application settings causes the app to restart, which might lead to temporary downtime.


2.2. Configure general settings

🧱 Stack Settings

  • Runtime Stack & Version: Select the programming language (e.g., .NET, Node.js, Python) and its version that your app uses.

  • Startup Command: For Linux or custom container apps, specify a startup command or file if needed.


🖥️ Platform Settings

  • Platform Bitness: Choose between 32-bit or 64-bit architecture (applicable to Windows apps).

  • FTP State: Decide whether to allow only FTPS or disable FTP entirely.

  • HTTP Version: Set to 2.0 to enable HTTP/2 support, enhancing performance for HTTPS traffic.

  • Web Sockets: Enable if your app requires real-time communication (e.g., using SignalR or socket.io).

  • Always On: Keeps your app running continuously, preventing it from going idle after periods of inactivity. Essential for apps with continuous WebJobs or those triggered by CRON expressions.

  • ARR Affinity: Ensures that a user's session is consistently routed to the same app instance. Turn off for stateless applications.

  • HTTPS Only: Redirects all HTTP requests to HTTPS, ensuring secure communication.

  • Minimum TLS Version: Specify the lowest TLS version your app will accept, enhancing security.

🐞 Debugging

  • Remote Debugging: Enable this feature to debug your app remotely. Note that it automatically turns off after 48 hours to maintain security.


🔐 Client Certificates

  • Incoming Client Certificates: Require clients to present certificates for mutual TLS authentication, adding an extra layer of security.


✅ Key Takeaways

  • Adjusting these settings allows you to tailor your app's environment to meet specific requirements.

  • Some features, like "Always On," may necessitate upgrading to a higher pricing tier.

  • Proper configuration enhances your app's performance, security, and reliability.


2.3. Configure path mappings

🗂️ What Are Path Mappings in Azure App Service?

Path mappings let you define how your app handles virtual directories and handler mappings. This is useful for organizing content and managing how requests are routed.


📁 Virtual Directories

  • A virtual directory maps a URL path to a physical folder in your app’s file system.

  • This allows you to serve content from a specific folder when a user navigates to a particular path.

For example:

  • A virtual directory /media could map to a folder /site/wwwroot/media in your app.

This is helpful when your app needs to:

  • Separate content by folder (e.g., images, videos)

  • Serve static files directly



🔀 Handler Mappings

Handler mappings allow you to:

  • Associate file extensions (like .php, .py, or .custom) with specific executable paths or frameworks.

  • Customize how certain file types are processed by the app.

This is especially useful for:

  • Running apps in different programming languages (e.g., Python, PHP)

  • Integrating legacy systems


🔧 How to Configure Path Mappings

  1. Go to your App Service in the Azure portal.

  2. Navigate to Settings > Configuration > Path mappings.

  3. You can add:

    • Virtual applications and directories

    • Handler mappings

Each entry includes:

  • Virtual Path: The URL path

  • Physical Path: Folder on the file system

  • Permissions: Whether it’s an application or just content


Key Takeaways

  • Path mappings give you control over routing and content structure in your app.

  • They’re ideal for serving static files or configuring custom handlers.

  • Configuration is done through the Azure Portal, under the Path mappings section.



2.4. Enable diagnostic logging

🛠️ Types of Logs in Azure App Service

Azure App Service provides several logging options to help you monitor and troubleshoot your applications:

  1. Application Logging:

    • Purpose: Captures log messages generated by your application code.

    • Platforms: Windows and Linux.

    • Storage Options:

      • Filesystem: Temporary storage; logs are stored in the App Service file system.

      • Blob Storage: Persistent storage; logs are stored in Azure Storage blobs.

    • Log Levels: You can set the verbosity level—Error, Warning, Information, or Verbose—to control the amount of detail captured.

  2. Web Server Logging:

    • Purpose: Records raw HTTP request data in the W3C extended log file format.

    • Platforms: Windows.

    • Storage Options: App Service file system or Azure Storage blobs.

  3. Detailed Error Messages:

    • Purpose: Saves copies of the .html error pages generated when your application encounters HTTP errors (status code 400 or greater).

    • Platforms: Windows.

    • Storage: App Service file system.

  4. Failed Request Tracing:

    • Purpose: Provides detailed tracing information on failed requests, including a trace of the IIS components used to process the request and the time taken in each component.

    • Platforms: Windows.

    • Storage: App Service file system.

  5. Deployment Logging:

    • Purpose: Helps determine why a deployment failed.

    • Platforms: Windows and Linux.

    • Storage: App Service file system.


⚙️ How to Enable Logging

For Windows Apps:

  1. Navigate to your app in the Azure portal.

  2. Select App Service logs under the Monitoring section.

  3. Enable the desired logging options:

    • Application Logging (Filesystem): For temporary debugging; turns off after 12 hours.

    • Application Logging (Blob): For long-term logging; requires a blob storage container.

    • Web Server Logging: Choose between File System and Storage.

    • Detailed Error Messages and Failed Request Tracing: Toggle On as needed.

  4. Set the Level of detail for application logging.

  5. Click Save to apply the settings.

For Linux or Container Apps:

  1. Navigate to your app in the Azure portal.

  2. Select App Service logs under the Monitoring section.

  3. Enable Application Logging (File System).

  4. Specify the Quota (MB) and Retention Period (Days) for the logs.

  5. Click Save to apply the settings.


🧪 Adding Log Messages in Code

You can instrument your application code to send log messages:

  • ASP.NET Applications:

    • Use the System.Diagnostics.Trace class:

      System.Diagnostics.Trace.TraceError("An error occurred.");
  • ASP.NET Core Applications:

    • Utilize the Microsoft.Extensions.Logging.AzureAppServices logging provider.

  • Python Applications:

    • Use the OpenCensus package to send logs to the application diagnostics log.

📡 Streaming Logs

To view logs in real-time:

  1. Ensure the desired log types are enabled.

  2. In the Azure portal, navigate to your app.

  3. Select Log stream under the Monitoring section.

  4. View the live log output as your application runs.


📂 Accessing Log Files

  • For Filesystem Logs:

    • Access via the Kudu console at https://<app-name>.scm.azurewebsites.net/DebugConsole.

    • Navigate to the LogFiles directory to view logs.

  • For Blob Storage Logs:

    • Use Azure Storage Explorer or similar tools to access the blob container where logs are stored.


Key Takeaways

  • Azure App Service provides robust logging capabilities to help you monitor and troubleshoot your applications.

  • You can enable various types of logs depending on your needs and the platform your app is running on.

  • Instrumenting your code with appropriate logging statements enhances the observability of your application.

  • Real-time log streaming and access to historical logs facilitate effective debugging and monitoring.

For a more detailed walkthrough, see the full Microsoft Learn module: Enable diagnostic logging.



2.5. Configure security certificates

In Azure App Service, TLS/SSL certificates help secure your web applications by encrypting the data transferred between your app and clients. This module explains how to configure and manage these certificates.


📜 Types of Certificates in Azure App Service

  1. Private Certificates

    • Used to secure custom domain names (e.g., www.yourdomain.com).

    • Can be:

      • Purchased from Azure

      • Imported manually (from a Certificate Authority or your organization)

    • Stored securely in Azure's certificate store.

  2. Public Certificates

    • Used for outbound client authentication.

    • Public certificates are stored and referenced by your code when connecting to external services that require them.


🧩 How to Configure TLS/SSL Settings

  1. Go to your App Service in the Azure portal.

  2. Navigate to "TLS/SSL settings".

  3. You can manage:

    • Bindings: Link your custom domains to certificates.

    • Protocols: Specify minimum TLS version (recommended: TLS 1.2 or higher).

    • Certificates: Upload, create, or renew certificates.


📥 Upload or Import Certificates

  • Navigate to "Certificates (Private Key)" in the portal.

  • Click Upload Certificate, choose the .pfx file, and enter the password.

  • Azure stores it securely and makes it available to your app.


🔗 Bind a Certificate to a Custom Domain

To activate HTTPS on your custom domain:

  1. Go to TLS/SSL bindings.

  2. Select your domain.

  3. Choose the uploaded certificate.

  4. Save the changes.


Key Takeaways

  • TLS/SSL certificates encrypt traffic, securing communication with your app.

  • Use private certificates for custom domains and public certificates for outgoing connections.

  • Certificates can be uploaded, purchased, or automatically renewed through Azure.

  • Properly managing certificates is crucial for security and compliance.


III. Scale apps in Azure App Service

3.1. Examine scale out options

🚀 What Is Autoscaling?

Autoscaling in Azure App Service automatically adjusts the number of instances running your web app based on demand. This ensures optimal performance and cost-efficiency by scaling out (adding instances) during high load and scaling in (removing instances) when demand decreases.


📊 Factors Influencing Autoscaling

Autoscaling decisions are based on specific metrics and conditions:

  • CPU Usage: High CPU utilization can trigger scaling out to handle increased processing demands.

  • Memory Usage: If your app consumes a significant amount of memory, autoscaling can add instances to distribute the load.

  • HTTP Queue Length: A growing queue of HTTP requests indicates high traffic, prompting autoscaling to add instances for better throughput.

  • Custom Metrics: You can define custom metrics relevant to your application's performance to guide autoscaling decisions.


⚙️ Configuring Autoscaling

To set up autoscaling:

  1. Define Rules: Specify conditions under which scaling should occur, such as CPU usage exceeding 70%.

  2. Set Instance Limits: Determine the minimum and maximum number of instances to run.

  3. Choose Metrics: Select which metrics (CPU, memory, etc.) will trigger scaling actions.

  4. Schedule Scaling: Optionally, set schedules for scaling actions based on predictable usage patterns.


🧠 Best Practices

  • Monitor Performance: Regularly review your app's performance metrics to fine-tune autoscaling rules.

  • Avoid Over-Provisioning: Set realistic maximum instance counts to prevent unnecessary resource usage.

  • Combine Metrics: Use multiple metrics to make more informed scaling decisions.

  • Test Scaling Rules: Simulate load to ensure your autoscaling configuration responds appropriately.


3.2. Identify autoscale factors

⚙️ What Are Autoscale Conditions and Rules?

In Azure App Service, autoscale conditions and rules define when and how your web app should scale in or out, based on performance metrics or schedules.

📏 Key Elements of Autoscale Rules

  1. Metric

    • The performance indicator you're monitoring (e.g., CPU usage, memory, HTTP queue length).

  2. Threshold

    • The value that must be met or exceeded to trigger a scale action (e.g., CPU > 70%).

  3. Time Aggregation

    • Defines how data is aggregated (e.g., average, min, max) over a time window.

  4. Operator

    • The comparison used in the rule (e.g., greater than, less than).

  5. Duration

    • How long the condition must be true before scaling occurs.

  6. Action

    • What to do when the rule is triggered:

      • Scale out: add more instances

      • Scale in: remove instances

      • Change the instance count
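The elements above (metric, threshold, time aggregation, operator, duration window, action) can be modeled in a toy evaluator, purely for illustration:

```python
from statistics import mean

def evaluate_rule(samples, threshold, operator="greater", aggregation=mean):
    """Aggregate metric samples over the duration window, then compare
    against the threshold to decide whether the rule fires."""
    value = aggregation(samples)
    return value > threshold if operator == "greater" else value < threshold

# CPU samples (%) collected every minute over a 5-minute window:
cpu_window = [65, 72, 80, 78, 75]
if evaluate_rule(cpu_window, threshold=70):
    print("scale out: add 1 instance")
```

Aggregating over a window rather than reacting to a single sample is what keeps short spikes from triggering a scale action.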

🗓️ Scheduled-Based Scaling

You can also set recurring schedules:

  • Scale based on expected traffic patterns (e.g., more users during work hours).

  • Example: Scale to 5 instances from 8 AM to 6 PM, then scale down to 2 instances after hours.
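The schedule in that example can be sketched as a simple function of the time of day:

```python
from datetime import time

def scheduled_instance_count(now: time) -> int:
    """5 instances during business hours (8 AM to 6 PM), 2 otherwise."""
    business_hours = time(8, 0) <= now < time(18, 0)
    return 5 if business_hours else 2

print(scheduled_instance_count(time(10, 30)))  # 5
print(scheduled_instance_count(time(22, 0)))   # 2
```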


🧠 Best Practices

  • Combine metric-based and schedule-based rules for better control.

  • Avoid overly aggressive scaling—set reasonable thresholds and durations.

  • Always test and monitor how your app responds to scaling actions.


✅ Key Takeaways

  • Autoscale rules help maintain performance and manage costs.

  • Use conditions (metrics + thresholds) to create smart scaling rules.

  • Add schedules to handle predictable usage patterns.

  • Configure via the Azure portal, ARM templates, or Azure CLI.


3.3. Enable autoscale in App Service

🧭 Steps to Enable Autoscale in Azure Portal

  1. Go to your App Service in the Azure portal.

  2. Select “Scale out (App Service plan)” under the “Settings” section.

  3. Click “Custom autoscale” to start creating your autoscale configuration.


🛠️ How to Configure Autoscale

  • Set Target Resource: Choose the App Service or App Service Plan to autoscale.

  • Set Scale Conditions: Create rules based on:

    • Metrics (e.g., CPU usage > 70%)

    • Time schedule (e.g., scale out during work hours)

  • Set Instance Limits:

    • Minimum, maximum, and default instance counts.

  • Add Rule: Define conditions for scaling in and out using:

    • Metric name, operator, threshold, duration, direction (increase/decrease), and instance count.


📋 Example Rule

Condition:
If CPU usage > 70% for 10 minutes → increase instance count by 1

Reverse Rule:
If CPU usage < 30% for 10 minutes → decrease instance count by 1


Key Takeaways

  • Autoscale helps ensure high availability and cost control.

  • You can define metric-based and time-based scaling rules.

  • Azure portal provides an easy UI to configure everything.

  • It's important to test your autoscale settings under different loads.



3.4. Explore autoscale best practices

🔍 1. Understand Your App's Behavior

  • Analyze how your app uses CPU, memory, and handles traffic spikes.

  • Use Application Insights or Azure Monitor to identify typical usage patterns.


📈 2. Choose the Right Metrics

  • Use relevant metrics like:

    • CPU Percentage

    • Memory Usage

    • HTTP Queue Length

  • Avoid using just one metric—combine multiple metrics for smarter scaling decisions.


⏱️ 3. Set Appropriate Thresholds and Durations

  • Don’t react to short spikes.

  • Example: Instead of scaling on 1 minute of high CPU, use 5-10 minutes to avoid false positives.

  • Be conservative when scaling in to avoid dropping too many instances too quickly.


🕒 4. Use Schedule-Based Scaling for Predictable Loads

  • Set recurring schedules to scale up or down based on known usage times (e.g., business hours).

  • Combine schedules with metric-based rules for flexibility.


🚫 5. Avoid Aggressive Scaling

  • Don’t scale too fast or too often—this can lead to instability.

  • Set cooldown periods between scale actions to give time for effects to show.


🧪 6. Test and Monitor

  • Simulate load to test autoscale behavior before going live.

  • Continuously monitor performance and adjust rules as your app evolves.


🧠 Key Takeaways

  • Good autoscaling is proactive, data-driven, and well-tested.

  • Combine metrics and schedules.

  • Use Azure Monitor, alerts, and logging to stay informed.

  • Autoscale should enhance performance, not cause disruptions.


IV. Explore Azure App Service deployment slots






