3/20/21

Application Insights for Single Page Applications

    Application monitoring is a fundamental service that any application should have in order to understand its performance and health. When it comes to Single Page Applications (SPA) written with Angular, React, or similar frameworks, monitoring becomes crucial because the application runs in the user's browser. Ideally, we want to be able to log errors and track custom events to better understand the application performance without having to build an API that consumes the telemetry information.

ozkary-appinsights-integration

Application Insights

    Application Insights (AppInsights) is an Application Performance Management (APM) service that is part of the Azure Monitor services. It is used to monitor running applications. It is widely used across platforms like .NET, Node.js, and Python, as well as in client-side applications written with JavaScript frameworks like React, Angular, and others. It is also used by mobile applications to help manage crash reporting from all the devices on which the application runs.

    On the cloud, AppInsights provides data analytics tools that are used to understand user behavior and application performance. The analysis of the data leads to the continuous improvement of the application's usability and quality. In addition, visualization tools like Power BI can be used to tap into the data and create dashboards that provide insights to the DevOps and development teams, as well as to UX designers for usability cases.

How does it work on a SPA?

    At the application level, there is an NPM package that must be installed in the single page application in order to use the AppInsights services to track events, errors, and API calls. Those events are sent to the Azure Monitor service in the cloud via API calls. The API is provided by the monitor service on Azure, and the endpoint location is defined in the NPM package.

Note: AppInsights automatically tracks telemetry data like page views, request performance, user sessions and unhandled exceptions for crash reporting. For custom error and event handling, continue to read this article.  

    For the Azure Monitor service to resolve the tenant and AppInsights instance information, we need to get the instrumentation key from the Application Insights instance. This information can be found on the overview blade of the instance associated with your application. To create the instance, visit the resource that is hosting the SPA, click on the Application Insights link, and enable the integration. That should provision the AppInsights instance for the application.

Client configuration

    On the client application, open Visual Studio Code and start a terminal console. From the console, install the AppInsights NPM package by entering this command:

 

npm i --save @microsoft/applicationinsights-web

 

    Once the package is installed, we can add the instrumentation key to the application as an environment variable. The application environment file (.env) is the right place to add this key. This file uses a key=value format to set those variables. Most SPAs have an environment file to host application-level variables, which can be accessed by using the process object. If the project does not have this file, one can be created manually. For the variables to be loaded properly, the application needs to be restarted. Also, depending on the framework and version, like React or Angular, the NPM package env-cmd may need to be installed. This package handles the parsing and loading of the variables from the file when the application first loads.

Note: In newer versions of these frameworks, the build tooling loads the environment file automatically, so there is no need to add this NPM package.

Note: Depending on the framework, the variable names should follow a particular naming convention. For example, React requires that the variables be prefixed with REACT_APP_.

As an example, the instrumentation key should be set with the following format:

 

REACT_APP_INSIGHTSKEY=key-value-here

 

     The variable value can then be accessed from anywhere in the code by using the process object.

 

const key = process.env.REACT_APP_INSIGHTSKEY;

 

Build the logger

    After setting the configuration, we can now implement a service that centralizes the logging of errors and events. Usually, the logger or exception manager service of the application is the right place for that. This way, anywhere in the app, we can import the logger and use it to log errors and events to the cloud. The following is a simple example of how this service could be implemented using TypeScript.
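
Note: This is a minimal sketch; the file name logger.ts is illustrative, and it assumes the @microsoft/applicationinsights-web package and the REACT_APP_INSIGHTSKEY variable from the previous sections.

// logger.ts - centralizes the logging of errors and events
import { ApplicationInsights } from '@microsoft/applicationinsights-web';

// read the instrumentation key from the environment variables
const key = process.env.REACT_APP_INSIGHTSKEY;

// instantiate the service with the instrumentation key
const appInsights = new ApplicationInsights({
  config: { instrumentationKey: key }
});

// init bootstraps the service, so it can track page views and metrics automatically
export const init = (): void => {
  appInsights.loadAppInsights();
};

// error sends exceptions and custom error messages
export const error = (ex: Error): void => {
  appInsights.trackException({ exception: ex });
};

// track sends custom events; the generic data parameter supports
// primitive and complex types
export const track = <T>(name: string, data: T): void => {
  appInsights.trackEvent({ name }, { data });
};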

    After reviewing the code, we can notice the steps taken to initialize the service and what is exported for other components to call. The code first imports the ApplicationInsights dependency. It then instantiates the service with the instrumentation key, which is read from the environment variables. The service exports three functions: init, error, and track.

    The init function initializes, or bootstraps, the AppInsights service. This allows it to automatically track and send telemetry information like page visits and performance metrics. The error function is used to send exceptions and custom error messages. The track function sends custom events. It uses generics for the data parameter, so it can support events with both primitive and complex types.
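
As a usage sketch, the init function can be called once when the application starts; the entry file name below is an assumption based on a typical React project:

// index.tsx - bootstrap the logger once at application startup
import * as logger from './logger';

logger.init();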

How to log events

    We should now be ready to look at our application to figure out what we need to log.  For example, we could log handled errors and some important telemetry about our app. For telemetry, we can log the navigation, click events and other custom events that are important to track.

    Each component and/or service of the application should handle errors and custom events. The custom events should use a tag name and the measure or message that you want to track. It is important to define this information properly, so it can be shown in the analytics tools hosted on Azure.

Custom errors

    Custom errors are handled exceptions or unexpected results in the application. These errors are usually handled in a try/catch block or in some business rule validation. To log the error, just use the logger service and call the error function, passing the Error object as a parameter.
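
As a brief sketch, assuming the logger service from the previous section (saveProfile is a hypothetical operation):

import * as logger from './logger';

// hypothetical operation that may throw a handled error
const saveProfile = (name: string): void => {
  if (!name) throw new Error('Profile name is required');
};

try {
  saveProfile('');
} catch (err) {
  // pass the Error object to the logger service
  logger.error(err as Error);
}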

Custom events

    Custom events are business events that track application-critical messages which can be measured as a goal or conversion in the application. Some examples include customer registration, profile changes, and other measurable events that should be reported in the analytical tools. Use the track function of the logger service to log events. Send a tag, for grouping, and a JSON document with some information about the event.
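
A brief sketch, assuming the logger service from earlier; the tag name and payload are illustrative:

import * as logger from './logger';

// the tag groups the events; the JSON document carries the event details
logger.track('customer-registration', { plan: 'basic', source: 'landing-page' });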

Verifying the data

    After deploying the application and running some tests that trigger errors or custom events, verify that the messages are sent correctly. The first verification is to use the network tab of the browser console. We should see some traffic with the track request going to a visualstudio.com endpoint. We are looking for a 200 status, which indicates that the message made it home.

    Next, head over to the Azure Portal and open the Application Insights instance associated with the application. There are multiple tools to use there, so experiment and learn from them. A tool to query the data is available from the Logs link. This tool lists all the tables that are available. To query the custom events, use the customEvents table and query the data with a SQL-like language called Kusto (KQL). For example, a query like customEvents | take 10 returns a sample of the custom events.

ozkary-appinsights-logs


    For visual statistical analysis, we can use the Events tool. It presents the data in charts, which enables us to visualize the activity from the application.

Conclusion

    It is very valuable for our client-side apps to be able to report telemetry information to the cloud. This can enable teams to learn about the application performance and usability. Azure Application Insights provides a solution that enables all applications, built on different platforms, to easily integrate with the service. It is a complete solution because it provides the pipeline to send telemetry data from the applications as well as the analytical tools to view, monitor and learn from the telemetry data.

Thanks for reading.

Originally published by ozkary.com

2/20/21

Azure Static Web App GitHub Actions CICD

Static Web App (SWA) is a software as a service (SaaS) solution hosted on the Azure cloud which enables us to use a Content Delivery Network (CDN) to host single page applications (SPA) built with JavaScript frameworks like React, Angular, and others.

In addition to the provisioning of the SPA, a serverless function endpoint is created as part of the application. This enables the JavaScript code to call APIs within the same domain and avoid cross-domain problems. The route used by the client-side code is set to /api by default.
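
As a brief sketch of such a same-domain call (getMessage is a hypothetical serverless function name):

// calls the serverless function on the same domain, so no cross-domain setup is needed
async function loadMessage(): Promise<void> {
  const response = await fetch('/api/getMessage');
  const data = await response.json();
  console.log(data);
}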

While this SaaS solution manages the hosting concerns for our apps, we still need to manage the DevOps concerns of building and deploying the application. For that process, we can rely on GitHub Continuous Integration Continuous Delivery (CICD) pipelines to manage the build and deployment of our applications, thus making this architecture a fully automated turn-key solution for any static web application.

ozkary github cicd


What are we learning?

In this article, we look at automating the provisioning of the Azure resources to create a SWA using the Azure CLI. This way, we have a repeatable process to deploy other apps. We also link our SWA to a GitHub branch, so we can create a GitHub Action which builds and deploys our application the moment we merge a pull request into the branch.

To follow along, make sure to have all the requirements available on your workstation. This includes the Azure CLI, Visual Studio Code, and a GitHub repository with a sample project and a personal access token. The token is used to access the repository with the permissions to create the GitHub Action workflow file and API secrets.

https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token

https://docs.microsoft.com/en-us/cli/azure/install-azure-cli

To get started, we need to open our GitHub Repo with Visual Studio Code (VSC). We then need to open a terminal console right from VSC, so we can enter the CLI commands. When the terminal is ready, we can type the following command to make sure that Azure CLI is correctly installed.

 

az --version

 

Azure CLI Commands Reference:

https://docs.microsoft.com/en-us/cli/azure/

This command should return the current version of the Azure CLI. If it does not, review the Azure CLI installation on the Microsoft website. If the CLI is correctly installed, we can look at the following Bash script to understand what information we need to provide to create the SWA and link it to the GitHub Action.

Note: Bash scripts can run from a Linux terminal, or on Windows using WSL (Windows Subsystem for Linux).

Review the CLI script

This script prompts the user for information and then executes a series of commands to create the resources. In general, these are the commands and their purpose:

az login: Logs in to Azure.

az account show: Shows the current account for verification.

az account set: Sets the default subscription which hosts the resources to be created.

az group create: Creates a resource group with location information like East or West US.

az staticwebapp create: Creates the SWA and the GitHub Action that automates the deployment after a push is made to the provided branch.

az staticwebapp appsettings: Sets configuration settings for the application. These are server-side settings which can be used by the serverless functions that are created automatically with our SWA.
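
The following is a minimal sketch of what such a provisioning script could look like; the prompts, variable names, and the sample app setting are illustrative assumptions:

#!/bin/bash
# sketch of the SWA provisioning script; values are prompted from the user

read -p "Subscription id: " subscriptionId
read -p "Resource group name: " groupName
read -p "Location (e.g. eastus2): " location
read -p "App name: " appName
read -p "GitHub repository URL: " repoUrl
read -p "Branch name: " branchName
read -s -p "GitHub token: " token

# log in and set the default subscription
az login
az account show
az account set --subscription "$subscriptionId"

# create the resource group with the location information
az group create --name "$groupName" --location "$location"

# create the SWA and the GitHub Action on the provided branch
az staticwebapp create --name "$appName" --resource-group "$groupName" \
  --source "$repoUrl" --branch "$branchName" --token "$token" \
  --location "$location"

# set a server-side setting for the serverless functions (illustrative value)
az staticwebapp appsettings set --name "$appName" \
  --setting-names "MY_API_SETTING=add-value-here"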



After reviewing the code, we can download the gist and run it from Visual Studio Code. To run the Bash file in a terminal window, enter the following command:


bash filename


Enter the information as prompted. Once all the commands have executed, we should do a git pull from the repository branch. The pull downloads the CICD workflow file that was created when the SWA resource was provisioned.

Review the CICD workflow

Once the git pull is complete, we should notice in the Visual Studio Code explorer that a new folder (.github/workflows) with a YML file has been created. Open the file so we can review the commands which the workflow executes to build and deploy our code. From the YML file, we should see the workflow name, branch name, jobs, and the build and deploy configuration. In general, we should change the workflow name to something meaningful, like the branch or resource name. This name is visible on the GitHub Actions tab, so we want to be able to easily identify which action is associated with each resource.

On the Build and Deploy configuration section of the workflow, we want to review three settings:

app_location: This should be set to the location of the source code, which is usually the root folder. Change this to match the project location.

api_location: By default this uses the path /api, which is the route used to call the serverless functions. This should also be changed to match the project configuration.

app_artifact_location: This is set to the build output folder location. In React and Angular projects, this folder is usually named build. If this is not the case in the current project, update this to match the project configuration.

If this information is not correct, the GitHub action fails with an "Unable to find a default file in the artifact folder" error.

 

The error is visible in the Actions area of GitHub. There, we can find all the workflow executions that run every time we push code changes to the repository branch.

 


See this example workflow yml file for more information.
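
As a reference, a trimmed sketch of the generated workflow could look like the following; the action version and the values are illustrative and should match the file created in the repository:

name: Azure Static Web Apps CI/CD

on:
  push:
    branches:
      - main

jobs:
  build_and_deploy_job:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build And Deploy
        uses: Azure/static-web-apps-deploy@v0.0.1-preview
        with:
          azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
          repo_token: ${{ secrets.GITHUB_TOKEN }}
          action: "upload"
          app_location: "/"
          api_location: "api"
          app_artifact_location: "build"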

ozkary-github-action

After making changes to the YML file, we can create a PR or push the changes directly to the branch. This should trigger the action, and we should see the build run and the deployment reach the Azure resources.

Once the build is complete, open the Azure console and visit the Static Web App resources. Open the one that was just created and click on the URL, which can be found on the overview tab. If all is well, the site should load in the browser.

Automation for all

There are many options to host your SPA, but it is important to be able to automate the build and deployment process, as well as the provisioning of the resources that need to be created on any cloud platform. Azure Static Web Apps with GitHub Actions provide an excellent process to manage the automation of all our DevOps concerns for our Single Page Apps.

Thanks for reading.

Originally published by ozkary.com

1/16/21

Create a read-only user on Azure SQL Server

Database permissions, which enable users to manage data and/or data definition objects like tables and views, are a key security concern on any database. In some cases, there is a need to allow a user to access the data only. To do that, we need to create a read-only user on the database. In this article, we look at the steps needed to create a read-only user on an Azure SQL Server database.


Note: To follow this article, an Azure subscription with a database already deployed is recommended.

Find the Server Information:

We should start this process by getting the SQL Server instance URL from the Azure console. Log in to the Azure console and select or search for the SQL Server resources. Look at the results and select the server hosting the database that needs the additional user profile.

After selecting the server (make sure a server is selected, not a database instance), we should see the server information overview. From this view, we can find the server URL and the administration login account. The password is not visible in this view, so we should get it from our security software, like Key Vault or wherever the passwords are kept. We need this information to log in to the database remotely, so we can add the new user profile.

But before we log in to the server, we also need to add the client IP address to the server firewall configuration. This enables remote client applications to reach the database. From the server information page, search for or select "Firewall and virtual networks". We can then click the "+ Add Client IP" button. This reads the client IP address from the browser, which is available because we are using a browser to access the console. It adds our current IP address to the firewall rules to enable the access. Once that looks correct, press Save to make sure the update is applied.
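
Alternatively, the same firewall rule can be created with the Azure CLI; the resource names and IP address below are placeholders:

# add a firewall rule for the client IP address (placeholder values)
az sql server firewall-rule create \
  --resource-group my-resource-group \
  --server my-sql-server \
  --name AllowClientIP \
  --start-ip-address 203.0.113.10 \
  --end-ip-address 203.0.113.10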

Ready to Connect

We should now have all the required information to connect to our database remotely using tools like Visual Studio Code (VSCode) or SQL Server Management Studio (SSMS). VSCode is a development tool that can target many platforms and languages. SSMS is designed specifically to work with data platforms like SQL Server.

Once the preferred tool is open, we are ready to log in. We should connect to the target environment and database by using the server URL and admin credentials. Once the connection is established, we can open a query window and add the following code:

Note: Make sure to change the login, user name, and target database to match your environment. Also note that Azure SQL Database does not support switching databases with the USE statement on a single connection; if the script fails at that point, run the master section and the target database section on separate connections to each database.

 

-- select the master database to create the login profile
USE master;
GO

-- create the login account
CREATE LOGIN [rptLogin] WITH PASSWORD = 'add-pw-here';
-- DROP LOGIN [rptLogin];

-- enable the account to log in to the server
CREATE USER [rptUser] FROM LOGIN [rptLogin] WITH DEFAULT_SCHEMA=[dbo];
-- DROP USER [rptUser];

-- move to the target database context
USE mydb;
GO

-- create the user on the target database
CREATE USER [rptUser] FROM LOGIN [rptLogin] WITH DEFAULT_SCHEMA=[dbo];

-- add the data reader role to the user
EXEC sys.sp_addrolemember @rolename = N'db_datareader', @membername = N'rptUser';

 

GitHub Gist


Before we run the script, let us break down the code and understand what is going on.

Create Login Account

To create an account to access the database, we must first create a "Login" profile on the master database, which hosts the system configuration and security for all the databases. We do this by first switching to the master database context using the "USE" command. We then create the login profile "rptLogin" with the default schema information "dbo". We also add a user profile "rptUser" to the master database from the login we just created, to enable the user to log in to the server even if no database permissions have been granted to this user profile yet. This step is optional. It is more important to create the user in the target database.

Create Database Read-Only User

At this point, there is a login account, but the user cannot access any database. This leads us to the next step. We need to switch context to the target database by using the "USE" command. We can then create the same user "rptUser" under this database security context. This is the step that grants the user access to the database. To make this user read-only, we need to assign a role. This is done by granting the user the "db_datareader" role under the selected database context.
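
As a side note, the same role assignment can be done with the ALTER ROLE syntax, which is equivalent to the sp_addrolemember call in the script:

-- equivalent role grant using the ALTER ROLE syntax
ALTER ROLE db_datareader ADD MEMBER [rptUser];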

Now that we understand what the script does, we can execute it and create the read-only user account. If there are no error messages, we should be able to use the login credentials to access the database. To log in with the new account, open another database connection and use the "rptLogin" credentials. Notice that we need to use the login account, as this is the profile with an assigned password.

Conclusion

In cases where users need to access the database for read-only purposes, we can create user accounts with the "db_datareader" role on a specific database. With this role, the user is only able to read data by running SELECT statements. Insert, update, or delete operations are not allowed, as those require the "db_datawriter" role. By only granting the "db_datareader" role, we limit the access, thus making our database more secure against unintended operations.

Thanks for reading.

Originally published by ozkary.com