2/11/23

React Suspense Fallback Keeps Rendering When Using Lazy-Loaded Routes


When a React application loads a new lazy-loaded route or component, it is common practice to show a loading component with an animation to provide feedback to the user. In React, we use the Suspense component to handle this scenario, but there are times when the fallback component never unloads and keeps rendering in the browser.


When a Suspense fallback component never unloads, it is because a child component keeps rendering. This causes the Suspense component to continue to run, thinking that a child component is still in a suspense or loading state. This gives us the wrong impression that the Suspense component is misbehaving, when it is in fact the child component that is at fault. For a deeper understanding, let's take a look at a real example.



ozkary react suspend component


Suspense with Declarative Routes


To demonstrate this problem, let's first take a look at an implementation of a React application that loads its route information in a declarative approach, which is simple enough and does not introduce any rendering problems. We should also notice how the components are lazy-loaded to help us do code splitting, which improves the load time and enables us to trigger the Suspense component automatically.
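The snippet below is a minimal sketch of that declarative approach, assuming react-router-dom v6 and container views under a src/container folder (the route paths and view names are illustrative):

```tsx
import React, { lazy, Suspense } from 'react';
import { BrowserRouter, Routes, Route } from 'react-router-dom';

// lazy-load each container view so the bundler can code split them
const Home = lazy(() => import('./container/Home'));
const Analytics = lazy(() => import('./container/Analytics'));

const App = () => (
  <BrowserRouter>
    {/* the fallback shows while a lazy route chunk is downloading */}
    <Suspense fallback={<div>Loading...</div>}>
      <Routes>
        <Route path="/" element={<Home />} />
        <Route path="/analytics" element={<Analytics />} />
      </Routes>
    </Suspense>
  </BrowserRouter>
);

export default App;
```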

👍 This is a typical approach when creating apps with a simple routing structure.

Suspense with a Router Component


Let’s now take a look at a more complex scenario where we need to load the route information from a JSON configuration file. This introduces a variation on the process by adding the routes using a function. 
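A sketch of this variation follows; the routes.json file and the loadRoutes helper are hypothetical names used for illustration:

```tsx
import React, { lazy } from 'react';
import { Routes, Route } from 'react-router-dom';
import routeConfig from './routes.json';

// hypothetical helper that maps the JSON configuration to route definitions
const loadRoutes = () =>
  routeConfig.map((item: { path: string; container: string }) => ({
    path: item.path,
    // a new lazy component reference is created on every call
    view: lazy(() => import(`./container/${item.container}`)),
  }));

const AppRoutes = () => {
  // calling the function on every render produces new component references,
  // so Suspense never sees the children settle
  const routes = loadRoutes();
  console.log('rendering routes');
  return (
    <Routes>
      {routes.map(({ path, view: View }) => (
        <Route key={path} path={path} element={<View />} />
      ))}
    </Routes>
  );
};

export default AppRoutes;
```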



This code change introduces new behavior, and it causes the component to re-render multiple times. We can trace that by adding a console.log call before returning the content. If we look at the browser console, we should see the output of that call. From the application standpoint, this behavior is noticeable because the fallback component continues to show in the browser, as the Suspense component never detects that the child component is done rendering. Now that we understand the problem, how should we correct it?


👍 A child component continues to be in suspense until it stops rendering.


Adding State Management


To correct this behavior, we should clearly understand the root cause. Since we introduced a dynamic way to load the routes, the component has no way to understand its current state. It only knows that some data is being loaded every time it calls the function, and the data seems new or different. To avoid this, we need to add state management to the component, which is a React best practice when writing data-driven components. Let's refactor our code to see how state management can make a difference.
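Here is a sketch of the refactored component, keeping the same hypothetical loadRoutes helper and an illustrative RouteDef type:

```tsx
import React, { useEffect, useState } from 'react';
import { Routes, Route } from 'react-router-dom';
import { loadRoutes, RouteDef } from './routes'; // hypothetical module

const AppRoutes = () => {
  // keep the route collection in a state variable so references stay stable
  const [routes, setRoutes] = useState<RouteDef[]>([]);

  useEffect(() => {
    // initialize the state once; tracking routes as a dependency means the
    // effect re-runs only if the collection reference actually changes
    if (routes.length === 0) {
      setRoutes(loadRoutes());
    }
  }, [routes]);

  return (
    <Routes>
      {routes.map(({ path, view: View }) => (
        <Route key={path} path={path} element={<View />} />
      ))}
    </Routes>
  );
};

export default AppRoutes;
```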


By looking at our new implementation, we can see that the route collection is now managed in a state variable. We also use a useEffect hook to load the data, which enables us to initialize the state of the component with valid data. We also track the route collection as a dependency, and since there are no changes to the data, the state does not change, and the component completes its suspense state, thus allowing the app to complete its rendering process.


Conclusion


The React Suspense component is a great feature for providing feedback to users while a component is rendering. When we add dynamic data to the application, we should understand that this impacts the state management process, which is used to signal when a component is in a suspense state. Depending on how nested our components are, a child component can continue to render, causing the Suspense fallback UI to render non-stop.


👍 To diagnose a component, wrap it in its own Suspense component; only that component's content should then continue to show the suspense fallback animation.


Thanks for reading.



Send questions or comments at Twitter @ozkary

Originally published by ozkary.com

1/14/23

Use Remote Dev Container with GitHub Codespaces


As software engineers, we usually work on multiple projects in parallel. This forces us to configure our workstations with multiple software development tools, which eventually leaves the workstation performing poorly. To overcome this problem, we often use virtual machine (VM) instances that run on our workstations or on a cloud provider like Azure. Setting up those VMs also introduces some overhead into our software development process. As engineers, we want to accelerate this process by using a remote development environment provider like GitHub Codespaces.


ozkary-github-codespaces-layers

What is GitHub Codespaces?


GitHub Codespaces is a cloud-hosted development environment that is associated with a GitHub repository. Each environment, or Dev Container, is hosted in a Docker container with the core dependencies that are required for that project. The container runs on a VM running Ubuntu Linux. The hardware is also configurable. It starts with a minimum of 2 cores, 8 GB of RAM and 32 GB of storage, which should be a good foundation for small projects. In addition, the hardware resources can be increased up to 32 cores, 64 GB of RAM and 128 GB of storage, which matches a very good workstation configuration.


👍 There are monthly quotas for using the remote environments of 120 hrs for personal accounts, and 180 hrs for the PRO account.


How to use Codespaces?


Codespaces leverages the Secure Shell (SSH) protocol, which provides a secure channel between client and server.  This protocol is used to provide remote access to resources like VMs that are hosted on cloud platforms. This protocol makes it possible for browsers and IDE tools like VS Code to connect remotely and manage the projects.


When using the browser, the VS Code Browser IDE is loaded. We can also use a local installation of VS Code or any IDE that supports SSH. The development process works the same way as if running locally, with the exception that the files are hosted remotely, and we can also use a terminal window to execute build commands within the VM space.


How to start a project with GitHub Codespaces


We can start a Codespaces environment right from GitHub. Open a repo on GitHub, click on the Code tab, and then click the Code button from the toolbar. This opens up the options to create a new Codespace, connect to an existing one, or even configure your Codespaces resources; more on this later.


👍 You can use this repo if you do not have one: https://github.com/ozkary/Data-Engineering-Bootcamp


ozkary-create-github-code-space


When you add a new environment, GitHub essentially provisions a VM on Azure. It loads a Docker image with some of the dependencies for your project. For example, if your code is a .NET Core or a TypeScript with React project, a Docker image with those dependencies is built and provisioned into the VM.


👍 The Docker images are preconfigured. We can also build a custom image to meet specific requirements.


Once the environment is provisioned, we can open the project using any of the options listed on the image below. I prefer to use my local VS Code instance, as I often have all the tools needed to work on my projects. Once the project is open on VS Code, the project connection is cached, and we only need to open VS Code again to load the remote project. The browser feature is also very useful, so do take it for a spin and see how you like it.


ozkary how to open github codespaces



Use a Terminal to Manage the Project


When the project is open remotely, we can run common activities like adding additional dependencies, building and debugging the project. Since the environment is running on Ubuntu, we can open a terminal window on VS Code. This enables us to run the CLI commands that we need in order to manage our project. 


In the case of web projects, we can run the project remotely and load it in our local browser. Even though the project runs remotely on the VM, port forwarding is used for secure remote access, so we can open our local browser and load the app. We can see the forwarded ports for our application in the Ports tab of VS Code.


ozkary vscode port forwarding




Managing your Codespaces Instance

In some cases, we may see some performance issues in our remote environment. If this is the case, we need to inspect the current instance configuration and, if possible, upgrade the resources. Since this is an Ubuntu instance, we can use the terminal to run commands like lscpu to check the current configuration, like CPUs and memory. We can also use the Codespaces command toolbar, which provides a quick shortcut to change the machine type or configure the container.
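For example, a quick resource check from the integrated terminal might look like this:

```bash
# inspect the VM resources from the Codespaces terminal
lscpu     # CPU architecture and core count
free -h   # total and available memory
df -h /   # disk usage for the root volume
```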


The Dev Container can also be customized by making changes to the devcontainer.json file. Additional customization can be done by building a custom Docker image to meet specific development environment requirements.
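As a reference, a minimal devcontainer.json sketch could look like the following; the image tag, extension, and port are illustrative values:

```json
{
  "name": "ozkary-dev",
  "image": "mcr.microsoft.com/devcontainers/typescript-node:18",
  "customizations": {
    "vscode": {
      "extensions": ["dbaeumer.vscode-eslint"]
    }
  },
  "forwardPorts": [3000]
}
```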


👍 When the Dev Container is changed, the VM requires a restart, which is done automatically.


ozkary github codespaces vscode commands

Summary


By leveraging remote managed development environments, software engineers can save time by not having to work on a development environment configuration. Instead, we can use GitHub Codespaces to quickly provision Dev Containers that get us up and running in a short time, thus allowing us to focus on our development tasks instead of environment management tasks.


Thanks for reading.



Send questions or comments at Twitter @ozkary

Originally published by ozkary.com

10/15/22

API CORS Support with Azure API Management

In a Single Page Application (SPA) architecture, APIs are used for the application's integration with a backend service. When both the SPA and the API are hosted on the same domain, the integration is simple, and the client application can call the APIs directly. When the SPA and API are hosted on different domains, we have to create Cross-Origin Resource Sharing (CORS) policies to authorize the client app to access the API. Otherwise, the client application is blocked from calling the APIs.

In cloud platform scenarios, the API is accessible via a gateway, which is often used to protect access to internal APIs by enforcing security policies like CORS and IP whitelisting. A very common use case is illustrated below:

okary-apim-cors

In this diagram, we have an SPA hosted on the app.ozkary.com domain. The app needs to integrate with an internal API that is not available via a public domain. To enable access to the API, a gateway is used to accept inbound public traffic. This gateway is hosted on a different domain name, api.services.com. Right away, we can expect to have a cross-domain problem, which we have to resolve. On the gateway, we can apply policies to allow an inbound request to reach the internal API.

To show a practical example, we first need to review what CORS is and why it is important for security purposes. We can then talk about why we should use an API gateway and how to configure policies to protect an API.

What is CORS?

Cross-Origin Resource Sharing is a security feature supported by modern browsers to enable applications hosted on a particular domain to access resources hosted on different domains. In this case, the resource that needs to be shared is an API via a web request. The way a browser enforces this process is by creating a preflight request to the server before actually sending the request. This is the process to check if the client application is authorized to make the actual request.

When the app is ready to make a request to the API, the browser sends an OPTIONS request to the API, a preflight request. If the cross-origin server has the correct CORS policy, an HTTP 200 status is returned. This authorizes the browser to send the actual request with all the request information and headers.
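A simplified preflight exchange might look like this, with headers abbreviated for illustration:

```http
OPTIONS /telemetry HTTP/1.1
Host: api.services.com
Origin: https://app.ozkary.com
Access-Control-Request-Method: POST
Access-Control-Request-Headers: Content-Type, x-requested-by

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://app.ozkary.com
Access-Control-Allow-Methods: OPTIONS, GET, POST
Access-Control-Allow-Headers: Content-Type, x-requested-by
```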

okary-apim-cors-preflight


For the cross-origin server to be configured properly, the policies need to include the client origin or domain, and the web methods and headers that should be allowed. This is very important because it protects the API from possible exploits from unauthorized domains. It also controls the operations that can be called. For example, a GET request may be allowed, but a POST request may not. This level of configuration helps with the authorization of certain operations, like read-only access, at the domain level. Now that we understand the importance of CORS, let's look at how we can support this scenario using an API gateway.

What is Azure API Management?

The Azure API Management (APIM) service is a reverse proxy gateway that manages the integration of APIs by providing routing services, security, and other infrastructure and governance resources. It provides the management of cross-cutting concerns like security policies, routing, document (XML, JSON) transformation, and logging, in addition to other features. An APIM instance can host multiple API definitions, which are accessible via an API suffix or route information. Each API definition can have multiple operations or web methods. As an example, our service has a telemetry and an audit API. Each of those APIs has two operations, to GET and POST information:

  • api.services.com/telemetry
    • GET, POST operations
  • api.services.com/audit
    • GET, POST operations

For our purpose, we can use the security features of this gateway to enable the access of cross-origin client applications to internal APIs that are only accessible via the gateway. This can be done by adding policies at the API level or to each API operation. Let's take a look at what that looks like when we are using the Azure Portal.

ozkary-azure-apim-setup


We can add the policy for all the operations, or we can add it to each operation individually. Usually, when we create an API, all the operations should have the same policies. For our case, we apply the policy at the API level, so all the operations are covered under the same policy. But what exactly does this policy look like? Let's review our policy and understand what it is really doing.

For our integration to work properly, we need to configure the following information (a sketch of the policy follows the list):

  • Allow the app.ozkary.com domain to call the API by using the allowed-origins policy. This shows as the access-control-allow-origin header on the request response.
  • Allow the OPTIONS, GET, and POST HTTP methods by using the allowed-methods policy. This shows as the access-control-allow-methods header on the request response.
  • Allow the headers Content-Type and x-requested-by by using the allowed-headers policy. This shows as the access-control-allow-headers header on the request response.
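A minimal sketch of what the inbound cors policy could look like with those values:

```xml
<inbound>
    <base />
    <cors allow-credentials="false">
        <allowed-origins>
            <origin>https://app.ozkary.com</origin>
        </allowed-origins>
        <allowed-methods>
            <method>OPTIONS</method>
            <method>GET</method>
            <method>POST</method>
        </allowed-methods>
        <allowed-headers>
            <header>Content-Type</header>
            <header>x-requested-by</header>
        </allowed-headers>
    </cors>
</inbound>
```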

Note: The request and response can be viewed using the browser development tools, in the Network tab.

This policy governs the cross-origin apps and what operations the client can use. As an example, if the client application attempts to send a PUT or DELETE operation, it will be blocked because those methods are not defined in the CORS policy. It is also important to note that we could use a wildcard character (*) for each policy, but this essentially indicates that any cross-origin app can make any operation call. There is really no security in that, so it is not a recommended approach. Wildcards should be used only during the development effort and never in production environments.

After Adding the Policy CORS Does not Work

In some cases, even when the policy is configured correctly, we may notice that the policy is not taking effect, and the API request is not allowed. When this is the case, we should look at the policy configuration at all levels. In Azure APIM, there are three levels of configuration:

  • All APIs - One policy to all the API definitions
  • All Operations - All the operations under one API definition
  • Each Operation - One specific operation

We may think that the configuration at the operation level should take precedence, but this is not the case if there is a <base/> entry, as this indicates that the parent configuration should be applied to this policy. To help prevent problems like this, make sure to review the higher-level configurations and, if necessary, remove the <base/> entry at the operation level.

Conclusion

A CORS policy is a very important security configuration when dealing with applications and APIs that are hosted on different domains. As resources are shared across domains, it is important to clearly define which cross-origin clients can access the API. When using a cloud platform to protect an API, we can use an API gateway to help us manage cross-cutting concerns like a CORS policy. This helps us minimize risk and provides enterprise-quality infrastructure.


Send questions or comments at Twitter @ozkary

Originally published by ozkary.com

9/17/22

Create API Mocks with Azure APIM and Postman Echo Service

When kicking off a new API integration project, we often start by getting the technical specifications for the external API. The specifications should come in the form of an OpenAPI Specification or a JSON schema definition. It is often the case that the external API may not be available when the implementation effort starts. This, however, should not block our development effort, because we can create API mocks without much development effort. The mocks can echo back our original request with the same JSON document model or with some modifications.

We are going to work on a simple telemetry digest API and see how we can create a mock. But before we look at the solution, let's review some important concepts. This should give us more background on what we are trying to achieve and help us understand the tooling that we are using.

What is the OpenAPI Specification?

The OpenAPI Specification (OAS) is a technical standard for defining RESTful APIs in a declarative document, which allows us to clearly understand the contract definitions and the operations that are available on that service. The OpenAPI Specification was formerly known as the Swagger Specification, but it was adopted as a technical standard and renamed.

The specification is often written in YAML, a human-readable text format that is heavily used for infrastructure configuration and deployments. In our case, we will be using the following YAML to understand a simple service.

ozkary-openapi
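For reference, a minimal sketch of such a specification could look like this (the field names are illustrative):

```yaml
openapi: 3.0.0
info:
  title: Telemetry Digest API
  version: "1.0"
paths:
  /telemetry:
    post:
      summary: Create a new telemetry digest record
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                deviceId:
                  type: string
                temperature:
                  type: number
                created:
                  type: string
                  format: date-time
      responses:
        "201":
          description: The record was created
```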

👍 Use Swagger.io to create an OpenAPI specification document like the one above.

What are Postman and the Echo APIs?

Postman is a development tool that enables developers to test APIs without having to do any implementation work. This is a great approach as it enables the development team to test the API and clearly understand the technical specifications around security and contract definitions.

Postman provides several services. There is a client application that can be used to create a portfolio of API requests. There is also an Echo API service that enables teams to create mocks by sending requests to that service, which, in turn, echoes back the request with additional information.

When an external API is not available, we can use the Echo API as the target of our HTTP operations to quickly create realistic mocks for our implementation effort. It integrates like an external service, and we can make changes to the JSON response to mock our technical specifications.

👍 Note:  Use https://postman-echo.com/post for the Echo API

 

ozkary-postman


What is Azure API Management?

The Azure API Management (APIM) service is a reverse proxy gateway that manages the integration of APIs by providing routing services, security, and other infrastructure and governance resources. It provides the management of cross-cutting concerns like security policies, routing, document (XML, JSON) transformation, and logging, in addition to other features.

For our purpose, we can use the YAML specification that was previously defined to create a new API definition, as this format is supported by Azure APIM. By importing this document, a new API is provisioned with the default domain (or a custom domain for production environments) on the service, plus the API routing suffix and version, which define the RESTful route on the URL. An example of this would be:

api-ozkary.azure-api.net/telemetry/v1/mock

In the lifecycle of every request, APIM enables us to inspect the incoming request, forward the request to a backend or external service, and inspect or transform the outbound response back to the client application. The inbound and outbound processes are the steps in the API request lifecycle that we can leverage to create our API mocks.

ozkary-apim-steps


Look at a Simple API Use Case

We can now move forward to talk about our particular use case. By looking at the YAML document, we can see that our API is for a simple telemetry digest that we should send to an external API.

Each telemetry record should be treated as a new record; therefore, the operation should be an HTTP POST. As the external service processes the request, the same document should be returned to the client application with an HTTP status code of 201, which means that the record was created.

For our case, the Postman Echo API adds additional data elements to our document. Those elements are not needed for our real implementation, so we will need to apply a document transformation in the outbound step of the request lifecycle to return a document that meets our technical specifications.

As you can see on the image below, the response from the Postman echo service returns our request data in the data property, but it also adds other information which may not be relevant to our project.

Create a Mock to Echo the Requests

Once the YAML is imported into Azure APIM, we can edit the API policies from the portal. To mock our simple telemetry digest, we need to add policies to both the inbound and outbound processing steps. The inbound step is used to change the inbound request parameters, headers, and even the document format. In this case, we need to change the backend service using the set-backend-service policy and send the request to the postman-echo.com API. We also need to rewrite the URI using the rewrite-uri policy and remove the API URL prefix. Otherwise, that prefix is appended automatically to our request to the Echo API, which causes a 404 Not Found HTTP error.
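A sketch of the inbound section under those assumptions:

```xml
<inbound>
    <base />
    <!-- forward the request to the Postman Echo service -->
    <set-backend-service base-url="https://postman-echo.com" />
    <!-- rewrite the URI so the APIM route is not appended to the echo URL -->
    <rewrite-uri template="/post" />
</inbound>
```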

When the response comes back from the Echo API, we need to transform the document in the outbound processing step. In this case, we need to deserialize the body of the response, read the data property, which holds the original request, and return only that part of the document. For this simple implementation, we are using a C# script to do that. We could also use a liquid template to do something similar. Liquid templates provide a declarative way to transform the JSON response, and they are the recommended approach when we need to rename properties and shape the document differently, which in some cases can get very complex. When using the C# code approach, the code can get very hard to maintain.
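A sketch of the outbound section using a C# policy expression; the data property name follows the Echo API response:

```xml
<outbound>
    <base />
    <!-- return only the original payload echoed back in the data property -->
    <set-body>@{
        var body = context.Response.Body.As<JObject>();
        return body["data"].ToString();
    }</set-body>
</outbound>
```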

👍 Note: The C# capabilities in Azure APIM policies are very limited. When applicable, the use of liquid templates is recommended.

Conclusion

With every new integration project, there is often the need to mock the APIs so the implementation effort can get going. There is no need to create a separate API mock project, which would require a light implementation and some deployment activities. Instead, we can use Azure API Management and the Postman Echo API to orchestrate our API mocks. By taking this approach, we accelerate and unblock our development efforts using enterprise-quality tooling.

Thanks for reading.

Send questions or comments at Twitter @ozkary

Originally published by ozkary.com

8/20/22

Improve User Experience with React Code Splitting and Lazy Loading Routes

When building single page applications (SPAs), load time performance is very important, as it improves the user experience. As development teams mostly focus on functional requirements, there is a tendency to skip some of the non-functional requirements like performance improvements. The result is that when a web application is loaded, all the resources, including views that are not visible on the home page, are downloaded in a single bundle. This is referred to as eager loading, and this approach often causes a slow load time, as all the resources need to be downloaded before the user can interact with the application.


ozkary-lazy-loading-routes

To avoid this performance issue, we want to load only the resources that are needed at the time the user requests them, on demand. As an example, load only the resources for the home page without loading other page resources, thus improving the load time. This is usually called lazy loading. To support this, we need to load chunks of the application on demand. A chunk is basically a JavaScript or CSS file that packages only the containers, components, and dependencies that are needed for that view to render.

To lazy load the different views for an application, we need to implement the concept of Code Splitting, which basically enables us to split the code bundle into chunks, so each container view and dependencies can be downloaded only as the user is requesting it. This greatly improves the app performance because the chunk size is small compared to the entire code bundle.

Importing Container Views and Routing

A simple yet very important approach to improve load time performance is to lazy load the routes. This is a code split process, which breaks down each container view into a separate chunk. In addition, components within these containers can also be lazy loaded to further reduce the size of each chunk.

To get started, let's look at what the navigation configuration of a React application looks like, so we can review what takes place when a user loads the application.

ozkary-react-container-views

In this example, we should notice that our React app has three main containers, which are basically the pages or views that the user can load from the app. These containers are usually in the container folder of the project file structure. This path is important because it is needed to associate them with a route.

👍 Pro Tip: It is a best practice to plan your folder structure and create folders for your containers, components, elements, and services.

To load those views, we need to import them and map them to an application route. This should be done at the application starting point, which should be the App.tsx file. The code to do that looks like this:
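A sketch of that starting point, assuming the three container views shown above live under src/container:

```tsx
import React from 'react';
import { BrowserRouter, Routes, Route } from 'react-router-dom';

// eager imports: every container view is packaged into a single bundle
import Home from './container/Home';
import Analytics from './container/Analytics';
import Admin from './container/Admin';

const App = () => (
  <BrowserRouter>
    <Routes>
      <Route path="/" element={<Home />} />
      <Route path="/analytics" element={<Analytics />} />
      <Route path="/admin" element={<Admin />} />
    </Routes>
  </BrowserRouter>
);

export default App;
```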

In this code, we are using import directives to load each container view. Each of those views is then mapped to an application route directive. When using import directives, there is no optimization, so we should expect that when this app loads in the browser, all the views are loaded in a single bundle. To clearly see this, let's use the browser dev tools to inspect how this looks at the network level.


ozkary-app-loading-single-bundle


By doing a network inspection, we can see that there is a bundle.js file. This file has a 409kb size. For a simple app, this is not bad at all, but for real-world apps, this bundle size may be much bigger, and eventually it impacts the load time. A benefit of using a single bundle is that there are no additional trips to download other file chunks, but this approach will not let your application scale and perform acceptably over time.

Lazy Loading Container Views

Now, we should be able to understand that as the app continues to grow, there is a potential performance challenge, so the question is: how can we optimize the loading of our application? The simple answer is that we need to code split the bundle into smaller chunks. A quick approach is to lazy load the routes. This should enable us to improve the load time with very small code changes. Let's modify our previous code and look at the performance difference.
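The updated sketch, under the same assumptions as before:

```tsx
import React, { lazy, Suspense } from 'react';
import { BrowserRouter, Routes, Route } from 'react-router-dom';

// lazy imports: each container view becomes its own chunk, loaded on demand
const Home = lazy(() => import('./container/Home'));
const Analytics = lazy(() => import('./container/Analytics'));
const Admin = lazy(() => import('./container/Admin'));

const App = () => (
  <BrowserRouter>
    {/* the fallback renders while a route chunk is downloading */}
    <Suspense fallback={<div>Loading...</div>}>
      <Routes>
        <Route path="/" element={<Home />} />
        <Route path="/analytics" element={<Analytics />} />
        <Route path="/admin" element={<Admin />} />
      </Routes>
    </Suspense>
  </BrowserRouter>
);

export default App;
```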

In the updated version of our code, we are now using the lazy directive to delay the import of each container view until the user requests that route. The rest of the code remains the same, because we are still using the same container references and mapping them to a route. OK, let's run the app and do another network inspection, so we can really understand the improvement.


ozkary-app-lazy-loading-routes


In this last trace, we can see that there is still a bundle file with roughly the same size as before. This bundle file contains the optimization code to map a route to a particular bundle chunk. When a particular route is loaded (the home route is loaded by default), the chunk for that view is downloaded; notice the src_container_Home_index_tsx.chunk.js file. As the user navigates to other routes, the additional chunks are downloaded on demand; notice the Analytics and Admin chunks.

Final Thoughts

With this simple app, we may not be able to truly appreciate the optimization that has been done by just deciding to lazy load the containers. However, in real-world applications, the size of a single bundle quickly gets big enough to impact the usability of the application, as users have to wait a few or even several seconds before the app is clickable. This is referred to as load time.

In addition, build tools for frameworks like React show performance warnings when loading the application in the development environment, as they track performance indicators like load time. Also, it is a good practice to use a tool like Lighthouse, in the browser dev tools, to run a report and measure performance indicators like load time, render time, and others.

ozkary-app-performance-report


👍 Pro Tip: Always use a performance tool to measure performance and other industry best practices for web applications.

With a bit of performance planning, we can feel confident that we are building an app that will scale and perform as additional business requirements are added, and the app will provide a much better user experience by improving the overall load time.

Send questions or comments at Twitter @ozkary

Originally published by ozkary.com