5/14/22

Improve App Performance with In-Memory Cache and Real-Time Integration


In the presentation, we discuss some of the performance problems that exist when using an API to SQL Server integration on high-transaction systems with thousands of concurrent clients and several client tools used for statistical analysis.

ozkary-telemetry

Telemetry Data Story

Devices send telemetry data via API integration with SQL Server. These devices can send thousands of transactions every minute. There are inherent performance problems with a disk-based database when there are lots of writes and reads on the same table.

To manage the performance issues, we start by moving away from a polling system into a real-time integration using WebSockets. This enables the client application to receive events on a bidirectional channel, which in turn removes the need to poll the APIs at a certain frequency.

To continue to enhance the system, we introduce the concept of an enterprise in-memory cache, Redis. The in-memory cache can be used to separate the read and write operations from the storage engine.

At the end, we take a look at a Web farm environment with a load balancer, and we discuss the need to centralize the socket messages using the Redis Publish and Subscribe feature. This enables all clients with a live connection to be notified of changes in real time.

ozkary-redis-integration

Database Optimization and Challenges

Slow Queries on disk-based storage
  • Effort on index optimization
  • Database partition strategies
  • Double-digit millisecond average speed (the physics of data disks)
Simplify data access strategies
  • Relational data is not optimal for high-read systems (costly joins)
  • Structure needs to be de-normalized
  • Views are often created to shape the data and limit date ranges

Database Contention
  • Read isolation levels (nolock)
  • Reads competing with inserts

Cost to Scale
  • Vertical and horizontal scaling of resources
  • Database read-replicas to separate reads and writes
  • Replication workloads/tasks
  • Data lakes and data warehouses

What are Socket.io and WebSockets?

  • Enables real-time bidirectional communication
  • Push data to clients as events take place on the server
  • Data streaming
  • Connection starts as HTTP and is then promoted to WebSockets (see the server sketch below)
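
As a minimal sketch, assuming a Node.js server with the socket.io package (the telemetry event and reading shape are hypothetical), the server can push data to every connected client as events take place:

// server.ts - minimal Socket.io server sketch
import { createServer } from "http";
import { Server } from "socket.io";

const httpServer = createServer();
const io = new Server(httpServer, { cors: { origin: "*" } });

io.on("connection", (socket) => {
  console.log("client connected", socket.id);
});

// push a telemetry event to all connected clients as data arrives on the server
function broadcastTelemetry(reading: { deviceId: string; value: number }) {
  io.emit("telemetry", reading);
}

httpServer.listen(3000);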


Why Use a Cache?

  • Data is stored in-memory
  • Sub-millisecond average speed
  • Cache-Aside Pattern (see the sketch after this list)
    • Read from the cache first (cache-hit) and fall back to the database on a cache-miss
    • Update the cache on a cache-miss
  • Write-Through
    • Write to the cache and the database
    • Keep both systems updated
  • Improves app performance
  • Reduces load on the database
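
A minimal cache-aside sketch, assuming Node.js with the ioredis client; the key naming, the 60-second TTL, and the readTelemetryFromDb function are illustrative assumptions, not part of the original post:

import Redis from "ioredis";

const redis = new Redis(); // defaults to localhost:6379

// cache-aside: read from the cache first, fall back to the database on a miss
async function getTelemetry(deviceId: string): Promise<unknown> {
  const key = `telemetry:${deviceId}`;

  const cached = await redis.get(key); // cache-hit path
  if (cached) return JSON.parse(cached);

  const data = await readTelemetryFromDb(deviceId); // cache-miss: go to the database
  await redis.set(key, JSON.stringify(data), "EX", 60); // update the cache with a 60s TTL
  return data;
}

// hypothetical database read used only for illustration
async function readTelemetryFromDb(deviceId: string): Promise<unknown> {
  return { deviceId, value: 0 };
}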

What is Redis?

  • Key-value store; values can be strings (including JSON), hashes, lists, sets, and sorted sets
  • Redis supports a set of atomic operations on these data types
  • Other features include transactions, publish/subscribe (see the sketch below), and a limited time to live (TTL)
  • You can use Redis from most of today's programming languages (client libraries)
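
To illustrate the Web farm scenario described earlier, here is a minimal sketch, assuming ioredis and Socket.io; the channel and event names are hypothetical. Each server node subscribes to the same Redis channel and relays messages to its own connected sockets:

import Redis from "ioredis";
import { Server } from "socket.io";

const io = new Server(3000);
const publisher = new Redis();
const subscriber = new Redis(); // a dedicated connection is used for subscribing

// every node in the web farm subscribes to the same channel
subscriber.subscribe("telemetry-events");
subscriber.on("message", (_channel, message) => {
  // relay the message to the clients connected to this node
  io.emit("telemetry", JSON.parse(message));
});

// any node can publish a change; all nodes (and their clients) are notified
async function notifyChange(reading: { deviceId: string; value: number }) {
  await publisher.publish("telemetry-events", JSON.stringify(reading));
}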
Code Repo

Send question or comment at Twitter @ozkary

Originally published by ozkary.com

4/30/22

Visual Studio Code C++ Development

Visual Studio Code (VSCode) is a software development tool that is used to program in multiple programming languages. It is also a cross-platform integrated development environment (IDE) which runs on Linux, Windows and macOS. To use VSCode for a particular programming language, we need to install the corresponding extensions, which enable VSCode to load all the tools to support the selected language. When programming in C++, we need to install the VSCode extension as well as a compiler that can compile the source code into machine code.

ozkary-vscode-c++

Install the Extension

VSCode works with extensions, which are libraries to support languages and features. To be able to code in C++, we need to install the C++ extension. This can be done by searching for C++ from the Extensions view. From the search results, select the C/C++ extension with IntelliSense, debugging and code browsing. Click on the install button.

When reading the details of this extension, we learn that it is a cross-platform extension. This means that it can run on multiple operating systems (OS). It uses the MSVC or GCC compilers on Windows, the GCC compiler on Linux, and Clang on macOS. C++ is a compiled language, which means that the source code must be compiled into machine code to run on our machines.

Verify the Compiler

The extension does not install the compiler, so we need to make sure that a compiler is installed. To verify this, we can open a terminal from VSCode and type the command to check the compiler version.

 

// for Linux and Windows
g++ --version

// for macOS
clang --version

 

The output of that command should show the compiler version. If instead the message is command not found, this means that there is no compiler installed, and you can move forward with installing the correct one for your target OS. Use GCC for Linux and Windows (or MinGW-x64), and Clang for macOS.

Write a Hello World Sample

Once the compiler is ready on your workstation, we can move forward and start writing some code. Let’s start by creating a simple Hello World app using C++.  To do that, follow these steps:

  • Create a new folder. This is the project folder.
  • Open the folder with VSCode
  • Add a new file, name it helloworld.cpp

We should notice the CPP file extension. This is the extension used for C++ files. The moment we create the file, the extension that we previously installed should identify it and provide the programming language support.

Now, we can add the following code to the file. This code shows some basics of a C++ application.

  • Use include to import library support to the app.
  • Use using to bring the standard library names into the global scope.
  • Declare the main() application entry point
  • Use the standard console output to display our message
  • Exit and stop the code execution
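
Following the steps above, a minimal version of helloworld.cpp could look like this (the exact message text is just a placeholder):

// helloworld.cpp - a minimal C++ hello world app
#include <iostream>      // import library support for console I/O

using namespace std;     // bring the standard names into the global scope

int main()               // application entry point
{
    cout << "Hello World" << endl;   // write the message to the standard console output
    return 0;                        // exit and stop the code execution
}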

Compile and Run the Code

We should now have our simple Hello World app code written. The next step is to compile and run the application.  We can do that by following these steps from the terminal window:

Note: Run these commands from the folder location

 

// compile the code and create the output file, which is a standalone executable
g++ ./helloworld.cpp -o appHelloWorld

// run the application
./appHelloWorld

 

The first command compiles the source code into machine code. It links the libraries referenced by the include declarations into the output file, or assembly. By looking at the project folder, we should see that a new file was created.

After the code is compiled, we can run the application from the terminal. The app should run successfully and display the hello message. We should notice that this is a standalone application. It does not require a runtime environment the way JavaScript, Python and other programming languages do.

Conclusion

VSCode is an integrated development environment tool that can enable us to work with different programming languages. It is also a cross-platform IDE, which enables programmers with different operating systems to use this technology. To work with a specific programming language, we need to install the corresponding extension. In the case of C++, we also need to install the compiler for the specific operating system.  Let me know if you are using C++ with VSCode already and if you like or dislike the experience.


Send question or comment at Twitter @ozkary

Originally published by ozkary.com

3/28/22

Visual Studio Code Online - Quick Intro

Visual Studio Code (VSCode) Online is a browser-hosted IDE for software development purposes. It works similarly to the full version of VSCode. You can access VSCode Online by visiting https://vscode.dev.

ozkary vscode online


After the IDE is loaded on your browser, you can connect to any GitHub repo, including repos from other services. As the project loads, you are able to interact with the files associated with the project. These files can be JavaScript, TypeScript, CSharp or any other programming language associated with the project.

As a developer, you are able to browse the files, make edits, and commit and push the changes back to your repo. In addition, you can debug, compare code or load other add-ons to enable other development activities.

This service is not meant to replace your development environment, but is an additional tool to enable your work. Do take a look, and let me know what you think by sending me a message on Twitter @ozkary.

Take a look at this video for a quick demo of the tool.



Send question or comment at Twitter @ozkary

Originally published by ozkary.com

3/12/22

Orchestrate Multiple API Calls in a Single Request with an API Gateway

When building apps, a common integration pattern is the use of the microservice architecture. This architecture enables us to create lightweight, loosely coupled services, which the app can consume to process the information for specific purposes or workflows.

Sometimes, we do not control these microservices, and they can be designed in such a way that the information is fragmented in multiple steps. This basically means that to get a specific domain model, we may need to orchestrate a series of steps and aggregate the information, thus presenting a bit of an architectural concern.

Unfortunately, orchestration of microservices on the app leads to code complexity and request overhead, which in turn leads to more defects, maintenance problems and a slow user experience. Ideally, the domain model should be defined in one single microservice request, so the app can consume it properly.

For these cases, a good approach is to use an orchestration engine that can handle the multiple requests and document transformation. This enables the app to only make one single request and expect a well-defined domain model. This approach also abstracts the external microservices from the app, and applies JSON document transformation, so the application never has to be concerned with model changes.

To handle this architecture concern, we look at using an API Gateway to manage the API orchestration, security and document transformation policy which handles the document aggregation and domain model governance.

Client to Provider Direct Integration

See the image below for an architecture where the app calls the microservices directly. This forces the application to send multiple requests. It then needs to aggregate the data and transform it into the format that it needs. There are a few problems with this approach. There is traffic overhead from the browser to the provider as multiple requests are made. The app is also aware of the provider endpoint, and it needs to bind to the JSON documents from the provider. By adding all these concerns to the app, we effectively must build more complex code in the app.

Client to Gateway Proxy Integration

With the other approach, the app integrates directly with our gateway. The app only makes one single request to the gateway, which in turn orchestrates the multiple requests. In addition, the gateway handles the document transformation process and the security concerns. This helps us remove code complexity from the app. It eliminates all the extra traffic from the browser to the provider. The overhead of the network traffic is moved to the gateway, which runs on much better hardware.
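
As a simple sketch of the difference, the app goes from several calls plus client-side aggregation to a single call that returns the full domain model. The URLs and field names below are hypothetical:

// direct integration: multiple requests and client-side aggregation
const [profile, orders, preferences] = await Promise.all([
  fetch("https://provider.example.com/profile/123").then(r => r.json()),
  fetch("https://provider.example.com/orders?user=123").then(r => r.json()),
  fetch("https://provider.example.com/preferences/123").then(r => r.json()),
]);
const model = { ...profile, orders, preferences }; // aggregation lives in the app

// gateway integration: one request returns the aggregated, well-defined model
const domainModel = await fetch("https://gateway.example.com/customer/123")
  .then(r => r.json());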

ozkary api orchestration

We should clarify that this approach is recommended only when the microservices have fragmented related data. Usually a microservice handles a single responsibility, and the data is independent of other microservices.

Let me know what you have done when you have faced a similar integration and what is the result of your story.

Send question or comment at Twitter @ozkary

Originally published by ozkary.com

2/12/22

Reduce code complexity by letting an API Gateway handle disparate services and document transformation

Modern Web applications use the microservice architecture for their API service integration. These APIs are often a combination of internal and external systems. When the system is internal, there is better control of the API endpoints and contract payload which the front-end components consume.  When there are external systems, there is no real control as the endpoints and contracts can change with new version updates.

There are also cases where the integration must be done with multiple external providers to have some redundancy. Having to integrate with multiple providers forces the application to manage different endpoints and contracts that have different structures. For these cases, how does the client application know what API endpoint to call? How does it manage the different structures and formats, JSON or XML, on both the request and response contracts? What is the approach when a new external service is introduced? Those are concerning questions that an API Gateway can help manage.

What is an API Gateway?

An API Gateway is an enterprise cloud solution that integrates client applications to back-end services or APIs. It works as a reverse proxy which forwards inbound requests to internal or external services. This approach abstracts the service's endpoint details from the application; therefore, an application only needs to be aware of the gateway endpoint information.  

When dealing with disparate services, an application must deal with the different contracts and formats, JSON or XML, for the request and subsequent response. Having code on the application to manage those contracts leads to unmanageable and complex transformation code. A gateway provides transformation policies that enable the client application to only send and receive one contract format for each operation. The gateway transformation pipeline processes the request and maps it to the contract schema required by the service. The same takes place with the response, as the payload is also transformed into the schema expected by the client application. This isolates all the transformation process in the gateway and removes that concern from the client.

API Settings

To better understand how an API Gateway can help our apps avoid a direct connection to the services, we should learn about how those services and their operations should be configured.  To help us illustrate this, let’s think of an integration with two disparate providers, as shown on the image below.


ozkary API Gateway


The client apps can be using the APIs from either Provider A or B. Both providers are external and located in different domains, so to manage the endpoint information, the apps are only aware of the gateway base URL. This means that regardless of how many providers we may add to this topology, the clients always connect to the same endpoint. But wait, this still leaves us with an open question. How is the routing to a specific provider handled?

Operation Routing

Once we have the base URL for the gateway endpoint, we need to specify the routing to the API and specific operation. To set that, we first need to add an API definition to the gateway. The API definition enables us to add an API suffix to the base URL. This suffix is part of the endpoint route information and precedes the operation route information.

An API definition is not complete unless we add the operations or web actions which handle the request/response. An operation defines the resource name, HTTP method and route information that the client application uses to call the API endpoint in the gateway. Each route maps to an operation pipeline which forwards requests to the provider's API endpoint and then sends the response back to the client. In our example, the routing for the operation of Provider A looks as follows:

ozkary API Gateway Operation Pipeline

This image shows us how an API has a suffix as well as operations. Each of the operations is a route entry which completes the operation URL path. This information, put together with the base URL, handles the routing of a client request to a particular operation pipeline, which runs a series of steps to transform the documents and forward the request to the provider's operation.

Note: By naming the operations the same within each API, only the API suffix should change. From the application standpoint, this is a configuration update via a push update or a function proxy configuration update.
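
As a hypothetical example of how these pieces compose into the endpoint that the client calls (the host and resource names are placeholders):

Gateway base URL:          https://gateway.example.com
API suffix (Provider A):   /provider-a
Operation route:           /quotes/{id}
Full client request:       GET https://gateway.example.com/provider-a/quotes/123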

Operation Pipeline

The operation pipeline is a meta-data driven workflow. It is responsible for managing the mapping of the routing information and execution of the transformation policies for both the request and response. The pipeline has four main steps: Frontend, Inbound, Backend and Outbound.

The Frontend step handles the Open API specification JSON document. It defines the hostname, HTTP schemes, and security requirements for the API. It also defines, for each operation, the API route, HTTP method, request parameters and model schemas for both the request and response. The models are the JSON contracts that the client application sends and receives.

The Inbound step runs the transformation policies. This includes adding header information and rewriting the URL to change the operation route into the route for the external API. It also handles the transformation of the operation request model into the JSON or XML document for the external API. As an example, this is the step that transforms a JSON payload into SOAP by adding the SOAPAction header and SOAP envelope to the request.

The Backend step defines the base URL for the target HTTP endpoint. Each operation route is appended to the backend base URL to send the request to the provider. On this step, security credentials or certificates can be added.

Lastly, the Outbound step, like the Inbound step, handles header and document transformation before the response is sent back to the client application. It transforms the JSON or XML payload into the JSON model defined by the Frontend schema configuration. This is also the place to standardize error documents, so the application can handle and log them consistently, independently of the provider.

This is an example of a transformation policy which shows an inbound request transformed to SOAP and outbound response transformed to JSON.
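The original policy is embedded as a gist; as a rough sketch in the style of Azure API Management policies (an assumption, since the post does not name a specific gateway product; the SOAP action, operation name, and body field access are hypothetical), it could look like this:

<policies>
  <inbound>
    <base />
    <!-- shape the inbound JSON request into the SOAP envelope the provider expects -->
    <set-header name="SOAPAction" exists-action="override">
      <value>"urn:provider:GetQuote"</value>
    </set-header>
    <set-header name="Content-Type" exists-action="override">
      <value>text/xml</value>
    </set-header>
    <set-body template="liquid">
      <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
        <soap:Body>
          <GetQuote><symbol>{{body.symbol}}</symbol></GetQuote>
        </soap:Body>
      </soap:Envelope>
    </set-body>
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
    <!-- convert the provider's XML response back into the JSON model the client expects -->
    <xml-to-json kind="javascript-friendly" apply="always" consider-accept-header="false" />
  </outbound>
</policies>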

Conclusion

In a microservice architecture, a client application can be introduced to disparate APIs which support multiple document structures and different endpoints, as these API services are hosted in different domains. To avoid complex code that deals with multiple document formats and endpoints, an API Gateway can be used instead. This enables us to use meta-data driven pipelines to manage that complexity away from the app. This should enable the development teams to focus on app design and functional programming instead of writing code to manage infrastructure concerns.

Have you faced this challenge before, and if so, what did you do to resolve it? If you use code in your app, what did you learn from that experience?

Send question or comment at Twitter @ozkary

Originally published by ozkary.com

1/15/22

Static Web Apps SPA Handle 404 Page Not Found

Single Page Applications (SPA) handle the client-side routing or navigation of pages without the need to send a post back or request to the server hosting the application. To enable the client-side routing, applications built using React, Angular and others bundle the application for a single download of all the page resources. This bundle download allows the routing service to load the next view container and components when the user selects a new page, essentially loading a new route, without having to make a request. Depending on the size of the application, this approach can lead to a slow initial load time on the browser.

ozkary lazy loading 404 errors

To address the initial slow load time, a build optimization can be done that can enable the loading of the content associated with a route to be done on demand, which requires a request be sent to the server. The optimization is handled by using a lazy loading approach, in which the route content is downloaded as an application chunk from the hosting environment. This means that instead of downloading the entire app during the initial download, only an index of URLs pointing to chunk files that are associated to the route is downloaded. This can also include chunk files for CSS, JS and even image downloads. As a route is loaded, the index lookup provides the URL of the chunk to download, thus making the app load faster. When another route is loaded, a new chunk is downloaded.

Client-Side vs Server-Side Routing

The client-side routing works as designed when the navigation operations are done normally by the user clicking on menu options or call-to-action buttons. In some cases, a user may have to reload or refresh the web application. This action forces the browser to make a request to the server. Once the request makes it to the server, the server-side routing rules are applied. If the server route definitions do not have a handler for all the routes defined on the client-side, the server fails to find the content that is requested and responds with an HTTP 404 error, which means that the page is not found.

By default, the server-side routing knows to always return the index page hosting the SPA.  This is usually mapped to the root of the domain or / path. However, when a user starts to navigate the app, the routes change to match a different path. For the server to be able to understand what to send back when this new path is loaded, we need to configure the server in a way that it knows to return the index page for all the routes, not just the root path. For Static Web Apps (SWA) which are hosted on CDN resources, this is done using a configuration setting for the application. These settings enable the configuration of the routing and security policy for the application. Let’s use an example to review that in more detail.

Static Web Apps Settings

Imagine that we have an SPA app with the following client-side route configuration:

ozkary-spa-routes-404-error
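
As a sketch of what that configuration might look like, assuming an Angular-style router (the component names are hypothetical; any SPA router follows the same idea):

import { Routes } from '@angular/router';

const routes: Routes = [
  { path: '', component: HomeComponent },              // home route  /
  { path: 'about', component: AboutComponent },        // /about
  { path: 'contact-us', component: ContactComponent }  // /contact-us
];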

The above routing information is typical of an app that has a home, about and contact page. The SPA provides the navigation elements or call-to-action buttons, so the user can select any of those pages. When the user is on the home page, which is defined by the home route or /, a page reload does not cause any problem, as this path is usually associated with the index page of the application on the server routing configuration.

When the user navigates to the other route locations, for example /about, a page reload sends a request back to the server, and if there is no handler for that route, the 404 error is created. Depending on the hosting resource, the 404 page can be system generated, which takes the user away from the application experience altogether.

To manage this concern, the latest release of SWA provides the staticwebapp.config.json file. This file is required to be able to configure the server-side route and security information for the app. It is also used to override the behavior of some HTTP operations, as well as configuring HTTP responses. For the scope of this conversation, we focus on the routing configuration only.

Note: At the time of this writing, the routes.json file has been deprecated in favor of staticwebapp.config.json. The configuration between these files has some differences, so carefully review the options being used; just renaming routes.json will lead to problems in the application behavior.

Routing and Response Overrides

The routes configuration enables us to add server-side routing rules with security, for authentication and authorization requirements, as well as redirect rules for a request. To avoid the 404 error, the SWA configuration should have an entry for every client-side route configuration. For our specific example, this means that we should add the routes for the /about and /contact-us paths. This is done by adding route entries in the routes collection, as shown below:

 

"routes": [

    {

      "route": "/",

      "allowedRoles": ["anonymous"]

    }, {

      "route": "/about",

      "allowedRoles": ["anonymous"]

    },  {

      "route": "/contact-us",

      "allowedRoles": ["anonymous"]

    }

],

The routes on the server-side configuration list all the client-side routes and add a security role to enable anonymous users to access each route.

Do We Need to Map All the Routes?

We do not, as long as we use a fallback policy. Our routing configuration only has three separate routes, and thus it is simple to manage. This however is not reflective of a complex app which can have several route entries. In addition, having to add every single client-side entry on the server is error-prone, as a route can be configured improperly or simply forgotten.

The Static Web App team also figured as much, so a fallback setting was introduced in more recent releases. This setting allows the server configuration to "fall back" to the default route when a route is not defined. This works just as well because, for an SPA, we always want the index page to be sent to the client. As the page is loaded on the browser, the SPA routing service identifies the route information and downloads the chunk files associated with the route. The fallback setting looks as follows:

 

"navigationFallback": {

    "rewrite": "index.html",

    "exclude": ["/images/*.{png,jpg,gif}", "/css/*"]

  },

 

 

On the fallback setting, we should note that we are doing a rewrite operation instead of a redirect. This is efficient because a rewrite is handled on the server, so there is no extra round trip to the client and second request, as a redirect operation would create. We should also notice that we do not want to rewrite missing resources to the index page. To avoid this, we exclude all the images and CSS files. The order in which these settings are configured in the file is also relevant. Usually, the settings that come later in the document take precedence and override the previous settings. To help with this, we can look at a complete staticwebapp.config.json.

Are There Client-Side Page Not Found Errors?

Yes, there are these errors as well. These errors are different from an HTTP 404 error. These client-side errors indicate that some call-to-action element in the app points to a path with no routing configuration. This is not a server error, so there should also be a fallback to handle that problem in the client routing configuration. This is often done by creating a wildcard route entry as the last route step, as shown below, so when a route is not found, it can load the page-not-found component.
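
Continuing the Angular-style sketch from earlier (PageNotFoundComponent is a hypothetical component), the wildcard entry goes last in the route list:

{ path: '**', component: PageNotFoundComponent }  // catch-all: load the page-not-found view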

Conclusion

SPA routing is done on the client-side of the application, but when the user does an operation that causes the SPA to bootstrap again, a server-side request is made. If the hosting environment (web server, CDN) does not have a routing configuration or a fallback to handle the routes added on the client side, the hosting environment returns a 404-error page that takes the user completely out of the application. Therefore, it is important to add routing to both the client and the server and manage route issues on both ends. This should help us guard against 404 page not found errors.

Have you experienced similar problems with your apps? 

Send question or comment at Twitter @ozkary

Originally published by ozkary.com

8/14/21

Web Development with Nodejs NPM NVM

When building web applications using JavaScript frameworks like React or Angular, there are tools that enable the installation of these frameworks and aid in the development process. These tools are essential to be able to build, test and manage some dependencies in our projects. It is those package dependencies that we continually need to keep up to date. To update those packages, we use the NPM CLI tool, which runs on the Node.js runtime environment.

ozkary nodejs, npm, nvm

When we need to update a package, we may find that a package, or a new version of that package, is not supported by the current version of Node.js (see error below) and that we need to update to a new version. In this article, we discuss the tools that are used to manage the software development process and how to best update Node.js using the command line interface (CLI).

 

npm WARN notsup Unsupported engine for create-react-app@5.0.0: wanted: {"node":">=14"} (current: {"node":"12.16.1","npm":"6.14.4"})

npm WARN notsup Not compatible with your version of node/npm: create-react-app@5.0.0

 

This error message indicates that the required version of Node.js is not on the system and that Node.js version 14 or higher is a dependency.

What is Node.js?

Node.js is a cross-platform JavaScript runtime environment. It is used by software engineers to build server-side and client-side web applications. When building client applications with popular frameworks like React, Angular and others, Node.js provides the runtime environment for the JavaScript applications to run. It also enables the build and test tools that are used during the implementation effort of those applications.

JavaScript applications are made of several libraries or packages that can be added to the project. Those libraries are mostly referred to as packages, and to install them, developers use a CLI tool to install and configure them.

What is NPM?

Node Package Manager (NPM) is a tool that runs on the Node.js runtime environment. It comes with the installation of Node.js. Its purpose is to download and install packages for a particular project. Those packages and their respective versions are tracked in a JSON file in the root folder of the project. With NPM, we can also install other CLI tools that are specific to scaffolding the startup codebase for a particular JavaScript framework. Some examples include, but are not limited to: yeoman, create-react-app, and the Angular CLI.

NPM has many commands, but the install command is the most basic and most important one, as this is the one that enables us to install and update packages. Let’s look at some of these commands:

  • $ npm install package-name --save : Installs the latest version of a package and saves the reference in the package.json file
  • $ npm install package-name : Installs a package but does not store any reference information
  • $ npm update package-name : Updates a package with a new release. NPM decides what version to select
  • $ npm install package-name@latest : To have better control over what version to install, provide the version number or the latest tag right after the package name, separated by the @ character
  • $ npm install -h : Shows help information on running the install command
  • $ npm run script-name : Runs a script command defined in the package.json for building, testing or starting the project
  • $ npm install -g npm@next : Installs the next version of NPM. The -g flag should be used to install it globally on the system

What is package.json?

Package.json is a metadata file which hosts all the project-related information, like project name, licensing, authors, hosting location and, most importantly, the information to track project dependencies and the scripts to run.

When installing NPM packages for a project, the information is saved in a file at the root of the project, package.json. This file maintains project information and all the package dependencies. By tracking the package dependencies, a development environment can be easily recreated. The developers only need to clone the repo or codebase and use NPM to download all the dependencies by typing the following command from the root folder of the project:

 

$ npm install

 

 

*Note:  package.json must exist in the same folder location where this command is typed

The scripts area of the package.json file provides commands that can be used to create production quality builds, run test plans, validate coding standards and run the application. These are essential commands for the day-to-day development operations and integration with CI/CD tools like GitHub Actions or Jenkins. A minimal example is shown below.
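
A minimal package.json sketch illustrating the dependency tracking and the scripts area; the project name, versions, and create-react-app style scripts are placeholders, not from the original post:

{
  "name": "my-web-app",
  "version": "1.0.0",
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test"
  },
  "dependencies": {
    "react": "^17.0.2",
    "react-dom": "^17.0.2"
  }
}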

Keep Node.js Updated

With a better understanding of Node.js and the purpose of NPM for the development process, we can now discuss how to deal with situations when NPM fails to install a package because our Node.js installation is behind a few versions, and we need to upgrade it.

What is NVM?

The best way to update Node.js is by using another CLI tool, Node Version Manager (NVM). This tool enables us to manage multiple versions of Node.js in our development workspace. It is not a required tool, but it is useful because it enables us to upgrade and test the application against the latest releases, which can help us identify compatibility issues with the new runtime or NPM packages. It also enables us to downgrade to a previous version to help us verify when a feature started to break.

To install NVM on Linux, we can run the following command:

 

$ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.34.0/install.sh | bash

 

Once the tool is installed, we can run a few commands to check the current Node.js version, install a new one and switch between versions. Let us review those commands:

  • $ nvm version : Shows the selected Node.js version
  • $ nvm --version : Shows the NVM CLI version
  • $ nvm ls : Lists all the Node.js versions installed
  • $ nvm use version-number : Selects a Node.js version to use
  • $ nvm install version-number : Installs a Node.js version

To install a new version of Node.js, we can use the install command. This downloads and installs a new version. After the version is installed, the environment should default to the new version of Node.js. If the environment was encountering the unsupported Node.js version error, we can run the NPM command that failed again, and since the new version is installed, it should now be able to install the new package.
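
For example, using version 14 (the version required by the error shown earlier), the sequence might look like this:

$ nvm install 14

$ nvm use 14

$ node --version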

Conclusion

When working with JavaScript frameworks like React or Angular, the Node.js runtime environment must be installed and kept up to date. When new NPM packages need to be installed on our projects, we need to make sure that they are compatible with the current version of the runtime environment, Node.js. If this is not the case, the NPM package fails to install, and we need to update the runtime to the next or latest version. Using tools like NPM and NVM, we can manage the different versions of packages and runtimes respectively. Understanding the purpose of these tools and how to use them is part of the web development process, and it should help us keep the development environment up to date.

Have you used these CLI tools before? Do you like to use visual tools instead?
Send question or comment at Twitter @ozkary

Originally published by ozkary.com

7/17/21

App Branding Strategy with GitHub Branches and Actions

Branding applications is a common design requirement. The concept is simple. We have an application with functional core components, but based on the partner or client, we need to change the theme or look of the design elements to match that of the client’s brand. Some of these design elements may include images, content, fonts, and theme changes. There are different strategies to support an app branding process, either in the build process or at runtime. In this article, we discuss how we can support a branding strategy using a code repository branching strategy and GitHub build actions.

Branching Strategy

A code repository enables us to store the source code for software solutions. Different branches are mostly used for feature development and production management purposes. In the case of branding applications, we want to be able to use branches for two purposes. The first is to be able to import the assets that are specific to the target brand. The second is to associate the branch to build actions which are used to build and deploy the branded application.

To help us visualize how this process works, let’s work on a typical branding use case. Think of an app for which there is a requirement to support two different brands, call them brand-a and brand-b. With this in mind, we should think about the design elements that need to be branded. For our simple case, those elements include the app title, logo, text, or messaging in JSON files, fonts, and the color theme or skin.

We now need to think of the build and deployment requirements for these two brands. We understand that each brand must be deployed to a different hosting resource with a different URL; let's say those sites are hosted at brand-a.ozkary.com and brand-b.ozkary.com. These could be Static Web App or CDN hosting resources.

With the understanding that the application needs to be branded with different assets and must be built and deployed to different hosting sites, we can conclude that a solution will be to create different branches which can help us implement the design changes to the app and at the same time, enable us to deploy them correctly by associating a GitHub build action to each branch.

Branching Strategy for Branding Apps
GitHub Actions

GitHub Actions makes it easy to automate Continuous Integration / Continuous Delivery (CI/CD) pipelines. It is essentially a workflow that executes commands from a YML file to run actions like unit tests, NPM builds or any other commands that can be executed on the CLI to build the application.

A GitHub Action or workflow is triggered when there is a pull request (PR) on a branch. This is basically a code merge into the target branch. The workflow executes all the actions that are defined by the script. The typical build actions would be to pull the current code, move the files to a staging environment, run the build and unit test commands, and finally push the built assets into the target hosting location.

A GitHub Action is a great automation tool to meet the branding requirements because it enables us to customize the build with the corresponding brand assets prior to building the application. There is, however, some additional planning, so before we can work on the build, we need to define the implementation strategy to support a branding configuration.

Implementation Strategy

When coding a Web application with JavaScript frameworks, a common pattern is to import components and design elements into the containers or pages of the application from their folder/path location. This works by either dynamically loading those files at runtime or loading them at design/build time.

The problem with loading dynamic content at runtime is that this requires that all the different brand assets be included in the build. This often leads to a big and slow build process as all those files need to be included. The design time approach is more effective as the build process would only include those specific features into the build, making the build process smaller and faster.

Using the design time approach does require a strategy. Even though we could make specific file changes on the branch, to add the brand-a files as an example, and commit them, this is a manual process that is error-prone. We instead need an approach that is managed by the build process. For this process to work, we need to think of a folder structure within our project to better support it. Let's review an approach.

Ozkary Branching Strategy Branding Folders

After reviewing the image of the folder structure, we should notice that the component files import the resources from the same specific folder location, content. This of course is not enough to support branding, but by looking carefully, we should see that we have brand resources outside the src folder of the project in the brands folder. There are also additional folders for each brand with the necessary assets.

The way this works is that only the files within the src and public folders are used for the build process. Files outside the src folder are not included in the build, but they are still under source control. The plan is to copy the brand files into the src/content folder before the build action takes place. This is where we leverage a custom action on the GitHub workflow.

Custom Action

GitHub Actions enable us to run commands or actions during the build process. These actions are defined as a step within the build job, so a step to meet the branding requirements can be inserted into the job, which can handle copying the corresponding files to the content folders. Let’s look at a default workflow file that is associated to a branch, so we can see clearly how it works.

Ozkary Branching Strategy Build Action

By default, the workflow has two steps: it first checks out or pulls all the files from the code repo. It then executes the build commands that are defined in the package.json file. These are the steps that generate the build output, which is deployed to the hosting location. The logical step here is to insert a step or multiple steps to copy the files from all the brand subfolders. After making this suggested change, the workflow file should look as follows:

Ozkary Branching Strategy Custom Action

The new steps just copy files from the target brand folder into the src and public folders, as sketched below. This should enable the build process to find those brand-specific files and build the application with the new logo, fonts, and theme. The step that copies the fonts does some extra work. The reason is that the font files have different font family names, so we want to find all the existing files and delete them first. We can then move forward and copy the new files.
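
A sketch of what those extra steps could look like in the workflow for the brand-a branch; this is only the steps fragment of the job, and the exact folder paths are assumptions based on the structure shown earlier:

- name: Checkout the code
  uses: actions/checkout@v2

- name: Copy brand-a content and theme files
  run: |
    cp -r brands/brand-a/content/* src/content/
    cp -r brands/brand-a/styles/* src/styles/

- name: Copy brand-a fonts (delete the old font files first)
  run: |
    rm -f public/fonts/*
    cp brands/brand-a/fonts/* public/fonts/

- name: Build
  run: |
    npm install
    npm run build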

It is important to notice that the SASS files (SCSS extension) are key players in this process. Those are the files that provide the variable and font information to support the new color theme, styles, and fonts. When using SASS, the rest of the components only import those files and use the variables for their corresponding styles. This approach minimizes the number of files that need to be customized. The _font.scss file, for example, handles the font file names for the different brands, as those files are named differently.

For cases where SASS is not used, it is OK to instead copy over the main CSS files that define the color theme and style for the app, but the point should be to minimize the changes by centralizing the customization, defining variables instead of changing all the style files, as this can become hard to manage.

Conclusion

Branding applications is a common design requirement which can become difficult to manage without the right approach. By using a branching strategy and a GitHub custom action, we can manage this requirement and prevent build problems by distributing the branded assets in different directories to keep the build process small. This approach also helps eliminate the need to have developers make code commits just to change import references.

Thanks for reading.

Gist Files

Originally published by ozkary.com