5/14/22

Improve App Performance with In-Memory Cache and Real-Time Integration


In the presentation, we discuss some of the performance problems that exist when using an API-to-SQL Server integration on a high-transaction system with thousands of concurrent clients and several client tools that are used for statistical analysis.

ozkary-telemetry

Telemetry Data Story

Devices send telemetry data via API integration with SQL Server. These devices can send thousands of transactions every minute. There are inherent performance problems with a disk-based database when there are lots of writes and reads on the same table of a database.

To manage the performance issues, we start by moving away from a polling system into a real-time integration using WebSockets. This enables the client application to receive events on a bidirectional channel, which in turn removes the need to poll the APIs at a certain frequency.

To continue to enhance the system, we introduce the concept of an enterprise in-memory cache, Redis. The in-memory cache can be used to separate the read and write operations from the storage engine.

At the end, we take a look at a Web farm environment with a load balancer, and we discuss the need to centralize the socket messages using the Redis Publish and Subscribe feature. This enables all clients with a live connection to be notified of the changes in real time.

ozkary-redis-integration
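As a minimal sketch of that idea, assuming Node.js with the redis and socket.io packages (the channel name, event name and payload are made up for the example), each web server in the farm publishes new events to a shared channel and relays whatever it receives to its own connected clients:

// minimal sketch: relaying socket messages across a web farm with Redis pub/sub
import { createClient } from "redis";
import { Server } from "socket.io";

const io = new Server(3000);

const publisher = createClient({ url: "redis://localhost:6379" });
const subscriber = publisher.duplicate();
await publisher.connect();
await subscriber.connect();

// every node subscribes, so events processed by any server reach this node's clients
await subscriber.subscribe("telemetry-events", (message) => {
  io.emit("telemetry", JSON.parse(message));
});

// called by whichever node processes a new telemetry record
async function notifyClients(payload: object): Promise<void> {
  await publisher.publish("telemetry-events", JSON.stringify(payload));
}

In practice, the @socket.io/redis-adapter package can wire this same publish/subscribe plumbing into Socket.io for you.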

Database Optimization and Challenges

Slow Queries on disk-based storage
  • Effort on index optimization
  • Database partition strategies
  • Double-digit millisecond average speed (the physics of data disks)
Simplify data access strategies
  • Relational data is not optimal for high-read systems (costly joins)
  • Structure needs to be de-normalized
  • Views are often created to shape the data and limit date ranges

Database Contention
  • Read isolation levels (nolock)
  • Reads competing with inserts

Cost to Scale
  • Vertical and horizontal scaling of resources
  • Database read-replicas to separate reads and writes
  • Replication workloads/tasks
  • Data lakes and data warehouses

What are Socket.io and WebSockets?

  • Enables real-time bidirectional communication
  • Push data to clients as events take place on the server
  • Data streaming
  • The connection starts as HTTP and is then upgraded to WebSockets (see the sketch below)
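As a minimal sketch of a server doing this, assuming Node.js with the socket.io package (the event names and payload are made up for the example):

// minimal sketch: a Socket.io server that pushes events to connected clients
import { createServer } from "http";
import { Server } from "socket.io";

const httpServer = createServer();
const io = new Server(httpServer, { cors: { origin: "*" } });

io.on("connection", (socket) => {
  // push data to this client as events take place on the server
  socket.emit("telemetry", { deviceId: "demo-device", temperature: 70 });

  // the channel is bidirectional, so the client can send messages back
  socket.on("ack", (message) => console.log("client ack", message));
});

httpServer.listen(3000);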


Why Use a Cache?

  • Data is stored in-memory
  • Sub-millisecond average speed
  • Cache-Aside Pattern (see the sketch after this list)
    • Read from the cache first (cache hit) and fall back to the database (cache miss)
    • Update the cache on a cache miss
  • Write-Through
    • Write to cache and database
    • Keep both systems updated
  • Improves app performance
  • Reduces load on Database
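A minimal sketch of both patterns, assuming Node.js with the redis package (the key names, the 60-second TTL and the SQL Server helper functions are placeholders for the example):

// minimal sketch: cache-aside reads and write-through writes with Redis
import { createClient } from "redis";

const cache = createClient({ url: "redis://localhost:6379" });
await cache.connect();

// placeholder data-access functions standing in for the real SQL Server calls
async function readFromSqlServer(deviceId: string): Promise<object> {
  return { deviceId, temperature: 70 };
}
async function writeToSqlServer(deviceId: string, data: object): Promise<void> {
  // insert or update the telemetry table
}

// cache-aside: read from the cache first, fall back to the database on a miss
async function getTelemetry(deviceId: string): Promise<object> {
  const key = `telemetry:${deviceId}`;
  const cached = await cache.get(key);
  if (cached) return JSON.parse(cached);                  // cache hit

  const row = await readFromSqlServer(deviceId);          // cache miss
  await cache.set(key, JSON.stringify(row), { EX: 60 });  // update the cache
  return row;
}

// write-through: keep both the cache and the database updated
async function saveTelemetry(deviceId: string, data: object): Promise<void> {
  await writeToSqlServer(deviceId, data);
  await cache.set(`telemetry:${deviceId}`, JSON.stringify(data), { EX: 60 });
}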

What is Redis?

  • Key-value store; keys can contain strings (JSON), hashes, lists, sets, and sorted sets
  • Redis supports a set of atomic operations on these data types (available until committed)
  • Other features include transactions, publish/subscribe, and a limited time to live (TTL)
  • You can use Redis from most of today's programming languages (client libraries); see the sketch below
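A few of those operations as a sketch, again with the Node.js redis client (the key names are made up):

// minimal sketch: a few Redis data types and operations
import { createClient } from "redis";

const redis = createClient({ url: "redis://localhost:6379" });
await redis.connect();

// hash: store a device reading as field/value pairs
await redis.hSet("device:42", { temperature: "70", humidity: "45" });

// sorted set: rank devices by their latest reading
await redis.zAdd("readings", { score: 70, value: "device:42" });

// atomic counter with a limited time to live (TTL)
await redis.incr("events:per-minute");
await redis.expire("events:per-minute", 60);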
Code Repo

Send question or comment at Twitter @ozkary

Originally published by ozkary.com

4/30/22

Visual Studio Code C++ Development

Visual Studio Code (VSCode) is a software development tool that is used to program in multiple programming languages. It is also a cross-platform integrated development environment (IDE), which runs on Linux, Windows and macOS. To use VSCode for a particular programming language, we need to install the corresponding extensions, which enable VSCode to load all the tools to support the selected language. When programming in C++, we need to install the VSCode extension as well as a compiler that can compile the source code into machine code.

ozkary-vscode-c++

Install the Extension

VSCode works with extensions, which are libraries to support languages and features. To be able to code in C++, we need to install the C++ extension. This can be done by searching for C++ from the Extensions view. From the search results, select the C/C++ extension with IntelliSense, debugging and code browsing. Click on the install button.

When reading the details of this extension, we learn that it is a cross-platform extension. This means that it can run on multiple operating systems (OS). It uses the MSVC and GCC compilers on Windows, the GCC compiler on Linux, and Clang on macOS. C++ is a compiled language, which means that the source code must be compiled into machine code to run on our machines.

Verify the Compiler

The extension does not install the compiler, so we need to make sure that a compiler is installed. To verify this, we can open a terminal from VSCode and type the command to check the compiler version.

 

// for Linux and Windows
g++ --version

// macOS
clang --version

 

The output of that command should show the compiler version. If instead the message is "command not found", this means that there is no compiler installed, and you can move forward with installing the correct one for your target OS. Use GCC for Linux and Windows (or MinGW-w64), and Clang for macOS.

Write a Hello World Sample

Once the compiler is ready on your workstation, we can move forward and start writing some code. Let’s start by creating a simple Hello World app using C++.  To do that, follow these steps:

  • Create a new folder. This is the project folder.
  • Open the folder with VSCode
  • Add a new file, name it helloworld.cpp

We should notice the CPP file extension. This is the extension used for C++ files. The moment we create the file, the extension that we previously installed should identify it and provide the programming language support.

Now, we can add the following code to the file. This code shows some basics of a C++ application.

  • Use include to import library support into the app
  • Use using to bring the std namespace operations into the global scope
  • Declare the main() application entry point
  • Use the standard console output to display our message
  • Exit and stop the code execution
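Putting those steps together, a minimal sketch of that code might look like this (the message text is just an example):

// import the input/output stream library
#include <iostream>

// bring the std namespace operations into the global scope
using namespace std;

// declare the main() application entry point
int main()
{
    // use the standard console output to display our message
    cout << "Hello World!" << endl;

    // exit and stop the code execution
    return 0;
}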

Compile and Run the Code

We should now have our simple Hello World app code written. The next step is to compile and run the application.  We can do that by following these steps from the terminal window:

Note: Run these commands from the folder location

 

// compiles the code and creates the output file, which is a standalone executable
g++ ./helloworld.cpp -o appHelloWorld

// runs the application
./appHelloWorld

 

The first command compiles the source code into machine code. It links the libraries referenced by the include declarations into the output file, or assembly. By looking at the project folder, we should see that a new file was created.

After the code is compiled, we can run the application from the terminal. The app should run successfully and display the hello message. We should notice that this is a standalone application. It does not require a runtime environment the way JavaScript, Python and other programming languages do.

Conclusion

VSCode is an integrated development environment tool that can enable us to work with different programming languages. It is also a cross-platform IDE, which enables programmers with different operating systems to use this technology. To work with a specific programming language, we need to install the corresponding extension. In the case of C++, we also need to install the compiler for the specific operating system.  Let me know if you are using C++ with VSCode already and if you like or dislike the experience.


Send question or comment at Twitter @ozkary

Originally published by ozkary.com

3/28/22

Visual Studio Code Online - Quick Intro

Visual Studio Code (VSCode) Online is a browser-hosted IDE for software development purposes. It works similarly to the full version of VSCode. You can access VSCode Online by visiting https://vscode.dev.

ozkary vscode online


After the IDE is loaded on your browser, you can connect to any GitHub repo, including repos from other services. As the project loads, you are able to interact with the files associated with the project. These files can be JavaScript, TypeScript, CSharp or any other programming language associated with the project.

As a developer, you are able to browse the files, make edits, and commit and push the changes back to your repo. In addition, you can debug, do code comparisons or load other add-ons to enable other development activities.

This service is not meant to replace your development environment, but it is an additional tool to enable your work. Do take a look, and let me know what you think by sending me a message at Twitter @ozkary

Take a look at this video for a quick demo of the tool.



Send question or comment at Twitter @ozkary

Originally published by ozkary.com

3/12/22

Orchestrate Multiple API Calls in a Single Request with an API Gateway

When building apps, a common integration pattern is the use of the microservice architecture. This architecture enables us to create lightweight, loosely-coupled services, which the app can consume to process the information for specific purposes or workflows.

Sometimes, we do not control these microservices, and they can be designed in such a way that the information is fragmented across multiple steps. This basically means that to get a specific domain model, we may need to orchestrate a series of steps and aggregate the information, thus presenting a bit of an architectural concern.

Unfortunately, orchestration of microservices on the app leads to code complexity and request overhead, which in turn leads to more defects, maintenance problems and a slow user experience. Ideally, the domain model should be defined in one single microservice request, so the app can consume it properly.

For these cases, a good approach is to use an orchestration engine that can handle the multiple requests and document transformation. This enables the app to only make one single request and expect a well-defined domain model. This approach also abstracts the external microservices from the app, and applies JSON document transformation, so the application never has to be concerned with model changes.

To handle this architecture concern, we look at using an API Gateway to manage the API orchestration, security and document transformation policies, which handle the document aggregation and domain model governance.

Client to Provider Direct Integration

See the image below for a comparison, starting with an architecture where the app calls the microservices directly. This forces the application to send multiple requests. It then needs to aggregate the data and transform it into the format that it needs. There are a few problems with this approach. There is traffic overhead from the browser to the provider as multiple requests are made. The app is also aware of the provider endpoint, and it needs to bind to the JSON documents from the provider. By adding all these concerns to the app, we effectively must build more complex code in the app.

Client to Gateway Proxy Integration

With the other approach, the app integrates directly with our gateway. The app only makes one single request to the gateway, which in turn orchestrates the multiple requests. In addition, the gateway handles the document transformation process and the security concerns. This helps us remove code complexity from the app. It eliminates all the extra traffic from the browser to the provider. The overhead of the network traffic is moved to the gateway, which runs on much better hardware.

ozkary api orchestration
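To make the idea concrete, here is a minimal sketch of a gateway-style aggregation endpoint, assuming Node.js 18+ with Express (the provider URLs, routes and model shape are made up; a managed API Gateway product would do this with configuration and policies instead of custom code):

// minimal sketch: one client request fans out to two providers and returns one model
import express from "express";

const app = express();

// hypothetical provider endpoints, each returning a fragment of the domain model
const PROFILE_API = "https://provider-a.example.com/api/profile";
const ORDERS_API = "https://provider-b.example.com/api/orders";

app.get("/api/customer/:id", async (req, res) => {
  const { id } = req.params;

  // orchestrate the multiple requests on behalf of the client
  const [profile, orders] = await Promise.all([
    fetch(`${PROFILE_API}/${id}`).then((r) => r.json()),
    fetch(`${ORDERS_API}/${id}`).then((r) => r.json()),
  ]);

  // aggregate and transform the documents into the single model the app expects
  res.json({ id, profile, orders });
});

app.listen(8080);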

We should clarify that this approach is recommended only when the microservices have fragmented related data. Usually a microservice handles a single responsibility, and the data is independent of other microservices.

Let me know what you have done when you have faced a similar integration and how your story turned out.

Send question or comment at Twitter @ozkary

Originally published by ozkary.com

2/12/22

Reduce code complexity by letting an API Gateway handle disparate services and document transformation

Modern Web applications use the microservice architecture for their API service integration. These APIs are often a combination of internal and external systems. When the system is internal, there is better control of the API endpoints and contract payloads which the front-end components consume. When there are external systems, there is no real control, as the endpoints and contracts can change with new version updates.

There are also cases when the integration must be done with multiple external providers to have some redundancy. Having to integrate with multiple providers forces the application to manage different endpoints and contracts that have different structures. For these cases, how does the client application know which API endpoint to call? How does it manage the different structures and formats, JSON or XML, on both the request and response contracts? What is the approach when a new external service is introduced? Those are concerning questions that an API Gateway can help manage.

What is an API Gateway?

An API Gateway is an enterprise cloud solution that integrates client applications to back-end services or APIs. It works as a reverse proxy which forwards inbound requests to internal or external services. This approach abstracts the service's endpoint details from the application; therefore, an application only needs to be aware of the gateway endpoint information.  

When dealing with disparate services, an application must deal with the different contracts and formats, JSON, XML, for the request and subsequent response. Having code on the application to manage those contracts leads to unmanageable and complex transformation code. A gateway provides transformation policies that enable the client application to only send and receive one contract format for each operation. The gateway transformation pipeline processes the request and maps it to the contract schema required by the service. The same takes place with the response, as the payload is also transformed into the schema expected by the client application. This isolates all the transformation process in the gateway and removes that concern from the client.

API Settings

To better understand how an API Gateway can help our apps avoid a direct connection to the services, we should learn about how those services and their operations should be configured. To help us illustrate this, let’s think of an integration with two disparate providers, as shown in the image below.


ozkary API Gateway


The client apps can be using the APIs from either Provider A or B. Both providers are externally located in a different domain, so to manage the endpoint information, the apps are only aware of the gateway base URL. This means that regardless of how many providers we may add to this topology, the clients always connect to the same endpoint. But wait, this still leaves us with an open question. How is the routing to a specific provider handled?

Operation Routing

Once we have the base URL for the gateway endpoint, we need to specify the routing to the API and specific operation. To set that, we first need to add an API definition to the gateway. The API definition enables us to add an API suffix to the base URL. This suffix is part of the endpoint route information and precedes the operation route information.

An API definition is not complete unless we add the operations or web actions which handle the request/response. An operation defines the resource name, HTTP method and route information that the client application uses to call the API endpoint in the gateway. Each route maps to an operation pipeline which forwards requests to the provider’s API endpoint and then sends the response back to the client. In our example, the routing for the operation of Provider A looks as follows:

ozkary API Gateway Operation Pipeline

This image shows us how an API has a prefix as well as operations. Each of the operations is a route entry which completes the operation URL path. This information, together with the base URL, handles the routing of a client request to a particular operation pipeline, which runs a series of steps to transform the documents and forward the request to the provider’s operation.

Note: By naming the operations the same within each API, only the API suffix needs to change when switching providers. From the application standpoint, this is a configuration update via a push update or a function proxy configuration update.

Operation Pipeline

The operation pipeline is a meta-data driven workflow. It is responsible for managing the mapping of the routing information and execution of the transformation policies for both the request and response. The pipeline has four main steps: Frontend, Inbound, Backend and Outbound.

The Frontend step handles the Open API specification JSON document. It defines the hostname, HTTP schemes, and security requirements for the API. It also defines, for each operation, the API route, HTTP method, request parameters and model schemas for both the request and response. The models are the JSON contracts that the client application sends and receives.

The Inbound step runs the transformation policies. This includes adding header information, rewriting the URL to change the operation route into the route for the external API, and transforming the operation request model into the JSON or XML document for the external API. As an example, this is the step that transforms a JSON payload into SOAP by adding the SOAPAction header and SOAP envelope to the request.

The Backend step defines the base URL for the target HTTP endpoint. Each operation route is appended to the backend base URL to send the request to the provider. In this step, security credentials or certificates can be added.

Lastly, the Outbound step, like the Inbound step, handles header and document transformation before the response is sent back to the client application. It transforms the JSON or XML payload into the JSON model defined by the Frontend schema configuration. This is also the place to add error-handling document standards, so the application can handle and log errors consistently, independently of the provider.

Below is an example of a transformation policy which shows an inbound request transformed to SOAP and the outbound response transformed back to JSON.
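As a sketch, assuming an Azure API Management-style policy definition (the SOAPAction value, the Liquid template fields and the backend path are placeholders), such a policy might look like this:

<policies>
  <inbound>
    <base />
    <!-- add the SOAP header expected by the provider (placeholder action) -->
    <set-header name="SOAPAction" exists-action="override">
      <value>"http://tempuri.org/GetData"</value>
    </set-header>
    <!-- route to the provider operation -->
    <rewrite-uri template="/service.asmx" />
    <!-- wrap the JSON request model in a SOAP envelope -->
    <set-body template="liquid">
      <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
        <soap:Body>
          <GetData xmlns="http://tempuri.org/">
            <id>{{body.id}}</id>
          </GetData>
        </soap:Body>
      </soap:Envelope>
    </set-body>
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
    <!-- convert the SOAP/XML response back into the JSON model the client expects -->
    <xml-to-json kind="direct" apply="always" consider-accept-header="false" />
  </outbound>
</policies>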

Conclusion

In a microservice architecture, a client application can be introduced to disparate APIs which support multiple document structures and different endpoints, as these API services are hosted in different domains. To avoid complex code which deals with multiple document formats and endpoints, an API Gateway can be used instead. This enables us to use meta-data driven pipelines to manage that complexity away from the app. This should enable the development teams to focus on app design and functional programming instead of writing code to manage infrastructure concerns.

Have you faced this challenge before, and if so, what did you do to resolve it? If you use code in your app, what did you learn from that experience?

Send question or comment at Twitter @ozkary

Originally published by ozkary.com

1/15/22

Static Web Apps SPA Handle 404 Page Not Found

Single Page Applications (SPA) handle the client-side routing or navigation of pages without the need to send a post back or request to the server hosting the application. To enable the client-side routing, applications built using React, Angular and others bundle the application for a single download of all the page resources. This bundle download allows the routing service to load the next view container and components when the user selects a new page, essentially loading a new route, without having to make a request. Depending on the size of the application, this approach can lead to a slow initial load time on the browser.

ozkary lazy loading 404 errors

To address the slow initial load time, a build optimization can be done that enables the content associated with a route to be loaded on demand, which requires a request to be sent to the server. The optimization is handled by using a lazy loading approach, in which the route content is downloaded as an application chunk from the hosting environment. This means that instead of downloading the entire app during the initial download, only an index of URLs pointing to the chunk files associated with each route is downloaded. This can also include chunk files for CSS, JS and even image downloads. As a route is loaded, the index lookup provides the URL of the chunk to download, thus making the app load faster. When another route is loaded, a new chunk is downloaded.

Client-Side vs Server-Side Routing

The client-side routing works as designed when the navigation operations are done normally by the user clicking on menu options or call-to-action buttons. In some cases, a user may have to reload or refresh the web application. This action forces the browser to make a request to the server. Once the request makes it to the server, the server-side routing rules are applied. If the server route definitions do not have a handler for all the routes defined on the client side, the server fails to find the content that is requested and responds with an HTTP 404 error, which means that a page is not found.

By default, the server-side routing knows to always return the index page hosting the SPA.  This is usually mapped to the root of the domain or / path. However, when a user starts to navigate the app, the routes change to match a different path. For the server to be able to understand what to send back when this new path is loaded, we need to configure the server in a way that it knows to return the index page for all the routes, not just the root path. For Static Web Apps (SWA) which are hosted on CDN resources, this is done using a configuration setting for the application. These settings enable the configuration of the routing and security policy for the application. Let’s use an example to review that in more detail.

Static Web Apps Settings

Imagine that we have an SPA with the following client-side route configuration:

ozkary-spa-routes-404-error
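As a sketch, assuming an Angular app (the component names and module path are hypothetical), that client-side configuration could look like this:

// hypothetical client-side routes for the home, about and contact pages
import { Routes } from "@angular/router";
import { HomeComponent, AboutComponent, ContactComponent } from "./pages";

const routes: Routes = [
  { path: "", component: HomeComponent },
  { path: "about", component: AboutComponent },
  { path: "contact-us", component: ContactComponent },
];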

The above routing information is typical of an app that has a home, about and contact page. The SPA provides the navigation elements or call-to-action buttons, so the user can select any of those pages. When the user is on the home page, which is defined by the home route or /, a page reload does not cause any problem, as this path is usually associated with the index page of the application in the server routing configuration.

When the user navigates to the other route locations, for example /about, a page reload sends a request back to the server, and if there is no handling for that page, the 404 error is created. Depending on the hosting resource, the 404 page can be system generated, which takes the user away from the application experience altogether.

To manage this concern, the latest release of SWA provides the staticwebapp.config.json file. This file is required to be able to configure the server-side route and security information for the app. It is also used to override the behavior of some HTTP operations, as well as to configure HTTP responses. For the scope of this conversation, we focus on the routing configuration only.

Note: At the time of this writing, the routes.json file has been deprecated in favor of staticwebapp.config.json. The configuration between these files has some differences, so carefully review the options being used; just renaming routes.json will lead to problems with the application behavior.

Routing and Response Overrides

The routes' configuration enables us to add server-side routing rules with security, for authentication and authorization requirements, as well as redirect rules for a request. To avoid the 404 error, the SWA configuration should have an entry for every client-side route. For our specific example, this means that we should add the /about and /contact-us routes. This is done by adding route entries in the routes' collection, as shown below:

 

"routes": [

    {

      "route": "/",

      "allowedRoles": ["anonymous"]

    }, {

      "route": "/about",

      "allowedRoles": ["anonymous"]

    },  {

      "route": "/contact-us",

      "allowedRoles": ["anonymous"]

    }

],

The server-side configuration lists all the client-side routes and adds a security role to enable anonymous users to access each route.

Do We Need to Map All the Routes?

We do not, as long as we use a fallback policy. Our routing configuration only has three separate routes, and thus it is simple to manage. This, however, is not reflective of a complex app, which can have several route entries. In addition, having to add every single client-side entry on the server is error-prone, as a route can be configured improperly or simply forgotten.

The Static Web Apps team also figured as much, so a fallback setting was introduced in more recent releases. This setting allows the server configuration to “fall back” to the default route when a route is not defined. This works just as well because, for an SPA, we always want the index page to be sent to the client. As the page is loaded on the browser, the SPA routing service identifies the route information and downloads the chunk files associated with the route. This fallback setting looks as follows:

 

"navigationFallback": {

    "rewrite": "index.html",

    "exclude": ["/images/*.{png,jpg,gif}", "/css/*"]

  },

 

 

In the fallback setting, we should note that we are doing a rewrite operation instead of a redirect. This is efficient because a rewrite is handled on the server, so there is no extra client-side trip and second request, as a redirect operation would create. We should also notice that we do not want to rewrite missing resources to the index page. To avoid this, we exclude all the images and CSS files. The order in which these settings are configured in the file is also relevant. Usually, the settings that come later in the document take precedence and override the previous settings. To help with this, we can look at a complete staticwebapp.config.json.
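Putting the pieces from above together, a complete staticwebapp.config.json for this example could look like this:

{
  "routes": [
    { "route": "/", "allowedRoles": ["anonymous"] },
    { "route": "/about", "allowedRoles": ["anonymous"] },
    { "route": "/contact-us", "allowedRoles": ["anonymous"] }
  ],
  "navigationFallback": {
    "rewrite": "index.html",
    "exclude": ["/images/*.{png,jpg,gif}", "/css/*"]
  }
}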

Are There Client-Side Page Not Found Errors?

Yes, there are these errors as well. These errors are different from an HTTP 404 error. These client-side errors indicate that there is some call-to-action element in the app that has a path with no routing configuration. This is not a server error, so there should also be a fallback to handle that problem in the client routing configuration. This is often done by creating a wildcard route entry as the last route, so when a route is not found, it can load the page-not-found component.
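Continuing the hypothetical Angular routes from the earlier sketch, that client-side fallback is a wildcard entry added last (PageNotFoundComponent is a made-up component):

// the wildcard route must be last so it only matches when no other route does
const routes: Routes = [
  { path: "", component: HomeComponent },
  { path: "about", component: AboutComponent },
  { path: "contact-us", component: ContactComponent },
  { path: "**", component: PageNotFoundComponent },
];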

Conclusion

SPA routing is done on the client side of the application, but when the user does an operation that causes the SPA to bootstrap again, a server-side request is made. If the hosting environment (web server, CDN) does not have a routing configuration or a fallback to handle new routes added on the client side, the hosting environment returns a 404 error page that takes the user completely out of the application. Therefore, it is important to add routing to both the client and server and manage route issues on both ends. This should help us guard against 404 page not found errors.

Have you experienced similar problems with your apps? 

Send question or comment at Twitter @ozkary

Originally published by ozkary.com