5/14/22

Improve App Performance with In-Memory Cache and Real-Time Integration


In this presentation, we discuss some of the performance problems that exist when using an API-to-SQL Server integration on a high-transaction system with thousands of concurrent clients and several client tools that are used for statistical analysis.

ozkary-telemetry

Telemetry Data Story

Devices send telemetry data via an API integration with SQL Server. These devices can send thousands of transactions every minute. There are inherent performance problems with a disk-based database when there are lots of writes and reads on the same table.

To manage the performance issues, we start by moving away from a polling system to a real-time integration using WebSockets. This enables the client application to receive events on a bidirectional channel, which in turn removes the need to poll the APIs at a fixed frequency.

To continue to enhance the system, we introduce an enterprise in-memory cache, Redis. The in-memory cache can be used to separate the read and write operations from the storage engine.

At the end, we take a look at a Web farm environment with a load balancer, and we discuss the need to centralize the socket messages using the Redis Publish/Subscribe feature. This enables every client with a live connection to be notified of the changes in real time.

ozkary-redis-integration
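As a reference, the following is a minimal sketch of this fan-out in TypeScript, assuming Node.js with the socket.io and redis packages; the channel name, port, and Redis URL are made-up placeholders. Each node in the Web farm subscribes to the same Redis channel and pushes every message to its own connected clients.

import { createServer } from "http";
import { Server } from "socket.io";
import { createClient } from "redis";

const httpServer = createServer();
const io = new Server(httpServer);

async function main() {
  // a dedicated Redis connection is required for subscribe operations
  const subscriber = createClient({ url: "redis://localhost:6379" });
  await subscriber.connect();

  // any node in the farm can PUBLISH to this channel; every node
  // receives the message and notifies its own live connections
  await subscriber.subscribe("telemetry", (message) => {
    io.emit("telemetry", JSON.parse(message));
  });

  httpServer.listen(3000);
}

main().catch(console.error);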

Database Optimization and Challenges

Slow queries on disk-based storage
  • Effort on index optimization
  • Database partition strategies
  • Double-digit millisecond average speed (physics of data disks)
Simplify data access strategies
  • Relational data is not optimal for high-read systems (joins)
  • Structure needs to be de-normalized
  • Often views are created to shape the data and limit date ranges

Database Contention
  • Read isolation levels (nolock)
  • Reads competing with inserts

Cost to Scale
  • Vertical and horizontal scaling of resources
  • Database read replicas to separate reads and writes
  • Replication workloads/tasks
  • Data lakes and data warehouses

What are Socket.io and WebSockets?

  • Enables real-time bidirectional communication
  • Pushes data to clients as events take place on the server
  • Supports data streaming
  • The connection starts as HTTP and is then upgraded to WebSockets (see the client sketch below)
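For illustration, a minimal client-side sketch, assuming the socket.io-client package; the server URL and the telemetry event name are made-up placeholders that match the server sketch above.

import { io } from "socket.io-client";

// the connection starts as an HTTP request and is upgraded to WebSockets
const socket = io("http://localhost:3000");

// the server pushes events as they take place; no polling loop is needed
socket.on("telemetry", (reading) => {
  console.log("telemetry event", reading);
});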


Why Use a Cache?

  • Data is stored in-memory
  • Sub-millisecond average speed
  • Cache-Aside Pattern (see the sketch after this list)
    • Read from the cache first (cache hit), fail over to the database (cache miss)
    • Update the cache on a cache miss
  • Write-Through
    • Write to the cache and the database
    • Keep both systems updated
  • Improves app performance
  • Reduces load on the database
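A minimal cache-aside sketch in TypeScript, assuming Node.js with the redis package; the key format, TTL value, and loadFromSql stub are made-up placeholders for the SQL Server read.

import { createClient } from "redis";

const cache = createClient({ url: "redis://localhost:6379" });
await cache.connect(); // one-time connection at startup

// placeholder for the real SQL Server query
async function loadFromSql(id: string): Promise<object> {
  return { id, status: "ok" };
}

async function getTelemetry(id: string): Promise<object> {
  const key = `telemetry:${id}`;

  // read from the cache first (cache hit)
  const cached = await cache.get(key);
  if (cached) return JSON.parse(cached);

  // fail over to the database (cache miss)...
  const row = await loadFromSql(id);

  // ...and update the cache, with a time to live (TTL) in seconds
  await cache.set(key, JSON.stringify(row), { EX: 60 });
  return row;
}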

What is Redis?

  • Key-value store; keys can contain strings (JSON), hashes, lists, sets, and sorted sets
  • Redis supports a set of atomic operations on these data types (see the examples after this list)
  • Other features include transactions, publish/subscribe, and a limited time to live (TTL)
  • You can use Redis from most of today's programming languages (client libraries)
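As a quick reference, a few illustrative redis-cli commands covering these data types and features; the key and channel names are made up.

// string value (JSON) with a 60-second time to live (TTL)
SET telemetry:device1 '{"temp": 72}' EX 60
GET telemetry:device1

// atomic counter increment
INCR telemetry:device1:count

// hash, list, and sorted set entries
HSET device:1 name sensor-a status online
LPUSH telemetry:queue reading-1
ZADD device:scores 100 device1

// notify all subscribers of a channel
PUBLISH telemetry '{"deviceId": 1}'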
Code Repo

Send questions or comments on Twitter @ozkary

Originally published by ozkary.com

4/30/22

Visual Studio Code C++ Development

Visual Studio Code (VSCode) is a software development tool that is used to program in multiple programming languages. It is also a cross-platform integrated development environment (IDE) which runs on Linux, Windows, and macOS. To use VSCode for a particular programming language, we need to install the corresponding extensions, which enable VSCode to load all the tools that support the selected language. When programming in C++, we need to install the VSCode extension as well as a compiler that can compile the source code into machine code.

ozkary-vscode-c++

Install the Extension

VSCode works with extensions, which are libraries to support languages and features. To be able to code in C++, we need to install the C++ extension. This can be done by searching for C++ from the Extensions view. From the search results, select the C/C++ extension with IntelliSense, debugging, and code browsing. Click on the install button.

When reading the details of this extension, we learn that it is a cross-platform extension. This means that it can run on multiple operating systems (OS). It uses the MSVC and GCC compilers on Windows, the GCC compiler on Linux, and Clang on macOS. C++ is a compiled language, which means that the source code must be compiled into machine code to run on our machines.

Verify the Compiler

The extension does not install the compiler, so we need to make sure that a compiler is installed. To verify this, we can open a terminal from VSCode and type the command to check the compiler version.

 

// for Linux and Windows

g++ --version

// macOS

clang --version

 

The output of that command should show the compiler version. If instead the message is command not found, this means that there is no compiler installed, and you can move forward with installing the correct one for your target OS. Use GCC for Linux and Windows (or MinGW-w64), and Clang for macOS.

Write a Hello World Sample

Once the compiler is ready on your workstation, we can move forward and start writing some code. Let’s start by creating a simple Hello World app using C++.  To do that, follow these steps:

  • Create a new folder. This is the project folder.
  • Open the folder with VSCode
  • Add a new file, name it helloworld.cpp

We should notice the CPP file extension. This is the extension used for C++ files. The moment we create the file, the extension that we previously installed should identify it and provide the programming language support.

Now, we can add the following code to the file. This code shows some basics of a C++ application.

  • Use include to import library support into the app.
  • Use using to bring the standard (std) namespace operations into the global scope.
  • Declare the main() application entry point.
  • Use the standard console output to display our message.
  • Return to exit and stop the code execution.
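A minimal version of that code, with the message text as our own placeholder, could look like this:

// include the input/output stream library
#include <iostream>

// bring the standard (std) namespace operations into the global scope
using namespace std;

// declare the main() application entry point
int main()
{
    // use the standard console output to display our message
    cout << "Hello World" << endl;

    // exit and stop the code execution
    return 0;
}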

Compile and Run the Code

We should now have our simple Hello World app code written. The next step is to compile and run the application.  We can do that by following these steps from the terminal window:

Note: Run these commands from the folder location

 

// compiles the code and creates the output file which is a standalone executable


g++ ./helloworld.cpp -o appHelloWorld 

// runs the application

./appHelloWorld

 

The first command compiles the source code into machine code and links the libraries from the include declarations into the output file, a standalone executable. By looking at the project folder, we should see that a new file was created.

After the code is compiled, we can run the application from the terminal. The app should run successfully and display the hello message. We should notice that this is a standalone application. It does not require a runtime environment the way JavaScript, Python, and other interpreted languages do.

Conclusion

VSCode is an integrated development environment tool that enables us to work with different programming languages. It is also a cross-platform IDE, which enables programmers on different operating systems to use this technology. To work with a specific programming language, we need to install the corresponding extension. In the case of C++, we also need to install the compiler for the specific operating system. Let me know if you are already using C++ with VSCode and whether you like or dislike the experience.


Send questions or comments on Twitter @ozkary

Originally published by ozkary.com

3/28/22

Visual Studio Code Online - Quick Intro

Visual Studio Code (VSCode) Online is a browser-hosted IDE for software development purposes. It works similarly to the full version of VSCode. You can access VSCode Online by visiting https://vscode.dev.

ozkary vscode online


After the IDE is loaded in your browser, you can connect to any GitHub repo, including repos from other services. As the project loads, you are able to interact with the files associated with the project. These files can be JavaScript, TypeScript, CSharp or any other programming language associated with the project.

As a developer, you are able to browse the files, make edits, and commit and push the changes back to your repo. In addition, you can debug, do code comparisons, or load other add-ons to enable other development activities.

This service is not meant to replace your development environment; it is an additional tool to enable your work. Do take a look, and let me know what you think by sending me a message on Twitter @ozkary

Take a look at this video for a quick demo of the tool.



Send questions or comments on Twitter @ozkary

Originally published by ozkary.com

3/12/22

Orchestrate Multiple API Calls in a Single Request with an API Gateway

When building apps, a common integration pattern is the use of the microservice architecture. This architecture enables us to create lightweight, loosely coupled services, which the app can consume to process information for specific purposes or workflows.

Sometimes, we do not control these microservices, and they can be designed in such a way that the information is fragmented across multiple steps. This basically means that to build a specific domain model, we may need to orchestrate a series of steps and aggregate the information, which presents a bit of an architectural concern.

Unfortunately, orchestration of microservices in the app leads to code complexity and request overhead, which in turn leads to more defects, maintenance problems, and a slow user experience. Ideally, the domain model should be available from a single microservice request, so the app can consume it properly.

For these cases, a good approach is to use an orchestration engine that can handle the multiple requests and the document transformation. This enables the app to make a single request and expect a well-defined domain model. This approach also abstracts the external microservices from the app and applies JSON document transformation, so the application never has to be concerned with model changes.

To handle this architectural concern, we look at using an API Gateway to manage the API orchestration, security, and the document transformation policy that handles the document aggregation and domain model governance.

Client to Provider Direct Integration

See the image below for a comparison, starting with an architecture where the app calls the microservices directly. This forces the application to send multiple requests. It then needs to aggregate the data and transform it into the format it needs. There are a few problems with this approach. There is traffic overhead from the browser to the provider, as multiple requests are made. The app is also aware of the provider endpoints, and it needs to bind to the JSON documents from the provider. By adding all these concerns to the app, we effectively must build more complex code in the app.

Client to Gateway Proxy Integration

With the other approach, the app integrates directly with our gateway. The app only makes one request to the gateway, which in turn orchestrates the multiple requests. In addition, the gateway handles the document transformation process and the security concerns. This helps us remove code complexity from the app. It eliminates all the extra traffic from the browser to the provider. The overhead of the network traffic is moved to the gateway, which runs on much better hardware. A rough sketch of this orchestration follows the image below.

ozkary api orchestration
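As a sketch of what the gateway-side orchestration does, here is a TypeScript example assuming an Express-style handler; the provider endpoints, routes, and domain model are made-up placeholders.

import express from "express";

const app = express();

// hypothetical provider base URL; in practice this lives in the gateway configuration
const PROVIDER = "https://provider.example.com";

// one client request fans out to multiple provider calls on the gateway side
app.get("/api/quote/:id", async (req, res) => {
  const { id } = req.params;

  // orchestrate the fragmented provider steps in parallel
  const [customer, pricing] = await Promise.all([
    fetch(`${PROVIDER}/customers/${id}`).then((r) => r.json()),
    fetch(`${PROVIDER}/pricing/${id}`).then((r) => r.json()),
  ]);

  // aggregate and transform into the single domain model the app expects
  res.json({ id, customer, pricing });
});

app.listen(8080);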

We should clarify that this approach is recommended only when the microservices return fragmented, related data. Usually a microservice handles a single responsibility, and its data is independent of other microservices.

Let me know what you have done when you have faced a similar integration, and how your story turned out.

Send questions or comments on Twitter @ozkary

Originally published by ozkary.com

2/12/22

Reduce code complexity by letting an API Gateway handle disparate services and document transformation

Modern Web applications use the microservice architecture for their API service integration. These APIs are often a combination of internal and external systems. When the system is internal, there is better control of the API endpoints and contract payloads which the front-end components consume. When there are external systems, there is no real control, as the endpoints and contracts can change with new version updates.

There are also cases when the integration must be done with multiple external providers to have some redundancy. Having to integrate with multiple providers forces the application to manage different endpoints and contracts that have different structures. For these cases, how does the client application know what API endpoint to call? How does it manage the different structures and formats, JSON or XML, on both the request and response contracts? What is the approach when a new external service is introduced? Those are concerning questions that an API Gateway can help manage.

What is an API Gateway?

An API Gateway is an enterprise cloud solution that integrates client applications to back-end services or APIs. It works as a reverse proxy which forwards inbound requests to internal or external services. This approach abstracts the service's endpoint details from the application; therefore, an application only needs to be aware of the gateway endpoint information.  

When dealing with disparate services, an application must deal with the different contracts and formats, JSON or XML, for the request and subsequent response. Having code in the application to manage those contracts leads to unmanageable and complex transformation code. A gateway provides transformation policies that enable the client application to send and receive a single contract format for each operation. The gateway transformation pipeline processes the request and maps it to the contract schema required by the service. The same takes place with the response, as the payload is also transformed into the schema expected by the client application. This isolates the entire transformation process in the gateway and removes that concern from the client.

API Settings

To better understand how an API Gateway can help our apps avoid a direct connection to the services, we should learn how those services and their operations are configured. To help illustrate this, let's think of an integration with two disparate providers, as shown in the image below.


ozkary API Gateway


The client apps can be using the APIs from either Provider A or Provider B. Both providers are hosted externally in different domains, so to manage the endpoint information, the apps are only aware of the gateway base URL. This means that regardless of how many providers we add to this topology, the clients always connect to the same endpoint. But wait, this still leaves us with an open question: how is the routing to a specific provider handled?

Operation Routing

Once we have the base URL for the gateway endpoint, we need to specify the routing to the API and the specific operation. To set that up, we first need to add an API definition to the gateway. The API definition enables us to add an API suffix to the base URL. This suffix is part of the endpoint route information and precedes the operation route information.

An API definition is not complete unless we add the operations or web actions which handle the request/response. An operation defines the resource name, HTTP method, and route information that the client application uses to call the API endpoint on the gateway. Each route maps to an operation pipeline which forwards requests to the provider's API endpoint and then sends the response back to the client. In our example, the routing for the operations of Provider A looks as follows:

ozkary API Gateway Operation Pipeline

This image shows how an API has a suffix as well as operations. Each operation is a route entry which completes the operation URL path. This information, plus the base URL, together handles the routing of a client request to a particular operation pipeline, which runs a series of steps to transform the documents and forward the request to the provider's operation.

Note: By naming the operations the same within each API, only the API suffix has to change. From the application standpoint, this is a configuration update via a push update or a function proxy configuration update.
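For example, with made-up host and API suffix names, the URL composition could look like this:

// base URL + API suffix + operation route
https://gateway.example.com/provider-a/quotes   // Provider A quotes operation
https://gateway.example.com/provider-b/quotes   // Provider B quotes operation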

Operation Pipeline

The operation pipeline is a metadata-driven workflow. It is responsible for mapping the routing information and executing the transformation policies for both the request and the response. The pipeline has four main steps: Frontend, Inbound, Backend, and Outbound.

The Frontend step handles the Open API specification JSON document. It defines the hostname, HTTP schemes, and security requirements for the API. It also defines, for each operation, the API route, HTTP method, request parameters, and model schemas for both the request and response. The models are the JSON contracts that the client application sends and receives.

The Inbound step runs the transformation policies. This includes adding header information and rewriting the URL to change the operation route into the route for the external API. It also handles the transformation of the operation request model into the JSON or XML document for the external API. As an example, this is the step that transforms a JSON payload into SOAP by adding the SOAPAction header and SOAP envelope to the request.

The Backend step defines the base URL for the target HTTP endpoint. Each operation route is appended to the backend base URL to send the request to the provider. In this step, security credentials or certificates can be added.

Lastly, the Outbound step, like the Inbound step, handles header and document transformation before the response is sent back to the client application. It transforms the JSON or XML payload into the JSON model defined by the Frontend schema configuration. This is also the place to add an error handling document standard, so the application can handle and log errors consistently, independently of the provider.

The following is an example of a transformation policy which shows an inbound request transformed to SOAP and an outbound response transformed to JSON.
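A minimal sketch of such a policy, written here in the Azure API Management policy syntax as one possible implementation; the SOAP action URI, backend path, and Liquid template fields are placeholders.

<policies>
  <inbound>
    <base />
    <!-- rewrite the gateway operation route to the provider's route (placeholder path) -->
    <rewrite-uri template="/external/service.svc" />
    <!-- the SOAP action URI is a placeholder -->
    <set-header name="SOAPAction" exists-action="override">
      <value>http://provider.example.com/GetQuote</value>
    </set-header>
    <set-header name="Content-Type" exists-action="override">
      <value>text/xml</value>
    </set-header>
    <!-- wrap the inbound JSON model in a SOAP envelope (Liquid template placeholder) -->
    <set-body template="liquid">
      <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
        <soap:Body>
          <GetQuote>
            <id>{{body.id}}</id>
          </GetQuote>
        </soap:Body>
      </soap:Envelope>
    </set-body>
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
    <!-- transform the provider's XML response into the JSON model for the client -->
    <xml-to-json kind="direct" apply="always" consider-accept-header="false" />
  </outbound>
</policies>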

Conclusion

In a microservice architecture, a client application can be introduced to disparate APIs which support multiple document structures and different endpoints, as these API services are hosted in different domains. To avoid complex code which deals with multiple document formats and endpoints, an API Gateway can be used instead. This enables us to use metadata-driven pipelines to manage that complexity away from the app. This should enable the development teams to focus on app design and functional programming instead of writing code to manage infrastructure concerns.

Have you faced this challenge before, and if so, what did you do to resolve it? If you handled it with code in your app, what did you learn from that experience?

Send questions or comments on Twitter @ozkary

Originally published by ozkary.com