7/27/19

Angular NgFor binding to Iterables Explained

When building Angular components, we often use data structures like arrays and hashtables to hold collections of objects. This enables us to use HTML templates to iterate over the collection and display the object properties using the ngFor directive. It is here that we come across a behavior that seems to confuse some developers. In some cases, ngFor raises the "only supports binding to Iterables" error when iterating over these collections. The question is: why do we get this error?



To provide an answer, we must first talk about the differences between an array and a hashtable. Let's look at the following code snippets:

interface Vehicle {
   id: number;
   year: number;
   make: string;
   model: string;
}

interface VehicleList {
   [index: number]: Vehicle;
}

 inventoryArray: Vehicle[] = [
     {id: 0, year: 2018, make: 'nissan', model: 'xterra'}
    ,{id: 1, year: 2018, make: 'nissan', model: 'altima'}
    ,{id: 2, year: 2018, make: 'nissan', model: 'maxima'}
 ];

 inventoryList: VehicleList = {
     "0": {id: 0, year: 2018, make: 'nissan', model: 'xterra'}
    ,"1": {id: 1, year: 2018, make: 'nissan', model: 'altima'}
    ,"2": {id: 2, year: 2018, make: 'nissan', model: 'maxima'}
 };

In the previous code, we first define the Vehicle interface. This is the object that we display on the template. We also define the VehicleList interface, which allows us to use a hashtable keyed by a number, in this case the vehicle id property.

Once the interfaces are defined, we can create collections with two different data structures. We need to look at this in detail because the differences may not be obvious. We first declare inventoryArray, which is an array of Vehicle objects (items inside [] square brackets). We also create inventoryList (items inside {} object-literal braces), which is an object whose keys match the vehicle id.

Both collections contain the same objects, but the way to iterate over them is different. Let's take a closer look at the concept.

What are Iterables?

An iterable is a data structure that allows access to its elements sequentially by providing an iterator, which acts as a pointer to the elements. Arrays support this, so we can access their elements using a for...of (notice, not a for...in) loop. Hashtable entries are accessed as object properties, so they are not natively iterable.
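As a quick illustration using the inventoryArray and inventoryList collections defined earlier, the following sketch (inside a component method) shows which loop applies to each structure:

// for...of works on iterables such as arrays
for (const car of this.inventoryArray) {
  console.log(car.model);                  // xterra, altima, maxima
}

// the hashtable is not iterable, so for...of would throw at runtime;
// its entries are reached as object properties, for example with for...in
for (const key in this.inventoryList) {
  const car = this.inventoryList[+key];    // numeric index per the VehicleList interface
  console.log(car.model);
}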

Angular's ngFor uses a for...of implementation to iterate over the elements. This is why, when a plain object is used with the directive, the "only supports binding to Iterables" error is raised. We can see this by looking at the template implementation, in which we use the ngFor directive with the component's inventory property.

@Component({
  selector: 'app',
  template: `
    <table>
      <thead>
        <tr><th>Id</th><th>Year</th><th>Make</th><th>Model</th></tr>
      </thead>
      <tbody>
        <tr *ngFor="let car of inventory">
          <td>{{car.id}}</td>
          <td>{{car.year}}</td>
          <td>{{car.make}}</td>
          <td>{{car.model}}</td>
        </tr>
      </tbody>
    </table>
  `,
})
class HomeComponent {
...
}

Now that we understand more about iterables and the for...of loop, we can take a look at our code and identify areas where this problem can surface. If we work with object properties instead of arrays of objects, how can we address the problem without having to refactor a lot of code? Well, we can do this with a simple approach in the component code. Let's review that solution.

Component Approach 

The approach here is to transform the hashtable data structure into an array. This can be done with the Object.values method, which returns an array of the object's own property values without the keys. This essentially changes the data structure into an array, which we can assign to the inventory property that the template uses to display the data.

this.inventoryList = {
    "0": {id: 0, year: 2018, make: 'nissan', model: 'xterra'}
   ,"1": {id: 1, year: 2018, make: 'nissan', model: 'altima'}
   ,"2": {id: 2, year: 2018, make: 'nissan', model: 'maxima'}
};

this.inventory = Object.values(this.inventoryList);
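Object.values is available in ES2017 and later; if the build targets an older ECMAScript version, a similar transformation can be sketched with Object.keys and map:

this.inventory = Object.keys(this.inventoryList)
    .map(key => this.inventoryList[+key]);   // same result without Object.values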

See in Action






Conclusion

As we implement new solutions, we need to be mindful of the framework specifications and define our data structures in a way that is compliant with the framework. When we refactor code from previous frameworks like AngularJS, we need to identify the areas that can cause problems and refactor them tactically with minimal code changes.

Thanks for reading.


Originally published by ozkary.com

7/7/19

Firebase Provided authentication credentials are invalid

When trying to access a Firebase Realtime Database from a Firebase function or a client application with incorrect permissions, we often get the "Provided authentication credentials are invalid" error, which does not allow us to access the database.

@firebase/database: FIREBASE WARNING: Provided authentication credentials for the app named "[DEFAULT]" are invalid. This usually indicates your app was not initialized correctly. Make sure the "credential" property provided to initializeApp() is authorized to access the specified "databaseURL" and is from the correct project.

GCP Firebase Cloud Functions
In the case of a Firebase function, we use a service account, but we may still get this error. In this article, we take a look at how to identify the problem and how to resolve it by reviewing our service accounts and their associated permissions or roles.

Let's Identify the Service Accounts

When running a Firebase function, the default service account used in the context of the function usually goes by the name {project-id}@appspot.gserviceaccount.com. To confirm which account is used, we can add a line of code to the function handler that outputs the process environment variables to the function logs. This way, we can review the context of the request and the identity that is used.


const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.addEvent = functions.https.onRequest((request, response) => {
    // log the environment variables to see which service account/context is used
    console.log(process.env);
    response.send('ok');
});

When we look at the code, we notice that we are not passing a specific credential when we call admin.initializeApp(). Therefore, the identity in the context, the @appspot service account, is used by default. We can change that by explicitly using the Admin SDK account, or we can grant the @appspot account the correct permissions. Let's first look at the SDK account.

What About the Admin SDK Account?

Another service account available from the Firebase project is the Admin SDK account, which can be found under the project settings, in the service account section. In order to use this account, we need to download a JSON key file (serviceAccKey.json) and add it to our function project. Once that is done, we can refactor our function to use the following code:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
const svcAcc = require('./serviceAccKey.json');
admin.initializeApp({
    credential: admin.credential.cert(svcAcc)
});
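Since the warning message calls out the "databaseURL" property, it can also help to pass it explicitly when initializing with a certificate. Here is a minimal sketch; the project id in the URL is a placeholder:

admin.initializeApp({
    credential: admin.credential.cert(svcAcc),
    // placeholder URL - use the Realtime Database URL from the Firebase project settings
    databaseURL: 'https://<project-id>.firebaseio.com'
});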

In this new version, we are explicitly using a service account. This should resolve the problem unless there are deeper permission issues in our project. This is where it gets a bit more complicated, because even with the SDK service account we may continue to see the invalid credentials error. Let's do a deeper dive into permissions.

What about IAM?

At this point, we know which account we are using, @appspot or @adminsdk. We next need to make sure that the account has the correct project permissions. For that, we need to look at the IAM roles for that account. 

IAM stands for Identity and Access Management. This is basically the identity management system that governs the permissions for our GCP (Google Cloud Platform) projects. This page is available from the Google cloud console at this location:  https://console.cloud.google.com/iam-admin

From the IAM console, select the corresponding project, and look for the service account in question. We now need to make sure that account has been granted the corresponding project role (editor or owner) to enable access to the database.



Conclusion

As in any project, we should know explicitly which service account is used in our Firebase functions and what access level/role that account has on the project. Once the permissions have been validated and configured properly, we should no longer see this error.

Thanks for reading.


Originally published by ozkary.com

6/22/19

Convert SQL Table to Redis Cache Structure

When adding a distributed cache like Redis to a solution, it is important to define the data model and sets in a way that lets us easily fetch the data from the cache while maintaining a structure similar to our SQL database, to minimize the impact on our client applications. The challenge is to define an approach that works within the technical specifications of the cache yet still supports our solution.
Review the SQL Data Model
To build a good model on the cache, we need to understand how the data is retrieved from the database by the client applications. We also need to learn how to leverage the Redis key/value structures to facilitate the same data access. We can practice these principles by looking at a data model for devices that measure telemetry information like temperature and noise.



The Telemetry entity includes the telemetryId, deviceId, temperature, humidity, sound, processed, and created columns. Looking at it, we notice that we can query the data/measures from this table using the date (processed) and deviceId dimensions, with a SQL query like the one shown next:


SELECT [telemetryId]
     ,[deviceId]
     ,[temperature]
     ,[humidity]
     ,[sound]
     ,[processed]
     ,[created]
 FROM [dbo].[Telemetry] (nolock)
 WHERE processed > dateadd(MM,-1, getdate())
 AND  deviceId = '0ZA-112'
 ORDER BY telemetryId DESC

We can look at the output of this query to help us understand what data needs to be stored in the cache. However, the first question that comes to mind is: how do we do this using Redis? To answer that, let's try to understand how the database is structured and then apply those concepts to Redis.
Data Structure on Redis 
To build our data structure on Redis, we start by mapping the database, schema, and table names to a Redis key that we can use to pull the data. For example, our table named telemetry lives in the device database. Using that information, we can create a namespace like device.dbo.telemetry. We can use this namespace to construct our Redis key, or, if the schema name is not meaningful (i.e. dbo), we can shorten the namespace by removing the schema name part and use this instead: device.telemetry.
Now that we have a good Redis key, the next challenge is the ability to select a record by deviceId. On SQL Server, this is done with a lookup on the deviceId column, as shown in the previous SQL statement.
The way we can do this on Redis is by appending the deviceId to our existing key. This generates an extended key that includes the deviceId value, with the format device.telemetry.deviceId (deviceId is replaced by the value). By setting the key with this unique value, we can now query the cache for the information associated with a specific device.
How do we query by dates?
The next part of the SQL query predicate enables us to select data using a date. That gets a bit more challenging, since a date range does not match the concept of a key, so we cannot just do device.telemetry.deviceId.datetime, as this limits the key to a specific date. However, Redis supports sorted sets: members stored under a key, each with a score that keeps them sorted in ascending order (older dates to most recent dates).
But why do we need a score when we need to fetch the information by date and time? Well, in programming languages, a date-time stamp can be represented as a numeric value in milliseconds. This enables us to convert a DateTime field into a numeric field and use it as a score within Redis, using the following command:

zadd device.telemetry.0ZA-112 1565379958535 "my data here"

By storing data with this numeric value/score, we can query by the DateTime numeric value using the zrangebyscore command (or zrevrangebyscore to reverse the order, descending). This command enables us to select data using a date range, similar to a BETWEEN operator in a SQL WHERE clause. The long numbers are just DateTime values that have been converted using the native JavaScript Date.getTime() API.

zrangebyscore device.telemetry.0ZA-112 1565379958535 1565379958999

How do we store the rest of the fields/measures?
Now that we have a way to pull data for a specific deviceId and a date range, how do we associate the rest of the data with that information? Well, this is done by serializing the rest of the record into a JSON payload and storing it in the cache as a string value. When we add a record to a sorted set with a score, we associate data with that score, as shown next:

zadd device.telemetry.0ZA-112 1565379958535 '{"telemetryId": 2076,"deviceId": "0ZA-112","temperature": 34.000}'

We can now query all the information within a particular DateTime range for a selected deviceId.
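To put the pieces together from application code, here is a minimal sketch using the ioredis Node.js client (the client library, connection details, and date range below are assumptions for illustration):

const Redis = require('ioredis');

async function storeAndQuery() {
    const redis = new Redis();                        // assumes a local Redis instance
    const key = 'device.telemetry.0ZA-112';           // namespace plus deviceId

    // store a reading: the score is the DateTime converted to milliseconds
    const reading = { telemetryId: 2076, deviceId: '0ZA-112', temperature: 34.0 };
    await redis.zadd(key, new Date().getTime(), JSON.stringify(reading));

    // query a date range, similar to a SQL BETWEEN on the processed column
    const from = new Date('2019-08-01').getTime();
    const to = new Date('2019-09-01').getTime();
    const rows = await redis.zrangebyscore(key, from, to);
    const telemetry = rows.map(row => JSON.parse(row));
    console.log(telemetry);
}

storeAndQuery();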
Selecting data from multiple devices within a date range:
What if we want to select the telemetry data for all devices within a particular date range? The answer to this requires us to review our approach. When we add the deviceId to the Redis key, we set a constraint on the sorted set, and we can only query by that deviceId. The zrangebyscore command does not support the use of a wildcard character (*) in the key. We can, however, change the key and remove the deviceId. This results in a new key with this namespace: device.telemetry.
By removing the deviceId from the key, we can now run the same operations with a date range, and all the devices that processed information within that range are selected. If the data set is massive, we can accept the trade-off of memory usage for performance by creating two sorted sets, one keyed with the deviceId and another one without it. This really depends on the particular use case. Always keep in mind that this is a cache, and the data should expire within a time constraint.
Conclusion
It looks like we have done a good job of reproducing our SQL data model in a Redis structure that supports the same operations we run on SQL. We have also been able to work within the specifications of the Redis commands to store our data in a format that lets us retrieve it with operations similar to the ones we support on SQL.

Originally published by ozkary.com

6/8/19

Nodejs HTTPS ECONNRESET


The https module in Node.js enables us to make HTTPS calls to APIs over SSL/TLS protocols (encrypted channels). When there is a communication problem in the channel, the connection is dropped by the server and the ECONNRESET error is raised on the client side.


In Node.js, this exception usually includes the following detail and code source (the exact output changes with newer versions):


{ Error: read ECONNRESET
    at TLSWrap.onStreamRead (internal/stream_base_commons.js:111:27)
  errno: 'ECONNRESET',
  code: 'ECONNRESET',
  syscall: 'read' }

This error is confusing because it is raised by TLSWrap.onStreamRead, which may lead us to think that there is a TLS problem in the communication. That can indeed be the case when the client only supports TLS 1.1 and the server requires a more recent version like TLS 1.3.

Before we start making all kinds of changes, we need to make sure that when we make a request, we do not leave the request open.  When a request is made, we must always call the end() method of the ClientRequest object. This method is inherited (extended in TypeScript) from the WritableStream class.

When we make a request and do not call the end method, the socket connection is left open (the internal implementation of HTTPS uses a TLS writable stream). Since no data is explicitly sent to the server to complete the request, the request just hangs, and the server eventually resets the connection, hence the ECONNRESET exception on the client.

Take a look at this code snippet with the corresponding operations to terminate the request and handle other possible communication errors with the server.  Take notice of the timeout and end operations.
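A minimal sketch of such a request follows; the hostname and path are placeholders, and the timeout value is arbitrary:

const https = require('https');

const options = { hostname: 'api.example.com', path: '/data', method: 'GET' };  // placeholder endpoint

const req = https.request(options, (res) => {
    let body = '';
    res.on('data', (chunk) => body += chunk);
    res.on('end', () => console.log(body));
});

// abort the request if the server does not respond in time
req.setTimeout(5000, () => req.abort());

// handle communication errors such as ECONNRESET
req.on('error', (err) => console.error(err.code, err.message));

// always end the request so the server knows we are done sending data
req.end();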



So if you see this error, just make sure to end the request. If that is already done and the error continues to show, we need to know what TLS version is supported by the client, so the server can accept the connection properly. Review the secureProtocol option in the https request options and read about tls.createSecureContext in the Node.js documentation.
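As a minimal sketch, the option is passed along with the other request options; the method string below assumes the server expects TLS 1.2:

const options = {
    hostname: 'api.example.com',        // placeholder endpoint
    path: '/data',
    method: 'GET',
    secureProtocol: 'TLSv1_2_method'    // negotiate the connection with TLS 1.2
};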

Thanks for reading.


Originally published by ozkary.com

5/25/19

JSON Web Token

A JSON Web Token (JWT) is commonly used to package information that grants users claims to a system. This includes user information and permissions to a resource. The token is often exchanged between the server and the client via header information.

 JSON Web Token Format:



  • A JWT token consists of three main segments
    • Header
    • Payload with claims
    • Signature
  • These three segments are Base64Url encoded, then concatenated with periods as separators.
  • The header segment provides information on the token type and the signing algorithm
  • The payload segment contains an expiration date and the claims associated with the user
    • The claims provide information about the user and permissions
  • The signature is used to verify the token
  • The token is NOT encrypted, so anyone with it can read all the properties
  • The token is signed by the server, so if any of the values are changed, the server will reject it

Decoding a Token:

The image below shows a token with the base64 string on the left and the three decoded segments on the right.
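Since each segment is Base64Url-encoded text, the same breakdown can be reproduced in code. Here is a minimal Node.js sketch; the token value is a placeholder:

const token = 'xxxxx.yyyyy.zzzzz';                       // placeholder JWT string
const [header, payload, signature] = token.split('.');

// convert Base64Url to standard Base64 and parse the JSON segment
const decode = (segment) => {
    const normalized = segment.replace(/-/g, '+').replace(/_/g, '/');
    return JSON.parse(Buffer.from(normalized, 'base64').toString('utf8'));
};

console.log(decode(header));    // e.g. { alg: 'HS256', typ: 'JWT' }
console.log(decode(payload));   // e.g. { sub: '1234', name: 'Jane', exp: 1565379958 }
// the signature segment is not JSON; the server verifies it with its signing key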

What is a Claim?

  • Claims are statements about a subject
    • User information like name, email, address
    • Organization departments, groups
    • Roles or permissions to areas of a system
    • Claim groups for an application to enable buttons, menus, and routes
  • Claims are issued by a provider (Security Token Service - STS)
    • Packaged in a security token
    • Applications use this token and parse the claims
    • Claims are mapped to areas of the application to enable the permissions


Authorization Header


The token is exchanged between the server and the client as an authorization header. The server sends the base64 string. The client needs to process this information, and when the client application sends a request to the server, it must add the Authorization Bearer header as shown below. This is what enables access to the application.
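Here is a minimal sketch of a client request that adds the header; the endpoint and token values are placeholders:

const token = 'xxxxx.yyyyy.zzzzz';                  // placeholder token received from the server
fetch('https://api.example.com/data', {             // placeholder endpoint
    headers: { 'Authorization': `Bearer ${token}` }
})
    .then(res => res.json())
    .then(data => console.log(data));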



This is just an overview of what a security token is and its purpose. There are other areas to learn about, such as how to decode those claims and apply them to secure the different areas of an application.

Thanks for reading.

Originally published by ozkary.com