Firebase Provided authentication credentials are invalid

When trying to access a Firebase Realtime Database from a Firebase function or a client application with incorrect permissions, we often get the “Provided authentication credentials are invalid” error, which does not allow us to access the database.

@firebase/database: FIREBASE WARNING: Provided authentication credentials for the app named "[DEFAULT]" are invalid. This usually indicates your app was not initialized correctly. Make sure the "credential" property provided to initializeApp() is authorized to access the specified "databaseURL" and is from the correct project.

GCP Firebase Cloud Functions
In the case of a Firebase function, we use a service account, but we may still get this error. In this article, we look at how to identify the problem and how to resolve it by reviewing our service accounts and their associated permissions or roles.

Let's Identify the Service Accounts

When running a Firebase function, the default service account that is used in the context of the function usually goes by the name of {project-id}@appspot.gserviceaccount.com.  To confirm what account is used, we can add a line of code to the function handler that outputs the process environment variables to the function logs. This way, we can review the context of the request and the identity that is used.

const functions = require('firebase-functions');
const admin = require('firebase-admin');

// No credential is passed, so the default identity (@appspot) is used.
admin.initializeApp();

exports.addEvent = functions.https.onRequest((request, response) => {
  console.log(process.env); // review the runtime context and identity
  response.send('ok');
});

Looking at the code, notice that we are not passing a specific credential to admin.initializeApp(). Therefore, the identity in the request context, the @appspot service account, is used by default. We can change the default identity by explicitly using the Admin SDK account, or we can grant the @appspot account the correct permissions. Let’s first look at the SDK account.

What About the Admin SDK Account?

Another service account available in the Firebase project is the Admin SDK account, which can be found under the project settings, service account information. In order to use this account, we need to download its JSON key file (serviceAccKey.json) and add it to our function project. Once that is done, we can refactor our function to use the following code:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
const svcAcc = require('./serviceAccKey.json');

// Initialize with an explicit credential instead of the default identity.
admin.initializeApp({
  credential: admin.credential.cert(svcAcc),
  databaseURL: 'https://<project-id>.firebaseio.com' // replace with your databaseURL
});

In this new version, we are explicitly using a service account. This should resolve the problem unless there are deeper permission problems in our project. This is where it gets more complicated: even though we are now using the SDK service account, we may continue to see the invalid credentials error. Let’s do a deeper dive into permissions.

What about IAM?

At this point, we know which account we are using, @appspot or @adminsdk. Next, we need to make sure that the account has the correct project permissions. For that, we look at the IAM roles for that account.

IAM stands for Identity and Access Management. This is the identity management system that governs the permissions for our GCP (Google Cloud Platform) projects. The IAM page is available in the Google Cloud console at this location: https://console.cloud.google.com/iam-admin

From the IAM console, select the corresponding project and look for the service account in question. We now need to make sure the account has been granted a project role (Editor or Owner) that enables access to the database.


Like in any project, we should explicitly know which service account is used in our Firebase functions and what access level/role that account has on the project. Once the permissions have been validated and configured properly, we should no longer see this error.

Thanks for reading.

Originally published by ozkary.com


Convert SQL Table to Redis Cache Structure

When adding a distributed cache like Redis to a solution, it is important to define the data model and sets in a way that lets us easily fetch from the cache while maintaining a structure similar to our SQL database, minimizing the impact on our client applications. The challenge is to define an approach that works within the technical specifications of the cache yet supports our solution.
Review the SQL Data Model
When building a good model on the cache, we need to understand how the data is retrieved from the database by the client applications. We also need to learn how to leverage the Redis key/value structure to facilitate the same data access. We can practice these principles by looking at a data model from devices that measure telemetry information like temperature and noise.

By looking at the entity above, we can see that we can query the data/measures from this table by using the date (processed) and deviceId dimensions with a SQL query like the one shown next:

SELECT [telemetryId]
FROM [dbo].[Telemetry] WITH (NOLOCK)
WHERE processed > DATEADD(MM, -1, GETDATE())
  AND deviceId = '0ZA-112'
ORDER BY telemetryId DESC

We can look at the output of this query to help us understand what data needs to be stored in the cache. However, the first question that comes to mind is: how do we do this using Redis? To answer that, let’s look at how the database is structured and then apply those concepts to Redis.
Data Structure on Redis 
To build our data structure on Redis, we start by mapping the database, schema, and table names to a Redis key that we can use to pull the data. For example, our table named telemetry lives in the database device. Using that information, we can create a namespace like device.dbo.telemetry. We can use this namespace to construct our Redis key, or, if the schema name is not meaningful (i.e. dbo), we can shorten the namespace by removing the schema name and use this instead: device.telemetry.
Now that we have a good Redis key, the next challenge is the ability to select a record by deviceId. On SQL Server, this is done with a lookup on the deviceId column, as shown in the previous SQL statement.
The way we can do this on Redis is by appending the deviceId to our existing key. This generates an extended key that includes the deviceId value, with the format device.telemetry.deviceId (where deviceId is replaced by the value). By setting this key with the unique value, we can now query the cache for the information associated with a specific device.
How do we query by dates?
The next part of the SQL query predicate enables us to select data by date. That gets a bit more challenging, since a date range does not match the concept of a key: we cannot just use device.telemetry.deviceId.datetime, as this limits the key to a specific date. However, Redis supports storing a sorted set under a key, where each member has a numeric score and members are kept in ascending order (older dates to most recent dates).
But why do we need a score when we want to fetch the information by date and time? Well, in programming languages, a date-time stamp can be represented as a numeric value in milliseconds. This enables us to convert a DateTime field into a numeric field and use it as a score within Redis, using the following command:

zadd device.telemetry.0ZA-112 1565379958535 "my data here"

By storing data with this numeric value/score, we can query by the DateTime numeric value using the zrangebyscore command (or zrevrangebyscore for descending order). This command enables us to select data using a date range, similar to a BETWEEN operator in a SQL WHERE clause. The long numbers are just DateTime values that have been converted using the native JavaScript Date.getTime() API.

zrangebyscore device.telemetry.0ZA-112 1565379958535 1565379958999
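As a plain Node.js sketch (no Redis client is used; the key helper is illustrative), the score values passed to zadd and zrangebyscore are just millisecond timestamps:

```javascript
// Build the sorted-set key for a device, mirroring the namespace above.
function telemetryKey(deviceId) {
  return `device.telemetry.${deviceId}`;
}

// Convert a date range into the min/max scores for zrangebyscore.
function scoreRange(fromDate, toDate) {
  return { min: fromDate.getTime(), max: toDate.getTime() };
}

const key = telemetryKey('0ZA-112');
const range = scoreRange(new Date(1565379958535), new Date(1565379958999));
// key   -> 'device.telemetry.0ZA-112'
// range -> { min: 1565379958535, max: 1565379958999 }
```

A real solution would pass these values to a Redis client's zadd/zrangebyscore calls.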

How do we store the rest of the fields/measures?
Now that we have a way to pull data for a specific deviceId and a date range, how do we associate the rest of the data with that information? Well, this is done by serializing the remaining fields into a JSON payload and storing it in the cache as a string value. When we add a record to a sorted set with a score, we associate the data with that score as shown next:

zadd device.telemetry.0ZA-112 1565379958535 '{"telemetryId": 2076,"deviceId": "0ZA-112","temperature": 34.000}'

We can now query all the information within a particular DateTime range for a selected deviceId.
Selecting the data from multiple devices and date range:
What if we want to select the telemetry data for all devices within a particular date range? The answer requires us to review our approach. When we add the deviceId to the Redis key, we set a constraint on the sorted set, and we can only query by that deviceId. The zrangebyscore command does not support wildcard characters (*). We can, however, change the key and remove the deviceId. This results in a new key with the namespace device.telemetry.
By removing the deviceId from the key, we can now run the same operations with a date range, and all the devices that processed information within that range are selected. If the data set is too large, we can accept a memory-for-performance trade-off by creating two sorted sets, one keyed with the deviceId and one without. This really depends on the particular use case. Always keep in mind that this is a cache, and the data should expire within a time constraint.
It looks like we have done a good job of reproducing our SQL data model in a Redis structure that supports the same operations we perform on SQL. We have also been able to work within the specifications of the Redis commands to set our data structure in a format that lets us retrieve the data with operations similar to those we support on SQL.

Originally published by ozkary.com



The https library in Node.js enables us to make HTTPS calls to APIs over SSL/TLS protocols (encrypted channels). When there is a communication problem in the channel, the server drops the channel and the ECONNRESET error is raised on the client side.

In Node.js, this exception usually has the following exception detail with its code source (this changes with new versions):

{ Error: read ECONNRESET
    at TLSWrap.onStreamRead (internal/stream_base_commons.js:111:27)
  errno: 'ECONNRESET',
  code: 'ECONNRESET',
  syscall: 'read' }

This error is confusing because it is raised by TLSWrap.onStreamRead, which may lead us to think there is a TLS problem in the communication. That is often the case when the client supports TLS 1.1 and the server only accepts a more recent version like TLS 1.3.

Before we start making all kinds of changes, we need to make sure that when we make a request, we do not leave the request open. When a request is made, we must always call the end() method of the ClientRequest object. This method is inherited from the Writable stream class.

When we make a request and do not call the end method, a socket connection is left open (an internal implementation detail of HTTPS over a TLS writable stream). Since the socket never finishes sending data to the server, the request just hangs, and the server eventually resets the connection, hence the ECONNRESET exception on the client.

Take a look at this code snippet with the corresponding operations to terminate the request and handle other possible communication errors with the server.  Take notice of the timeout and end operations.

So if you see this error, just make sure to end the request. If this is already done and the error continues to show, we need to know which version of TLS the client supports so the server can accept the connection properly. Review the secureProtocol option in the https request options and read about tls.createSecureContext in the Node.js TLS documentation.

Thanks for reading.

Originally published by ozkary.com


JSON Web Token

A JSON Web Token (JWT) is commonly used to package information that grants users claims to a system. This includes user information and permissions to a resource. The token is often exchanged between the server and client via header information.

 JSON Web Token Format:

  • A JWT token consists of three main segments
    • Header
    • Payload with claims
    • Signature
  • These three segments are encoded using Base64, then concatenated with periods as separators.
  • The header segment provides information on the token type and signing algorithm
  • The payload segment contains an expiration date and the claims associated with the user
    • The claims provide information about the user and their permissions
  • The signature is used to verify the token
  • The token is NOT encrypted, so anyone who has it can read all the properties
  • The token is signed by the server, so if any of the values are changed, the server will reject it

Decoding a Token:

The image below shows a token with the base64 string on the left and the three decoded segments on the right.
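The decoding step can be sketched in plain Node.js; the sample claims below are hypothetical, and note that this does not verify the signature:

```javascript
// Decode the header and payload segments of a JWT without verifying
// the signature. Verification requires the signing key and is a
// separate step.
function decodeJwt(token) {
  const [header, payload] = token
    .split('.')
    .slice(0, 2)
    .map((seg) => JSON.parse(Buffer.from(seg, 'base64').toString('utf8')));
  return { header, payload };
}

// Build a sample (unsigned) token with hypothetical claims for the demo.
const enc = (obj) => Buffer.from(JSON.stringify(obj)).toString('base64');
const token = [
  enc({ alg: 'HS256', typ: 'JWT' }),
  enc({ sub: 'user1', name: 'Jane Doe', role: 'admin', exp: 1565379958 }),
  'fake-signature'
].join('.');

const { header, payload } = decodeJwt(token);
// header.alg  -> 'HS256'
// payload.sub -> 'user1'
```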

What is a Claim?

  • Claims are statements about a subject
    • User information like name, email, address
    • Organization departments, groups
    • Roles or permissions to areas of a system
    • Claim groups for an application to enable buttons, menus, and routes
  • Claims are issued by a provider (Security Token Service - STS)
    • Packaged in a security token
    • Applications use this token and parse the claims
    • Claims are mapped to areas of the application to enable the permissions

Authorization Header

The token is exchanged between the server and client as an authorization header. The server sends the base64 string. The client needs to store this information, and when the client application sends a request to the server, it must add the Authorization Bearer header as shown below. This is what enables access to the application.
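A minimal sketch of building that header on the client (the token value is a placeholder):

```javascript
// Attach the bearer token to an outgoing request's headers.
function withBearer(token, headers = {}) {
  return { ...headers, Authorization: `Bearer ${token}` };
}

const headers = withBearer('eyJhbGciOi...'); // placeholder token value
// headers.Authorization -> 'Bearer eyJhbGciOi...'
```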

This is just an overview of what a security token is and its purpose. There is more to learn about how to decode and apply those claims to secure the different areas of an application.

Thanks for reading.

Originally published by ozkary.com


Could not load assembly or one of its dependencies

When running a .NET application that uses an unmanaged dynamic-link library (the old famous DLL), we may encounter the following error:

Could not load file or assembly or one of its dependencies. An attempt was made to load a program with an incorrect format.

The error indicates that the program is not able to load the assembly, but when we look at our project references, we can see that the assembly exists. If we read the message carefully, we can also see that the error tells us that perhaps it is not able to load a dependency. But what does this really mean? To understand this, let’s first do a refresher on .NET interoperability.

Interoperating with Unmanaged Code

The .NET Framework provides interoperation with native C and C++ dynamic-link libraries (DLLs) for common Windows APIs and COM components. These DLLs do not run under the .NET runtime, which is why they are called unmanaged code. Many products are still written using the native Windows API rather than the .NET Framework. When we want to use these DLLs in a .NET application, we first need to import the API using the following directives:

using System.Runtime.InteropServices;

// Use DllImport to import the Win32 MessageBox function.
[DllImport("user32.dll", CharSet = CharSet.Unicode)]
private static extern int MessageBox(IntPtr hWnd,
                                     string text,
                                     string caption,
                                     uint type);

As an example, this import directive allows us to call the Windows API message box function. Of course, when using the .NET Framework, we would use the framework's MessageBox.Show API instead of importing the native one.

How does that help on resolving our missing dependency?

It is important to know how interop works with .NET, because when we do a DllImport on a DLL, we are only importing the external library methods; we do not know whether there are downstream dependencies associated with that DLL. To discover those dependencies, we can use the binary file dumper (dumpbin.exe) utility by running the following command:

dumpbin.exe /dependents myfile.dll

Note: This utility is part of the Visual Studio installation, and it can only run from the Visual Studio developer command prompt, not the regular system command prompt.

After running this command, we should see a list of DLL dependencies. These DLLs should be in our system path or the local directory path. The missing unmanaged DLLs are usually related to the C++ runtime libraries that are installed on the system with Visual Studio or SDK packages. By default, these DLLs are deployed in the system32 folder. We can search for the files listed in the output to make sure they exist. If they don’t, we have found a missing dependency, and we need to install the Visual C++ redistributable package that contains the missing DLL.

Background on Visual C++ Redistributable:

These DLLs are usually deployed by the Visual C++ redistributable packages that ship with Visual Studio. They are part of the Windows system components for reusability, performance, and memory management purposes. A big problem with these packages is that some software may have been built with an older version of Visual C++ that does not exist on the current system. This causes the dependency problem.

To address this problem, we can search for the missing DLL on the Microsoft web site. This should enable us to download and install the correct package, for example the Visual C++ 2005 redistributable package.


When we start a new project that requires unmanaged DLLs, we need to inspect those libraries with the dumpbin utility and make sure the dependencies are installed in the development environment. When we deploy these solutions, we also need to remember to package those dependencies with our solution. Otherwise, the application will crash with the Could not load assembly error.

Originally published by ozkary.com


PowerShell SharePoint API with Invoke-RestMethod Return String

With PowerShell, we can use the Invoke-RestMethod cmdlet to make RESTful API calls that return JSON data. When using the SharePoint REST APIs, the cmdlet returns the JSON data in string format, which is not the intended result. To handle this problem, we can convert the string to a JSON object, so we can work with structured data instead of a string, by using the following script:
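The original script is not shown here; the following is a minimal PowerShell sketch of the idea, using a hypothetical payload in place of the SharePoint response:

```powershell
# Hypothetical SharePoint-style payload with the duplicate Id/ID attributes.
$raw = '{"Id": 2076, "ID": 2076, "Title": "Task A"}'

# ConvertFrom-Json fails on the duplicate keys, so rename one of them
# (case-sensitive replace) while the payload is still a string.
$fixed = $raw -creplace '"ID":', '"OData_ID":'
$item = $fixed | ConvertFrom-Json
$item.Title   # Task A
```

In a real script, $raw would be the string returned by Invoke-RestMethod against the SharePoint endpoint.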

We should note that when we try to convert a SharePoint JSON string using the ConvertFrom-Json cmdlet, we get an error that the string is not valid JSON, because the payload contains duplicate attributes (Id, ID) by default. To address this, we can replace one of the attributes while the JSON is still a string. We can then convert the string again, and all should work.

Please keep in mind that this behavior is specific to PowerShell. If you use a JavaScript parser, this error is not raised.

I hope this helps you consume SharePoint JSON data using PowerShell.

Originally published by ozkary.com


Building a Modern Data Warehouse From Different Sources SQL Server and Spark

A data warehouse (DW) is a core component of business intelligence and a central data repository for different sources. This poses many challenges, as the schema definitions for those sources may be completely different from one another. In this presentation, we discuss how to create a data warehouse model that can support data from disparate data sources. We look at how to build a dimensional model that enables us to import data with different shapes into a data warehouse. We then create processes to transform and load the data using Apache Spark. We finally use Power BI to visualize the data from our data warehouse.

Modern Data Warehouse

Learning Objectives:

Challenges when using disparate data sources (different models)

  • The number of fields on the payload can be different. Some systems can send more data readings/facts than others.
  • The fields can have different names, but the data represents the same reading.
  • Each model can change independently of the other systems.

Define a common model to support the different model shapes

  • We need to identify the common fields across all the data models that can identify a unique entry, for example the date, a serial number or device id, or a regional location.
  • Identify the dimensions from the actual entry across all these models. The dimensions are used to build other tables that provide structured information about the readings.
  • We need to identify the measurements or readings and support their different names.
  • For performance, we would like to be able to read the entries without the overhead of processing all the measurements.
  • A star schema design can be used to associate the dimension tables with the fact table.

Strategy for processing the different models into the common model with Apache Spark

  • Load the data sources into data frames and map their fields into a data set with well-defined data types.
  • Parse the dimension fields into a separate data set. Create a unique hash key that can be used as a foreign key on the relational data model.
  • Map all the measurements from their data source names to the corresponding fields on the common model.
  • Create a dimension for the field names to catalog the measurement fields, thus avoiding having to load all the data to get unique names.
  • Transpose all the measurements from columns to rows. This enables us to support a different number of columns as well as field changes on the data source.

Data Pipeline for Loading the Data into the Data Warehouse with Apache Spark
  • Load the new unique dimension records from the Spark database into the dimension tables in the data warehouse first.
  • Load the fact entry with all the dimension relationships. Create a unique hash id for the entry.
  • Load the fact measurements with the entry hash id relationship. This maps the reading to a unique entry.
Sample Project:

The sample project with the Spark and SQL code as well as demo data can be found at this location:


By following this high-level plan, we should be able to create a data warehouse that supports disparate data sources with different models using modern data processing tooling.

Originally published by ozkary.com


Magic Leap One No device is connected

When developing apps for Magic Leap One, we can use the Magic Leap Remote app to start the device connectivity and deploy our apps directly from the Unity Editor. Even when the ML1 device has been configured for development purposes by enabling creator mode, installing the certificate, and allowing MLDB access, there may still be a “No Device is Connected” error. In this article, we take a look at how to use the MLDB CLI to address device connectivity problems on a Windows computer.

What is MLDB?

The Magic Leap Device Bridge (MLDB) CLI provides direct connectivity to the device via USB and WiFi. We can also use this CLI to deploy apps directly onto the device. It is during the deployment process that we may notice connectivity problems with our computers.

Is the device connected?

Assuming the ML1 device has been configured properly for MLDB connectivity and we have verified the USB connection, we can start troubleshooting the connection error by opening the MLDB CLI and typing the following command at the > prompt:

>mldb devices

List of devices attached

If we get no device serial number in the output, then we know there is a problem with the device connecting to our PC via USB. To double-check, let’s look at the Device Manager panel and check the Universal Serial Bus devices configuration. If we find our ML1 listed there (see image below), we know that the physical USB connection is working.

Let’s Repair our Registry Settings

We know that the physical connection is working, but we still do not see the device via the MLDB CLI. We need to check that our device's USB registration on the PC is valid by using the CLI and checking our registry settings with the following command at the > prompt:

>mldb usbreg

Found connected ML device with serial number: ###
GOOD: Device has a container node.
GOOD: Device has an interface GUID for the non-composite device.
BAD: Device is missing an interface GUID for the first composite interface
BAD: Device is missing an interface GUID for the second composite interface
Created registry file that should fix the issue: \mlsdk\v0.20.0\tools\mldb\mldb-fix.reg
Double click the file in Windows Explorer to update your registry, then unplug
and replug your ML device. MLDB should now recognize the device.

We should now see messages indicating whether there are any problems. If a problem is found, a new registry settings file is created, and we need to import it into our system. But before importing the file, we can compare our current registry settings to the values in the newly created file. This helps us understand what is going on.

We can open the new registry file to learn the registry path. We can then type the regedit command in the CLI to open the registry editor on our computer. By comparing the file with the values in our registry, we can see that the DeviceInterfaceGUID in our registry no longer matches what the device is reporting. This makes sense, so we need to update the GUID value with the one provided by the output of the mldb usbreg command.


After updating our registry settings and reconnecting the device via the USB port, we should be able to run the mldb devices command again and see our device listed in the output. If we restart the Magic Leap Remote app and press the Start Device button, we should see that the device connects and is ready for us to deploy apps via the Unity editor or the MLDB CLI.

I hope this provides some insight on how to address the No Device is Connected error when using the Magic Leap Remote app.

Thanks for reading.

Originally published by ozkary.com