Azure Machine Learning with Visual Studio Code

This is a presentation on how to use Azure Machine Learning with Visual Studio Code. Most of the ml.azure.com tasks are also available via Visual Studio Code. To enable the Azure Machine Learning extension in Visual Studio Code, we need to install the following dependencies:
  • Azure Machine Learning subscription
  • Python or Anaconda
  • Python Extension for VS Code
  • Azure ML Extension for VS Code
  • Git - for version control and GitHub integration.

Visual Studio Code Azure ML Extension Menu

Originally published by ozkary.com


NodeJS CORS Handling: No Access-Control-Allow-Origin

When accessing API calls from another domain or port number, we have to be mindful of possible CORS policy restrictions. CORS stands for Cross-Origin Resource Sharing. It is used to prevent access from one domain to another unless a policy is in place to enable that access. A common use case is when an application hosted on one domain tries to access an API hosted on a remote domain. When the calling domain is not whitelisted for access, an error is raised and the call is denied.

Access Error

To illustrate this problem, we can look at an app hosted on localhost but served from a different port number. We should note that a different port number makes the policy applicable even when the domain is the same.

Access to XMLHttpRequest at 'http://localhost:5000/api/data' from origin 'http://localhost:8080' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.

When making a CORS request to a different domain, two requests are made to the server: the preflight request and the actual request. For each of these requests, the server must respond with the Access-Control-Allow-Origin header set to the domain of origin (the calling app) or a wildcard ‘*’ to allow all domains. A wildcard is a bit too open, so it is typically not used for secured apps.

Preflight Request

A preflight, or OPTIONS (HTTP verb), request is created by the browser before the actual request (PUT, POST) is sent for a resource in a different domain. The goal is to let the browser and server validate that the other domain has access to that particular resource. This is done by setting the Access-Control-Allow-Origin header with the client host domain.

Actual Request

Once the preflight request has a response with the corresponding headers and HTTP 200 status, the browser sends the actual request. For this request, the server also checks the CORS policies and adds the Access-Control-Allow-Origin header with the client host domain.
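To make the exchange concrete, here is a sketch of what the two requests might look like on the wire. The paths, domains, and header values are illustrative, not taken from a real server:

```
# Preflight request (sent automatically by the browser)
OPTIONS /api/data HTTP/1.1
Origin: http://localhost:8080
Access-Control-Request-Method: POST

# Preflight response (server must allow the origin)
HTTP/1.1 200 OK
Access-Control-Allow-Origin: http://localhost:8080
Access-Control-Allow-Methods: GET, POST, OPTIONS

# Actual request
POST /api/data HTTP/1.1
Origin: http://localhost:8080

# Actual response (the header must be present here as well)
HTTP/1.1 200 OK
Access-Control-Allow-Origin: http://localhost:8080
```

Notice that the Access-Control-Allow-Origin header must be present in both responses, which is why the middleware we build below sets it for every request.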
NodeJS Middleware

Now that we understand this security concern, we can take a look at how to build a middleware with NodeJS to help us manage this integration.

Let’s start by defining a common NodeJS Express application. When we build the bootstrap code for the server, we can include a middleware module (auth-cors), which is our implementation to handle the CORS concerns. This is done with the require directive, loading the module script from a path relative to the calling script. Let’s review the code to see how we can do that.

const express = require("express");

// middleware for cors
const cors = require("./middleware/auth-cors");

// default app configuration used when no environment setting is found
const config = { port: 3000 };

const PORT = process.env.port || config.port;
const app = express();

// initialize modules to parse json and url-encoded payloads
app.use(express.json());
app.use(express.urlencoded({ extended: true }));

// add the cors middleware to the http pipeline
app.use(cors);

// start the server listener
app.listen(PORT, () => console.log(`Listening on ${PORT}`));

After we load the CORS module, we can associate it with the application by calling app.use(cors). This tells the server application to pass the context references to each module in the HTTP pipeline.

Now that we understand how to include the middleware module, we can take a look at the actual implementation below. We start by exporting the module definition and implementing the RequestHandler interface contract, which enables the module to receive the HTTP pipeline context with the request, response, and next references. The next reference lets the pipeline continue executing; it is what every middleware module should call and return to chain the remaining middleware modules in the pipeline.

module.exports = (req, res, next) => {
  if (req.headers.origin) {
    // add the remote domain or only the environment
    // listed on the configuration
    res.setHeader(
      "Access-Control-Allow-Origin",
      process.env.domain || req.headers.origin
    );
  }

  // Request methods you wish to allow
  // remove the methods you wish to block
  res.setHeader(
    "Access-Control-Allow-Methods",
    "GET, POST, OPTIONS, PUT, DELETE"
  );

  // Request headers you wish to allow
  res.setHeader(
    "Access-Control-Allow-Headers",
    "Origin, X-Requested-With, Content-Type, Accept, Authorization"
  );

  // Set to true if you need the website to include cookies in the requests sent
  // to the API (e.g. in case you use sessions)
  res.setHeader("Access-Control-Allow-Credentials", true);

  // respond to the preflight request with a success status
  if ("OPTIONS" === req.method) {
    return res.sendStatus(200);
  }

  return next(); // Pass to next layer of middleware
};

What is the module really doing?

The module basically intercepts the request from the HTTP pipeline to add the headers that can enable a client application to send a request from another domain. 

The Access-Control-Allow-Origin header is added to the response to include the remote domain. This is the area where we can whitelist some domains and not allow others. In this example, we are just echoing back the remote domain, which should not be the case for a production app. The same approach is taken with the Allow-Methods and Allow-Headers values that our application supports. For example, to block the DELETE method, we simply do not add that value to the header.
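To avoid echoing back any origin, the check can be factored into a small helper. This is a minimal sketch; the allowed domain list is illustrative and would normally come from configuration:

```typescript
// illustrative whitelist; in practice this would come from configuration
const allowedOrigins: string[] = [
  "http://localhost:8080",
  "https://app.example.com"
];

// return the value to use for Access-Control-Allow-Origin,
// or null when the calling origin is not whitelisted
function resolveOrigin(origin: string): string | null {
  return allowedOrigins.includes(origin) ? origin : null;
}
```

Inside the middleware, the header would only be set when resolveOrigin returns a value, instead of trusting req.headers.origin unconditionally.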

The other interesting header to include or exclude, based on your application requirements, is the Access-Control-Allow-Credentials header. It is used when you want the client application to send credential information, such as cookies or authorization tokens, back in the headers. This is useful for session state management and token-based authorization.


It is important to understand that when we run an application and APIs on different domains, there are security concerns we need to plan for. Handling them properly enables our apps to safely use remote APIs and helps prevent malicious attacks on our servers. With NodeJS, building middleware components to handle this concern is fairly simple. Most of the effort is in understanding how the HTTP protocol works so we can handle this integration.

Please let me know your feedback, and  I will add clarification to the article.

Thanks for reading



TypeScript Interface JSON Type Inference

The JavaScript Object Notation (JSON) is used to build complex data structures or documents, which makes the data model easy to manage with JavaScript. The biggest benefit of JSON is that it is schemaless and uses dynamic data types, which provide the flexibility to support changes in the data model. This flexibility, however, has a drawback: by not using strongly typed models, we can run into runtime errors because of the lack of static (compile-time) verification of some of the types.

TypeScript is a typed superset of JavaScript that encourages the use of strongly typed data models. By enabling us to create classes and interfaces with well-defined types, we can have type inference from JSON and static verification of the code. Even though this is ideal to have, it deviates from the principles of dynamic data types that JavaScript uses so well. This poses a new challenge to developers, as we now have to understand how to map a JSON document into a TypeScript interface. To help us understand this better, let’s take a look at a JSON document and define the interface that can enable our code with type inference and static type verification.

JSON Document 

We start the process by looking at a vehicle inventory JSON document.  The document is a list of vehicles that have these properties (year, make, model, sold and new).  By inspecting the document visually, we can infer the data types like string, number, date, and boolean, but in the code, we need to provide annotations to be able to support type inference.

var list = [{
  'year': 2018,
  'make': 'nissan',
  'model': 'xterra',
  'sold': '2018-07-01'
}, {
  'year': 2018,
  'make': 'nissan',
  'model': 'altima',
  'new': true
}, {
  'year': 2018,
  'make': 'nissan',
  'model': 'maxima',
  'new': true
}];

Building the Vehicle Interface

In order to build our interface,  let’s start by looking at all the possible properties that are available on the document.  By looking carefully, we can notice that there are three main properties available on all the records (year, make, model). There are other optional properties that may be present (sold, new). The optional properties are nullable types as they may exist for a record or not. With that information we can create this interface:

interface Vehicle {
  year: number;
  make: string;
  model: string;
  sold?: Date;
  new?: boolean;
}

We should notice that we define strongly typed properties with string, number, date, and boolean types.  We should also notice that the sold and new properties have a question mark (?) notation which is used to denote the nullable types.

Using the Interface 

Now that we have the interface for each object in the list, we need to define a variable that we can use to assign the JSON document. If we recall, our JSON is an array of vehicles. Each vehicle can be accessed via a numeric index. We can use our interface to map our JSON document with the following code snippet:

const inventory: Vehicle[] = list as Vehicle[];

inventory.forEach((item: Vehicle) => {
  console.log(typeof(item.year), typeof(item.make), typeof(item.sold));
});

In our snippet, we assign the JSON document as an array of Vehicles. We then iterate the list and print the type for some of the properties.


By adding the interface, we can now support type inference and statically validate the code. If for some reason the JSON model changes one of its properties to a different type, the mapping will no longer be valid, and an error will be raised during the compilation of the code.
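As a small sketch of the kind of mistake static verification catches (the changed property value here is hypothetical):

```typescript
// hypothetical change: 'year' now arrives as a string instead of a number
const changed = [{ year: '2018', make: 'nissan', model: 'xterra' }];

// the assignment below would now fail static verification at compile
// time, before the code ever runs, because the object shape no longer
// sufficiently overlaps with the Vehicle interface:
//
//   const bad: Vehicle[] = changed as Vehicle[]; // compile-time error
```

Without the interface, this mismatch would only surface as unexpected behavior at runtime.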

With this article, we want to describe the process of creating interface annotations when it comes to mapping JSON documents. We also want to describe the benefits of type inference and static code validation in TypeScript, so we can avoid runtime errors in the code.

I hope this was helpful and provides some insights on how to map JSON documents to a TypeScript interface.  



Angular NgFor binding to Iterables Explained

When building Angular components, we often use data structures like arrays and hashtables to build a collection of objects. This enables us to use HTML templates to iterate the collection and display the object properties using the ngFor directive. It is here that we come across a behavior that seems to confuse some developers. In some cases, ngFor raises the “only supports binding to Iterables” error when iterating over these collections. The question is: why do we get this error?

To provide an answer, we must first talk about the differences between an array and a hashtable. Let’s first look at the following code snippets:

interface Vehicle {
   id: number;
   year: number;
   make: string;
   model: string;
}

interface VehicleList {
   [index: number]: Vehicle;
}

// sample data for illustration
inventoryArray: Vehicle[] = [
   { id: 1, year: 2018, make: 'nissan', model: 'altima' },
   { id: 2, year: 2018, make: 'nissan', model: 'maxima' }
];

inventoryList: VehicleList = {
   1: { id: 1, year: 2018, make: 'nissan', model: 'altima' },
   2: { id: 2, year: 2018, make: 'nissan', model: 'maxima' }
};

In the previous code, we first define the Vehicle interface. This is the object that we display on the template. We also define a VehicleList interface, which provides us the ability to use a hashtable with a number as the key, as we are using the id property.

Once the interfaces are defined, we can create collections with two different data structures. We need to look at this in detail as the differences may not be too clear. We first declare the inventoryArray which is of type Array of Vehicles (items in an array [] square brackets). We also create the inventoryList (items in an object notation {} brackets) which is an object that has keys matching the vehicle id.

Both of the collections have the same objects, but the way to iterate them is different. Let’s take a look at the concept a bit closer.

What are Iterables?

An iterable is a data structure that allows access to its elements sequentially by providing an iterator, which acts as a pointer to the elements. Arrays support this protocol, so we can access their elements using a for...of (notice, not a for...in) loop. Hashtable entries are accessible as object properties, so plain objects are not natively iterable.
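A quick sketch illustrates the difference; the vehicle values are illustrative:

```typescript
// an array: implements the iterable protocol, so for...of works
const inventoryArray = [
  { year: 2018, make: 'nissan', model: 'altima' },
  { year: 2018, make: 'nissan', model: 'maxima' }
];

// a hashtable (plain object): entries are properties, not iterable items
const inventoryList: { [id: number]: { year: number; make: string; model: string } } = {
  1: { year: 2018, make: 'nissan', model: 'altima' },
  2: { year: 2018, make: 'nissan', model: 'maxima' }
};

const models: string[] = [];
for (const car of inventoryArray) {
  models.push(car.model); // works: arrays are iterable
}

// for...of over inventoryList itself would throw
// "inventoryList is not iterable"; Object.values bridges the gap
const listModels: string[] = [];
for (const car of Object.values(inventoryList)) {
  listModels.push(car.model);
}
```

This is the same distinction ngFor runs into: it can walk the array directly, but not the plain object.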

Angular ngFor uses the for...of implementation to iterate the elements. This is why when an object is used with that directive, the error “only supports binding to Iterables” is raised. We can see that by looking at the template implementation in which we use the ngFor directive with the component inventory property.

<table>
  <tr *ngFor="let car of inventory">
    <td>{{car.year}}</td>
    <td>{{car.make}}</td>
    <td>{{car.model}}</td>
  </tr>
</table>

export class HomeComponent {
  // the collection bound to the template; must be an iterable (array)
  inventory: Vehicle[] = [];
}

Now that we understand more about iterables and the for...of loop, we can take a look at our code and identify areas where this problem can surface. If we work with object properties instead of arrays of objects, how can we address this problem without having to refactor a lot of code? We can do this with a simple approach in the component code. Let’s review that solution.

Component Approach 

The approach here is to transform the hashtable data structure into an array. This can be done by using the Object.values method, which returns the object property values without the keys. This essentially changes the data structure to an array, which we can assign to the inventory property that is used on the template to display the data.

// hashtable keyed by the vehicle id (sample data for illustration)
this.inventoryList = {
  1: { id: 1, year: 2018, make: 'nissan', model: 'altima' },
  2: { id: 2, year: 2018, make: 'nissan', model: 'maxima' }
};

// transform the hashtable into an array that ngFor can iterate
this.inventory = Object.values(this.inventoryList);


As we implement new solutions, we need to be mindful of the framework specifications and define our data structures in a way that is compliant with the framework. When we try to refactor code from previous frameworks like AngularJS, we need to identify some of the areas that can cause problems and refactor them tactically with minimum code changes. 

Thanks for reading.

Originally published by ozkary.com