Cloud comes in many Forms, Sizes & Shapes

There is a cloud for everything these days! In fact, a form of cloud we all know is called "the Internet". But wait, doesn't that mean the cloud has been around for a very long time already? Then why so much hype around it now? That is the intent of this article: to discuss what the cloud means to its different consumers. The "cloud" we know today has grown organically and at a very fast pace, so there are varied opinions out there about what the cloud actually is. My goal here is to build some common perspective on the topic.

What cloud is and is not

The cloud is not one entity where users go to seek services (although it is true that all large players provide a variety of services under one roof). Speaking of services, back in my early days of web development, the services I needed most were servers and DNS. We called it "hosting" back then (ring a bell?). That was a very popular form of cloud, and it still exists today. Today there are service providers who perform payment transactions without requiring you to store customer credit card data. In another case, you would write your application logs to a third-party provider who retained them so you could later run analytics on the errors encountered, eventually allowing you to improve your services to your customers. Even as end consumers, most of us store pictures from our mobile phones on some sort of cloud. Heck, we even watch movies from the cloud!

The point here is that the cloud is not just hardware/infrastructure, nor is it just software or a bunch of applications hosted online. If an enterprise application is composed of multiple layers, such as Infrastructure; Platform; Application Services & Middleware; and finally the User Interface, then each layer can be thought of as a cloud. That is because each layer provides some sort of service to the layer above it. For example, Infrastructure provides computing services to the Platform; the Platform provides the basic application components needed by Application Services; and Application Services & Middleware execute the business functions and rules that are accessed from some sort of UI. Along those lines of thought, each layer can act like a black box to the layer above. That black box is usually so self-contained that innovative services can be built around it. This is where the terms of the form "X as a Service" (commonly denoted "XaaS") come from. There are service providers who offer one or a combination of services labelled IaaS (Infrastructure as a Service), PaaS (Platform as a Service), or SaaS (Software as a Service). The XaaS concept has become so popular that almost anybody providing a black-box service comes up with one or another substitution for the 'X'. As an example, BPO (Business Process Outsourcing) providers have begun labeling their services BPaaS (Business Process as a Service).

Thus, as difficult as it is to arrive at one common definition of the cloud, I think one general understanding will make it easier for us to consume any cloud: that of a black box. A black box is commonly defined as –

Device, process, or system whose inputs and outputs (and the relationships between them) are known, but whose internal structure or working is (1) not well, or at all, understood, (2) not necessary to be understood for the job or purpose at hand, or (3) not supposed to be known because of its confidential nature.


Yes, that indeed sounds like the cloud, because the cloud, too, is most often assumed to be the silver bullet for almost anything that nobody wants to do or understand, yet with guarantees in place so that it continues to work as designed.

So, what makes cloud – Cloud?

It would be unfair not to place credit where it is due for the form into which the cloud has evolved today.

  • Recent advancements in infrastructure virtualization have allowed physical infrastructure to shrink while virtual infrastructure grows. A common measure of success when implementing infrastructure as a cloud is "oversubscription".
  • The move from Web 1.0, which was mostly static in nature, to Web 2.0, which enabled application integration, made it possible on one end for service providers to build the web as a platform, and on the other end for service consumers to break down their monolithic applications into modular, service-oriented applications. Refer to the great article on O’Reilly Media.
  • As I mentioned early on, the Internet and its widespread adoption cannot be forgotten. In my opinion, the spread of the internet from wired connections to over-the-air, always-on connections made the most impact. Certainly, there is a chicken-and-egg debate over whether the internet pushed mobile development or vice versa, and we can leave that discussion for another time.
  • I’m not sure how widely this will be accepted, but open-source software also deserves credit, as it allowed brainpower from around the globe to solve some hard problems and, most importantly, made the solutions accessible at large scale. An example of this is the Chrome V8 JavaScript engine, which led to the development of today’s fastest-growing server-side JavaScript framework, Node.js, and its online package management repository, npm.
  • Last but not least, credit is also due to the early adopters who pushed technology to create innovative cloud-based offerings, whether for the development community or for end consumers.


The cloud may mean a lot of different things to different people. A lot of services can be provided via the cloud, so it also serves as a platform for service providers to reach a wider audience. However, if there is one thing I would like to leave you with, it is that the cloud has many use cases for both service consumers and service providers. In the modern world, we all use the cloud … and sometimes we don’t have a choice!


Role of no-SQL Databases in Modern Applications

Data models drive how efficiently an app can be built, and also how easily it can be modified to incorporate new features. They are the bridge between the "real-world" problem being solved and the software world that has to emulate that real-world behavior. (Yes, software can do wonders, but no one needs software that does not simplify a user’s life!) This article discusses the role that no-SQL databases play in modern software applications (a.k.a. apps).

The Drivers

Over the last few years, there has been a huge shift in the choice of application development stacks. Traditional WAMP and LAMP stacks are being replaced by stacks like MEAN, CEAN, etc. There are many reasons for this shift. The fundamental reason, I believe, is what we expect from the modern web. Expectations of web applications are no longer limited to information delivery; information processing and content analysis are a key part of our interactions with web applications today. This is often referred to as Web 2.0. Expectations for the future go much further, as smart devices and sensors connect to the internet and leverage data generated by application users to provide intelligent, value-added interactions (a.k.a. Web 3.0).

This paradigm shift in web applications demands rich data. At the same time, making data available for consumption is equally important (how unavailable data hampers the expected user experience and application development is a topic for another day!). What is worth mentioning here is that most user-oriented applications consume and process data from multiple sources. As an extreme example, a travel booking site might rely on flight data and ticketing from multiple airline companies, credit card processing from another third party, and itinerary publishing to yet another place, not to mention that it may also let users share their booking experience on social media right from the web application as part of the user’s end-to-end experience.

Well, I still haven’t brought it all together for you, have I? You may still be wondering how no-SQL plays into all of this. So, let’s get there! First, though, it is important to understand the change that leads us to embrace new solutions: in most cases they are created either because they are very much required today, or simply because they are possible today in comparison to yesterday (blame it on the evolution of computer science!).

No-SQL allows complex structure

SQL databases are structured, but they also impose some rigidity in handling application requirements due to key fields, foreign-key relationships, normalization techniques, etc. For example, a customer order object is often split into header and details tables in a normalized structure. No-SQL, on the other hand, can hold both header and details in a single structure. So, although the structure of the data model may be complex, it can be built to closely resemble the "real-world" entity. Of course, the cost of this ability is that data integrity management is pushed up into the application layer.
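
As a quick illustration (the field names and values here are made up), the same order that a normalized SQL schema would split across a header table and a details table can live in a single no-SQL-style document:

```javascript
// One document holding the order "header" and its "details" together.
// In a normalized SQL schema this would typically be an ORDER_HEADER row
// plus several ORDER_DETAIL rows joined back via a foreign key.
var order = {
  orderId: 1001,
  customer: "Jane Doe",          // header fields
  orderDate: "2016-01-15",
  items: [                       // detail rows, nested in place
    { sku: "BX109", quantity: 2, price: 10.0 },
    { sku: "BX112", quantity: 1, price: 25.5 }
  ]
};

// The whole "real-world" entity is available without a join:
var itemCount = order.items.length; // 2
```

Note that what a foreign key would have enforced in SQL now has to be validated by the application itself, which is exactly the trade-off described above.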

No-SQL aligns with REST based architecture

If you have used web services or APIs, chances are you default to JSON for the API responses (if not, you should try it!). No-SQL databases such as MongoDB and CouchDB store data as JSON-style documents. This makes coding API responses much easier compared with receiving them as arrays (which SQL database drivers usually return). With the higher adoption of APIs leading to highly integrated applications, no-SQL databases fit very well in terms of storing, providing, and consuming information.

No-SQL brings scalability

No-SQL databases were designed from the ground up as multi-node databases, which gives them great scalability characteristics. For example, MongoDB can currently scale out to over 100 nodes spread across different data centers or locations. Many no-SQL databases have also started supporting data partitioning over multiple nodes, which helps selectively scale larger data sets with more computing resources while reducing the need for unnecessary replication, and thus data duplication.

These are not all of the differentiators, but I feel they are some of the key ones right now, even as SQL databases catch up (which they are, by the way!).

So, To-SQL or Not-to-SQL?

That really depends on the application and the use case. No, it really does, because of many factors, such as:

  • The development tools and technology may not support no-SQL databases (yet!)
  • The preferred vendor in your organization (preferred for many reasons, such as a strategic partnership) may still be a traditional SQL database
  • The preferred database vendor may provide some no-SQL-like features in its traditional database that satisfy your current application needs
  • The data models may be such that the choice really makes no difference
  • Your position on open source (from an enterprise-support point of view)
  • Your people (developers, testers, etc.) may not be skilled or trained yet

Thus, it is an architectural decision for your application as to what kind of database, and even which specific database, you should use. Sometimes, being cutting-edge just because… may turn out to be an expensive affair. By no means is the intention of this article to influence that choice; it is more to raise awareness of this "new", widely accepted option (no-SQL) and mainly to highlight the role no-SQL databases play in modern applications.


It is important to understand the expectations of today’s web applications while adopting modern technologies. Balancing the trendiness of no-SQL with the requirements, roadmaps, and expectations of users (especially for direct user interaction) is very important. Finally, remember: requirements drive data models, and data models drive the choice of SQL or no-SQL. There is no wrong answer for what one may be trying to achieve!

Quickly create a dropdown of world currencies on your HTML form

The Code

You can use the following utility JavaScript function to list all currencies in a dropdown that can be placed on your HTML page.

Note: This requires the jQuery library to be imported on your page.

// HTMLelement is the id of the <select> tag on the HTML page.
function listCurrencies(HTMLelement) {
  $.ajax({
    type : "GET",
    url : "", // URL of the currency-list API goes here
    success : function(curr) {
      var options = "";
      for(var i=0; i<curr.length; i++) {
        options += "<option value='"+curr[i].currency+"'>"+curr[i].currency+" ["+curr[i].name+"]</option>";
      }
      $("#" + HTMLelement).html(options); // populate the dropdown
    }
  }); // ajax - get currencies
} // listCurrencies()


The HTML code would look something like this.

<script src=''></script> <!-- script containing listCurrencies() -->
<select id="currencies"></select>
<script>
  listCurrencies("currencies");
</script>






  • Programmable web for the API
  • Open Exchange Rates provides alternative APIs (signup required). However, using their APIs requires changing the code in the listCurrencies() function

Creating Smart Initiatives with Internet of Things


Unlike the other articles on this blog, which are mainly geared towards coding and code snippets, this article discusses some of the key angles in creating Smart Initiatives with the use of IoT technologies.

There has already been a lot of development around Smart Homes via gadgets like switches, lights, thermostats, leak sensors, and lately even refrigerators. IoT is also a widely discussed topic in industrial automation. There is plenty of buzz about the Internet of Things (IoT) and some exciting concepts around Smart Cities. And why shouldn’t we look for opportunities to improve citizens’ lives? Luckily, many governments are already taking a vested interest in delivering these improvements to their residents.


Typically, the availability of the Internet, devices (including the sensors within them), and communication tools are by far the key ingredients of any Smart Initiative.

Always-available internet ensures constant data flow between devices, sensors, and consumers. Devices that can seamlessly connect and stay connected to the internet ensure that the required data points are accurately captured and delivered on time. By the way, it is also becoming critically important that these devices use low power. Communication tools and methods provide frameworks for developing various models of data transmission, such as on-demand connections, message queuing, etc.

Finally, what makes all of this "Smart" are the use cases and their execution. For example, just because it is possible, what improvement to the quality of life would a Smart Toaster or a Smart Coffee machine provide? I would argue there is none! However, Smart Lights that turn themselves on and off, or Smart Thermostats that set themselves based on the time of day and weather conditions, are a genuine value add to the quality of life. How many times have we forgotten to turn the lights off before leaving home, or failed to set the right temperature in winter and ended up with burst water pipes? Not having to deal with that day to day is an improvement in quality of life, not to mention the financial benefits that come from using smart devices.

So, is anything that is Automated, Smart?

A few years ago, as the "Smart" concepts were being coined, I used to wonder how Smart differed from automation. Or were they one and the same thing?

Well, let’s take a factory. Factories have machines. Some machines are fully automated (e.g., CNC machines) and require attention only on an exception basis. So if they are already automated, are they Smart? If not, what keeps them from being Smart?

In my opinion, what differentiates automation from Smart is simply the amount of metric data collected and the methods by which it is analyzed and used in making decisions or taking actions (by humans as well as by the machines themselves).

A digital (non-internet-connected) thermostat in our homes can automatically start and stop the heater or air conditioner based on the sensors built into the unit. We can tell that is automation right away. But it is definitely not Smart, as it cannot easily store and analyze historical data. We cannot know whether it is on or off from far away, i.e., without physical inspection. We cannot control it remotely beyond what was pre-programmed. Bottom line: the digital thermostat in this example improves our quality of life minimally, by letting us read digital numbers from an LCD panel. An internet-connected thermostat, on the other hand, because of its ability to fill the gaps mentioned above, drastically improves our quality of life.

Thus, Automation and Smart are not the same!


The internet provides a very big boost to our ability to develop Smart Initiatives. The internet already has a reputation for breaking down barriers related to knowledge and information, and this benefit extends to Smart Initiatives as well. Concepts like Smart Cities, Smart Homes, Smart Offices, and (although not an official term) Smart Industries are possible today and are only going to grow in the future. The possibilities are endless, and it all comes down to ideas. Speaking of ideas, imagine the power your Smart Thermostat would deliver if it could not only read data from its sensors and save it for your analytics, but also call external weather APIs to adjust its settings based on outside conditions. That would certainly improve our quality of life and is worth calling a Smart Initiative.

Control humidity with Node.js on Raspberry Pi

The End Game

I won’t claim this is an IoT (Internet of Things) article just because you are going to read about a Raspberry Pi. But I will certainly call it an internet automation project that solves one small problem without my needing to take any action.

Throughout this article I will provide references to external resources I found useful, which will let us get quickly to the Node.js and JavaScript part of the project.

The Problem Statement

With negative winter temperatures, we use heaters to maintain a cozy, warm temperature in our homes and offices. However, closed and artificially heated spaces lower humidity levels. We use humidifiers to resolve that! In the context of this article, I am referring to portable humidifiers that plug into an electric outlet.

My goal was to automatically turn the humidifier ON or OFF based on the humidity or temperature levels of the room.

How to even achieve this?


This can be achieved simply if we can automatically control the electric outlet (or switch) the humidifier is connected to. So, we need an outlet that can be controlled, preferably via the internet. For that purpose I used one such internet-enabled switch, the Wemo Switch from Belkin.

(Please note: Most portable humidifiers also come with an automated switch or a timer, but the idea here is to DIY. Please keep that context in mind as you read further.)

IFTTT provides a platform called Maker that allows (you will obviously need to sign up for their service):

  1. Connecting Wemo Switch to them
  2. Controlling the Wemo Switch state using the Maker API (created specifically for your account and devices)

We also need sensors connected to the Raspberry Pi. Instead of getting individual sensors and wiring them up, I strongly recommend the Raspberry Pi SenseHAT, which is a "hat" that sits on top of the Raspberry Pi and comes with a lot of sensors already installed. It also provides a rich Python library for getting sensor readings.

Finally, the Raspberry Pi needs to be connected to the internet as well. This is easily achieved with any WiFi dongle that plugs into a Raspberry Pi USB port. Of course, WiFi needs to be configured on the Raspberry Pi (this can be done from the Linux command line).


Since the Raspberry Pi can run an operating system such as Linux or Windows, we can easily install Node.js on it. Node.js provides the core backbone for getting readings from the sensors and calling APIs to control the state (ON or OFF) of the Wemo Switch via IFTTT Maker.

As mentioned earlier, we will also rely on the Python scripts provided for the SenseHAT. We will not cover the Python scripts in this article, as they are already available in the SenseHAT API reference. We will, however, cover how to invoke these Python scripts from the Node.js application.

Node.js App

The app performs three stages of work:

  1. Invoke the python scripts that read the sensor data (in this case temperature, but it could also be based on humidity) at frequent intervals
  2. Assess the reading against condition that would lead to whether Wemo Switch needs to be turned ON or OFF
  3. Call the IFTTT Maker API discussed earlier to actually turn the connected Wemo Switch ON or OFF

Let’s look at the code for each of these stages below:

Stage 1: Get sensor readings

Node.js provides a module called child_process with a method called exec for executing operating-system commands. This stage relies on that module to invoke the Python script. Remember, because we want to invoke the Python script at a regular frequency, we wrap the call in JavaScript’s setInterval function.

setInterval(function() {
   var command = "python get_temperature.py"; // placeholder name for your SenseHAT reader script
   require('child_process').exec(command, function(error, stdout, stderr) {
      if (error == null) {
          var data = stdout.replace("\n","");
          // ... Call Stage 2
      }
      else {
        console.log("Error occurred. " + error);
      }
   }); // child_process.exec
}, 60000); // frequency = 60 seconds

Stage 2: Assess Condition

This is a simple comparison of the reading obtained in Stage 1. However, this is also where you can get fancy with advanced features: storing the data in a database, keeping the last few readings, determining whether the readings are consistently increasing or decreasing, etc. For the sake of this article, we will evaluate the condition based only on the latest sensor reading.

// assumption: we are looking for temperature from Stage 1
var YOUR_API_KEY = "?????"; // obtain this from IFTTT
var wemoState = "off"; // or as defined in IFTTT Maker
if(data > 35) { // deg. Celsius is what the SenseHAT API returns
    wemoState = "on";
}
// ... Call Stage 3
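
As a hedged sketch of one such refinement (the names and window size here are illustrative), the last few readings could be kept in a sliding window so that the switch only toggles on a consistent trend rather than on a single noisy reading:

```javascript
// Keep the last WINDOW readings and report whether they are strictly rising.
var readings = [];
var WINDOW = 5;

function addReading(value) {
  readings.push(value);
  if (readings.length > WINDOW) readings.shift(); // drop the oldest reading
}

function isRising() {
  if (readings.length < WINDOW) return false; // not enough history yet
  for (var i = 1; i < readings.length; i++) {
    if (readings[i] <= readings[i - 1]) return false;
  }
  return true;
}
```

Stage 2 could then flip wemoState only when isRising() (or a falling equivalent) holds.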

Stage 3: Call IFTTT Maker API

As mentioned earlier, we have not covered how to create the IFTTT Maker applet that controls the state of a device registered with IFTTT (or calls yet another API). You can refer to the Maker reference or this blog.

That said, to actually call the API from Node.js, we will use the https module, which provides a request method.

var makerAPI_host = ""; // IFTTT Maker host
var makerAPI_path = "/trigger/"+wemoState+"/with/key/"+YOUR_API_KEY;
var https = require('https');

var optionsget = {
  method : "GET",
  host : makerAPI_host,
  port : 443, // https
  path : makerAPI_path
};

var reqGet = https.request(optionsget, function(resp) {
  var str = "";
  resp.on('data', function(d) { // data chunk
    str += d;
  });
  resp.on('end', function() { // all data sent
    console.log(str); // We are Done!
  });
});

reqGet.on('error', function(e) {
  var error = {
    message : "Error occurred",
    error : e
  };
  console.log(error);
});

reqGet.end(); // send the request

Alternatively, we can use the same child_process.exec() method from Stage 1, as below.

var callAPI = "curl -X GET https://" + makerAPI_host + "/trigger/" + wemoState + "/with/key/" + YOUR_API_KEY;
require('child_process').exec(callAPI, function(error, stdout, stderr) {
   if (error == null) {
      console.log(stdout); // We are Done!
   }
   else {
      console.log("Error occurred. " + error);
   }
}); // child_process.exec


Further Scope

To summarize: while the scope of this article was to demonstrate the use of Node.js on a Raspberry Pi with various Node.js modules, the DIY project also leaves scope for further expansion. One reuse I envision is utilizing the same apparatus to control any other internet-connected device, such as a Nest Thermostat, to create many custom and fancy internet automation DIY projects.

Please feel free to comment, share your experience, provide feedback or call out if anything obvious was missed or if any additional perspective needs to be covered.


Recursion to the Rescue in Asynchronous JavaScript

Callbacks within Loops

While callbacks provide the ability to serialize execution, which is sometimes required in the asynchronous model JavaScript offers (e.g., Node.js), they can also get daunting very quickly. Throw a loop (such as a for-loop or a while-loop) into the callback hell and you have a completely elevated level of complexity.

There are already a lot of alternatives for simplifying callback hell. One popular solution is the use of Promises. There are also some good discussions on this topic on Quora.

This article is one such attempt to simplify callbacks that are called within loops (such as a for-loop or while-loop). The problem, specifically, is that if an asynchronous function is called within a loop, each iteration executes asynchronously. If further processing is required after the entire loop has executed, it becomes difficult to track when the loop is done. Why? Because the last iteration may complete while an earlier iteration is still running. So just tracking the last iteration is not good enough!
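
A small sketch of the problem (the delays are made up to force out-of-order completion): the callback for the last index can fire before the earlier ones, so "did the last iteration finish?" tells us nothing about the loop as a whole:

```javascript
// Simulated async call: higher indexes respond sooner here.
function fakeAsync(i, cb) {
  setTimeout(function () { cb(i * 2); }, (5 - i) * 10);
}

var results = [];
for (var i = 0; i < 5; i++) {
  fakeAsync(i, function (value) {
    results.push(value);
    // Results arrive in reverse order (8, 6, 4, 2, 0), so seeing the
    // value for i = 4 does not mean the other iterations are done.
  });
}
```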

Before we look at how we can use recursion in this situation as one possible alternative, let’s take a quick dive into Recursion.

A little-bit about Recursion

Divide and conquer is perhaps the most common strategy for solving complex programming problems with optimal efficiency, and recursion is one method in the divide-and-conquer family. Recursion reduces a large problem into smaller, simpler problems, the idea being that solving each small unit eventually solves the larger original problem. Recursion is achieved by writing recursive functions, which we will look at in a minute.

Recursive Functions (quick refresher)

To put it simply, a recursive function is one that calls itself, first checking whether a base condition has been reached. If not, the problem is chopped down further by calling itself again. Let’s look at an example that adds the numbers from 1 to 10.
Hint: Recursion involves a base case, the final case in the recursive function, after which the recursion is complete and the problem has been solved.

var add = function(n, sum) {
  if(n < 1) {
    return sum;
  } // base case
  else {
    sum = sum + n;
    return add(n - 1, sum); // recursive call
  } // chop down problem further
};

// calling recursive function
var result = 0; // this is where the final result will be stored
result = add(10, result); // result = 55

Just take a few minutes to look at where we applied the “Divide” from the Divide & Conquer.

What we did here is divide the larger problem of adding all numbers between 1 and 10 into the smaller problem of performing only one addition per function call, decrementing the number to be added by 1 before passing it to the subsequent call.

Similarly, at the start of each function call, we determined whether the base condition had been reached, i.e., whether the number to be added was less than 1. When it was, we simply returned the result, i.e., sum, back to the caller.

Now, let’s get back to the Solution of the Asynchronous Loop Problem

The Solution via Recursion

Assume a requirement similar to pricing a shopping cart. When a product is added to the shopping cart, we need to do the following:

  1. Check, for each product in the cart, if the product and quantity are eligible for a promotion. If eligible, price it accordingly; else price as usual.
  2. When all products are priced, calculate the total.
  3. Calculate the tax based on the total.
  4. Finally, calculate the total cost of the shopping cart.

Below is the high-level code for achieving the above steps using recursion and callbacks. As mentioned earlier, the challenge is that steps 1 and 2 are performed for each product in the cart, while steps 3 and 4 are computed on the results of steps 1 and 2.

var calculatePrice = function(productid, quantity, callback){
  // ... check promotion eligibility, price accordingly, then:
  // callback(price);
};

var calculateTotal = function(total, productid, quantity, finalcallback, callback){
  calculatePrice(productid, quantity, function(price){
    total = total + price;
    callback(total, finalcallback);
  });
};

var calculateTax = function(total, tax_rate){
  return (total + (total * tax_rate));
};

// And here is the recursive function
function forEachProductInCart(cartIndex, shoppingCart, total, callback){
  if(cartIndex < 0) {
    callback(total); // all products priced
  } // base case
  else {
    calculateTotal(total, shoppingCart[cartIndex].productid, shoppingCart[cartIndex].quantity, callback, function(total, finalcallback){
      forEachProductInCart(cartIndex - 1, shoppingCart, total, finalcallback);
    });
  } // chop down problem further
} // forEachProductInCart: recursive function

/* .....................................................
And we call it here
..................................................... */
// var shoppingCart = [];
// shoppingCart contains all products added to the shopping cart

var subtotal = 0; // initial total cost
var cartIndex = shoppingCart.length - 1;

// We call the recursive function here
forEachProductInCart(cartIndex, shoppingCart, subtotal, function(total){
  var final_with_tax = calculateTax(total, 0.75); // final total with tax (0.75 is just an illustrative rate)
});
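
To see the pattern end to end, here is a minimal runnable sketch with a stubbed asynchronous pricing rule (the prices, the bulk promotion, and the tax rate are all made up for illustration):

```javascript
// Stub pricing: every product costs 10 per unit; 3 or more units of a
// product get a flat 5 off (a hypothetical promotion).
function calculatePrice(productid, quantity, callback) {
  setImmediate(function () {
    var price = 10 * quantity;
    if (quantity >= 3) price = price - 5;
    callback(price);
  });
}

function calculateTax(total, tax_rate) {
  return total + (total * tax_rate);
}

// Recursive walk over the cart: each call prices one product, then
// recurses on the next index; cartIndex < 0 is the base case.
function forEachProductInCart(cartIndex, cart, total, done) {
  if (cartIndex < 0) return done(total); // base case: all products priced
  calculatePrice(cart[cartIndex].productid, cart[cartIndex].quantity, function (price) {
    forEachProductInCart(cartIndex - 1, cart, total + price, done);
  });
}

var cart = [
  { productid: 'A1', quantity: 1 }, // 10
  { productid: 'B2', quantity: 3 }  // 30 - 5 = 25
];

forEachProductInCart(cart.length - 1, cart, 0, function (total) {
  console.log(total); // 35
  console.log(calculateTax(total, 0.2)); // subtotal plus 20% tax
});
```

Even though each calculatePrice call is asynchronous, the total only reaches the final callback once every product has been priced, which is exactly what the naive loop could not guarantee.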


JavaScript already provides a solid asynchronous framework for building highly complex web applications. It also provides the ability to create serialized execution logic via "callbacks". Add in algorithms like recursion, and what you have is the ability to solve just about any complex computing requirement.

Please feel free to comment, provide feedback or add if anything obvious was missed or if any additional perspective needs to be covered.

Object Oriented JavaScript

For a long time, JavaScript has served as an event-based scripting language, and its object-oriented capabilities often get left out (or ignored when it is used as a client-side scripting language). In this blog we are going to look at the OOP side of JavaScript. You are expected to have some basic understanding of instantiating an object, calling its methods and properties, variable scope, etc.


Class

This is probably the simplest section of this entire blog, because defining a Class is as simple as defining a function. In other words, a function can be instantiated using the well-known new operator. Below is an example of a JavaScript Class called Person, which is instantiated as the object johndoe.

function Person() {
  // ... later
}

var johndoe = new Person();

Attributes & Methods

Let’s build upon our class Person here.

function Person() {
  this.age = 0;     // attribute: age
  this.gender = ""; // attribute: gender

  this.setAge = function(a) { // set age
    this.age = a;
  };
  this.setGender = function(g) { // set gender
    this.gender = g;
  };
}

var johndoe = new Person();
johndoe.setAge(30);        // illustrative values
johndoe.setGender("male");

In this example, Person has 2 attributes, namely age and gender. The this operator is no different here than in any other OOP language: it signifies that the instantiated object retains these attributes and methods in its own scope.

Further, johndoe also contains methods, namely – setAge() & setGender().

Finally, we are setting the age and gender for our object johndoe using the methods setAge() & setGender().


Object

You have already seen above how we created an object by instantiating a function with the new operator. So there is nothing much left to say about an Object, you’d think! But this is where I intend to spend some time clarifying how JavaScript thinks of Objects.

Defining a Class with attributes and methods that are instantiated for each object is the obvious nature of OOP, and is very valid when developing an OO application. However, the concept of an Object is slightly extended in JavaScript. Let’s look at some examples:

/* 1.*/ var box = {id:'BX109', width:10, height:5, length:15};

/* 2.*/ var boxIDs = ['BX109', 'BX112', 'BX124'];

var Box = function(id) {
  this.id = id;
  this.width = 0; this.height = 0; this.length = 0;
  this.setDims = function(w,h,l) {
    this.width = w; this.height = h; this.length = l;
  };
};

/* 3.*/ var bx109 = new Box('BX109');
bx109.setDims(10,5,15);

Each of the declarations labelled 1, 2 and 3 above creates an object. Yes, even an array is an object! We can validate that with the typeof operator. Let’s see below:

typeof box; // object
typeof boxIDs; // object
typeof bx109; // object

Now, although all 3 cases are objects, only bx109 contains attributes and methods. (Remember, we are ignoring Box in this discussion only because it is a Class or, in real JavaScript terms, a function. Try typeof on it yourself!)
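
Trying that out is a one-liner (the Box definition repeats the class from above):

```javascript
// A "Class" is just a function, so typeof reports accordingly,
// while an instance created with new is a plain object:
var Box = function (id) { this.id = id; };

console.log(typeof Box);              // "function"
console.log(typeof new Box('BX109')); // "object"
```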

Another interesting point to note: among the 3 objects, only boxIDs has a length property. That is because an array is an object with multiple elements, each of which can itself be an object. Want an example? Here you go:

var boxes = [
{id:'BX109', width:10, height:5, length:15}, // element 1: typeof => object
{id:'BX112', width:15, height:10, length:25}, // element 2: typeof => object
{id:'BX124', width:20, height:15, length:40} // element 3: typeof => object
]; // array: typeof => object, with length => 3

As you can see, boxes contains multiple objects (3 to be precise) and referencing each element will yield an object.
boxes[1].id will yield BX112. (Notice the ‘dot’ notation!)


The point of this article is that because:

  • JavaScript treats complex declarations within curly braces {} as objects (a.k.a. JSON)
  • it allows us to store an array of objects as an object itself, while
  • allowing us to instantiate function declarations (declared as classes, with attributes and methods)

developing large, complex applications becomes fairly easy. I am not comparing JavaScript with other powerful OOP languages, just highlighting the OO concepts within it.

Please feel free to comment, provide feedback or add if anything obvious was missed or any additional perspective needs to be covered.