Why I still use VIM (with exceptions)
Jun 18, 2017

Why do I still use Vim for most of my coding needs?

Well, I started off as a system administrator, and most of the time I would shell into a server and run my favorite editor.

I recall that when I was starting out with Linux, around 1997, I used the now-obsolete pico, and then a co-worker introduced me to vi.

Since vi worked well over the network with very little overhead, I also started writing C code on my local machine with it.

Many years later, I still use a variant of vi called Vim.

But okay, enough history about how I came to use Vim. To me, Vim is flexible: I can write PHP, Ruby, Python, Node and Go, and Vim flawlessly supports the needs of almost every language I want to write in, without my having to purchase an editor for each one.

For example, JetBrains (the company behind IntelliJ) sells PyCharm, PhpStorm and RubyMine, and I don’t want to buy IDEs that make me pay per programming language. Don’t get me wrong, they are GREAT IDEs and they help my co-workers manage their code, but I hate the idea of buying something I will use 10% of the time. (I write in various languages; I don’t stick to one.)

The one exception is Android Studio, which is built on IntelliJ and which I do use as an IDE (no choice, dude, it’s Java); there isn’t really another option that helps me write Java code well. (Even so, I dislike Android Studio, especially when it suggests the wrong UI variable, for example.) To be honest, with Java you have absolutely no choice but to use an IDE.

I could use Sublime Text, but I still have trouble getting the hang of it. Or maybe TextMate (it’s clean and nice, but it is macOS only, and I do use Linux as my workstation at times when I don’t have a Mac).

Maybe I will make Sublime my primary editor someday, but I still love working in a terminal because of its flexibility.

Structuring Grape API
May 24, 2017

Most of the information on the Grape gem (Ruby) is very limited, and it does not show you how to structure your API versions.

But some of you would probably ask me, “Why use the Grape gem when you have the new rails-api integration in Rails 5?”

As far as I understand, you would use rails-api (the gem, not the --api flag baked into Rails 5) when you also want server-side rendering, so that you have both an API and a web app running as one monolithic app. But I prefer the Grape gem, as it is more flexible (AFAIK) and has a great community.

Assuming you have the Grape gem installed in your Rails 5 app.
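
If Grape is not in your Gemfile yet, a minimal sketch of the entries could look like the following; the second gem is my assumption about where the Grape::Middleware::Logger used further down comes from, so treat it as illustrative and run bundle install afterwards.

# Gemfile
gem 'grape'
gem 'grape-middleware-logger' # assumed provider of Grape::Middleware::Logger used below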

Add this code into config/application.rb and restart your app:

    config.paths.add File.join('app', 'api'), glob: File.join('**', '*.rb')
    config.autoload_paths += Dir[Rails.root.join('app', 'api', '*')]

Now,

  1. Create the directory api in app
  2. Create a file called api.rb in the api directory.
  3. In this api.rb file you should have at least the code:
    class API < Grape::API
      insert_after Grape::Middleware::Formatter, Grape::Middleware::Logger
      format :json # required to set default all to json
      mount V1::Base => '/v1'
    end
    

In the future, if you decide you want a second version (i.e. V2), you can mount the new V2 code directly alongside V1.
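
For illustration, a hypothetical api.rb with a second version mounted might look like this (it assumes you create app/api/v2/base.rb with a V2::Base class, mirroring the V1 steps below):

class API < Grape::API
  insert_after Grape::Middleware::Formatter, Grape::Middleware::Logger
  format :json
  mount V1::Base => '/v1'
  mount V2::Base => '/v2' # hypothetical second version mounted alongside V1
end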

Next, create a v1 directory within the api directory: mkdir v1

Now, in the v1 directory, we will have base.rb, which is the root file where you specify (mount) all your API endpoints for V1.

Your base.rb should be configured like below:

module V1
  class Base < Grape::API
    mount Products::Data
  end
end

You can mount as many endpoints as you wish, but I am adding only one for demonstration.

Next, create a “products” directory (assuming you have a model called Product) WITHIN the v1 directory: mkdir products

In the products directory, create a file called data.rb with the following contents:

module V1
  module Products
    class Data < Grape::API

      resource :products do
        desc 'List all Products'
        get do
          Product.all
        end

        desc 'Show specific product'
        get ':id' do
          Product.find(params[:id])
        end
      end
    end
  end
end

That’s it! You can try the good old curl to test it out!

curl http://localhost:3000/api/v1/products
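
One detail the steps above gloss over: for the /api prefix in that curl call to resolve, the Grape API class also has to be mounted in config/routes.rb. A minimal sketch, assuming the class names used above:

# config/routes.rb
Rails.application.routes.draw do
  mount API => '/api'
end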

Fat Models, Skinny Controller vs Separation of concerns, Part 2
Mar 20, 2017

In this part 2 of Fat Models, Skinny Controller vs Separation of Concerns, I am going to focus on moving code from “fat models” into a concern.

The example uses omniauth-facebook together with the Devise login gem.

In a “fat model” configuration, the business logic sits in the model, like below, with the method defined as a class method by adding the self. prefix to from_omniauth.

class User < ApplicationRecord

    def self.from_omniauth(auth)
     if user = find_by_email(auth.info.email)  # search your db for a user with email coming from fb
       return user  #returns the user so you can sign him/her in
     else
       user = create(provider: auth.provider,    # Create a new user if a user with same email not present
                          uid: auth.uid,
                          email: auth.info.email,
                          password: Devise.friendly_token[0,20])
       user.create_account(name: auth.info.name, # you need to check how to access these attributes from auth hash by using a debugger or pry
                           address: auth.info.location,
                           image: auth.info.image
                           )
       return user
     end
    end
end

You would call this method (usually from a controller) as User.from_omniauth, since the code lives on the User class (model).
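
For context, the usual call site is a Devise OmniAuth callbacks controller; here is a minimal sketch in the spirit of Devise’s conventions (the controller name and redirect behaviour are illustrative, so adjust them to your app):

class Users::OmniauthCallbacksController < Devise::OmniauthCallbacksController
  def facebook
    # The auth hash is placed in the Rack env by omniauth-facebook
    @user = User.from_omniauth(request.env['omniauth.auth'])
    sign_in_and_redirect @user, event: :authentication
  end
end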

To move this code to a concern, you will have to add a new file, in this example app/models/concerns/omniauth.rb.

You will need a module Omniauth that extends ActiveSupport::Concern so it can extend the model, and you will need to put the business logic inside module ClassMethods, removing the self. prefix.

module Omniauth
  extend ActiveSupport::Concern

  module ClassMethods
    def from_omniauth(auth)
     if user = find_by_email(auth.info.email)  # search your db for a user with email coming from fb
       return user  #returns the user so you can sign him/her in
     else
       user = create(provider: auth.provider,    # Create a new user if a user with same email not present
                          uid: auth.uid,
                          email: auth.info.email,
                          password: Devise.friendly_token[0,20])
       user.create_account(name: auth.info.name, # you need to check how to access these attributes from auth hash by using a debugger or pry
                           address: auth.info.location,
                           image: auth.info.image
                           )
       return user
     end
    end
  end
end

In the model itself, the code is then refactored down to a single line: include Omniauth.

class User < ApplicationRecord
  include Omniauth
end

You can still call User.from_omniauth from the controller as usual, and we have now moved from a “fat models” setup to concerns.

Fat Models, Skinny Controller vs Separation of concerns, Part 1
Mar 17, 2017

I started with Rails 3.2.13, and in those days the Rails community recommended keeping the Rails controller skinny (request handling only) and making the model fat, i.e. holding the business logic.

In Rails 4.1, “concerns” were introduced to separate business logic out of the controller or model; concerns help you build an application around the single responsibility principle. I did not pay attention to concerns until recently, when I ended up with a lot of business logic in my models.

So I have been doing a lot of refactoring on the code of a job listing site.

An example is the following controller code, without concerns:

class JobsController < ApplicationController

  def index
    if params[:query].present? && params[:location].present?
      @jobs = Job.search(params[:query], fields: [ { title: :word_start }, { state: :word_start }],
                                           where: { state: params[:location], published: true, published_date: {gte: 1.month.ago} }, page: params[:page], per_page: 7)
      @jobs_sponsored = Job.search(params[:query], fields: [ { title: :word_start }, { state: :word_start }],
                                           where: { state: params[:location], published: true, listing_type: [2,3], published_date: {gte: 1.month.ago} }, limit: 2, page: params[:page], per_page: 7)
    elsif params[:query].blank? && params[:location].present?
      @jobs = Job.search(params[:location], where: { published: true, published_date: {gte: 1.month.ago} }, page: params[:page], per_page: 7)
      @jobs_sponsored = Job.search(params[:location], where: { published: true, listing_type: [2,3], published_date: {gte: 1.month.ago} }, limit: 2, page: params[:page], per_page: 7)
    elsif params[:query].present? && params[:location].blank?
      @jobs = Job.search(params[:query], where: { published: true, published_date: {gte: 1.month.ago} }, page: params[:page], per_page: 7)
      @jobs_sponsored = Job.search(params[:query], where: { published: true, listing_type: [2,3], published_date: {gte: 1.month.ago} }, limit: 2, page: params[:page], per_page: 7)
    else
      @jobs = Job.where(published: true).where("published_date >= ?", 1.month.ago).page(params[:page]).per(7)
      @jobs_sponsored = Job.where(published: true).where("published_date >= ?", 1.month.ago).where("jobs.listing_type = ? OR jobs.listing_type = ?", 2, 3).order("RANDOM()").limit(2)
    end

  end
end

Bad innit? Code in the controller?

After refactoring, I have the business logic in a concern (i.e. app/controllers/concerns/jobs_query.rb) rather than in the controller, like below:

module JobsQuery
  extend ActiveSupport::Concern

   def jobs_with_job_name_and_location
      @jobs = Job.search(params[:query], fields: [ { title: :word_start }, { state: :word_start }],
                                           where: { state: params[:location], published: true, published_date: {gte: 1.month.ago} }, page: params[:page], per_page: 7)
      @jobs_sponsored = Job.search(params[:query], fields: [ { title: :word_start }, { state: :word_start }],
                                           where: { state: params[:location], published: true, listing_type: [2,3], published_date: {gte: 1.month.ago} }, limit: 2, page: params[:page], per_page: 7)
   end

    def jobs_with_location
      @jobs = Job.search(params[:location], where: { published: true, published_date: {gte: 1.month.ago} }, page: params[:page], per_page: 7)
      @jobs_sponsored = Job.search(params[:location], where: { published: true, listing_type: [2,3], published_date: {gte: 1.month.ago} }, limit: 2, page: params[:page], per_page: 7)
    end

    def jobs_with_job_name
      @jobs = Job.search(params[:query], where: { published: true, published_date: {gte: 1.month.ago} }, page: params[:page], per_page: 7)
      @jobs_sponsored = Job.search(params[:query], where: { published: true, listing_type: [2,3], published_date: {gte: 1.month.ago} }, limit: 2, page: params[:page], per_page: 7)
    end

    def all_other_jobs
      @jobs = Job.where(published: true).where("published_date >= ?", 1.month.ago).page(params[:page]).per(7)
      @jobs_sponsored = Job.where(published: true).where("published_date >= ?", 1.month.ago).where("jobs.listing_type = ? OR jobs.listing_type = ?", 2, 3).order("RANDOM()").limit(2)
    end
end

Do note the include JobsQuery line; Rails will autoload the concern based on its filename and path, following Ruby on Rails conventions.

class JobsController < ApplicationController
  include JobsQuery

  def index

    if params[:query].present? && params[:location].present?
      jobs_with_job_name_and_location
    elsif params[:query].blank? && params[:location].present?
      jobs_with_location
    elsif params[:query].present? && params[:location].blank?
      jobs_with_job_name
    else
      all_other_jobs
    end

  end
end

Looks short and sweet, innit?

Re-discovered a history of my internet
Mar 16, 2017

I do search for myself on search engines, but a few days back I started googling “Muhammad Nuzaihan FreeBSD” (you are not likely to find it with just “Muhammad Nuzaihan”), found my old blog at https://polycompute.wordpress.com/2014/07/ and discovered a really important picture of where I was in the early 2000s.

Here is the glorified image of my machines (from the right): FreeBSD on an Intel Pentium as the web server, OpenBSD on a 486 DX2-66 as the mail server (because of the advanced anti-spam features of OpenBSD’s spamd and PF), and NetBSD on another 486 DX2-66 as a mere testing server.

I could host all of that on a static IP which my ISP provided back then (the now-defunct SingTel Magix). :-)

Polyglot programming and the benefits
Mar 16, 2017

Polyglot programming means understanding and working across a wide range of programming languages and paradigms.

I have been enjoying writing code in Ruby, Python, Node.js, Go and PHP, and while doing so I am exposed to the particular trade-offs of each language.

Even though I had been a fan of C programming for many years, which kept my scope closer to systems development, in recent years (circa 2011-2012) I began exploring Ruby, and it was enjoyable for someone just getting into web development.

With Ruby, I used the Ruby on Rails framework (which I still use today to prototype; it is fast enough to build an MVP, or Minimum Viable Product).

The differences between the languages help me stay flexible: I can solve a problem using the paradigm one language exposes and then apply that approach to other languages besides the one I am working in.

For example, while working with Python, we use try/except a lot to handle errors gracefully. I learned Python after dabbling with Ruby, and I applied the same habit back in Ruby with its own equivalent, begin/rescue.
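
As a small illustration (the file name here is made up), Ruby’s begin/rescue/ensure plays the same role as Python’s try/except/finally:

begin
  config = File.read('config.yml')   # may raise Errno::ENOENT if the file is missing
rescue Errno::ENOENT => e
  puts "Config not found, falling back to defaults (#{e.message})"
  config = ''
ensure
  puts 'Finished reading configuration'
end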

In other projects, where we had to re-engineer a monolithic application into micro-services, I knew I had to scale the analytics-collection code with Go, and when we could not scale our PHP reporting app, I rewrote it in Python. I have been using Python with the pandas library to generate reports, and it was quick!

Go is the language I enjoy writing most, because I come from a C background, I love performance, and Go has concurrency as a first-class citizen.

I have also dabbled with Haskell for a while. I like that the language is purely functional; lazy evaluation (have you ever run out of memory?) and the absence of side effects are what I love most. But I couldn’t apply it in production, so it remains more of a hobby language for me; Haskell is good when you are learning functional programming.

I spent 15 years as a systems and network engineer (yes, network as well) without much software development knowledge, but now that background makes me more flexible in taking on development roles and lets me architect the infrastructure down to the lowest level.

Sometimes you have to be pragmatic and allow yourself to take different approaches when developing software, knowing that another language may let you achieve the same result in less time and with fewer lines of code.

MySQL High-Availability and Load Balancing
Feb 6, 2017

Introduction

In cases where database failure is not acceptable, there are ways to protect the data, especially against the single point of failure you have when only one database server is running.

MySQL has a lot of options for failover and load-balancing setups: MySQL Cluster, a MySQL master-slave setup with MySQL Router, or even lower-level load balancing with Linux Virtual Server (a load balancer and failover router using the VRRP protocol) in front of a MySQL master-master setup.

Choosing a Setup

MySQL Cluster is intended for large MySQL installations (i.e. a server farm with 10-20 MySQL servers), which is more redundancy than our installation needs, and a MySQL master-slave setup allows writes only on the master and reads on the slaves.

Our choice is a master-master MySQL setup with Linux Virtual Server (LVS/Keepalived); this design allows writes to either master (e.g. two master MySQL servers).

The web application can write to either master MySQL server, but we need to detect when one master is down or under high load and redirect connections to the other.

This is where LVS (Keepalived) comes in. Keepalived is LVS ‘router’ software which keeps track of the health of the real servers (in this case the master MySQL servers) and directs the MySQL client connections from the web application to one of the MySQL servers.

If any of the MySQL servers fails Keepalived’s health check, it is removed from Keepalived’s internal routing and connections are redirected to the other MySQL master.

Keepalived holds a floating IP (also called a virtual IP) which the web application connects to, and Keepalived forwards the connection to one of the MySQL servers.

We run two Keepalived routers, one MASTER and one BACKUP, sharing this virtual IP: if one Keepalived router fails (e.g. the MASTER), the other instance (e.g. the BACKUP) takes over the virtual IP that the web application connects to.

NOTE: In this post there are two kinds of master: the MySQL master and the LVS MASTER.

Diagram

High-Availability Diagram

Configuration Steps

MySQL Server Setup

Let’s assume the MySQL server IPs are as follows:

Current running MySQL Server: 192.168.1.10
New MySQL Server: 192.168.1.11

Modifications to the current server:

  1. Stop all database activity by shutting down the Web application
  2. Dump the database from the running MySQL server: $ mysqldump -u<youruser> -p mydatabase > database_dump.sql
  3. On the CURRENT running MySQL server, back up a copy of my.cnf: $ sudo cp /etc/mysql/my.cnf /etc/mysql/my.cnf.orig
  4. On the CURRENT running MySQL server, edit the configuration /etc/mysql/my.cnf and add these options under [mysqld]:

    bind-address  = 0.0.0.0
    log-bin = /var/log/mysql/mysql-bin.log
    binlog-do-db=mydatabase # this is the database we will replicate
    binlog-ignore-db=mysql
    binlog-ignore-db=test
    server-id = 1
    
  5. Restart the MySQL Server
  6. Go to the MySQL console and type SHOW MASTER STATUS;. IMPORTANT! Take note of File and Position. Example output:

    mysql> show master status;
    +------------------+----------+--------------+------------------+
    | File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
    +------------------+----------+--------------+------------------+
    | mysql-bin.000010 |     1193 | mydatabase   |   mysql,test     |
    +------------------+----------+--------------+------------------+
    
  7. Then in the MySQL console, add a new grant for replication.

    mysql> grant replication slave on *.* to 'replication'@'%' identified by 'your_replication_password';
    
  8. On the NEW MySQL server, import the MySQL dump we got from the running MySQL server: $ mysql -u<youruser> -p mydatabase < database_dump.sql
  9. On the NEW MySQL server, back up a copy of my.cnf: $ sudo cp /etc/mysql/my.cnf /etc/mysql/my.cnf.orig
  10. Edit /etc/mysql/my.cnf on the new MySQL server, add the following under [mysqld] and save the file.

    bind-address  = 0.0.0.0
    log-bin = /var/log/mysql/mysql-bin.log
    binlog-do-db=mydatabase # this is the database we will replicate
    binlog-ignore-db=mysql
    binlog-ignore-db=test
    server-id = 2 # id is different than the CURRENT MySQL server
    
  11. Restart the MySQL on the New server.

  12. Go to the new MySQL server's (192.168.1.11) console; we will sync this new MySQL server with the current one (IMPORTANT! YOU NEED TO SET MASTER_LOG_FILE AND MASTER_LOG_POS ACCORDING TO THE CURRENT RUNNING SERVER'S File AND Position VALUES OR IT WILL NOT BE IN SYNC):

    mysql> STOP SLAVE;
    mysql> CHANGE MASTER TO MASTER_HOST='192.168.1.10', MASTER_USER='replication', MASTER_PASSWORD='your_replication_password', MASTER_LOG_FILE='<the File value from running server>', MASTER_LOG_POS=<the Position value from the running server>;
    mysql> START SLAVE;
    
  13. Check the Status on the New MySQL server:

    mysql> SHOW SLAVE STATUS\G
    
  14. Make sure Slave_IO_Running and Slave_SQL_Running are both 'Yes', the IO state is waiting for the master to send events, and there are NO ERRORS:

    Slave_IO_State: Waiting for master to send event
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    
  15. On the NEW MySQL server, run the following command: mysql> grant replication slave on *.* to 'replication'@'%' identified by 'your_replication_password';
  16. Then on the NEW MySQL Server’s Console, type in: mysql> SHOW MASTER STATUS;
  17. Take note of the File and Position of the above command in the NEW MySQL Server.

  18. On the CURRENT running MySQL server (192.168.1.10), we will sync up with the NEW MySQL server (IMPORTANT! YOU NEED TO SET MASTER_LOG_FILE AND MASTER_LOG_POS ACCORDING TO THE NEW SERVER'S File AND Position VALUES OR IT WILL NOT BE IN SYNC):

    mysql> STOP SLAVE;
    mysql> CHANGE MASTER TO MASTER_HOST='192.168.1.11', MASTER_USER='replication', MASTER_PASSWORD='your_replication_password', MASTER_LOG_FILE='<the File value from NEW server>', MASTER_LOG_POS=<the Position value from the NEW server>;
    mysql> START SLAVE;
    
  19. Check the status on the CURRENT running server: mysql> SHOW SLAVE STATUS\G
  20. Make sure Slave_IO_Running and Slave_SQL_Running are both 'Yes', the IO state is waiting for the master to send events, and there are NO errors:

    Slave_IO_State: Waiting for master to send event
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    
  21. GRANT privileges for normal database access for the web app on the NEW server. Example:

    mysql> GRANT ALL PRIVILEGES ON mydatabase.* TO '<the db user>'@'<the web app server IP>' IDENTIFIED BY '<the password for the db user>';
    mysql> FLUSH PRIVILEGES;
    

LVS (Keepalived)

We will need two servers (each can be a low-spec server with 2 cores and at least 2 GB of RAM) which will be used for network high availability and load balancing.

Assuming the IP addresses are as follows:

MASTER LVS: 192.168.1.20
BACKUP LVS: 192.168.1.21
CURRENT MYSQL SERVER: 192.168.1.10
NEW MYSQL SERVER: 192.168.1.11
Virtual (floating) IP: 192.168.1.30 (no need to configure this on any server's interface; it will be managed by Keepalived)

Change the IPs in the configuration according to your infrastructure setup!

On both the MASTER and BACKUP LVS servers, download and install keepalived and ipvsadm: $ sudo apt-get install keepalived ipvsadm

Add this configuration to MASTER LVS (192.168.1.20) in /etc/keepalived/keepalived.conf:

global_defs {
    router_id LVS_MYPROJECT
}
vrrp_instance VI_1 {
    state MASTER
    # monitored interface
    interface eth0
    # virtual router's ID
    virtual_router_id 51
    # set priority (change this value on each server)
    # (large number means priority is high)
    priority 101
    nopreempt
    # VRRP sending interval
    advert_int 1
    # authentication info between Keepalived servers
    authentication {
        auth_type PASS
        auth_pass mypassword
    }

    virtual_ipaddress {
        # virtual IP address
        192.168.1.30 dev eth0
    }
}
virtual_server 192.168.1.30 3306 {
    # monitored interval
    delay_loop 3
    # distribution method
    lvs_sched rr
    # routing method
    lvs_method DR
    protocol TCP

    # backend server#1
    real_server 192.168.1.10 3306 {
        weight 1
        TCP_CHECK {
        connect_timeout 10
        nb_get_retry 3
        delay_before_retry 3
        connect_port 3306
        }
    }

    # backend server#2
    real_server 192.168.1.11 3306 {
        weight 1
        TCP_CHECK {
        connect_timeout 10
        nb_get_retry 3
        delay_before_retry 3
        connect_port 3306
        }
    }
}

Add this configuration to BACKUP LVS (192.168.1.21) in /etc/keepalived/keepalived.conf:

global_defs {
    router_id LVS_MYPROJECT
}
vrrp_instance VI_1 {
    state BACKUP
    # monitored interface
    interface eth0
    # virtual router's ID
    virtual_router_id 51
    # set priority (change this value on each server)
    # (large number means priority is high)
    priority 100
    nopreempt
    # VRRP sending interval
    advert_int 1
    # authentication info between Keepalived servers
    authentication {
        auth_type PASS
        auth_pass mypassword
    }

    virtual_ipaddress {
        # virtual IP address
        192.168.1.30 dev eth0
    }
}
virtual_server 192.168.1.30 3306 {
    # monitored interval
    delay_loop 3
    # distribution method
    lvs_sched rr
    # routing method
    lvs_method DR
    protocol TCP

    # backend server#1
    real_server 192.168.1.10 3306 {
        weight 1
        TCP_CHECK {
        connect_timeout 10
        nb_get_retry 3
        delay_before_retry 3
        connect_port 3306
        }
    }
    # backend server#2
    real_server 192.168.1.11 3306 {
        weight 1
        TCP_CHECK {
        connect_timeout 10
        nb_get_retry 3
        delay_before_retry 3
        connect_port 3306
        }
    }
}

Lastly, start the Keepalived service on both the MASTER LVS and the BACKUP LVS: $ sudo service keepalived start

Final setup

In this final step, we will configure the web app to use Keepalived's virtual IP (192.168.1.30) and create a firewall rule on both MySQL DATABASE SERVERS, NOT the LVS servers.

  1. $ sudo iptables -t nat -A PREROUTING -d 192.168.1.30 -j REDIRECT
  2. Edit /etc/rc.local on the MySQL database servers to make the firewall rule persistent:

    #!/bin/sh -e
    #
    # rc.local
    #
    # This script is executed at the end of each multiuser runlevel.
    # Make sure that the script will "exit 0" on success or any other
    # value on error.
    #
    # In order to enable or disable this script just change the execution
    # bits.
    #
    # By default this script does nothing.
    /sbin/iptables -t nat -A PREROUTING -d 192.168.1.30 -j REDIRECT
    exit 0
    
  3. $ chmod 755 /etc/rc.local
  4. Edit the web app's .env file to use the virtual IP (in our example setup: 192.168.1.30)
  5. Start the Web Application

Notes

Make sure either MySQL server is shut down safely (e.g. shutdown -h now); do not perform a HARD shutdown/reset, as it will leave the databases out of sync with each other.

We can now shut down either one of the MySQL databases or one of the LVS routers and the other will keep running.

Parsing cookies stored in JSON
Feb 2, 2017

Sometimes you will have to store cookie data, and you would store it as JSON with JSON.stringify.

How would you parse the cookies stored in JSON back into a single string separated by semicolons?

Here is how you would parse them back into a string (requires Node.js):

Assuming your cookie_file.txt contains the following:

[{"domain":".domain.com","httponly":false,"name":"presence","path":"/","secure":true,
  "value":"EDvF3EtimeF1486031553EuserFA2616400242A2EstateFDutF148asdasd1231235CEchFDp_5f616400242F3CC"},
  {"domain":".domain.com","httponly":false,"name":"p","path":"/","secure":false,"value":"-2"}]

The code to parse the cookies stored in JSON back into a string:

var fs = require('fs');

fs.readFile('./cookie_file.txt', function read(err, data) {
  if (err) throw err;

  var cookies = JSON.parse(data);
  var baked_cookies = [];

  // Rebuild each cookie as a "name=value;" pair
  cookies.forEach(function(cookie) {
    var add_ingredients = cookie.name + '=' + cookie.value + ';';
    baked_cookies.push(add_ingredients);
  });

  // Join with spaces and drop the trailing semicolon
  baked_cookies = baked_cookies.join(' ').slice(0, -1);
  console.log(baked_cookies);
});

Assuming the above code is saved as cookie_parser.js, you can run it like this:

node cookie_parser.js

output: presence=EDvF3EtimeF1486031553EuserFA2616400242A2EstateFDutF148asdasd1231235CEchFDp_5f616400242F3CC; p=-2