Tutorial on Hadoop with VMware Player


Map Reduce (Source: google)

Functional Programming
According to Wikipedia, functional programming is a programming paradigm that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It emphasizes the application of functions, in contrast to the imperative programming style, which emphasizes changes in state. Since there is no hidden dependency (via shared state), functions in the DAG can run anywhere in parallel as long as one is not an ancestor of the other. In other words, analyzing the parallelism is much easier when there is no hidden dependency through shared state. Map/reduce is a special form of such a directed acyclic graph that is applicable in a wide range of use cases. It is organized as a "map" function, which transforms a piece of data into some number of key/value pairs. Each of these elements is then sorted by its key and routed to the same node, where a "reduce" function is used to merge the values (of the same key) into a single result.
Map Reduce

A way to take a big task and divide it into discrete tasks that can be done in parallel. Map / Reduce is just a pair of functions, operating over a list of data.

MapReduce is a patented software framework introduced by Google to support distributed computing on large data sets on clusters of computers.

The framework is inspired by the map and reduce functions commonly used in functional programming, although their purpose in the MapReduce framework is not the same as their original forms.
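As a rough, non-Hadoop illustration of the same shape, word count can be expressed as a Unix pipeline: the "map" step emits one word per line, sort plays the role of the shuffle that brings equal keys together, and the "reduce" step merges the values of each key (input.txt is just a placeholder file name):

cat input.txt | tr -s '[:space:]' '\n' | sort | uniq -c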
Hadoop
A large-scale batch data processing system.

It uses MapReduce for computation and HDFS for storage.

Apache Hadoop is a software framework that supports data-intensive distributed applications under a free license. It enables applications to work with thousands of nodes and petabytes of data. Hadoop was inspired by Google’s MapReduce and Google File System (GFS) papers.

It is a framework written in Java for running applications on large clusters of commodity hardware and incorporates features similar to those of the Google File System and of MapReduce. HDFS is a highly fault-tolerant distributed file system and, like Hadoop, is designed to be deployed on low-cost hardware. It provides high-throughput access to application data and is suitable for applications that have large data sets.

Hadoop is an open source Java implementation of Google’s MapReduce algorithm along with an infrastructure to support distributing it over multiple machines. This includes its own filesystem (HDFS, the Hadoop Distributed File System, based on the Google File System) which is specifically tailored for dealing with large files. When thinking about Hadoop it’s important to keep in mind that this infrastructure is a huge part of it. Implementing MapReduce is simple. Implementing a system that can intelligently manage the distribution of processing and of your files, and break those files down into more manageable chunks for processing in an efficient way, is not.

HDFS breaks files down into blocks which can be replicated across its network (how many times a file is replicated is determined by your application and can be specified on a per-file basis). This is one of the most important performance features and, according to the docs, “…is a feature that needs a lot of tuning and experience.” You really don’t want to have 50 machines all trying to pull from a 1TB file on a single data node at the same time, but you also don’t want to replicate a 1TB file out to 50 machines. So, it’s a balancing act.
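For instance (a sketch, assuming HDFS is already running and /user/root/bigfile.txt exists), the replication factor can be inspected and changed per file from the command line:

bin/hadoop dfs -setrep 2 /user/root/bigfile.txt                      # set this file's replication factor to 2
bin/hadoop fsck /user/root/bigfile.txt -files -blocks -locations     # show where its blocks ended up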

Hadoop installations are broken into three types.

• The NameNode acts as the HDFS master, managing all decisions regarding data replication.

• The JobTracker manages the MapReduce work. It “…is the central location for submitting and tracking MR jobs in a network environment.”

• The TaskTracker and DataNode, which do the grunt work.

Hadoop - NameNode, DataNode, JobTracker, TaskTracker

The JobTracker will first determine the number of splits (each split is configurable, ~16-64MB) from the input path, and select some TaskTrackers based on their network proximity to the data sources; the JobTracker then sends the task requests to those selected TaskTrackers.

Each TaskTracker will start the map phase processing by extracting the input data from the splits. For each record parsed by the “InputFormat”, it invokes the user-provided “map” function, which emits a number of key/value pairs into an in-memory buffer. A periodic wakeup process will sort the memory buffer into the different reducer partitions, invoking the “combine” function along the way. The key/value pairs are sorted into one of the R local files (suppose there are R reducer nodes).

When a map task completes (all its splits are done), the TaskTracker will notify the JobTracker. When all the TaskTrackers are done, the JobTracker will notify the selected TaskTrackers to begin the reduce phase.

Each reduce-side TaskTracker will read the region files remotely. It sorts the key/value pairs and, for each key, invokes the “reduce” function, which collects the key/aggregated-value pairs into the output file (one per reducer node).

The Map/Reduce framework is resilient to crashes of any component. The JobTracker keeps track of the progress of each phase and periodically pings the TaskTrackers for their health status. When a map-phase TaskTracker crashes, the JobTracker will reassign the map task to a different TaskTracker node, which will rerun all the assigned splits. If a reduce-phase TaskTracker crashes, the JobTracker will rerun the reduce on a different TaskTracker.
Let’s try Hands on Hadoop
The objective of this tutorial is to set up a multi-node Hadoop cluster using the Hadoop Distributed File System (HDFS) on Ubuntu Linux with the use of VMware Player.

Hadoop and VMware Player

Installations / Configurations Needed:

Laptop

Physical Machine

Laptop with 60 GB HDD, 2 GB RAM, 32bit Support, OS – Ubuntu 10.04 LTS – the Lucid Lynx

IP Address-192.168.1.3 [Used in configuration files]

Virtual Machine

See VMware Player sub section

Download Ubuntu ISO file

Ubuntu 10.04 LTS – the Lucid Lynx ISO file is needed to install on virtual machine created by VMware Player to set up multi-node Hadoop cluster.

Download Ubuntu Desktop Edition

http://www.ubuntu.com/desktop/get-ubuntu/download

Note: Log in with the user “root” to avoid any kind of permission issues (on your machine and on the virtual machine).

Update the Ubuntu packages: sudo apt-get update

VMware Player [Freeware]

Download it from http://downloads.vmware.com/d/info/desktop_downloads/vmware_player/3_0

Download VMware Player

Select VMware Player to Download

VMware Player Free Product Download

Install VMware Player on your physical machine with the use of the downloaded bundle.

VMware Player - Ready to install

VMware Player - installing

Now, create a virtual machine with VMware Player, install Ubuntu 10.04 LTS on it from the ISO file, and do the appropriate configuration for the virtual machine.

Browse Ubuntu ISO

Proceed with the instructions and let the setup finish.

Virtual Machine in VMware Player

Once the setup has finished successfully, select “Play virtual machine”.

Start Virtual Machine in VMware Player

Open Terminal (Command prompt in Ubuntu) and check the IP address of the Virtual Machine.

NOTE: The IP address may change, so if the virtual machine cannot be reached over SSH from the physical machine, check its IP address first.

Ubuntu Virtual Machine - ifconfig

Apply the following configuration on both the physical and the virtual machine (needed for the Java 6 and Hadoop installation only).

Installing Java 6

sudo apt-get install sun-java6-jdk

sudo update-java-alternatives -s java-6-sun [Set Sun Java 6 as the default]
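You can then verify the active Java version (a quick sanity check; the exact build string will differ on your machine):

java -version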

Setting up Hadoop 0.20.2

Download Hadoop from http://www.apache.org/dyn/closer.cgi/hadoop/core and extract it under /usr/local/hadoop.
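A minimal command sequence for this step might look like the following (the mirror and the exact tarball name are assumptions; pick a mirror from the download page above):

cd /usr/local
wget http://archive.apache.org/dist/hadoop/core/hadoop-0.20.2/hadoop-0.20.2.tar.gz
tar xzf hadoop-0.20.2.tar.gz
mv hadoop-0.20.2 hadoop   # so <HADOOP_INSTALL> becomes /usr/local/hadoop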

HADOOP Configurations

Hadoop requires SSH access to manage its nodes, i.e. remote machines [In our case virtual Machine] plus your local machine if you want to use Hadoop on it.

On Physical Machine

Generate an SSH key

Enable SSH access to your local machine with this newly created key.

Enable SSH access to your local machine

Or you can copy it from $HOME/.ssh/id_rsa.pub to $HOME/.ssh/authorized_keys manually.
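The screenshots above correspond roughly to the following commands (a sketch; an empty passphrase is assumed so that Hadoop can log in without prompting):

ssh-keygen -t rsa -P ""                                      # creates $HOME/.ssh/id_rsa and id_rsa.pub
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys      # authorize the key on the local machine
ssh localhost                                                # first connection, accept the host key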

Test the SSH setup by connecting to your local machine as the root user.

Test the SSH setup

Run ssh 192.168.1.3 from the physical machine as well; it will give the same result.

On Virtual Machine

The root user account on the master (physical machine) should be able to access the slave (virtual machine) via a password-less SSH login.

Add the physical machine’s public SSH key (which should be in $HOME/.ssh/id_rsa.pub) to the authorized_keys file of the virtual machine (in that user’s $HOME/.ssh/ directory). You can do this manually:

(Physical Machine)$HOME/.ssh/id_rsa.pub -> (VM)$HOME/.ssh/authorized_keys
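One way to do this copy from the physical machine (a sketch; 192.168.28.136 is the virtual machine’s address used in this tutorial, and you will be prompted for the VM’s root password once):

cat $HOME/.ssh/id_rsa.pub | ssh root@192.168.28.136 "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"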

The SSH key may look like this (it can’t be the same, though 🙂):

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwjhqJ7MyXGnn5Ly+0iOwnHETAR6Y3Lh3UUKbaCIP2/0FsVOWhBvcSLMEgT1ewrRPKk9IGoegMCMdHDGDfabzO4tUsfCdfvvb9KFRcBU3pKdq+yVvCVxXtoD7lNnMtckUwSz5F1d04Z+MDPbDixn6IAu/GeX9aE2mrJRBq1Pzn3iB4GpjnSPoLwQvEO835EMchq4AI92+glrySptpx2MGporxs5LvDaX87yMsPyF5tutuQ+WwRiLfAW34OfrYsZ/Iqdak5agE51vlV/SESYJ7OqdD3+aTQghlmPYE4ILivCsqc7wxT+XtPwR1B9jpOSkpvjOknPgZ0wNi8LD5zyEQ3w== root@mitesh-laptop

Run ssh 192.168.1.3 from the virtual machine to verify SSH access and to get a feel for how SSH works.

To check connectivity further, ping 192.168.1.3 and 192.168.28.136 from each machine.

For detailed information on network settings in VMware, visit http://www.vmware.com/support/ws55/doc/ws_net_configurations_common.html; VMware Player uses similar concepts.

Using 0.0.0.0 for the various networking-related Hadoop configuration options will result in Hadoop binding to the IPv6 addresses of the Ubuntu box.

To disable IPv6 on Ubuntu 10.04 LTS, open /etc/sysctl.conf in the editor of your choice and add the following lines to the end of the file:

#disable ipv6

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1

net.ipv6.conf.lo.disable_ipv6 = 1

Ubuntu - Disable IPv6
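To make the change take effect, reboot or reload the settings and check the flag (a value of 1 means IPv6 is disabled):

sudo sysctl -p                                       # reload /etc/sysctl.conf
cat /proc/sys/net/ipv6/conf/all/disable_ipv6         # should print 1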

 <HADOOP_INSTALL>/conf/hadoop-env.sh -> set the JAVA_HOME environment variable to the Sun JDK/JRE 6 directory.

 

# The java implementation to use.  Required.

export JAVA_HOME=/usr/lib/jvm/java-6-sun-1.6.0.20

 

<HADOOP_INSTALL>/conf/core-site.xml ->

 

Configure the directory where Hadoop will store its data files, the network ports it listens to, etc. Our setup will use Hadoop’s Distributed File System, HDFS, even though our little “cluster” only contains our single local machine.

Hadoop - core-site.xml

<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop/tmp/dir/hadoop-${user.name}</value>
</property>
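The multi-node setup referenced by this tutorial also points every node at the master’s NameNode from core-site.xml. A minimal sketch of that property, assuming the NameNode listens on port 54310 on the master (192.168.1.3); adjust if you use a different port:

<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.1.3:54310</value>
</property>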

 <HADOOP_INSTALL>/conf/mapred-site.xml ->

<property>

  <name>mapred.job.tracker</name>

  <value>192.168.1.3:54311</value>

</property>

Hadoop - mapred-site.xml

 <HADOOP_INSTALL>/conf/hdfs-site.xml

 

<property>

  <name>dfs.replication</name>

  <value>2</value>

</property>

Physical Machine vs Virtual Machine (Master/Slave): the physical machine acts as master and the virtual machine as slave; configure the following files on the physical machine only.

<HADOOP_INSTALL>/conf/masters

The conf/masters file defines on which machine(s) Hadoop starts the secondary NameNode; the primary NameNode and the JobTracker run on whichever machine you invoke the start scripts on. In our case, this is just the master machine.

192.168.1.3

<HADOOP_INSTALL>/conf/slaves

 This conf/slaves file lists the hosts, one per line, where the Hadoop slave daemons (datanodes and tasktrackers) will be run. We want both the master box and the slave box to act as Hadoop slaves because we want both of them to store and process data.

192.168.1.3

192.168.28.136

NOTE: Here 192.168.1.3 and 192.168.28.136 are the IP addresses of the physical machine and the virtual machine respectively, which may vary in your case. Just enter your IP addresses in these files and you are done!

Let’s enjoy the ride with Hadoop:

All Set for having “HANDS ON HADOOP”.

Formatting the name node

ON Physical Machine and Virtual Machine

The first step in starting up your Hadoop installation is formatting the Hadoop filesystem, which is implemented on top of the local filesystem of your “cluster” (which includes only your local machine if you followed this tutorial). You need to do this the first time you set up a Hadoop cluster. Do not format a running Hadoop filesystem; this will cause all your data to be erased.

hadoop namenode -format

Starting the multi-node cluster

1.    Start HDFS daemons

Run the command <HADOOP_INSTALL>/bin/start-dfs.sh on the machine you want the (primary) NameNode to run on. This will bring up HDFS with the NameNode running on the machine you ran the command on, and DataNodes on the machines listed in the conf/slaves file.

Physical Machine

Hadoop - start-dfs.sh

VM

Hadoop - DataNode on Slave Machine
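A quick way to check which daemons actually came up on each box is jps, which ships with the Sun JDK installed earlier:

jps   # on the master: NameNode, SecondaryNameNode and DataNode; on the slave: DataNode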

2.    Start MapReduce daemons

Run the command <HADOOP_INSTALL>/bin/start-mapred.sh on the machine you want the JobTracker to run on. This will bring up the MapReduce cluster with the JobTracker running on the machine you ran the command on, and TaskTrackers on the machines listed in the conf/slaves file.

Physical Machine

Hadoop - Start MapReduce daemons

VM

TaskTracker in Hadoop
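Run jps again to confirm the new daemons:

jps   # the master now also shows JobTracker and TaskTracker; the slave shows a TaskTracker next to its DataNode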

Running a MapReduce job

Here’s the example input data I have used for the multi-node cluster setup described in this tutorial.

All ebooks should be in plain text us-ascii encoding.

http://www.gutenberg.org/etext/20417

http://www.gutenberg.org/etext/5000

http://www.gutenberg.org/etext/4300

http://www.gutenberg.org/etext/132

http://www.gutenberg.org/etext/1661

http://www.gutenberg.org/etext/972

http://www.gutenberg.org/etext/19699

Download the above ebooks and store them in the local file system.

Copy local example data to HDFS

Hadoop - Copy local example data to HDFS
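The copy itself is done with the dfs shell (a sketch, assuming Hadoop lives in /usr/local/hadoop and the ebooks were saved under /tmp/examples):

cd /usr/local/hadoop
bin/hadoop dfs -copyFromLocal /tmp/examples examples   # upload into HDFS under /user/root/examples
bin/hadoop dfs -ls examples                            # verify the files arrived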

Run the MapReduce job

hadoop-0.20.2/bin/hadoop jar hadoop-0.20.2-examples.jar wordcount examples example-output

Failed Hadoop Job

Retrieve the job result from HDFS

You can read the output directly from HDFS without copying it to the local file system; in this tutorial, though, we will copy the results to the local file system.

mkdir /tmp/example-output-final

bin/hadoop dfs -getmerge example-output /tmp/example-output-final

Hadoop - Word count example

Hadoop - MapReduce Administration

Hadoop - Running and Completed Job

Task Tracker Web Interface

Hadoop - Task Tracker Web Interface

Hadoop - NameNode Cluster Summary
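By default in Hadoop 0.20.x these web interfaces listen on the following ports on the machines running the respective daemons (replace localhost with the master’s IP when browsing from another machine):

http://localhost:50030/   # JobTracker - MapReduce administration
http://localhost:50060/   # TaskTracker
http://localhost:50070/   # NameNode - HDFS cluster summary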

References

http://www.michael-noll.com/wiki/Running_Hadoop_On_Ubuntu_Linux_(Multi-Node_Cluster)

http://www.michael-noll.com/wiki/Writing_An_Hadoop_MapReduce_Program_In_Python

http://java.dzone.com/articles/how-hadoop-mapreduce-works

http://ayende.com/Blog/archive/2010/03/14/map-reduce-ndash-a-visual-explanation.aspx

http://www.youtube.com/watch?v=Aq0x2z69syM

http://www.gridgainsystems.com/wiki/display/GG15UG/MapReduce+Overview

http://map-reduce.wikispaces.asu.edu/

http://blogs.sun.com/fifors/entry/map_reduce

http://www.vmware.com/support/ws55/doc/ws_net_configurations_common.html

http://www.ibm.com/developerworks/aix/library/au-cloud_apache/

 


How to Configure CloudAnalyst in Eclipse


Create New Java Project

New Java Project in Eclipse

Create Java Project in Eclipse

New Java Project in Eclipse: Java Settings


Go to File->Import

New Java Project in Eclipse: Import Source Code from Existing Project

New Java Project in Eclipse: Import resources from Local File System

Run CloudAnalyst in Eclipse

Done!!!

CloudAnalyst GUI


Tutorial- Application Development on Force.com from 30 day Free Trial


Force.com is a cloud computing platform as a service offering from Salesforce, the first of its kind allowing developers to build multi-tenant applications that are hosted on their servers as a service.

Features of force.com

The multitenant architecture of Force.com consists of the following features:

• Shared infrastructure: Every customer (or tenant) of Force.com shares the same infrastructure. You are assigned a logical environment within the Force.com infrastructure.

• Single version: There is only one version of the Force.com platform in production. The same platform is used to deliver applications of all sizes and shapes, used by 1 to 100,000 users.

• Continuous, zero-cost improvements: When Force.com is upgraded to include new features or bug fixes, the upgrade is enabled in every customer’s logical environment with zero to minimal effort required.

• Infrastructure exposure: Force.com is targeted toward corporate application developers and independent software vendors. Unlike the other PaaS offerings, it does not expose developers directly to its own infrastructure.

• Integration with other technologies: Force.com integrates with other technologies using open standards such as SOAP and REST, although the programming languages and metadata representations used to build applications are proprietary to Force.com.

• Relational Database – to store and manage the business data. Data is stored in objects.
• Application Services – logging, transaction processing, validation.
• Declarative Metadata – customizations are configured as simple XML against a documented schema.
• Programming Languages – Apex.
force.com
force.com - Infrastructure, Application and Operational Services
The layers of technologies and services make up the platform.
force.com - Application Architecture
force.com - How it works?


Note:
The 30-day free trial doesn’t provide Workflow support; otherwise we could create a full-featured application. In the trial, we can create a Visualforce page, but we cannot enable Sites for our organization, nor register our Force.com domain name and expose the Visualforce page we created as a public product catalog on the web.

Workflow support is available in Force.com One App: start with one custom app, for your organization only.

force.com - 30 day Free Trial


Creating VMware ESX Server Templates


VMware Template Clone to Template and Convert into Template

It’s been a cumbersome activity to create new virtual machines with the software stack installed and configured properly. You can always use tools like KickStart to automatically install the operating system and then install other software as needed.

Configuring a specific application is a complex activity, since it may involve manual steps such as database configuration or application configuration.

Solution introduced by VMware

Template: a pre-built VM used to create new VMs with the same software stack and configuration.

VMware ESX Server templates can be a time-saver for virtualization administrators, as they allow you to clone, convert (live VMs), and deploy virtual machines.

You can pick and configure every piece of software you will need into a template, and clone it to new instances whenever needed. It’s not only easier but also much faster.

Convert To Template

Offline Machine

It can be both “Clone to Template” and “Convert to Template”.

VMware ESXi - Convert to Template

Clone to Template

VMware ESXi - Clone to Template

VMware ESXi - Clone to Template - Name and Location

VMware ESXi - Clone to Template - Host and Cluster

VMware ESXi - Clone to Template - Database

VMware ESXi - Clone to Template - Disk Format

VMware ESXi - Clone to Template - Summary

Live Virtual Machine

Live VM can only be “Clone to Template”; it cannot be Converted into Template.

Convert Template into VM

Select a Template->Right Click-> Convert to a Virtual Machine

VMware ESXi - Convert Template into VM - Host and Cluster

VMware ESXi - Convert Template into VM - Resource Pool

VMware ESXi - Convert Template into VM - Summary


CloudSwing-Flexible PaaS (Sample Application Deployment Demo)


Related Blogs:

What is PaaS?

CloudBees~ Java Platform as a Service (Sample Application Deployment Process)

CloudSwing-Flexible Platform as a Service

CloudSwing is a cloud within a cloud; it’s a PaaS available as a SaaS 🙂 It sounds a bit complicated, but it is extremely easy once we really understand it.

It’s a completely flexible PaaS solution with pre-built templates for various stacks.

Available Platforms:

On top of that, the flexibility is striking: a user can pick open-source and proprietary software as components and use them easily.

Application Monitoring:

CloudSwing also helps you manage and monitor all of your applications across multiple clouds. OpenLogic has partnered with New Relic to provide application monitoring within CloudSwing. New Relic monitoring agents, which are installed by default on the pre-built stacks, collect information about the performance of your applications, but do not collect any identifiable data processed by or stored in your application.

Supported Public Clouds:

  • AWS
  • Rackspace
  • Windows Azure (in roadmap)
  • Private clouds (in roadmap)
  • Users can also use their own private cloud accounts.

Free Plan:

Up to 3 members and deploy up to 5 concurrent applications.

When you are done using your deployed applications, you will want to make sure you Stop the application to avoid using up your cloud time. Go to the Applications page, click the Application and click Stop to shut it down.

Let’s Deploy a Sample Application of Struts-Spring-Hibernate with Tomcat Stack.

1) One-step registration and you are on your way to the CloudSwing journey.

Registration for CloudSwing PaaS

2) Dashboard, Click on the New Application

CloudSwing PaaS Dashboard

3) Click on the START.

CloudSwing Services

4) Select the appropriate technology STACK.

CloudSwing - Select a Platform - Tomcat

CloudSwing - Select a Platform - Tomcat - verify details

5) Verify the Components in the pre-built stack and ADD new if required.

CloudSwing - customize the tomcat platform

6) Select a Public Cloud on which you want to deploy your sample application.

CloudSwing - Select Public Cloud to deploy an application

7) Select appropriate Configurations as required for the Application

CloudSwing - Select Public Cloud Select Server Configuration

8) Verify the Instance Details and Click on LAUNCH!

CloudSwing - Launch the instance

9) If any Additional components have been added and you want to save that CUSTOM stack then you can Save the Private Stack as well.

CloudSwing - instance allocation

10) Once the instance is ready, you get the IP address and the SSH private key (.pem).

CloudSwing - instance details

CloudSwing - Default Tomcat Page information

11) Verify the Tomcat.

CloudSwing - Tomcat Page verification

12) PuTTY’s Key Generator is broken into three main functions: generating, importing, and exporting keys. If you will be receiving a key from another source, you will import the key into the PuTTY Key Generator and then export a PuTTY key for use with the PuTTY applications.

CloudSwing - Import SSH key into PuTTYgen

13) Download WINSCP

WinSCP (Windows Secure CoPy) is a free and open source SFTP, SCP, and FTP client for Microsoft Windows. Its main function is secure file transfer between a local and a remote computer. Beyond this, WinSCP offers basic file manager and file synchronization functionality. For secure transfers, it uses Secure Shell (SSH) and supports the SCP protocol in addition to SFTP.

Download WinSCP

14) Use the imported key and the username/password “ubuntu” to log in to the instance.

Log in to the Tomcat server with the use of the key, username and password

15) Copy the Application from the Local system to Remote.

Copy war file from the local system to CloudSwing Remote Server
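If you prefer the command line over WinSCP (for example from a Linux or Mac shell), the same copy can be done with scp; the key file name, war file name and target directory below are placeholders for your own values:

scp -i cloudswing-key.pem sample-app.war ubuntu@<instance-ip>:/tmp/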

16) If you can’t copy the files and get permission-related errors, grant permission on the destination directory recursively and try again.

Permissions in Ubuntu
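For example (a sketch; the webapps path and the tomcat6 user assume the stock Tomcat 6 layout on Ubuntu and may differ on your stack):

sudo chown -R tomcat6:tomcat6 /var/lib/tomcat6/webapps   # let Tomcat own its deployment directory
sudo chmod -R 755 /var/lib/tomcat6/webapps               # grant read/execute recursively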

17) While dealing with MySQL, if you get a permission-related error, give proper rights to the location where the data file is available.

18) Create a DB and run the SQL script.

Create Database with the use of .sql script
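From the shell this usually boils down to two commands (a sketch; the database name, user and script path are placeholders):

mysql -u root -p -e "CREATE DATABASE sampledb;"
mysql -u root -p sampledb < /tmp/sampledb-schema.sql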

19) Change the database-related configuration in the application’s configuration file (host, username, password, port) and restart Tomcat.

Change the Database related Properties in Application Configuration
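Restarting Tomcat on the instance can be done with its init script (a sketch; the service name tomcat6 assumes the stack used above):

sudo /etc/init.d/tomcat6 restart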

20) Done!!!

Sample Application on CloudSwing

Example of a .pem private key:

-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAjcGj98PIJXcQM08TqruFyIul+p9TTyEM73UR74FvRaq8wH22APIwI9Pk8y9Z
cDaAcLWC7N8E7zx8YCHSl0WfzBNWexs7dgZMH31UZIaGoaRlDYT6Gdf6vhp/ohHD88kTjae4nXUe
DuK8qLNXnDCt4jvFvnQXjmGY4slTcdvlbqFaZG/Jr0S4FnPLWV3uiikwdfwtcXQtp6fr3rNAL59R
P/gEA1/UAVCvJYsoa9GQGoiaxTJodkas77/oaarZDF+ZeYT0h+zmR0hUTUBf5I/gvpb0jJq8KzsQ
o0J93MpijZUynLn8m+sa3Gvrp1K+xxnIjaS7+cn/fa3pSEStdvokLQIDAQABAoIBAFKf1mmYxPUJ
Y/j0E3uFV6Ifu3vMF+vcUMTV0MFwCSJrNR9hZo9AmsyXOjCAnbnpGo4XThuwlhi3gasqq6ueWljB
wLt6kPrnCsGj9GevfZOD1Z6+rmQX3j+mBFS71CIpRmtfohys4fs9L0eJWPxh50ghHM44rm4/9rPh
MvD/gcgsBvKJUgygNineWBEaPsU/qo36VPR4EdvFP9XWrSvEFNOT+marzRNkCTWTW0UZxtskcvX7
uI4k4b32QJEz2xO7OdsdEjb7WJxq7SZbVC0UTDbsLJjfxlu9PHYXYxgipM1e6kTmy+5vkEfrrSAl
czptEwVgNioLWbB48550WH0qTIECgYEA0kos56jce4uqd53Ndl71YOxbbVKHgrKrGlLdi7tlqnUK
ZmUbo9Ba/wX6CVTQzmTyUtGowNLKAcA1m1KlcTDzTXkDwONBg5HjAUH8Z7jzIH19qIwOkIjae+ZE
TtC1u5V0viqkYWnC4JnDzjQjKAFIQD+cZul+z3vJevYC17EWnKECgYEArJHYba7eAffk57slkGK0
DMfSJkUoZD3BIPRiosdOGDcfWy+ozaJTYY383c/REvSDrtqORMukE2KwIQJuNmGCcTUxvl54vIwM
k3kClWdsFm/xAKQDmBCp9rqfGtg5bNTdNIu0aXQ58TOO1fNAnZbYCyWNbOL60LWyI/MGjUG2MA0C
gYEAqHUrS9kF5yKXSINtWGnxf9dX1lfKnnSqhMflGk7gvpBL6IMOhUgf3TPYfSkorG5JgPbbjLxP
ft+PEgI+7lIcVe+fhiGHFfMEOrm1jRGoElr1EtQ/xqEbBS4NgmXHb6Hmh4B5dl/W8T28ka9Kin2c
d3t9uyNJpsSvPoVc+ZSvXIECgYEApMSELwWdt6dnCdLoZNm5K9LGVgAGNt+3vK1aWC2P5RMMf1Yc
CGsKzcRyQZ8g8sY/zP2khQ8i09eQb5QQgx/LGig+HJO7F9toTo5l5xzeWPX82C3BuLmAbrF1JH57
JeyAYKFbWqy8fg5KPQGLGmxiTxJF3EdET26MbkKmaMzrFSkCgYEApjZNABx3tOSWE1i5lmWdeHma
gbktKgpSxQYLpWYlNgbsp5llOLOLdhH1wR38PbMdqoOuDf3FBTz10c0VJ43ScFm3u+t3Chhn9500
iHd1/gHz/mClr4rpPnRdZKvp1kYXljvl00J3gpPFcIEKxPJcLzlks03J6dLoKgBEnTUtp5o=
-----END RSA PRIVATE KEY-----

Example of a CS.ppk (PuTTY private key) file:

PuTTY-User-Key-File-2: ssh-rsa
Encryption: none
Comment: imported-openssh-key
Public-Lines: 6
AAAAB3NzaC1yc2EAAAADAQABAAABAQCNwaP3w8gldxAzTxOqu4XIi6X6n1NPIQzv
dRHvgW9FqrzAfbYA8jAj0+TzL1lwNoBwtYLs3wTvPHxgIdKXRZ/ME1Z7Gzt2Bkwf
fVRkhoahpGUNhPoZ1/q+Gn+iEcPzyRONp7iddR4O4ryos1ecMK3iO8W+dBeOYZji
yVNx2+VuoVpkb8mvRLgWc8tZXe6KKTB1/C1xdC2np+ves0Avn1E/+AQDX9QBUK8l
iyhr0ZAaiJrFMmh2Rqzvv+hpqtkMX5l5hPSH7OZHSFRNQF/kj+C+lvSMmrwrOxCj
Qn3cymKNlTKcufyb6xrca+unUr7HGciNpLv5yf99relIRK12+iQt
Private-Lines: 14
AAABAFKf1mmYxPUJY/j0E3uFV6Ifu3vMF+vcUMTV0MFwCSJrNR9hZo9AmsyXOjCA
nbnpGo4XThuwlhi3gasqq6ueWljBwLt6kPrnCsGj9GevfZOD1Z6+rmQX3j+mBFS7
1CIpRmtfohys4fs9L0eJWPxh50ghHM44rm4/9rPhMvD/gcgsBvKJUgygNineWBEa
PsU/qo36VPR4EdvFP9XWrSvEFNOT+marzRNkCTWTW0UZxtskcvX7uI4k4b32QJEz
2xO7OdsdEjb7WJxq7SZbVC0UTDbsLJjfxlu9PHYXYxgipM1e6kTmy+5vkEfrrSAl
czptEwVgNioLWbB48550WH0qTIEAAACBANJKLOeo3HuLqnedzXZe9WDsW21Sh4Ky
qxpS3Yu7Zap1CmZlG6PQWv8F+glU0M5k8lLRqMDSygHANZtSpXEw8015A8DjQYOR
4wFB/Ge48yB9faiMDpCI2nvmRE7QtbuVdL4qpGFpwuCZw840IygBSEA/nGbpfs97
yXr2AtexFpyhAAAAgQCskdhtrt4B9+TnuyWQYrQMx9ImRShkPcEg9GKix04YNx9b
L6jNolNhjfzdz9ES9IOu2o5Ey6QTYrAhAm42YYJxNTG+Xni8jAyTeQKVZ2wWb/EA
pAOYEKn2up8a2Dls1N00i7RpdDnxM47V80CdltgLJY1s4vrQtbIj8waNQbYwDQAA
AIEApjZNABx3tOSWE1i5lmWdeHmagbktKgpSxQYLpWYlNgbsp5llOLOLdhH1wR38
PbMdqoOuDf3FBTz10c0VJ43ScFm3u+t3Chhn9500iHd1/gHz/mClr4rpPnRdZKvp
1kYXljvl00J3gpPFcIEKxPJcLzlks03J6dLoKgBEnTUtp5o=
Private-MAC: ee593a11e05d0aa146ebbb524167437f7a25190a

VMware: How to get Full Screen in Virtual Machine Console (ESXi)


Full Screen in Virtual Machine (VMware)

I want to install the 32-bit Oracle Client on a Windows 2008 R2 VM, but I can’t see the “Next” button in the installation steps due to the small window 😦

VMware - Virtual Machine Console

Let’s try Full Screen mode:

VMware - Virtual Machine Console - Full Screen

But that doesn’t work either…

So Now what??

Right Click on Desktop -> Screen Resolution

Windows 2008 - Screen Resolution

Select a higher resolution -> Apply -> OK

Windows - Select Resolution

Now…

VMware - Virtual Machine Console - Full Screen

At first sight this looks obvious, but trust me, no one wants to waste time on this kind of issue, because you tend to use the Full Screen option from the View menu, which is not going to serve the purpose 🙂

How to Install VMware Tools on Windows Guest OS?


Install VMware Tools on Windows Guest OS

The most current version of VMware Tools must be installed on the virtual machine or template to customize the guest operating system during cloning or deployment.

Installing VMware Tools in the guest operating system is vital. Although the guest operating system can run without VMware Tools, you lose important functionality and convenience.

When you install VMware Tools, you install the following components:

  • The VMware Tools service (vmtoolsd.exe on Windows guests or vmtoolsd on Linux and Solaris guests).
  • The SVGA display driver, the vmxnet networking driver for some guest operating systems, the BusLogic SCSI driver, the memory control driver, the sync driver to quiesce I/O for Consolidated Backup, and the VMware mouse driver.
  • The VMware Tools control panel.
  • A set of scripts that help you automate guest operating system operations.
  • The VMware user process.

How to install?

Right Click on a VM

Install/Upgrade VMware Tools on Virtual Machine

Select the Auto Run Option for the VMware Tools in VM.

VMware Tools - Installation Wizard

Select Complete Installation

VMware Tools - Complete Installation

Click on Install

VMware Tools - Click to install

Installing VMware Tools

Installing VMware Tools Completed

VMware Tools Restart the Computer

Check the status of VMware tools installation:

Click on the VM in the vSphere Client and click the Summary tab.

status of VMware tools installation

Done!!!

Reference:

vSphere Virtual Machine Administration Guide