Install MongoDB on Ubuntu

MongoDB is an open-source document database written in C++, and one of the leading NoSQL databases.

This example uses MongoDB 2.4.6 running on Ubuntu 13.04; both the MongoDB client and the server are run on localhost, the same machine.

  • Download MongoDB from the MongoDB download page.
  • Go to the directory where MongoDB was downloaded. By default it is saved in the “Downloads” directory.
$ cd Downloads/
  • Unzip the tar file.
$ tar xzf mongodb-linux-i686-2.4.6.tgz
  • Create /usr/lib/mongodb and move mongodb-linux-i686-2.4.6 into it.
$ sudo mkdir /usr/lib/mongodb

$ sudo mv mongodb-linux-i686-2.4.6 /usr/lib/mongodb/

  • Before you start mongod for the first time, you need to create the data directory. By default, mongod writes data to /data/db/. To create this directory and set the appropriate permissions, use the following commands (run as root):
# mkdir -p /data/db

# chmod 777 /data/*
 

1st command prompt: Start the MongoDB server

$ cd /usr/lib/mongodb/mongodb-linux-i686-2.4.6/bin/

$ ./mongod


2nd command prompt: Start the client

$ cd /usr/lib/mongodb/mongodb-linux-i686-2.4.6/bin/

$ ./mongo

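Once the client connects, a quick sanity check from the mongo shell confirms that reads and writes work (the collection name testcol here is just an example):

> db.testcol.insert({ name : "archana" })

> db.testcol.find()

The find() call should print back the document you just inserted.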

Reference:

http://docs.mongodb.org/manual/tutorial/install-mongodb-on-os-x/

http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/

Note: The information provided here is to the best of my knowledge and experience; if any corrections are needed, your suggestions are always welcome. :)


Mount and Unmount USB drive in Ubuntu

Mount the Drive

Step 1: Go to /media

$ cd /media

Step 2 : Create the Mount Point

Now we need to create a mount point for the device; let’s call it “usb-drive” (you can call it whatever you want). Create the mount point:

$ sudo mkdir usb-drive

Step 3: Mount the Drive

We can now mount the drive. Let’s say the device is /dev/sdb1, the filesystem is FAT16 or FAT32, and we want to mount it at /media/usb-drive (the mount point created above).

 $ sudo mount -t vfat /dev/sdb1 /media/usb-drive
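If you are not sure which device node the drive was assigned (it is not always /dev/sdb1), you can list the block devices first with either of these standard tools:

$ lsblk

$ sudo fdisk -l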

Step 4: Check the USB drive contents

 $ ls /media/usb-drive/

Unmount the Drive

$ sudo umount /dev/sdb1

            Or

$ sudo umount /media/usb-drive


NOTE
Error: umount: /media/usb-drive: device is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
Solution: Some process has a working directory or an open file handle underneath the mount point. Close the file (or leave the directory) before unmounting, as shown below.
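For example, the fuser and lsof tools mentioned in the error can show which processes are keeping the mount point busy:

$ fuser -vm /media/usb-drive

$ lsof /media/usb-drive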

Note: The information provided here is to the best of my knowledge and experience; if any corrections are needed, your suggestions are always welcome. 🙂

Database Access with Apache Hadoop

The DBInputFormat and DBOutputFormat classes introduced in Hadoop 0.19 finally allow easy import and export of data between Hadoop and many relational databases, so relational data can be incorporated more easily into your data-processing pipeline.

To import and export data between Hadoop and MySQL, you need both Hadoop and MySQL installed on your machine.

  • My System Configuration

UBUNTU 13.04

JAVA 1.7.0_25

HADOOP 1.1.2

MySQL

Download the mysql-connector-java-5.0.5.jar file, copy it to $HADOOP_HOME/lib, and restart Hadoop, for example:
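Here is one way to do it, assuming the jar was downloaded to the current directory and $HADOOP_HOME/bin is on your PATH (stop-all.sh and start-all.sh are the Hadoop 1.x control scripts):

$ cp mysql-connector-java-5.0.5.jar $HADOOP_HOME/lib/

$ stop-all.sh

$ start-all.sh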

  • Database and table creation in MySQL
mysql> create database testDb;

mysql> use testDb;

mysql> create table studentinfo (  id integer ,  name varchar(32) );

mysql> insert into studentinfo values(1,'archana');

mysql> insert into studentinfo values(2,'XYZ');

mysql> insert into studentinfo values(3,'archana');
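The job in this example also writes its results into a table named output, and DBOutputFormat expects that table to exist already. If you have not created it yet, something like the following works (the column types are just a reasonable guess; the column names match the ones configured in Main.java):

mysql> create table output ( name varchar(32), count integer );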

  • Project Structure

The program consists of the following Java files.

Main.java
Map.java
Reduce.java
DBInputWritable.java
DBOutputWritable.java

To move data in and out of the database, we have to create classes that define the records we fetch from and write back to the DB. In this project, DBInputWritable.java and DBOutputWritable.java accomplish that.

DBInputWritable.java

package example;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.ResultSet;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

public class DBInputWritable implements Writable, DBWritable
{
   private int id;
   private String name;

   public void readFields(DataInput in) throws IOException {   }

   public void readFields(ResultSet rs) throws SQLException
   //Resultset object represents the data returned from a SQL statement
   {
     id = rs.getInt(1);
     name = rs.getString(2);
   }

   public void write(DataOutput out) throws IOException {  }

   public void write(PreparedStatement ps) throws SQLException
   {
     ps.setInt(1, id);
     ps.setString(2, name);
   }

   public int getId()
   {
     return id;
   }

   public String getName()
   {
     return name;
   }
}

This class “DBInputWritable” will be used in our Map class. Now let’s write our Mapper class.

Map.java

package example;

import java.io.IOException;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.IntWritable;

public class Map extends Mapper<LongWritable, DBInputWritable, Text, IntWritable>
{
   private IntWritable one = new IntWritable(1);

   protected void map(LongWritable id, DBInputWritable value, Context ctx)
   {
     try
     {
        String[] keys = value.getName().split(" ");

        for(String key : keys)
        {
           ctx.write(new Text(key),one);
        }
     } catch(IOException e)
     {
        e.printStackTrace();
     } catch(InterruptedException e)
     {
        e.printStackTrace();
     }
   }
}

DBOutputWritable.java

package example;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.ResultSet;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapreduce.lib.db.DBWritable;

public class DBOutputWritable implements Writable, DBWritable
{
   private String name;
   private int count;

   public DBOutputWritable(String name, int count)
   {
     this.name = name;
     this.count = count;
   }

   public void readFields(DataInput in) throws IOException {   }

   public void readFields(ResultSet rs) throws SQLException
   {
     name = rs.getString(1);
     count = rs.getInt(2);
   }

   public void write(DataOutput out) throws IOException {    }

   public void write(PreparedStatement ps) throws SQLException
   {
     ps.setString(1, name);
     ps.setInt(2, count);
   }
}

This class “DBOutputWritable” will be used in our Reduce class. Now let’s write our Reducer class.

Reduce.java

package example;

import java.io.IOException;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;

public class Reduce extends Reducer<Text, IntWritable, DBOutputWritable, NullWritable>
{
   protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
   {
     int sum = 0;

     for(IntWritable value : values)
     {
       sum += value.get();
     }

     try
     {
     ctx.write(new DBOutputWritable(key.toString(), sum), NullWritable.get());
     } catch(IOException e)
     {
       e.printStackTrace();
     } catch(InterruptedException e)
     {
       e.printStackTrace();
     }
   }
}

Main.java

package example;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.db.DBConfiguration;
import org.apache.hadoop.mapreduce.lib.db.DBInputFormat;
import org.apache.hadoop.mapreduce.lib.db.DBOutputFormat;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.NullWritable;

public class Main
{
   public static void main(String[] args) throws Exception
   {
     Configuration conf = new Configuration();
     DBConfiguration.configureDB(conf,
     "com.mysql.jdbc.Driver",   // driver class
     "jdbc:mysql://localhost:3306/testDb", // db url
     "root",    // user name
     "hadoop123"); //password

     Job job = new Job(conf);
     job.setJarByClass(Main.class);
     job.setMapperClass(Map.class);
     job.setReducerClass(Reduce.class);
     job.setMapOutputKeyClass(Text.class);
     job.setMapOutputValueClass(IntWritable.class);
     job.setOutputKeyClass(DBOutputWritable.class);
     job.setOutputValueClass(NullWritable.class);
     job.setInputFormatClass(DBInputFormat.class);
     job.setOutputFormatClass(DBOutputFormat.class);

     DBInputFormat.setInput(
     job,
     DBInputWritable.class,
     "studentinfo",   //input table name
     null,
     null,
     new String[] { "id", "name" }  // table columns
     );

     DBOutputFormat.setOutput(
     job,
     "output",    // output table name
     new String[] { "name", "count" }   //table columns
     );

     System.exit(job.waitForCompletion(true) ? 0 : 1);
   }
}

Now you are ready to run the program. Package the compiled classes into a jar file and run it; one way to build the jar is shown below.
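A sketch of the build, assuming the five source files sit in an example/ directory matching the package name and Hadoop 1.1.2 is installed (if javac complains about missing classes, add the jars from $HADOOP_HOME/lib to the classpath as well):

$ mkdir classes

$ javac -classpath $HADOOP_HOME/hadoop-core-1.1.2.jar -d classes example/*.java

$ jar cvfe DbIpOp.jar example.Main -C classes .

Setting example.Main as the jar’s entry point is what lets the hadoop jar command below run without naming the main class explicitly.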

Execute jar file

$ hadoop jar /home/hduser/DbIpOp.jar

Result

mysql> select * from output;

+---------+-------+
| name    | count |
+---------+-------+
| archana |     2 |
| XYZ     |     1 |
+---------+-------+
2 rows in set (0.19 sec)

Reference

http://blog.cloudera.com/blog/2009/03/database-access-with-hadoop/

Sqoop: Exporting Data from HDFS to MySQL

Step 1: Install and start MySQL if you have not already done so

See the MySQL Installation Tutorial for instructions on how to install MySQL.

Step 2: Configure the MySQL Service and Connector

Download the mysql-connector-java-5.0.5.jar file and copy it to the $SQOOP_HOME/lib directory.

Step 3: Sqoop Installation

See the Sqoop Installation Tutorial for instructions on how to install Sqoop.

  • Database and table creation in MySQL

First connect to MySQL

$ mysql -u root -p

Enter password:

Create database ‘testDb’ and use it as the current database.

mysql> create database testDb;

mysql> use testDb;

Create table ‘stud1’

mysql> create table stud1(id integer,name char(20)); 

mysql> exit; 
  • HDFS File ‘student’
$hadoop dfs -cat /user/hduser/student 

1,Archana 

2,XYZ 

Sqoop Export

$sqoop export --connect jdbc:mysql://localhost/testDb --table stud1 -m 1 --export-dir /user/hduser/student

This example takes the files in /user/hduser/student and injects their contents into the ‘stud1’ table in the ‘testDb’ database. The target table must already exist in the database.

Note :

If you get this

Error

com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Access denied for user ''@'localhost' to database 'testDb' 

Solution

Grant all privileges on the testDb database to the (anonymous) user:

mysql> grant all privileges on testDb.* to ''@localhost ;
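If the new grant does not seem to take effect, reloading the privilege tables is a harmless extra step:

mysql> flush privileges;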

Table Contents in MySQL

mysql> use testDb; 

mysql> select * from stud1; 

+------+----------+ 
| id   | name     | 
+------+----------+ 
| 1    | Archana  | 
| 2    | XYZ      | 
+------+----------+ 
2 rows in set (0.00 sec) 

Reference:

http://sqoop.apache.org/docs/1.4.0-incubating/SqoopUserGuide.html

Note: The information provided here is to the best of my knowledge and experience; if any corrections are needed, your suggestions are always welcome. 🙂

Sqoop: Importing Data from MySQL into HDFS

Step 1: Install and start MySQL if you have not already done so

See the MySQL Installation Tutorial for instructions on how to install MySQL.

Step 2: Configure the MySQL Service and Connector

Download the mysql-connector-java-5.0.5.jar file and copy it to the $SQOOP_HOME/lib directory.

Step 3: Sqoop Installation

See the Sqoop Installation Tutorial for instructions on how to install Sqoop.

  • Database and table creation in MySQL

First connect to MySQL

$ mysql -u root -p

Enter password:

Create database ‘testDb’ and use it as the current database.

mysql> create database testDb;

mysql> use testDb;

Create table ‘student’

mysql> create table student(id integer,name char(20));

Add the following two records to the table:

mysql> insert into student values(1,'Archana');

mysql> insert into student values(2,'XYZ');

 Exit from MySQL

mysql> exit;

Sqoop import
1. Importing a table into HDFS

  • 1st way

Command for import

sqoop import --connect jdbc:mysql://localhost/databasename --username $USER_NAME --password $PASSWORD --table tablename --m 1

Execute the sqoop import

Here we are using the database ‘testDb’, username ‘root’, password ‘hadoop123’, and table ‘student’.

$ sqoop import --connect jdbc:mysql://localhost/testDb --username root --password hadoop123 --table student --m 1

------------------- NOTE -------------------

If you have not defined a primary key for your table, then you have to give the '-m 1' option for the import.
Otherwise it gives this error:
ERROR tool.ImportTool: Error during import: No primary key could be found for table student1. Please specify one with --split-by or perform a sequential import with '-m 1'.
  • 2nd Way

Create a config file $HOME/import.txt and add the following to it:

              import.txt

import
--connect
jdbc:mysql://localhost/testDb
--username
root
--password
hadoop123

Execute the sqoop import

sqoop --options-file /home/hduser/import.txt --table student -m 1

Once the import is done, you can find student.jar, student.class and student.java under /tmp/sqoop-hduser/compile/…/

Files created in HDFS

$ hadoop dfs -ls -R student

Found 3 items

-rw-r--r--   1 hduser supergroup          0 2013-09-13 15:38 /user/hduser/student/_SUCCESS

drwxr-xr-x   - hduser supergroup          0 2013-09-13 15:38 /user/hduser/student/_logs

-rw-r--r--   1 hduser supergroup         16 2013-09-13 15:38 /user/hduser/student/part-m-00000

Data file contents

$ hadoop dfs -cat /user/hduser/student/part-m-00000

1,Archana
2,XYZ

2. Import all rows of a table in MySQL, but only specific columns of the table

$ sqoop import --connect jdbc:mysql://localhost/testDb --username root --password hadoop123 --table student --columns "name" -m 1

Data file contents

$ hadoop dfs -cat  /user/hduser/student/part-m-00000

Archana
Xyz

3. Import all columns, filtering rows using a where clause

$ sqoop import --connect jdbc:mysql://localhost/testDb --username root --password hadoop123 --table student --where "id>1" -m 1 --target-dir /user/hduser/ar

Data file contents

$ hadoop dfs -cat  /user/hduser/ar/part-m-00000
2,XYZ

------------------- NOTE -------------------

If you are facing a FileExistsException, then use "--target-dir /user/hduser/give_diff_name"
Error:
org.apache.commons.io.FileExistsException: Destination '/home/hduser/./student.java' already exists

Reference

http://sqoop.apache.org/docs/1.4.0-incubating/SqoopUserGuide.html

Note: The information provided here is to the best of my knowledge and experience; if any corrections are needed, your suggestions are always welcome. :)

Sqoop Installation

The Sqoop installation was done with the following versions of Linux and Hadoop:

UBUNTU 13.04

HADOOP 1.1.2

I have hduser as a dedicated Hadoop system user, and Hadoop is installed in the /home/hduser/hadoop folder. Sqoop will be installed in the /usr/lib/sqoop folder.

  •  Download Sqoop from the Apache Sqoop releases page.
  •  Go to the directory where Sqoop was downloaded. By default it is saved in the “Downloads” directory.
            $ cd Downloads/
  • Unzip the tar file.
            $ tar xzf sqoop-1.4.0-incubating.tar.gz
  • Move sqoop-1.4.0-incubating to sqoop
 	    $ sudo mv sqoop-1.4.0-incubating /usr/lib/sqoop
  • Set the SQOOP_HOME path in bashrc file

To open the .bashrc file, use this command:

 	    $ gedit ~/.bashrc 

Append the following two lines to the .bashrc file:

export SQOOP_HOME=/usr/lib/sqoop
export PATH=$PATH:$SQOOP_HOME/bin           
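Then reload the file so the new variables take effect in the current shell (or simply open a new terminal):

$ source ~/.bashrc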
  • Test your installation by typing
            $ sqoop help

Reference:

http://sqoop.apache.org/docs/1.4.0-incubating/SqoopUserGuide.html

 

Note: The information provided here is to the best of my knowledge and experience; if any corrections are needed, your suggestions are always welcome. :)


Changing the Default Metastore [Derby] of Hive to MySQL

Hive comes configured to use Derby as its metastore, but Derby can support only one active user at a time and is not certified for production use. The solution for supporting multiple users is to use a standalone database; MySQL is a popular choice for the standalone metastore.

  • Step 1: Install and start MySQL if you have not already done so

See the MySQL Installation Tutorial for instructions on how to install MySQL.

  • Step 2: Configure the MySQL Service and Connector

Download the mysql-connector-java-5.0.5.jar file and copy it to the $HIVE_HOME/lib directory.

  • Step 3: Create the Database and User

Create a metastore_db database in MySQL as the root user:

$ mysql -u root -p

Enter password:

mysql> CREATE DATABASE metastore_db;

Create a user [hiveuser] in MySQL as the root user:

mysql> CREATE USER 'hiveuser'@'%' IDENTIFIED BY 'hivepassword';

mysql> GRANT all on *.* to 'hiveuser'@localhost identified by 'hivepassword';

[Here the user hiveuser is the same as the ConnectionUserName in the hive-site.xml file.]

mysql>  flush privileges;
  • Step 4: Install Hive if you have not already done so

See the Hive Installation Tutorial for instructions on how to install Hive.

  • Step 5: Configure the Metastore Service to Communicate with the MySQL Database

Create hive-site.xml file in $HIVE_HOME/conf directory and add the following configurations:

            <configuration>

            <property>

            <name>javax.jdo.option.ConnectionURL</name>

            <value>jdbc:mysql://localhost/metastore_db?createDatabaseIfNotExist=true</value>

            <description>metadata is stored in a MySQL server</description>

            </property>

            <property>

            <name>javax.jdo.option.ConnectionDriverName</name>

            <value>com.mysql.jdbc.Driver</value>

            <description>MySQL JDBC driver class</description>

            </property>

            <property>

            <name>javax.jdo.option.ConnectionUserName</name>

            <value>hiveuser</value>

            <description>user name for connecting to mysql server </description>

            </property>

            <property>

            <name>javax.jdo.option.ConnectionPassword</name>

            <value>hivepassword</value>

            <description>password for connecting to mysql server </description>

            </property>

            </configuration>
  • That’s all. Hive will now create its schema in the MySQL backend. Let us check:

Hive console:

hive> create table hivetesting(id string);

MySQL console:

There are two ways to access metastore_db:

1.      mysql -u root -p
        Enter password:
        mysql> use metastore_db;
        mysql> show tables ;

2.     mysql -u hiveuser -p metastore_db
       Enter password:
       mysql> show tables;


You can query the metastore schema in your MySQL database. Something like:

mysql> select * from TBLS;

In the MySQL database you will see the names of your Hive tables.

Add a new table and verify that the above query returns updated results.
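For example (a quick check; the table name is arbitrary, and TBL_NAME is the column the Hive metastore schema uses to store table names):

hive> create table hivetesting2(id string);

mysql> select TBL_NAME from TBLS;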

  • Reference:

http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/4.2.0/CDH4-Installation-Guide/cdh4ig_topic_18_4.html

https://cwiki.apache.org/confluence/display/Hive/AdminManual+MetastoreAdmin#AdminManualMetastoreAdmin-Introduction

Install TeamViewer in Ubuntu 13.04


  • Download the teamviewer_linux.deb package from the TeamViewer website.
  • Open a terminal in the directory containing the package and install it (see the note below if dependencies are missing):

$ sudo dpkg -i teamviewer_linux.deb
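If dpkg reports missing dependencies, letting apt pull them in usually completes the installation:

$ sudo apt-get install -f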


Installing Pseudo-Distributed HBase on Ubuntu

HBase run modes: Standalone and Distributed

Standalone mode: By default HBase runs in standalone mode. In standalone mode, HBase does not use HDFS. 

Distributed mode: Distributed mode can be subdivided into pseudo-distributed, where all daemons run on a single node, and fully-distributed, where the daemons are spread across all the nodes in the cluster.

Hadoop version support matrix

                        HBase-0.92.x   HBase-0.94.x   HBase-0.95
Hadoop-0.20.205              S              X              X
Hadoop-0.22.x                S              X              X
Hadoop-1.0.0-1.0.2[a]        S              S              X
Hadoop-1.0.3+                S              S              S
Hadoop-1.1.x                 NT             S              S
Hadoop-0.23.x                X              S              NT
Hadoop-2.x                   X              S              S

[a] HBase requires Hadoop 1.0.3 at a minimum; there is an issue where we cannot find KerberosUtil compiling against earlier versions of Hadoop.

Where

S = supported and tested,
X = not supported,
NT = should run, but not tested enough.

Pseudo-Distributed Installation

The HBase 0.94.8 installation was done with the following versions of Linux, Java and Hadoop respectively:

UBUNTU 13.04

JAVA 1.7.0_25

HADOOP 1.1.2

I have hduser as a dedicated Hadoop system user, and Hadoop is installed in the /home/hduser/hadoop folder. HBase will be installed in the /usr/lib/hbase folder.

  • Download the stable hbase-<version>.tar.gz release from the Apache HBase downloads page.
  • Go to the directory where the stable version was downloaded. By default it is saved in the “Downloads” directory.
$ cd Downloads/
  • Unzip the tar file.
$ tar -xvf hbase-0.94.8.tar.gz
  • Create directory
$ sudo mkdir /usr/lib/hbase
  • Move hbase-0.94.8 to hbase
$ sudo mv hbase-0.94.8 /usr/lib/hbase/hbase-0.94.8
  • Open your hbase/conf/hbase-env.sh and modify these lines
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_25

export HBASE_REGIONSERVERS=/usr/lib/hbase/hbase-0.94.8/conf/regionservers

export HBASE_MANAGES_ZK=true
  • Set the HBASE_HOME path in bashrc file

To open the .bashrc file, use this command:

$ gedit ~/.bashrc

Append the following two lines to the .bashrc file:

export HBASE_HOME=/usr/lib/hbase/hbase-0.94.8

export PATH=$PATH:$HBASE_HOME/bin

 

  • Update hbase-site.xml in the $HBASE_HOME/conf folder with the required properties.

    hbase-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

<property>

<name>hbase.rootdir</name>

<value>hdfs://localhost:9000/hbase</value>

</property>

<property>

<name>hbase.cluster.distributed</name>

<value>true</value>

</property>

<property>

<name>hbase.zookeeper.quorum</name>

<value>localhost</value>

</property>

<property>

<name>dfs.replication</name>

<value>1</value>

</property>

<property>

<name>hbase.zookeeper.property.clientPort</name>

<value>2181</value>

</property>

<property>

<name>hbase.zookeeper.property.dataDir</name>

<value>/home/hduser/hbase/zookeeper</value>

</property>

</configuration>
  • Now check the Hadoop version support matrix above. If your Hadoop version is not supported by your HBase version, you will get an exception. To fix this, simply copy hadoop-core-*.jar from your $HADOOP_HOME and commons-collections-*.jar from the $HADOOP_HOME/lib folder into your $HBASE_HOME/lib folder, as shown below.
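A sketch of the copy for Hadoop 1.1.2 (adjust the jar version to whatever your installation actually ships, and move aside any older hadoop-core jar that HBase bundles in its lib folder first):

$ cp $HADOOP_HOME/hadoop-core-1.1.2.jar $HBASE_HOME/lib/

$ cp $HADOOP_HOME/lib/commons-collections-*.jar $HBASE_HOME/lib/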
  • Extra steps

In /etc/hosts there are two entries: 127.0.0.1 and 127.0.1.1. Change the second entry, 127.0.1.1, to 127.0.0.1; otherwise HBase gives the error org.apache.hadoop.hbase.PleaseHoldException: Master is initializing.

  • To start HBase [first start Hadoop]
hduser@archana:~$ start-hbase.sh

localhost: starting zookeeper, logging to /usr/lib/hbase/hbase-0.94.8/bin/../logs/hbase-hduser-zookeeper-archana.out
 starting master, logging to /usr/lib/hbase/hbase-0.94.8/logs/hbase-hduser-master-archana.out
 localhost: starting regionserver, logging to /usr/lib/hbase/hbase-0.94.8/bin/../logs/hbase-hduser-regionserver-archana.out

The jps command lists all currently running Java processes:

hduser@archana:~$ jps

 4334 HQuorumPeer
 2882 SecondaryNameNode
 4867 Jps
 3207 TaskTracker
 2460 NameNode
 4671 HRegionServer
 4411 HMaster
 2977 JobTracker
 2668 DataNode

HBase Shell

hduser@archana:~$ hbase shell

HBase Shell; enter 'help<RETURN>' for list of supported commands.
 Type "exit<RETURN>" to leave the HBase Shell
 Version 0.94.8, r1485407, Wed May 22 20:53:13 UTC 2013

hbase(main):001:0> create 't1','c1'
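To confirm that the new table works, a couple of standard shell commands (the row key and value here are arbitrary examples):

hbase(main):002:0> put 't1','row1','c1:a','value1'

hbase(main):003:0> scan 't1'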
  • To stop HBase
HBASE_PATH$ bin/stop-hbase.sh

stopping hbase...............

To use the web interfaces

http://localhost:60010 for master
http://localhost:60030 for region server

  • Reference :

http://hbase.apache.org/book/standalone_dist.html

http://hbase.apache.org/book/standalone_dist.html#confirm

Note: The information provided here is to the best of my knowledge and experience; if any corrections are needed, your suggestions are always welcome. :)
