Friday, 11 November 2016

Red Hat Storage Server Administration(RH236) Training

This performance-based Red Hat Storage Server Administration training teaches you how to install, configure, and maintain a cluster of Red Hat® Storage servers. To earn the certificate, one needs to demonstrate the ability to implement storage solutions using Red Hat Storage Server and to configure clients to use that storage. The course also explores highly available Common Internet File System (CIFS) and Network File System (NFS) access using CTDB, unified file and object storage, and geo-replication.

Audience

  • Linux system administrators and storage administrators interested in, or responsible for, maintaining large storage clusters using Red Hat Storage
  • Red Hat Certified System Administrator (RHCSA) certification or an equivalent level of knowledge is highly recommended

Prerequisites

  • RHCSA certification or equivalent experience
  • For candidates who have not earned their RHCSA, confirmation of the needed skills can be obtained by passing the online skills assessment.

Outline for this course

1. Introduction to Red Hat Storage
Understand Red Hat Storage server features and terminology.
2. Explore the classroom environment
Gain familiarity with the classroom environment.
3. Installation
Install Red Hat Storage Server.
4. Basic configuration
Build a Red Hat Storage server volume.
5. Volume types
Understand different volume types.
6. Clients
Access data on Red Hat Storage server volumes from different client types.
7. ACLs and quotas
Implement quotas and POSIX access control lists.
8. Extending volumes
Grow storage volumes online.
9. IP failover
Configure IP failover using CTDB.
10. Geo-replication
Configure geo-replication.
11. Unified file and object storage
Configure Swift object access.
12. Troubleshooting
Perform basic troubleshooting tasks.
13. Managing snapshots
Manage snapshots in Red Hat Storage.
14. Hadoop plugin
Learn about the Hadoop plugin configuration.
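The installation, basic-configuration, and client units above all revolve around the gluster command-line tool. A rough sketch of building and mounting a volume follows; the host names (server1/server2), brick paths, and volume name are hypothetical, and the exact syntax may differ between Red Hat Storage releases:

```shell
# Form a trusted storage pool and create a two-way replicated volume
# (hostnames, brick paths, and the volume name are placeholders)
gluster peer probe server2
gluster volume create myvol replica 2 \
    server1:/bricks/brick1 server2:/bricks/brick1
gluster volume start myvol
gluster volume info myvol

# On a client: mount the volume over the native FUSE protocol
mount -t glusterfs server1:/myvol /mnt/myvol
```

The quota and geo-replication units follow the same CLI pattern, via `gluster volume quota` and `gluster volume geo-replication` subcommands.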

Tuesday, 11 October 2016

9 Useful Tips For Linux Server Security

No serious system can afford to ignore server security, especially in the public cloud. There are tons of tips and tutorials available on the Internet; let's focus on the fundamental, general best practices first.
A List Of Security Improvements I Enforce After OS Provisioning.


We'll use Ubuntu 16.04 as the example here.

1. Keep Kernel Up-To-Date.

Certainly don't update production environments blindly. But for newly installed servers it's usually harmless and buys you a higher baseline of security.
One common suggestion is to disable unused services, but I choose to trust my distro provider. Generally speaking, they make sensible choices about what is installed and enabled by default.
apt-get update
apt-get -y dist-upgrade

2. Reset Root password.

We need a root password to access the VM's web console when ssh doesn't work, e.g. when problematic iptables rules lock you out, the OS runs into a kernel panic, or a reboot mysteriously gets stuck.
root_pwd="DevOpsDennyChangeMe1"
echo "root:$root_pwd" | chpasswd

3. Hardening SSHD.

Only allow ssh by key file, so attackers can't easily break in by guessing your password. Use an ssh listening port other than 22, which avoids the noise of automated login attempts.
# Disable ssh by password
sed -i 's/^#PasswordAuthentication yes/PasswordAuthentication no/g' \
      /etc/ssh/sshd_config
sed -i 's/PasswordAuthentication yes/PasswordAuthentication no/g' \
     /etc/ssh/sshd_config
grep PasswordAuthentication /etc/ssh/sshd_config

# Use another ssh port
sshd_port="2702"
sed -i "s/^#\?Port 22/Port $sshd_port/g" /etc/ssh/sshd_config
grep "^Port " /etc/ssh/sshd_config

# Restart sshd to take effect
service ssh restart

4. Restrict Malicious Access By Firewall.

This might be the most important security improvement you can make.
# Have a clean start with iptables
iptables -F; iptables -X
echo 'y' | ufw reset
echo 'y' | ufw enable
ufw default deny incoming
ufw default deny routed

# Allow traffic on common web ports
ufw allow 80,443/tcp

# Allow traffic to the custom ssh port from step 3
ufw allow 2702/tcp

# Allow all traffic from a trusted ip
ufw allow from 52.74.151.55

5. Add Timestamp To Command History.

It allows us to review which commands have been issued, and when.
echo 'export HISTTIMEFORMAT="%h %d %H:%M:%S "' >> /root/.bashrc

6. Generate SSH Key Pair.

Never, ever share the same ssh key pair across servers!
exec ssh-agent bash

# Generate a new key pair
ssh-keygen

# Load the key pair into the agent
ssh-add

7. Pay Close Attention To /var/log.

Use logwatch to automate the check and analysis. It's a useful Perl script that parses your system's logs and generates daily activity reports. Major log files:
  • /var/log/kern.log
  • /var/log/syslog
  • /var/log/ufw.log
  • /var/log/auth.log
  • /var/log/dpkg.log
  • /var/log/aptitude
  • /var/log/boot.log
  • /var/log/cron.log
  • /var/log/mail.log
apt-get install -y logwatch

# Full check. Takes several minutes
logwatch --range ALL

# Only check log of Today
logwatch --range Today

# Check log for last week
logwatch --range "between -7 days and -1 days"

8. Run Third-Party Security Check Tools.

Not everyone can or will be a security expert, so it's better to rely on proven, versatile tools. lynis is quite handy and straightforward: just a single shell script.
apt-get install -y lynis

# Run lynis to check security issues
lynis -c

9. Properly Back Up Unrecoverable Data.

Always have a plan B. As a last resort, make sure it's feasible to do a quick system restore onto new servers.
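A minimal sketch of such a backup, using a timestamped tar archive shipped off the machine. The source list, destination directory, and remote host below are all placeholders:

```shell
# Timestamped tarball of data you can't recreate. The source and
# destination here are placeholders -- point them at your real data
# (e.g. "/etc /var/www /home") and a mounted backup disk.
backup_src="/etc/hostname"
backup_dir="/tmp/backup-demo"
mkdir -p "$backup_dir"

stamp=$(date +%Y%m%d)
tar -czf "$backup_dir/backup-$stamp.tar.gz" $backup_src 2>/dev/null

# Verify the archive is readable before trusting it
tar -tzf "$backup_dir/backup-$stamp.tar.gz" >/dev/null

# Ship it off the box -- a dead server cannot restore itself
# rsync -az "$backup_dir/" backup-user@backup-host:/srv/backups/
```

Automate it from cron and, just as importantly, test the restore path once in a while.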

Saturday, 17 September 2016

Networking in openstack with Neutron

Neutron:

OpenStack Networking allows you to create and attach interface devices managed by other OpenStack services to networks. Plug-ins can be implemented to accommodate different networking equipment and software, providing flexibility to OpenStack architecture and deployment.

Some OpenStack Networking Terms:

Switches

Switches are devices that enable packets to travel from one node to another. They connect hosts that belong to the same layer-2 network, forwarding a packet received on one (input) port out another (output) port so that it reaches the desired destination node. Switches operate at layer 2 of the networking model.


Routers

Routers are special devices that enable packets to travel from one layer-3 network to another. Routers enable communication between two nodes on different layer-3 networks that are not directly connected to each other. Routers operate at layer-3 in the networking model. They route the traffic based on the destination IP address in the packet header.

Firewall:
 
Firewalls are used to regulate traffic to and from a host or a network. A firewall can be either a specialized device connecting two networks or a software-based filtering mechanism implemented on an operating system. Firewalls restrict traffic to a host based on rules defined on the host, filtering packets on criteria such as source IP address, destination IP address, port numbers, and connection state. They are primarily used to protect hosts from unauthorized access and malicious attacks. Linux-based operating systems implement firewalls through iptables.
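Since host firewalls on Linux come down to iptables rules, here is a minimal default-deny ruleset sketch in iptables-save format. The allowed port (22) and the file path are illustrative, and actually loading the rules requires root:

```shell
# Write a minimal stateful, default-deny ruleset in iptables-save format.
# Port 22 and the file path are illustrative placeholders.
cat <<'EOF' > /tmp/minimal-firewall.rules
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Always accept loopback traffic
-A INPUT -i lo -j ACCEPT
# Accept replies to connections this host initiated
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Accept inbound ssh
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
EOF

# Load it (requires root):
# iptables-restore < /tmp/minimal-firewall.rules
```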


Load balancers

Load balancers can be software-based or hardware-based devices that distribute traffic evenly across several servers. Spreading the traffic across multiple servers avoids overloading any single server and removes a single point of failure. This further improves the performance, network throughput, and response time of the service.


NAT  
 
Network Address Translation (NAT) is a process for modifying the source or destination addresses in the headers of an IP packet while the packet is in transit. In general, the sender and receiver applications are not aware that the IP packets are being manipulated.

NAT is often implemented by routers, and so we will refer to the host performing NAT as a NAT router. However, in OpenStack deployments it is typically Linux servers that implement the NAT functionality, not hardware routers. These servers use the iptables software package to implement the NAT functionality.

There are multiple variations of NAT, and here we describe three kinds commonly found in OpenStack deployments.

SNAT

In Source Network Address Translation (SNAT), the NAT router modifies the IP address of the sender in IP packets. SNAT is commonly used to enable hosts with private addresses to communicate with servers on the public Internet.


DNAT

In Destination Network Address Translation (DNAT), the NAT router modifies the IP address of the destination in IP packet headers.

OpenStack uses DNAT to route packets from instances to the OpenStack metadata service. Applications running inside of instances access the OpenStack metadata service by making HTTP GET requests to a web server with IP address 169.254.169.254. In an OpenStack deployment, there is no host with this IP address. Instead, OpenStack uses DNAT to change the destination IP of these packets so they reach the network interface that a metadata service is listening on.
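As a concrete sketch of both variants in iptables-save form: the private subnet, interface name, and metadata proxy port below are illustrative placeholders (Neutron writes equivalent rules inside its router namespaces):

```shell
# SNAT and DNAT rules in iptables-save format (nat table). The subnet,
# interface, and proxy port are illustrative placeholders.
cat <<'EOF' > /tmp/nat-example.rules
*nat
:PREROUTING ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
# SNAT: rewrite the source address of packets leaving a private subnet
# (MASQUERADE is the dynamic-address form of SNAT)
-A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE
# DNAT: redirect metadata requests (169.254.169.254:80) to a local proxy
-A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 -j REDIRECT --to-ports 9697
COMMIT
EOF

# Apply with (requires root):
# iptables-restore --noflush < /tmp/nat-example.rules
```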

If anyone wants to do RHCA training, please visit http://www.rhcatraining.com/

Monday, 12 September 2016

Virtualization facts

1 What is virtualization?
Virtualization (or virtualisation) is the creation of a virtual version of something, such as a hardware platform, an operating system, a storage device, or network resources.

2 What are the benefits of virtualization?

Virtualization Benefits

1. It saves a lot of cost and allows easy maintenance.
2. It allows multiple operating systems to run on one virtualization platform.
3. It removes the dependency on dedicated heavy hardware to run an application.
4. It provides disposable servers for testing, so a crash does not affect other workloads.
5. It reduces the amount of space taken up by data centres.

3 What is the purpose of a hypervisor?
A hypervisor is a program that manages virtual machines.
It allows multiple operating systems to share a single set of hardware resources.
Each guest operating system gets its own defined allocation of disk space, memory, and processor time. The hypervisor acts as a controller program for the host's processors and resources.
4 What is a VMware kernel?

The VMware kernel is a proprietary kernel, meaning it is developed and owned by VMware and is not based on any other kernel architecture or operating system.
5 Features of VMware.

VMware provides a web browser interface.
It provides an easy-to-use wizard to configure settings.
It provides tools to easily create hosts and maintain them from one place.
It provides easy maintenance of virtual machines.
It provides simple graphical tools to configure VMware security settings.

Tuesday, 9 August 2016

RHCA Training in Jaipur

RedHat Certified Architect (RHCA) - "Highest Level of Certification in RedHat Family & Biggest Ever Campaign by LinuxWorld" 

The path to Red Hat Certified Architect (RHCA) isn't straightforward. Apart from being a Red Hat Certified Engineer (RHCE), you need to pass additional exams as well:

A Red Hat® Certified Architect (RHCA) is a Red Hat Certified Engineer (RHCE®) or Red Hat Certified JBoss Developer who attained Red Hat's highest level of certification by passing and keeping current five additional hands-on exams on Red Hat technologies from the respective lists below. If you pass more than five, then you achieve a higher level of RHCA. For example, if you passed six exams beyond RHCE or RHCJD you would be an RHCA Level II. This status would be reflected on the verification page.
In order to retain RHCA status, you must keep at least five credentials beyond either RHCE or RHCJD current from the respective lists on the SKILLS tab below. These need not be the same credentials you used when you first earned RHCA.


  1.     Red Hat Gluster Storage Administration (RH236)
  2.     Red Hat Enterprise Virtualization (RH318)
  3.     Red Hat Server Hardening (RH413)
  4.     JBoss Administration I (JB248)
  5.     Red Hat OpenStack Administration (CL210)



 

Friday, 25 December 2015

Learning To Love Learning Linux

As 2015 comes to a close, I have come to realize a great truth: the best thing I ever did was to dump Windows and switch completely to Linux. I speak from my own experience but I also speak for many of my EzeeLinux clients who feel the same way. Of course, your mileage may vary. This article is not intended to sell you on Linux. Chances are good that if you’re reading this you already know about Linux and you may be using it every day. This article is for those who can’t seem to let go of Windows or Mac and are hanging on to those proprietary OSes out of habit, or because you’re not quite comfortable putting your entire computing life into the Linux space.

That being said, I can totally understand this because I was a Windows power user with some solid MS IT training under my belt. I knew the system and I was very confident in my ability to recover from a major Windows meltdown and get things going again, but I wasn’t so sure about Linux. I felt like I wasn’t quite ready to reformat my hard drives and go completely Linux because I was just a tad bit unsure of what I would do if something went wrong. Afraid to jump out of what I thought was my comfort zone, I made excuses every time I entertained the thought of making the break from Windows entirely. This state of mind stayed with me for years, so I kept my desktop machine loaded with Windows 7 and considered it my main PC while I ran Ubuntu on my laptop. I figured if the laptop died it would be no big deal, right? The funny thing was that the laptop ran damn near perfect for all of those years while I constantly chased Windows crashes, viruses and performance issues. The laptop just chugged along, doing its thing, and hardly ever complained. Still, I hung on to Windows, not quite ready to jump to Linux full time; not quite able to make the jump because I couldn’t get over all the what-ifs.


So, what was it that broke me out of this rut? It was a combination of things, really. First off, Ubuntu 14.04 came along and for the first time, I saw everything working right after I upgraded from 12.04, no tinkering required. The second thing that happened was that I bought a book about Linux Mint 16 called “Linux Mint Essentials” By Jay LaCroix. I eagerly awaited its arrival thinking I would gain tremendous insight into the inner workings of Linux Mint. While I did learn a lot of new stuff, I was kinda surprised at how much I already knew after finishing with it. The book really only whetted my appetite for more knowledge and so I decided that it was time to get serious about this Linux stuff and seek some training.

I didn’t have to look too far because I quickly came across The Linux Foundation’s “Introduction To Linux” course available through edX.  I’m not bragging, but I found the edX class to be somewhat like the book in that I found I knew a lot more than I thought I did. It seems that the previous ten years worth of reading about Linux, watching YouTube videos about Linux and fooling around with Linux myself had taught me a great deal. I felt my confidence rising more and more the further along I got in the course. And when it was done I knew for sure that I was ready to deal with just about any problem a Linux box might throw at me. The course is designed to be the first step toward becoming a certified Linux SysAdmin and there’s much in it that the average home user wouldn’t need on a daily basis but knowing a bit about networking, scripting and the command line certainly helps when you’re trying to figure out how to do something with your computer that goes beyond point and click. The main thing I took from the experience was the feeling that I had the knowledge required to make the system do my bidding and meet my computing needs. Microsoft’s days were numbered on my computer and I wouldn’t have to wait long before I had no choice but to take the leap.

The last straw turned out to be a dying hard drive. My Windows installation started freezing and a quick check of the disk’s stats revealed rising operating temperatures and a growing bad cluster count. I popped some new SSDs into my old Dell and loaded them with Linux and I haven’t looked back since.  All of the pitfalls I was so worried about have turned out to be non-issues. I have found software to do everything I need and in most cases it does a better job than what I was running on Windows. Reloading Linux is no big deal and I spent a few months doing some distro hopping, but now I have settled down into a groove that works great. The laptops run the latest and greatest Ubuntu and my desktops are happy with Linux Mint.
Notice that we have switched from singular to plural here. I now have a whole network of computers and I can honestly say that I wouldn’t have ever bothered with as many machines as I have now if they were running anything other than Linux. The main reasons I say this are the expense of buying a copy of Windows for each one, and that I would probably go crazy chasing down malware, viruses, updates and drivers for all of those Windows machines. One machine has five separate user accounts for family members and all I have to do is keep it updated when I log in with my admin account. Other than that, it just keeps on going.

Another thing that happened because of the jump to Linux was that I felt confident enough to convert my word-of-mouth freelance IT consulting side business into an outreach program to help other people get off of the Microsoft Money-go-round and get into Linux. All of that has led me here, writing this article for you and looking forward to what wonderful Linux-related things will come along in 2016.

One more thing I have found out is that I didn’t know Windows nearly as well as I thought I did. Looking back, I remember all of those times I stared at the screen asking myself, “What the &%$! is this thing doing?” It’s not like that with Linux. I usually can figure out what’s going on when my Linux machines have a hiccup or I at least know where to start looking. Never before in the 30 years that I’ve been working with computers have I ever felt so in control. Linux has made computing fun again and pretty much worry free.
If you’re sitting on the fence with Linux, take a look at the Linux Foundation “Introduction To Linux” course. You can audit it for free. What have you got to lose? Make it your New Year’s Resolution. Happy Holidays to you and yours, wherever you are and whatever holiday it is you celebrate. 2016 is going to be a wonderful year for Linux. I’m glad you’re here.

Thursday, 5 November 2015

Why you want a bare metal hypervisor and how to choose

Once upon a time, there was nothing but native, or bare metal, hypervisors (a.k.a. virtual machine managers). In the 1980s, I cut my teeth on IBM System/370 mainframes running VM/CMS, but bare metal’s history goes all the way back to the 1960s. With bare metal hypervisors, the hypervisor runs directly on the hardware. There is no intervening operating system.
The formal definition of bare metal hypervisor, or, as it was called in its day, Type 1 hypervisor, goes back to Gerald Popek and Robert Goldberg’s seminal paper, Formal requirements for virtualizable third generation architectures. They also defined bare metal’s great competitor, the Type 2, or hosted hypervisor.
Today, bare metal virtual machines are still very much with us. VM/CMS evolved into IBM’s z/VM. And there are many other bare metal systems. Chances are you and your crew are using one even now.
Citrix’s open-source XenServer powers Amazon Web Services (AWS). Oracle VM for SPARC and x86 are both based on Xen. There’s VMware’s ESX and ESXi, Microsoft Hyper-V, and HP’s Integrity VM.
While the implementations are quite different, the name of the game is to provide a minimal operating system that provides just what’s needed to run virtual machines. No more, no less. If you have an extra layer, like an operating system, between your VM and the hardware, that opens the door to performance, latency, security, scalability, and VM isolation problems.
There are corner cases of course. For example, you can still get a hot argument going in some circles if you suggest that KVM isn’t really a bare metal hypervisor.
For your servers, whether it’s just one Xeon box in the server closet, a thousand servers in your data center, or ten-thousand in your private cloud, what you really want is a bare metal hypervisor.

How to choose

For the mainstream operating systems there are four main choices: KVM, ESX/ESXi, Hyper-V, and Xen, in one form or another. You can argue until you’re blue in the face about which one is “better,” but generally speaking they all do an excellent job. More to the point of justifying the purchase to your CFO, each has its own role to play.